
TAGUCHI METHODS EXPLAINED


Practical Steps to Robust Design

Tapan P. Bagchi (Ph.D., Toronto)

This well-organized, compact volume introduces the reader to Taguchi Methods, a revolutionary approach pioneered in Japan to engineer quality and performance into new products and manufacturing processes. It explains on-the-job application of Taguchi Methods to make products and processes perform consistently on target, hence making them insensitive to factors that are difficult to control.

Designed for practising engineering managers with responsibility in process performance, quality control, and R&D, and for students of engineering and process design, the text provides all the essential tools for planning and conducting prototype development and tests which guarantee improved final field performance of products and manufacturing processes. Replete with examples, exercises, and actual case studies, the book shows how electronic circuit devices, mechanical fabrication methods, and chemical and metallurgical processes can be made robust and stable to consistently provide on-target performance, in spite of the presence of 'noise': raw material quality variations, environmental changes, voltage fluctuations, operator inconsistency, and so on, all of which are external factors that cannot be economically controlled.

The book also shows the reader how to plan reliable and efficient tests with physical prototypes and computer models to evaluate products and processes during development and to improve them. Finally, it explains state-of-the-art methods to make even complex systems robust, where design variables interact, making conventional design optimization methods difficult to apply.


TAGUCHI METHODS EXPLAINED
Practical Steps to Robust Design


TAPAN P. BAGCHI
Professor, Industrial and Management Engineering
Indian Institute of Technology, Kanpur

Prentice-Hall of India Private Limited


New Delhi-110001
1993
Rs. 175.00

TAGUCHI METHODS EXPLAINED: Practical Steps to Robust Design
by Tapan P. Bagchi
PRENTICE-HALL INTERNATIONAL, INC., Englewood Cliffs.


PRENTICE-HALL INTERNATIONAL (UK) LIMITED, London.
PRENTICE-HALL OF AUSTRALIA PTY. LIMITED, Sydney.
PRENTICE-HALL CANADA, INC., Toronto.
PRENTICE-HALL HISPANOAMERICANA, S.A., Mexico.
PRENTICE-HALL OF JAPAN, INC., Tokyo.
SIMON & SCHUSTER ASIA PTE. LTD., Singapore.
EDITORA PRENTICE-HALL DO BRASIL, LTDA., Rio de Janeiro.

© 1993 by Prentice-Hall of India Private Limited, New Delhi. All rights reserved.
No part of this book may be reproduced in any form, by mimeograph or any other
means, without permission in writing from the publishers.

ISBN-0-87692-808-4

The export rights of this book are vested solely with the publisher.

Published by Prentice-Hall of India Private Limited, M-97, Connaught Circus,
New Delhi-110001 and Printed by Bhuvnesh Seth at Rajkamal Electric Press,
B-35/9, G.T. Karnal Road Industrial Area, Delhi-110033.
To
the Fond Memory of
Bhalokaku
Contents

Preface

1. What Are Taguchi Methods?
    1.1 The Road to Quality Starts at Design
    1.2 Achieving Quality: Taguchi's Seven Points
    1.3 Optimized Design Reduces R&D, Production, and Lifetime Cost
    1.4 Taguchi's Definition of Quality
    1.5 What Causes Performance to Vary?
    1.6 Prevention by Quality Design
    1.7 Steps in Designing Performance into a Product
    1.8 Functional Design: The Traditional Focus
    1.9 Parametric Design: The Engineering of Quality
    1.10 Statistical Experiments Discover the Best Design Reliably and Economically
    Exercises

2. Handling Uncertainty
    2.1 The Mystique of Probability
    2.2 The Idea of a Random Variable
    2.3 Some Useful Formulas
    2.4 Hypothesis Testing: A Scientific Method to Validate or Refute Speculations
    2.5 Comparing Two Population Means Using Observed Data
    2.6 Cause-Effect Models and Regression
    2.7 Evaluating a Suspected Cause Factor
    2.8 The F-Statistic
    2.9 An Alternative Approach to Finding F: The Mean Sum of Squares
    Exercises

3. Design of Experiments
    3.1 Testing Factors One-at-a-Time is Unscientific
    3.2 The One-Factor Designed Experiment
    3.3 ANOVA Helps Compare Variabilities
    3.4 The F-Test Tells If Factor Effects are Statistically Significant
    3.5 Formulas for Sum of Squares and the F-Test
    3.6 Summary
    Exercises

4. The Foundation of Taguchi Methods: The Additive Cause-Effect Model
    4.1 What is Additivity?
    4.2 Why Achieving Additivity is So Important?
    4.3 The Verification of Additivity
    4.4 The Response Table: A Tool That Helps Find Main Effects Quickly
    4.5 Graphic Evaluation of Main Effects
    4.6 Optimization of Response Level and Variability
    4.7 Orthogonal Arrays vs. Classical Statistical Experiments
    4.8 Summary
    Exercises

5. Optimization Using Signal-to-Noise Ratios
    5.1 Selecting Factors for Taguchi Experiments
    5.2 To Seek Robustness One Should Measure Performance by S/N Ratios
    5.3 S/N Ratio in Optimization: An Example
    5.4 Not All Performance Characteristics Display Additivity
    5.5 The OA as the Experiment Matrix
    5.6 The Axiomatic Approach to Design
    5.7 Summary
    Exercises

6. Use of Orthogonal Arrays
    6.1 What are Orthogonal Arrays?
    6.2 OAs are Fractional Factorial Designs
    6.3 Not All Factors Affect Performance the Same Way
    6.4 Identifying Control and Noise Factors: The Ishikawa Diagram
    6.5 At What Levels Should One Study Each Factor?
    6.6 Reaching the Optimized Design
    6.7 Testing for Additivity
    6.8 The Optimization Strategy
    6.9 Taguchi's Two Steps to On-Target Performance with Minimum Variability
    6.10 Summary
    Exercises

7. Case Study 1: Process Optimization - Optical Filter
    7.1 The Process for Manufacturing Optical Filters
    7.2 Test Settings of Control Parameters and the OA
    7.3 Performance Measurements and the S/N Ratio
    7.4 Minimizing log10(s²), the Variability of Thickness
    7.5 The Confirmation Experiment
    7.6 Adjusting Mean Crystal Thickness to Target

8. Selecting Orthogonal Arrays and Linear Graphs
    8.1 Sizing up the Design Optimization Problem
    8.2 Linear Graphs and Interactions
    8.3 Modification of Standard Linear Graphs
    8.4 Estimation of Factor Interactions Using OAs
    8.5 Summary
    Exercise

9. Case Study 2: Product Optimization - Passive Network Filter Design
    9.1 The Passive Network Filter
    9.2 Formal Statement of the Design Problem
    9.3 The Robust Design Formulation of the Problem
    9.4 Data Analysis and Estimation of Effects
    9.5 Effects of the Design Parameters
    9.6 Discussion on Results
    9.7 Filter Design Optimization by Advanced Methods

10. A Direct Method to Achieve Robust Design
    10.1 Re-Statement of the Multiple Objective Design Optimization Problem
    10.2 Target Performance Requirements as Explicit Constraints
    10.3 Constraints Present in the Filter Design Problem
    10.4 Seeking Pareto-Optimal Designs
    10.5 Monte Carlo Evaluation of S/N Ratios
    10.6 Can We Use C (or R2) as the Independent DP instead of R3?
    10.7 Some Necessary Mathematical Tools
    10.8 Developing a Multiple Regression Model
    10.9 Rationale of the Constrained Robust Design Approach
    10.10 Application of the Constrained Approach to Real Problems
    10.11 Discussion of the Constrained Design Optimization Approach

11. Loss Functions and Manufacturing Tolerances
    11.1 Loss to Society is More Than Defective Goods
    11.2 Determining Manufacturing Tolerances
    11.3 Loss Functions for Mass-Produced Items
    11.4 Summary
    Exercises

12. Total Quality Management and Taguchi Methods
    12.1 Why Total Quality Management?
    12.2 What Really is Quality?
    12.3 What is Control?
    12.4 Quality Management Methods
    12.5 The Business Impact of TQM
    12.6 Control of Variability: Key to QA
    12.7 How is Statistics Helpful?
    12.8 Practical Details of Planning a Taguchi Project

Appendix A: Standard Normal, t, Chi-square, and F-Tables
Appendix B: Selected Orthogonal Arrays and Their Linear Graphs
Glossary
References
Index
Preface

Taguchi methods are the most recent additions to the toolkit of design, process,
and manufacturing engineers, and Quality Assurance (QA) experts. In contrast to
Statistical Process Control (SPC), which attempts to control the factors that
adversely affect the quality of production, Taguchi methods focus on design: the
development of superior performance designs (of both products and manufacturing
processes) to deliver quality.
Taguchi methods lead to excellence in the selection and setting of product/
process design parameters and their tolerances. In the past decade, engineers have
applied these methods in over 500 automotive, electronics, information technology,
and process industries worldwide. These applications have reduced cracks in
castings, increased the life of drill bits, produced VLSI with fewer defects, speeded
up the response time of UNIX V, and even guided human resource management
systems design.
Taguchi methods systematically reveal the complex cause-effect relationships
between design parameters and performance. These in turn lead to building quality
performance into processes and products before actual production begins.
Taguchi methods have rapidly attained prominence because wherever they
have been applied, they have led to major reductions in product/process development
lead time. They have also helped in rapidly improving the manufacturability of
complex products and in the deployment of engineering expertise within an enterprise.
The first objective of Taguchi methods, which are empirical, is reducing
the variability in quality. A key premise of Taguchi methods is that society incurs
a loss any time a product whose performance is not on target gets shipped to a
customer. This loss is measurable by the loss function, a quantity dependent on the
deviation of the product's performance from its target performance. Loss functions
are directly usable in determining manufacturing tolerance limits.
Delivering a robust design is the second objective of Taguchi methods. Often
there are factors present in the environment on which the user of a product has little
or no control. The robust design procedure adjusts the design features of the product
such that the performance of the product remains unaffected by these factors. For
a process, the robust design procedure optimizes the process parameters such that
the quality of the product that the process delivers, stays on target, and is unaffected
by factors beyond control. Robust design minimizes variability (and thus the lifetime
cost of the product), while retaining the performance of the product on target.
Statistically designed experiments using orthogonal arrays and signal-to-noise
(S/N) ratios constitute the core of the robust design procedure.
This text provides the practising engineer an overview of the state-of-the-art
in Taguchi methods, the methods for engineering superior and lasting performance
into products and processes.

Chapters 1-3 introduce the reader to the basic ideas in the engineering of
quality, and the needed tools in probability and statistics. Chapter 4 presents the
additive cause-effect model, the foundation of the Taguchi methodology for design
optimization. Chapter 5 defines the signal-to-noise ratio, the key performance metric
that measures the robustness of a design. Chapter 6 describes the use of orthogonal
arrays (OAs), the experimental framework in which empirical studies to determine
the dependency of performance on design and environmental factors can be efficiently
done. Chapter 7 illustrates the use of these methods in reducing the sensitivity of
a manufacturing process to uncontrolled environmental factors. Chapter 8 provides
the guidelines for the selection of appropriate orthogonal arrays (OAs) for real-life
robust design problems. A case study in Chapter 9 shows how one optimizes a
product design. Chapter 10 presents a constrained optimization approach which
would be of assistance when the design parameter effects interact. Chapter 11
shows how Taguchi loss functions can be used in setting tolerances for
manufacturing. Chapter 12 places Taguchi methods in the general framework of
Total Quality Management (TQM) in an enterprise.
Throughout the text, examples and exercises have been provided for enabling
the reader to have a better grasp of the ideas presented. Besides, the fairly large
number of References should stimulate the student to delve deeper into the subject.
I am indebted to Jim Templeton, my doctoral guide and Professor; from him
I had the privilege of imbibing much of my knowledge in applied probability. I
am also grateful to Birendra Sahay and Manjit Kalra, whose enormous confidence
in me led to the writing of this book. I wish to thank Mita Bagchi, my wife, and
Damayanti Singh, Rajesh Bhaduri and Ranjan Bhaduri whose comments and
suggestions have been of considerable assistance in the preparation of the manuscript.
The financial assistance provided by the Continuing Education Centre, Indian
Institute of Technology, Kanpur to partially compensate for the preparation of the
manuscript is gratefully acknowledged. Finally, this book could not have been
completed without the professionalism and dedication demonstrated by the
Publishers, Prentice-Hall of India, both during the editorial and production stages.
Any comments and suggestions for improving the contents would be warmly
appreciated.

Tapan P. Bagchi

1. What Are Taguchi Methods?

1.1 THE ROAD TO QUALITY STARTS AT DESIGN


Quality implies delivering products and services that meet customers' standards
and fulfill their needs and expectations. Quality has been traditionally assured
by Statistical Process Control (SPC), a collection of powerful statistical
methods facilitating the production of quality goods by intelligently controlling
the factors that affect a manufacturing process. SPC attempts to achieve quality
by reacting to deviations in the quality of what the manufacturing plant has
recently produced. In this chapter, however, we present an overview of a somewhat
different approach for assuring quality, consisting essentially of certain
specially designed experimental investigations. Collectively known as the Taguchi
methods, these methods focus on improving the design of manufacturing processes
and products. A designer applies Taguchi methods off-line before production
begins. When applied to process design, Taguchi methods can help improve process
capability. These methods also reduce the sensitivity of the process to assignable
causes, substantially reducing thereby the on-line SPC effort required to keep the
quality of production on target.
The significance of beginning Quality Assurance (QA) with an improved
process or product design is not difficult to gauge. Experience suggests that nearly
80 per cent of the lifetime cost of a product becomes fixed once its design is
complete. Recent studies suggest that a superior product design ranks among the
foremost attributes of a successful enterprise [1]. The application of Taguchi methods
leads to superior performance designs known as robust designs.
Statistical experimentation and analysis methods have been known for over
60 years [2, 3]. However, the Japanese appear to have been the first to use
these methods formally in selecting the best settings of process/product design
parameters [4]. In the West, the most notable user of Taguchi methods has been
AT&T, U.S.A., whose product development efforts now incorporate parametric
optimization [5].
The foundation of the Taguchi methods is based on two premises:
1. Society incurs a loss any time the performance of a product is not on target. Taguchi has argued that any deviation from target performance results in a loss to society. He has redefined the term quality to be the losses a product imparts to society from the time it is shipped.
2. Product and process design requires a systematic development, progressing stepwise through system design, parametric design, and finally, tolerance design. Taguchi methods provide an efficient, experimentation-based framework to achieve this.

The first premise suggests that whenever the performance of a product deviates from its target performance, society suffers a loss. Such a loss has two
components: The manufacturer incurs a loss when he repairs or rectifies a returned
or rejected product not measuring up to its target performance. The consumer
incurs a loss in the form of inconvenience, monetary loss, or a hazardous consequence
of using the product.
The second premise forms the foundation of quality engineering, a discipline
that aims at engineering not only the function, but also quality performance into
products and processes.
Taguchi's original work circulated mainly within his native country, Japan,
until the late 70s, when some translations became available in other countries. The
American Society for Quality Control published a review of Taguchi's methods,
especially of off-line quality control, in 1985 [6]. Since then, many engineers
outside Japan have also successfully applied these methods [7].
The Taguchi philosophy professes that the task of assuring quality must
begin with the engineering of quality: product and process design optimization
for performance, quality, and cost. To be effective, it must be a team effort involving
marketing, Research and Development (R&D), production, and engineering. Quality
engineering must be completed before the product reaches its production stage.
One can often take countermeasures during process and product design.
Such countermeasures can effectively assure that the product a manufacturing
process delivers will be on target and that it will continue to perform on target
in actual use. Such countermeasures require a well-planned, systematic, and
essentially empirical investigation during process/product design and development.
For this reason, Taguchi called this procedure off-line [8]; it precedes on-line
Quality Control (QC) done during manufacturing, using control charts and other
reactive methods (see Section 12.4).

1.2 ACHIEVING QUALITY: TAGUCHI'S SEVEN POINTS


Achieving superior performance calls for an attitude that must continuously
search for incremental improvement. The Japanese call this kaizen. This trait is
different from the commonly applied method of relying only on new technologies
and innovations as the route to quality improvement.
The following seven points highlight the distinguishing features of Taguchi's
approach (as distinct from the traditional approach) to assuring quality:
1. Taguchi defined the term quality as the deviation from on-target
performance, which appears at first to be a paradox. According to him, the quality
of a manufactured product is the total loss generated by that product to society
from the time it is shipped.
2. In a competitive economy, Continuous Quality Improvement (CQI) and
cost reduction are necessary for staying in business.
3. A CQI programme includes continuous reduction in the variation of
product performance characteristics about their target values.
4. The customer's loss attributable to product performance variation is often
proportional to the square of the deviation of the performance characteristic from
its target value.
5. The final quality and cost (R&D, manufacturing, and operating) of a
manufactured product depend primarily on the engineering design of the product
and its manufacturing process.
6. Variation in product (or process) performance can be reduced by
exploiting the nonlinear effects of the product (or process) parameters on the
performance characteristics.
7. Statistically planned experiments can efficiently and reliably identify
the settings of product and process parameters that reduce performance
variation.
One achieves kaizen by formally integrating design and R&D efforts with
actual production in order to get the process right and continually improve it. A
large number of design, process, and environmental factors are usually involved
in such a task. Consequently, there is no effective way of doing kaizen except by
the pervasive use of scientific methods. Statistically designed experiments, in
particular, can generate highly valuable insights about the behaviour of a process
or product, normally using only a surprisingly small number of experiments. The
consequence of superior performance is the superior fit of the manufacturing
process or product to its user's requirements. Subsequently this reduces the
product's lifetime cost of use.

1.3 OPTIMIZED DESIGN REDUCES R&D, PRODUCTION, AND LIFETIME COST
Cost trade-offs in quality decisions are not new. This is how industry sometimes
justifies its QA programmes. Most managers believe that quality requires action
when quality-related operating costs, which belong to one of the three following
categories, go out of line:
Failure costs result from inferior quality products in the form of scrap, rejects,
repair, etc. Failure costs are also involved in the returns from customers, loss of
goodwill, or a plant failure causing loss of production, property, or life at the
customer's site.
Appraisal costs are incurred while inspecting, appraising, and evaluating the
quality of the products one manufactures, or the materials, parts, and supplies one
receives.
Prevention costs are incurred when one attempts to prevent quality problems
from occurring by (a) engaging process control, optimization experiments and
studies; (b) training operators on correct procedures; and (c) conducting R&D to
produce close-to-target products.
A manufacturer often trades off one of these costs for another. Some
manufacturers choose not to invest in prevention, engaging instead a team of
technicians to do warranty service. When there is a monopoly, sometimes the
warranty service is also cut regardless of its effect on customers.

A large sum of money spent on appraisal can help screen out defective
products, preventing them from getting to customers. This is inspection-based
QA. As of today, most Very Large Scale Integration (VLSI) chips have to be
produced this way. It should be clear, however, that QA based on appraisal
is reactive and not preventive; it takes action after production. If resources
can be directed to prevention instead, one increases the likelihood of preventing
defects and quality problems from developing. With preventive action, prevention
costs for an enterprise may rise, but failure costs are often greatly reduced [9].
Reduction of defective production directly cuts down in-house scraps and
rejects. This also reduces returns from customers and their dissatisfaction with
the product. Also, the producer projects a quality image, which often gives a
marketing edge. It may be possible, of course, to go overboard with quality if we
disregard real requirements. The ISO 9000 Standards document [10] as well as
QFD [34] also emphasize the value of establishing the customers real needs first.
Business economists suggest that the target for quality should be set at a level at
which the profit contribution of the product is most favourable (Fig. 1.1).

Fig. 1.1 Contribution and precision of design. (The curves show market value, manufacturing cost, and the resulting profit contribution plotted against increasing precision of design.)

In his writings Taguchi has stated that delivering a high quality product
at low cost involves engineering, economics, use of statistical methods, and an
appropriate management approach emphasizing continuous improvement. To
this end Taguchi has proposed a powerful preventive procedure that he calls
robust design. This procedure optimizes product and process designs such that
the final performance is on target and it has minimum variability about this target.
One major outcome of off-target performance, be it with ill-fitting shoes,
defective keyboards, or a low yielding chemical process, is the increase in the
lifetime cost of the product or process (see Table 1.1). We may classify this total
cost as the cost that the product/process imposes on society the producer, the
consumer, and others who may not even be its direct users as follows:
Operating cost: The costs of energy, consumables, maintenance, environmental
control, inventory of spare parts, special skills needed to use the product,
etc. constitute the product's operating cost. Generally, with robust design this cost
can be greatly reduced.
Manufacturing cost: Jigs, special machinery, raw and semi-finished materials,
skilled and unskilled labour, QC, scrap, rework, etc. constitute the manufacturing
cost. Again, with robust design, the requirements of special skills, raw materials,
special equipment, controlled environment, on-line QC effort, etc. can be substantially
reduced.
R&D cost: Engineering and laboratory resources, expert know-how, patents,
technical collaborations, prototype development, field trials, etc. constitute the R&D
cost of the product. R&D aims at producing drawings, specifications, and all other
information about technology, machinery, skills, materials, etc. needed to manufacture
products that meet customer requirements. The goal here is to develop, document
and deliver the capability for producing a product with the optimum performance
at lowest manufacturing and operating cost. Robust design can play a key role
in this effort too.
TABLE 1.1
INITIAL PRICE vs. LIFETIME COST OF PRODUCTS IN COMMON USE*

Product                          Initial Price ($)    Lifetime Cost ($)
Air Conditioners                 200                  665
Dishwasher                       245                  617
Electric Dryer                   182                  670
Gas Dryer                        207                  370
Freezer, 15 cu. ft.              165                  793
Electric Range                   175                  766
Gas Range                        180                  330
Frost-Free Refrigerator          230                  791
B&W Television                   175                  505
Colour Television                540                  1086
Electric Typewriter              163                  395
Vacuum Cleaner                   89                   171
Washing Machine                  235                  852
Industrial Process Equipment     75,000               432,182/yr**

* F.M. Gryna (1977): Quality Costs: User vs. Manufacturer, Quality Progress, June, pp. 10-13.
** Includes repairs (part, material and labour), contract labour, defective product produced and lost production.

Generally, the producer incurs the R&D and manufacturing costs and then
passes these on to the consumer. In addition, the consumer incurs the operating
cost as he uses the product, especially when performance deviates from target.
The knowledge emerging from Taguchi's work affirms that high quality
means lower operating cost and vice versa. Loss functions provide a means to
quantify this statement.

The robust design method, the key QA procedure put forth by Taguchi,
is a systematic method for keeping the producer's costs low while delivering the
highest quality to the consumer. Concerning the manufacturing process, the focus
of robust design is to identify process setting regions that are least sensitive to
inherent process variation. As will be shown later, this eventually helps improve
the quality of what is produced by minimizing the effect of the causes of
variation, without necessarily eliminating the causes.

1.4 TAGUCHI'S DEFINITION OF QUALITY


What quality should a manufacturer aim to deliver? To resolve this fundamental
dilemma as the debate intensified, Juran and others [9] defined the quality of a
product to be its fitness for use as assessed by the customer. Taguchi has given
an improved definition of this: a product has the ideal quality when it delivers on
target performance each time its user uses the product, under all intended operating
conditions, and throughout its intended life [4]. This ideal quality serves as a
reference point even though it may not be possible to produce a product with
ideal quality.
A manufacturer should not think of quality except in terms of meeting customer
expectations, which may be specific and many. People using pencils, for example,
may desire durable points providing clear lines, and erasers that last until at least
half the pencil is used. Pencil chewers would additionally want that the paint be
lead-free!
The ideal quality is performance at target rather than within some specification
tolerance limits. This has been best shown by a study of customer preference
of colour TV sets manufactured using identical designs and tolerances, but with
different quality objectives. The Asahi newspaper reported this study on April 17,
1979 [5]. A Sony-U.S.A. factory aimed at producing sets within the colour density
tolerance m ± 5. It produced virtually no sets outside this tolerance. A Sony-Japan
factory produced identical sets but it aimed directly at hitting the target
density m, resulting in a roughly normal distribution of densities with a standard
deviation of 5/3 (see Fig. 1.2).

Fig. 1.2 Distribution of colour density in television sets. (Sony-U.S.A. vs. Sony-Japan; quality grades are assigned according to the distance of the colour density from the target m within the tolerance m ± 5.) (Source: The Asahi, April 17, 1979.)
Careful customer preference studies showed that American customers who
bought these TVs preferred the sets made in Japan over those made in U.S.A.
Even if the fraction of sets falling outside the spec limits in U.S. production was
lower than that in the Japanese production, the proportion of Grade A sets
(those judged to do the best) from Japan was considerably higher and that of
Grade C sets considerably lower. Thus the average grade of sets made by
Sony-Japan was better than that by Sony-U.S.A. This reflected the higher
quality value of sets made by Sony-Japan.
At least two other major industry studies involving automobile manufacture
have led to identical conclusions [1, 11].
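The contrast can be made concrete with a small calculation. The sketch below is illustrative only: it treats Sony-U.S.A.'s output as roughly uniform across the tolerance band m ± 5 (one reading of "virtually no sets outside this tolerance") and Sony-Japan's output as normal about m with standard deviation 5/3, as stated above.

```python
from math import erf, sqrt

tol = 5.0                # half-width of the colour density tolerance about the target m
sigma_japan = 5.0 / 3.0  # Sony-Japan: roughly normal about m, sigma = 5/3

# Mean squared deviation of colour density from the target m
msd_usa = (2 * tol) ** 2 / 12.0    # uniform spread across m +/- 5  -> 25/3 ~ 8.33
msd_japan = sigma_japan ** 2       # normal, centred on the target  -> 25/9 ~ 2.78

# Fraction of sets falling outside the m +/- 5 tolerance
out_usa = 0.0                                                  # "virtually no sets outside"
out_japan = 2.0 * (1.0 - 0.5 * (1.0 + erf(3.0 / sqrt(2.0))))   # P(|Z| > 3) ~ 0.0027

print(f"Mean squared deviation from target: USA {msd_usa:.2f}, Japan {msd_japan:.2f}")
print(f"Fraction outside m +/- 5:           USA {out_usa:.4f}, Japan {out_japan:.4f}")
```

Under these assumptions the Japanese sets show only about a third of the mean squared deviation from the target, even though a small fraction of them (about 0.27 per cent) falls outside the tolerance; this is the behaviour the customer preference studies rewarded.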
Reflecting on experiences such as the above, Taguchi suggested that a product
imparts a loss to society when its performance is not on target. This loss includes
any inconvenience, and monetary or other loss, that the customer incurs when he uses
the product. Taguchi proposed that manufacturers approach the ideal quality by

examining the total loss a product causes because of its functional variation from
this ideal quality and any harmful side effect the product causes. The primary goal
of robust design is to evaluate these losses and effects, and determine (a) process
conditions that would assure the product made is initially on target, and (b)
characteristics of a product, which would make its performance robust (insensitive)
to environmental and other factors not always in control at the site of use, so that
performance remains on target during the product's lifetime of use.
To enforce these notions Taguchi (re)-defined the quality of a product to be
"the loss imparted to society from the time the product is shipped". Experts feel
this loss should also include societal loss during manufacturing [6].
The loss caused to a customer ranges from mere inconvenience to monetary
loss and physical harm. If Y is the performance characteristic measured on a
continuous scale and the ideal or target performance level is τ, then, according
to Taguchi, the loss caused, L(Y), can be effectively modelled by a quadratic
function (Fig. 1.3):

    L(Y) = k(Y - τ)²

Note here that the loss function relates quality to a monetary loss, not to a gut
feeling or other mere emotional reaction. As will be shown later, the quadratic
loss function provides the necessary information (through signal-to-noise ratios)
to achieve effective quality improvement. Loss functions also show why it is not
good enough for products to be merely within specification limits. Parts and components
that must fit together to function are best made at their nominal (or mid-specification)
dimensions rather than merely within their respective specification tolerances [11].

Fig. 1.3 The relationship between quality loss and performance deviation from target. (Loss rises quadratically as the performance characteristic moves away from the target.)
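A minimal numerical sketch of the loss function follows. The target, tolerance, and repair cost used below are hypothetical; fixing k from the loss incurred at the tolerance limit anticipates the fuller treatment of loss functions and tolerances in Chapter 11.

```python
def quadratic_loss(y, target, k):
    """Taguchi's quadratic loss L(Y) = k (Y - target)^2."""
    return k * (y - target) ** 2

# Hypothetical case: a dimension with target 10.0 mm, tolerance +/- 0.5 mm, and a
# repair/replacement cost of Rs. 200 when the deviation just reaches the limit.
target, delta0, A0 = 10.0, 0.5, 200.0
k = A0 / delta0 ** 2            # choose k so that L(target + delta0) = A0

for y in (10.0, 10.2, 10.5, 10.8):
    print(f"Y = {y:4.1f} mm  ->  loss = Rs. {quadratic_loss(y, target, k):6.1f}")
```

Even at Y = 10.2 mm, well inside the specification limits, the estimated loss is already Rs. 32, which is why merely meeting specifications is not good enough.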
When performance varies, one determines the average loss to customers
by statistically averaging the quadratic loss. The average loss is proportional
to the mean squared error of Y about its target value τ, found as follows: If one
produces n units of a product giving performances y1, y2, y3, ..., yn respectively,
then the average loss caused by these units because of their not being exactly
on target τ is

    (1/n)[L(y1) + L(y2) + ... + L(yn)] = (k/n)[(y1 - τ)² + (y2 - τ)² + ... + (yn - τ)²]
                                       ≈ k[(μ - τ)² + σ²]

where μ = Σ yi/n and σ² = Σ (yi - μ)²/(n - 1). Thus the average loss, caused by
variability, has two components:
1. The average performance μ being different from the target τ contributes
the loss k(μ - τ)².
2. The loss kσ² results from the performance {yi} of the individual items being
different from their own average μ.
Thus the fundamental measure of variability is the mean squared error of Y
(about the target τ), and not the variance σ² alone. Interestingly, it may be noted
that ideal performance requires perfection in both accuracy (implying that μ be
equal to τ) as well as precision (implying that σ² be zero).

A high quality product performs near the target performance value consistently
throughout the life span of the product.
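The decomposition of the average loss can be checked numerically. In the sketch below the measurements and the loss coefficient k are hypothetical; the (n - 1)/n factor simply reconciles the sample variance, computed with divisor n - 1, with the average taken over n units.

```python
from statistics import mean, variance

k, target = 800.0, 10.0
y = [10.3, 9.8, 10.1, 10.4, 9.9, 10.2, 10.0, 10.3]    # hypothetical performances

n = len(y)
mu = mean(y)
s2 = variance(y)                                      # sample variance, divisor n - 1

avg_loss = k * sum((yi - target) ** 2 for yi in y) / n
accuracy_loss = k * (mu - target) ** 2                # k (mu - target)^2, off-target term
precision_loss = k * s2 * (n - 1) / n                 # ~ k * s2, unit-to-unit spread term

print(round(avg_loss, 2), round(accuracy_loss + precision_loss, 2))   # both print 44.0
```

Both quantities agree exactly, confirming that reducing the average loss requires attacking both the offset of the mean from the target and the spread of the individual units.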

Whenever available, a quantitative model that describes how the performance of a product or process design depends on the various design parameters is of great
help in the optimization of designs. This dependency may become evident by
invoking scientific and engineering principles, or by conducting experiments
with a physical prototype.
In the trial-and-error method of experimentation, intuition rather than a
systematic procedure guides what levels of variable settings one should try. This
approach appeals to many investigators for its apparent simplicity [12]. In this
approach, chance plays an important role to deliver the optimized design. The next
popular approach is the one-variable-at-a-time experimental search to find the
optimum setting. This method too is simple, but (a) the one-at-a-time approach is
inefficient when the number of variables is large, and (b) it can miss detection
of critical interactions among design variables [12].
In sharp contrast to the trial-and-error approach, statistical design of experiments
is a systematic method for setting up experimental investigations. Several
factors can be varied in these experiments at one time. This procedure yields
the maximum amount of information about the effect of several variables and
their interactions while using the minimum number of experiments. In a
statistically designed experiment, one varies the levels of the independent input
variables from trial to trial in a systematic fashion. A matrix of level settings
defines these settings such that maximum information can be generated from
a minimum number of trials. Moreover, some special statistical experiments
require only simple arithmetical calculations to yield sufficiently precise and
reliable information.
Classical statistical experiments, called full factorial designs, require trials
under all combinations of factors. Taguchi has shown that if one runs orthogonally
designed experiments instead, many product and process designs can be optimized
economically and effectively, and with surprising efficiency.
Taguchi's robust design experiments for the most part use orthogonal arrays
(OAs) rather than full factorial designs. Orthogonally designed parametric
optimization experiments act as an efficient distillation mechanism that identifies
and separates the effect each significant design or environmental factor has on
performance. This in turn leads to products that (a) deliver on-target performance
and (b) show minimum sensitivity to noise or uncontrolled environmental
factors.
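A small sketch shows what orthogonality means in practice. The L4(2³) array below accommodates three two-level factors in four runs (a full factorial would need 2³ = 8), and every pair of its columns still contains each combination of levels exactly once, the balance that lets each factor's main effect be estimated independently.

```python
from itertools import product

# The standard L4(2^3) orthogonal array: four runs, three two-level factors
L4 = [(1, 1, 1),
      (1, 2, 2),
      (2, 1, 2),
      (2, 2, 1)]

# Balance check: every pair of columns shows each of the four level
# combinations (1,1), (1,2), (2,1), (2,2) exactly once.
for i, j in ((0, 1), (0, 2), (1, 2)):
    combos = sorted((row[i], row[j]) for row in L4)
    assert combos == sorted(product((1, 2), repeat=2))

print("L4 is balanced in every pair of columns: 4 runs do the work of 8.")
```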

1.5 WHAT CAUSES PERFORMANCE TO VARY?


Variation of a product's quality performance arises due to (a) environmental
factors; (b) unit-to-unit variation in material, workmanship, manufacturing
methods, etc.; and (c) aging or deterioration (see Table 1.2). The Taguchi
approach focusses on minimizing variations in performance by separating the
'vital few' conditions of manufacture from the 'trivial many', economically and
efficiently, such that when one finally manufactures the product, it is highly
probable that it is, and remains, on target. Robust design aims specifically at
determining product features such that performance becomes insensitive to the
environmental and other factors that the customer would perhaps not be able,
or wish, to control.

TABLE 1.2
FACTORS AFFECTING PRODUCT AND PROCESS PERFORMANCE

                       Product Performance               Process Performance

Outer Noise            Consumer's usage conditions       Ambient temperature
                       Low temperature                   Humidity
                       High temperature                  Dust
                       Solar radiation                   Incoming material
                       Shock                             Operator performance
                       Vibration                         Voltage and frequency
                       Humidity                          Batch-to-batch variation
                       Dust

Inner Noise            Deterioration of parts            Machinery aging
                       Deterioration of material         Tool wear
                       Oxidation                         Shift in control

Between Products       Occurrence of piece-to-piece      Occurrence of process-to-process
                       variation when the pieces are     variation when the processes are
                       supposed to be the same           supposed to be the same

Controllable Factors   All design parameters such as     All process design parameters
                       dimension, material, configu-     All process setting parameters
                       ration, packaging, etc.

Most real-life manufacturing processes lead to unit-to-unit variation in
production and to what is produced not being always on target. Such variations
may be caused by raw material differences, operators' errors and inconsistencies,
and factors such as vibration, temperature changes, humidity, etc. When one
produces items in a batch, batch-to-batch process setting differences also introduce
variation in product performance. In addition, manufacturing processes have a
tendency to drift, causing off-target production as time passes.
The first step toward robust process design is the tentative identification of
all the above mentioned factors. Such a step, to be effective, requires contributions
from technology experts, workers, designers, marketers, and even customers. One
then includes the factors found in the statistical experiments so that their effects
(individual, or interactive) may be estimated and countermeasured, if necessary.
The challenges in product design are similar. The opening/closing of a
refrigerator door, the amount of food kept in it, the initial temperature of the
food, variation in ambient temperature, and power supply voltage fluctuations are
environmental factors that can affect a refrigerator's performance. For a solar
cooker, all but the last aspect might be important. One requires engineering and
operational experience with the product and sound scientific judgment to ensure
that all relevant factors are included in robust product design studies. Only then
may experiments to optimize the design be planned.

An efficient tool for locating and identifying the potential factors that
may affect product or process performance is the Ishikawa Cause-Effect diagram
(Fig. 1.4).

Fig. 1.4 Cause-and-effect diagram for potential causes leading to cracks during contact lens grinding. (The Ishikawa diagram organizes the candidate causes under Operator, Machine, Material, and Method branches.)

Engineers sometimes use screening experiments to review a large number
of potentially important factors and to separate out the key ones. In such experiments
the objective is to identify the input factors having the largest impact on the process,
the 'vital few' among the 'trivial many'. Taguchi experiments can then be used to
optimize and confirm the settings of these vital factors.

1.6 PREVENTION BY QUALITY DESIGN


Next to quality, manufacturing cost is a primary attribute of a product. However,
it may appear impossible or at best difficult to reduce manufacturing cost while
one is seeking on-target performance plus low variability. Somewhat surprisingly,
Taguchi methods deliberately and consciously seek designs that use inexpensive
components and parts and yet deliver on-target performance. The premise of this
approach is that 80% of a lifetime cost (Table 1.1) of a product is fixed in its
design stage. If the design calls for steel instead of plastic, then manufacturing can
only aim at the remaining 20% (mostly labour) by seeking productivity during
production. It is very important, therefore, that besides those affecting performance,
the design engineer identifies aspects that have a significant bearing on the cost
and manufacturability of the product, and then through statistical experiments
>ets these also at optimal levels.
One is often unaware of the dependency of the output (performance) on
the input (design and environmental factors), even if the technology is familiar
and the manufacturing plant has made the product many times over. For instance,
and the manufacturing plant has made the product many times over. For instance,
it is possible for forgings to show cracks even after a plant has made thousands of
them. In practice, one does not generally know the effect of all the control factors
that can be manipulated by the product/process designer. Also, one is often unaware
of the noise factors that are uncontrollable but present during production or in the
environment in which the product is used. However, achieving robust quality design
requires that one finds out these effects systematically and countermeasures them.
The Japanese discovered in 1953 that the most effective place to solve quality
problems is during product and process design. In that year, the Ina Tile Company
used statistical experiments to successfully reduce finished tile size variability by
a factor of 10 [4, 5]. Many investigations have now confirmed that it is too late
to start thinking about quality control when the product is coming out of the
reactors or exiting the production line. The remedy here, as proposed by Taguchi,
is a three-step approach to correctly designing the product. These steps must
precede production to maximize the product's chances of delivering on-target
performance with minimum variability.

1.7 STEPS IN DESIGNING PERFORMANCE INTO A PRODUCT


Designing with the objective of building quality into a product involves three
steps [4]:
1. System (or concept or functional) design. This is the first step in
design and it uses technical knowledge to reach the initial design of the product that
delivers the basic, desired functional performance. Several different types of
circuits or chemical reactions or mechanisms may be investigated, for instance, to
arrive at a functional audio amplifier, a synthetic lubricant or a braking device.
The technology of a special field often plays a major role in this step to reach the
functional design, the initial, acceptable settings of the design parameters.
2. Parameter design. In this step, one finds the optimum settings of the
design parameters. To achieve this, one fabricates or develops a physical or
mathematical prototype of the product based on the functional design (from step 1)
and subjects this prototype to efficient statistical experiments. This gives the
parameter values at which performance is optimum. Two types of experiments are
conducted here: The first aims at identifying process parameter values or settings
such that the product made by the process performs on target. The second aims at
determining the effects of the uncontrolled, environmental, and other factors, to
find design parameter settings such that performance suffers minimal deviation
from target (i.e., it is robust) when one actually uses the product in the field.
Parameter design identifies the optimum nominal values of the design parameters.
3. Tolerance design. Here, one determines the tolerances on the product
design parameters, considering the loss that would be caused to society should the
performance of the product deviate from the target.
In the functional design, one develops a prototype design (physical or
mathematical) by applying scientific and engineering knowledge. From this effort
one produces a basic design that broadly meets the customer's requirements.
Functional design is a highly creative step in which the designer's experience and
creativity play a key role. Good judgment used in functional design can reduce
both the sensitivity of the product to environmental noise and its manufacturing
cost.
In parameter design, one conducts extensive empirical investigation to
systematically identify the best settings of (a) process parameters that would
yield a product that meets (the customer's) performance requirement and (b) the
design parameters of the product such that the product's performance will be
robust (stay near the target performance) while the product is in actual field use.
Parameter design uses orthogonal arrays and statistical experiments to determine
parameter settings that deliver on-target performance as well as minimum variability
for the product's quality characteristics.
In tolerance design, one determines manufacturing tolerances that minimize
the product's lifetime and manufacturing costs. The special device used here for
expressing costs and losses is the Taguchi Loss Function, mentioned earlier. The
objective in tolerance design is to achieve a judicious trade-off between (a) the
quality loss attributable to performance variation and (b) any increase in the
products manufacturing cost.
The loss function philosophy acknowledges that society (consumers, manufacturers,
and those affected indirectly by the product) incurs a loss with the product
whenever the product's performance deviates from its expected target performance.
Thus, it is not enough for a product to meet specifications. Its performance must
be as close to the target as possible. The loss function-based approach to robust
design (through measures known as signal-to-noise ratios) also reduces problems in
the field and is thus a preventive quality assurance step. As will be explained later,
a third major advantage of aiming at on-target production (rather than only meeting
specifications) is the reduction of catastrophic stack-up of deviations [1].
Loss functions help in bringing the customer requirement orientation into a
plant. They also eliminate inequitable assignment of manufacturing tolerances
between departments making parts that should fit and function together. Each
department then views the department following it as its customer and sets its own
manufacturing tolerances using the loss function. Chapter 10 discusses these
techniques. In this manner the manufacturing organization makes tolerance
adjustments in whichever departments they are most economical to make, resulting
in the reduction of the total manufacturing cost per unit [8].

1.8 FUNCTIONAL DESIGN: THE TRADITIONAL FOCUS


Functional design ideally creates a prototype process or product that delivers
functional performance. Sometimes a product has to meet more than one Functional
Requirement (FR) [13]. This requires research into concepts, technologies, and
specialized fields. Many innovations occur at this stage and the core of this effort
is concept design.
A functional design sometimes produces a mathematical formula; by using it,
performance can be expressed as an explicit function of the (values of the)
design parameters. For instance, developing a mathematical representation of the
functional design of passive filter type devices is a common activity in electrical
engineering. By using the Kirchhoff current law, the transfer function (V0/Vs) for
the circuit shown in Fig. 9.1 may be obtained as

    V0/Vs = Rg R3 / [(R2 + Rg)(Rs + R3) + R3 Rs + (R2 + Rg) R3 Rs C s']

where s' is the Laplace variable. From this transfer function, the filter cutoff
frequency ωc and the galvanometer full-scale deflection D may be found respectively
as

    ωc = [(R2 + Rg)(Rs + R3) + R3 Rs] / [2π (R2 + Rg) R3 Rs C]

    D = Vs R3 / {Gsen [(R2 + Rg)(Rs + R3) + Rs R3]}

The design parameters (DPs) that the designer is free to specify are R2, R3, and C.
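A formula such as the one for ωc can serve directly as a computable prototype. The sketch below uses the cutoff-frequency expression as reconstructed above, and the component values are hypothetical placeholders rather than those of the Chapter 9 case study.

```python
from math import pi

def cutoff_frequency(R2, R3, C, Rs, Rg):
    """Filter cutoff frequency (Hz) from the expression above; resistances in
    ohms, capacitance C in farads."""
    return ((R2 + Rg) * (Rs + R3) + R3 * Rs) / (2 * pi * (R2 + Rg) * R3 * Rs * C)

# Hypothetical component values, purely for illustration
print(round(cutoff_frequency(R2=500.0, R3=80.0, C=300e-6, Rs=30.0, Rg=60.0), 1))  # ~25.3 Hz
```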
Another design example, from chemical engineering, illustrates a similar
functional process model, again a mathematical relationship between the design
parameters and performance. Many chemical processes apply mechanical
agitation to promote contacting of gases with liquids to encourage reaction. Based
on reaction engineering principles, the relationship between the utilization of
the reacting gas and the two key controllable process variables may be given by
    Utilization (%) = K (mixing HP/1000 gal)^A (superficial velocity)^B
As will be illustrated through a case study in Chapter 9, such mathematical
models can be as useful as physical prototypes in achieving a robust design.
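As a brief illustration of how such an empirical model is exercised, the sketch below evaluates the power law at one candidate operating point; the constants K, A, and B are hypothetical and would in practice be fitted to reaction data.

```python
K, A, B = 35.0, 0.30, 0.25   # hypothetical model constants (fitted from data in practice)

def utilization(mixing_hp_per_1000_gal, superficial_velocity):
    """Gas utilization (%) predicted by the power-law process model above."""
    return K * mixing_hp_per_1000_gal ** A * superficial_velocity ** B

# One candidate setting of the two controllable process variables
print(round(utilization(mixing_hp_per_1000_gal=2.0, superficial_velocity=1.5), 1))
```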
Traditionally, product and process design receive maximum attention
during functional design. Most engineering disciplines expound the translation of
scientific concepts to their applications so that the designer is able to develop the
functional design. Refinements to this initial design by trial and error may be
attempted on the shop floor combined possibly with limited field testing of the
prototype. True optimization of the design, however, is rarely thus achieved or
attempted [12].
The Taguchi philosophy sharply contrasts with this traditional approach to
design. Taguchi has contended that, besides building function into a product, its
design should engineer quality also. In his own words: "quality is a virtue of design."

1.9 PARAMETRIC DESIGN: THE ENGINEERING OF QUALITY


A quality product, during its period of use, should have no functional variation. The
losses caused by it to society by repairs, returns, fixes, adjustments, etc., and by its
harmful side effects are designed to be small. During its design, one takes
countermeasures to assure this objective. The use of Taguchi methods makes it
possible that measures may be taken at the product design stage itself to achieve
(a) a manufacturing process that delivers products on target and (b) a product that
has robust performance and continues to perform near its target performance.
As already stated, the performance of a robust product is minimally affected
by environmental conditions in the field, or by the extent of use (aging) or due to
item-to-item variation during manufacturing.
Besides, robust product design aims at the selection of parts, materials,
components, and nominal operating conditions so that the product will be
producible at minimum cost.
The three steps involved in robust design are:
1. Planning the statistical experiments is the first step and includes
identification of the product's main function(s), what side effects the product may
have, and factor(s) constituting failure. This planning step spells out the quality
characteristic Y to be observed, the control factors {θ1, θ2, θ3}, the observable
noise factors {w1, w2, w3}, and the levels at which they will be set during the
various test runs (experiments). It also states which orthogonal design will be
employed (see Fig. 1.5) to conduct the statistical experiments and how the observed
data {y1, y2, y3, ...} will be analyzed.

Fig. 1.5 A parameter design experiment plan. (An inner orthogonal array, the design matrix, sets the control factors θ1, θ2, θ3, each at three levels, over nine runs; an outer orthogonal array, the noise matrix, sets the observable noise factors w1, w2, w3, each at two levels. Every inner-array run is repeated over all outer-array runs, giving the observed performance values y1, y2, ..., y36, from which a computed performance statistic Z(θ)1, ..., Z(θ)9 is obtained for each candidate design.)



2. Actual conducting of the experiments.


3. Analysis of the experimental observations to determine the optimum
settings for the control factors, to predict the product's performance at these
settings, and to conduct the validation experiments for confirming the optimized
design and making plans for future actions. Taguchi has recommended that one
should analyze this using a specially transformed form of the performance
characteristic Y, known as the signal-to-noise ratio {Z(θ)j} (see Section 4.2), rather
than using the observed responses {yi} directly.
One conducts all the required experiments, guided by the principles
of design of experiments (see Chapter 3). This assures that the conclusions reached
are valid, reliable, and reproducible.
Briefly stated, the novel idea behind parametric design is to minimize the
effect of the natural and uncontrolled variation in the noise factors, by choosing
the settings of the control factors judiciously to exploit the interactions between
control and noise factors, rather than by reaching for high precision and expensive
parts, components and materials, and plant control schemes. Such a possibility was
perceived by Taguchi before anyone else.
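A compact sketch of the crossed-array plan of Fig. 1.5 follows, under stated assumptions: the inner and outer arrays are deliberately small illustrative ones (an L4 for three two-level control factors crossed with a 2 x 2 noise plan), the response function is a made-up stand-in for real prototype measurements, and the statistic is the nominal-the-best signal-to-noise ratio developed in later chapters.

```python
from math import log10
from statistics import mean, variance

# Inner (design) array: control factors t1, t2, t3, two levels each
inner = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]
# Outer (noise) array: noise factors w1, w2, two levels each
outer = [(1, 1), (1, 2), (2, 1), (2, 2)]

def measure(theta, w):
    """Hypothetical stand-in for the observed performance y of a prototype built
    at control settings theta and exposed to noise conditions w."""
    t1, t2, t3 = theta
    w1, w2 = w
    return 50.0 + 4.0 * t1 - 2.0 * t2 + 1.5 * t3 * w1 + 0.8 * w2

for theta in inner:
    y = [measure(theta, w) for w in outer]          # repeat the design over every noise run
    z = 10.0 * log10(mean(y) ** 2 / variance(y))    # nominal-the-best S/N ratio Z(theta)
    print(theta, round(z, 1))
```

The control settings with the largest Z(θ) are the ones least upset by the noise factors; a real study replaces measure() with physical tests or a validated mathematical model.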

1.10 STATISTICAL EXPERIMENTS DISCOVER THE BEST DESIGN RELIABLY AND ECONOMICALLY

Many engineers hesitate to use statistical formulas or analysis in their work.
Sound decisions about quality, however, require that one obtains the
appropriate data (in the most efficient way) and analyzes it correctly. That
is precisely what statistics helps us to do. In particular, when several
factors influence product or process performance, statistically designed
experiments are able to separate reliably the vital few factors that have the
most effect on performance from the trivial many. This separation results in
mathematical models that make true product and process design optimization
possible.
Also, statistical experiments produce the supporting data for verifying
some hypothesis about a response (dependent) variable usually with the smallest
number of individual experiments. An example can best illustrate this point. If
four types of tyre materials (M1, M2, M3, and M4) are available, four vehicle types
(V1, V2, V3, and V4) are present, and four types of terrains (T1, T2, T3, and T4) exist
on which the vehicles will be used, then the total number of ways to combine
these factors to study them is 4³, or 64.
At first, 64 may appear to be the number of tests with the different vehicles,
terrains etc. that one must run. However, if prior knowledge suggests that tyre
wear is unaffected by which tyre material is used on which vehicle and on which
terrain (i.e., there are no interactions; see Section 3.1), and the objective is to identify
the main effect of materials and the effects of changing vehicle type and the
driving terrain, one will need to run only 16 (Latin-square designed) statistical
experiments (Fig. 1.6) to grade the materials based on wear. This is a substantial
saving of effort.
Terrain    V1    V2    V3    V4
T1         M1    M2    M3    M4
T2         M2    M3    M4    M1
T3         M3    M4    M1    M2
T4         M4    M1    M2    M3

(For example, the entry in row T2 and column V3 means that the corresponding experiment is run by driving vehicle V3, fitted with tyres made of material M4, on terrain T2.)

Fig. 1.6 The Latin square design. (Sixteen experiments can evaluate the effect of tyre material on wear when four different vehicle types and four different terrains are involved.)
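The sixteen-run plan of Fig. 1.6 is easy to generate and verify in code; the cyclic construction below reproduces the assignment shown in the figure.

```python
materials = ["M1", "M2", "M3", "M4"]
vehicles  = ["V1", "V2", "V3", "V4"]
terrains  = ["T1", "T2", "T3", "T4"]

# Cyclic 4 x 4 Latin square: the run on terrain i with vehicle j uses material (i + j) mod 4
plan = {(terrains[i], vehicles[j]): materials[(i + j) % 4]
        for i in range(4) for j in range(4)}

# Every material meets each vehicle once and each terrain once, so 16 runs
# (not 64) suffice to estimate the main effect of material on tyre wear.
for t in terrains:
    assert sorted(plan[(t, v)] for v in vehicles) == materials
for v in vehicles:
    assert sorted(plan[(t, v)] for t in terrains) == materials

print(plan[("T2", "V3")])   # -> "M4", the run called out in Fig. 1.6
```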

EXERCISES
1. Choose a product that you use at your desk. Show how you, as a customer,
expect this product to perform.
2. Which aspects of the performance of the product you have selected are
quantifiable? Which aspects are qualitative? Which aspects of performance relate
respectively to the primary function, operability, long term performance, and
maintainability of the product?
3. Identify the major attributes (the design parameters) of this product fixed by its
designer. For each of these attributes, identify the choices of the designer (e.g.,
different materials, finish, source of power, and weight). Also identify the noise
factors (the factors in the environment in which you will use the product) on
which you, the user of the product, have little control.
4. Reflecting on your experience with the actual performance of this product,
enumerate the experiments that you would have liked to conduct with a prototype
of this product, to help find the optimum choices for the key design parameters.
(Taguchi methods provide, reliably and economically, the techniques for testing
several such factors together, besides showing their individual effects.)
2   Handling Uncertainty

2.1 THE MYSTIQUE OF PROBABILITY


The probability of an outcome or event in a situation is the relative likelihood
that the event will occur. Statistics is the science and art of classifying and
organizing data in order to draw useful deductions or inferences.
The complex environment of modern life requires intelligent interpretation
of all types of statistical data and uncertainties and the use of probability
judgments. In the business world we apply such judgments in deciding what products
to make, how much to make, which stock to invest in, and even which career to
pursue. Most successful enterprises use statistical data to gauge markets, design
products, evaluate new ventures, monitor production, and plan resource deployment.
Probability studies help us understand and analyze uncertain situations and events.
Consideration of probability and statistics is critical, for example, in setting
insurance premiums. Generally speaking, one uses probability to describe the likeli
hood of anticipated outcomes. Probability expresses our belief of the odds of a
certain event occurring whenever the prognosis is not clear.
By examining the survival rates of 18-year old youths vs. 85-year old senior
citizens, one may be able to estimate the respective probabilities and life expectancies.
One may even test the hypothesis (a speculation) that an 18-year old youth is ten
times as likely to survive the next 10 years as an 85-year old retired person.
Here we shall examine statistical data collected from observed events and then use
this data to draw inferences about the future events.
In statistical terminology, one calls the complete set of all observations about
a chosen characteristic of interest (e.g., all business graduates in the state, or all
chokes made with ceramic cores) a population. Anything smaller than this
complete set is called a sample. A random sample is a sample obtained in such a
way that all possible elements making up the population are equally likely to be
selected into the sample.
Observations obtained from a sample provide almost all statistical data. Twenty-five
18-year olds randomly chosen from the different parts of a city would form
such a sample of the much larger full population of all 18-year olds living in that
city. A statistical population may not always be large; however, it must contain
every person, thing, or product of interest to the sampler. A sample, on the other
hand, is a representative subset of the population, easier to handle, count, and
observe. Nonetheless, the sample must represent the typical characteristics of the
larger set, the population of interest.
In statistical work, an arbitrary sample drawn from the population does not
usually suffice. One cannot validly judge the characteristics of 18-year old youths
by surveying only college freshmen. To represent the population, the sample must

be a random sample, in which each member of the population is equally likely


to be selected into the sample. To draw valid inferences about a population,
therefore, it is imperative that one draws the samples randomly from the larger
population, with minimum chance of any bias in the observations.
Descriptive statistics provide some simple methods for describing the
characteristics of a population or sample. In practice, however, one finds rather
few direct uses for these methods. Typically, descriptive statistical information
contains the familiar frequency graphs, averages, median, spread of data, etc.
The sample average indicates a fundamental characteristic of a population.
It is an estimate of the population average. However, the population or sample
average fails to reveal the underlying variety in that population how the
individual entities in the population differ from the average or from each other.
This variability is often of considerable interest.
One way to differentiate individuals is to examine every individual. Another
is to study the distribution of individual characteristics. Often it is enough to
measure or estimate the spread of the distribution, which may be done by finding
the range (the difference between the largest and the smallest data values). Yet
another method is to measure the difference of each data value from the average,
and then take the average of these differences. A better procedure still to measure
spread is to calculate the standard deviation or the variance of the data, since some
differences may be positive and some may be negative. The standard deviation is
the square root of the average of the squares of these differences from the average.
Variance is defined as the square of the standard deviation.
The following formula gives the sample average (or sample mean m, also
sometimes called xbar):

m = (1/n) Σ_{i=1}^{n} x_i          (2.1.1)

where n is the sample size and {x_i, i = 1, 2, ..., n} are the n individual measurements
obtained from the sample. If the population average is μ, and the size of the
population is N, then one defines the population standard deviation, σ, by

σ = √[ Σ_{i=1}^{N} (x_i - μ)² / N ]          (2.1.2)

which is an indicator of the extent of variation (member-to-member difference)
present among the individual members. If σ is small, most of the characteristics
would be close to μ. On the other hand, if σ is large, considerable variability
would be present.
The sample standard deviation s provides an estimate for σ. The expression
for s is

s = √[ Σ_{i=1}^{n} (x_i - m)² / (n - 1) ]          (2.1.3)

where m is again the sample average calculated from the n sample measurements
x_1, x_2, x_3, ..., x_n by Eq. (2.1.1).

The average and the standard deviation together constitute the most
common (though not the complete set of) parameters for describing the
statistical character of a sample or a population which possesses some features
that vary from one item to another.
Since real populations may often be too large to study by examining every
member of the population, we may never know the true value of μ or σ. Instead,
one bases many practical statistical methods on the study of samples, using xbar
and s, which estimate μ and σ respectively.

EXAMPLE 2.1: Estimation o f sample mean (xbar) and variance (s2). In order
to produce an estimate of the total number of words in a book, someone selected
randomly 25 lines from randomly opened pages in that book and the words
appearing in each of these lines were counted, as shown in Table 2.1. Determine
xbar and s2.
TABLE 2.1
xBAR AND s² FROM WORD COUNTS PRODUCED BY RANDOM SAMPLING

   Word Count      Deviation          Square of Deviation
      x_i          (x_i - xbar)       (x_i - xbar)²

        6             -3.08                9.4864
       16              6.92               47.8864
       18              8.92               79.5664
        5             -4.08               16.6464
       12              2.92                8.5264
        8             -1.08                1.1664
       11              1.92                3.6864
       10              0.92                0.8464
        9             -0.08                0.0064
       17              7.92               62.7264
       11              1.92                3.6864
        2             -7.08               50.1264
        8             -1.08                1.1664
       12              2.92                8.5264
        5             -4.08               16.6464
       13              3.92               15.3664
        9             -0.08                0.0064
        4             -5.08               25.8064
        4             -5.08               25.8064
        2             -7.08               50.1264
        5             -4.08               16.6464
        6             -3.08                9.4864
       16              6.92               47.8864
       14              4.92               24.2064
        4             -5.08               25.8064

Total 227              0.00              551.8400

The solution uses Eqs. (2.1.1) and (2.1.3). The calculated sample average, xbar, is
9.08. The sum of squares of deviations, when divided by (25 - 1), produces the
estimate of s² as 22.9933.
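
The arithmetic of Example 2.1 is easy to reproduce. The Python sketch below (not part of the original text) applies Eqs. (2.1.1) and (2.1.3) to the 25 word counts of Table 2.1.

    # Sample mean and sample variance for the word counts of Table 2.1.
    counts = [6, 16, 18, 5, 12, 8, 11, 10, 9, 17, 11, 2, 8,
              12, 5, 13, 9, 4, 4, 2, 5, 6, 16, 14, 4]

    n = len(counts)                               # n = 25
    xbar = sum(counts) / n                        # Eq. (2.1.1): xbar = 9.08
    ss = sum((x - xbar) ** 2 for x in counts)     # sum of squared deviations = 551.84
    s2 = ss / (n - 1)                             # square of Eq. (2.1.3): s^2 = 22.9933

    print(xbar, s2)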

2.2 THE IDEA OF A RANDOM VARIABLE


What will be the outcome when someone throws a pair of dice on the table? What
will be the temperature at mid-day tomorrow? What is the age of the individual
who next emerges from the lift? How many goals will the home team score in
the next game? All such variables will take numerical values by chance. Each of
them is a random variable. A random variable, formally, is a function, taking
unique numerical values determined by the outcome of an uncertain event. If a
random variable X can take a value x, then its probability distribution p[x] gives the
probability of X taking the value x. The probability distribution of the random
variable representing a fair coin toss is 50-50, that is, there is a 50% chance that
the toss will produce the head, and a 50% chance that it will show the tail.
Among the most important and useful probability distributions in statistics
is the normal distribution. A random variable X that can take values ranging
from -∞ to +∞ and has a probability distribution given by

f[x] = (1/(σ√(2π))) exp{-[(x - μ)/σ]²/2}          (2.2.1)

is said to be normally distributed. The normal distribution, when plotted,


displays the well-known bell shape (Fig. 2.1). The normal distribution is some
what idealized, but it forms a good model of many real life processes and events.
Equation (2.2.1) may be used to calculate the probabilities of a normally
distributed random variable. Tables of these probabilities are readily available in
many statistical textbooks. The normal distribution has been extensively studied. Besides, it possesses
some very useful properties which are applicable in the modelling of uncertainty
in economics, engineering, psychology, agriculture, medicine, and business
management.
If a random variable X is distributed normally, then any linear function of X,
such as a + bX (where a and b are arbitrary constants), is also normally distributed.
In particular, if X is distributed normally with mean μ and standard deviation σ
(in shorthand one writes X ~ N[μ, σ]), then the random variable Z,
where Z = (X - μ)/σ, is distributed normally with mean of zero (0) and standard
deviation 1. Z is called the standard normal variate or the random variable with
the standard normal distribution, briefly written as Z ~ N[0, 1].
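
To illustrate standardization, the sketch below (Python; the mean of 45 and standard deviation of 4 are illustrative values only, roughly as in Fig. 2.1(b)) converts an observed value to the standard normal variate Z and evaluates Prob[X <= x] using the error function.

    import math

    # Standardizing a normally distributed X and computing Prob[X <= x].
    mu, sigma = 45.0, 4.0          # illustrative mean and standard deviation

    def normal_cdf(x, mu, sigma):
        z = (x - mu) / sigma       # standard normal variate Z ~ N[0, 1]
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    print(normal_cdf(49.0, mu, sigma))   # about 0.84: one sigma above the mean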

2.2.1 What is Sampling?


The concepts of probability and statistics can be better understood by distinguishing
more clearly between a population and a sample. A population consists of all the
data and individual characteristics that may be observed. A sample is the data
that one actually observes. One applies probability analysis to deduce the data
(samples or sample observations) which are likely to be obtained from a defined
population. Statistics or statistical analysis provides useful methods in deducing
the characteristics of the population based on the sample (data) that one has
collected, by observing a few (i.e., not all) items or individuals belonging to the
population.


Fig. 2.1 The normal probability distribution: (a) Probability density function
of the N[μ, σ] distribution; (b) three normal probability distributions
with mean equal to 45 and standard deviations of 2, 4, and 6.5,
respectively; and (c) three normal probability distributions with equal
standard deviations and means equal to 10, 23, and 36, respectively.

Sampling consists of the procedures for collecting observations from the


population in order to make valid inferences about the population under study. In
a random sample, each member of the population is equally likely to be selected
for observation. The chance of a given member being selected does not depend
on any previous selection or observation made on the population. Note that a
random variable is any mathematical variable whose value is uncertain. A
random sample, on the other hand, is a series of observations or outcomes of
experiments obtained under the strict assumptions of constant probabilities (all
members in the population have the same probability of being selected into the

sample) and independence (each observation in the sample is independent of


what one observes in any other observation).

2.2.2 Parameters, Estimators and Statistics


Sampling is an economical way of estimating the characteristics of the underlying
and, usually, considerably large population. By inspecting samples of products
coming off a production line, the proportion of defective products in the total
production may be estimated. However, before one uses sampling in estimation,
one must answer some important questions such as: How truly would the estimates
produced by sampling reflect the character of the underlying population? And,
therefore, how many items should one sample?
A sample statistic (a suitable summary of the data collected in observations
taken from the population) can often act as the estimator for a population
parameter, such as the population mean μ. We call the specific value of the
estimator, which we calculate from sample or observed data, an estimate of μ.
Because there is clearly an uncertainty about which items will be randomly
picked in sampling, the sample mean m, a statistic defined by Eq. (2.1.1), is a
random variable.
A highly regarded and useful theorem in statistics is the central limit
theorem. According to this theorem, no matter what the underlying distribution
is, the sample mean (m) will tend to have a normal distribution, or

m ~ N[μ, σ/√n]

where μ is the population mean, σ the population standard deviation, and n the
size of the sample. m is an unbiased estimator of μ, and the sample variance s²
is an unbiased estimator of the population variance σ². One
calls an estimator unbiased if its expected value is equal to the parameter one is
estimating. The expected value of a random variable is the long term average value
of the variable that would be approached if one observed the value {x_i} of the
variable X a large number of times.
Admittedly, in working with an estimate such as m (or s), one must have
some sense of correctness: How far is the real parameter μ (or σ) from its estimated
value? Statistics provides the answer here as a confidence interval and a related
probability statement.
One does not know the true value of μ. But, since m ~ N[μ, σ/√n], one
knows from the property of the normal distribution that there is a 0.95 probability
that μ lies in the interval

m - 1.96(σ/√n) < μ < m + 1.96(σ/√n)

This may also be written as

Prob [m - 1.96(σ/√n) < μ < m + 1.96(σ/√n)] = 0.95          (2.2.2)


Equation (2.2.2) is the general equation for a 95% confidence interval for μ,
implying that we are 95% confident that, when we know m and σ, the unknown
population mean parameter (μ) lies in this interval. Similar confidence intervals
may be given for estimates of other population parameters also. The spread
(1.96(σ/√n)) of the estimate produced is called the margin of sampling error; here
it is within about 2 standard deviations on either side of the sample mean m.
Generally, the larger the sample size n, the narrower the margin of error.
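
Equation (2.2.2) is easy to apply once m, σ, and n are known. The Python sketch below (not part of the original text) computes the 95% interval for illustrative values of these three quantities.

    import math

    # 95% confidence interval for mu when sigma is known, Eq. (2.2.2).
    m, sigma, n = 9.08, 4.8, 25    # illustrative values

    margin = 1.96 * sigma / math.sqrt(n)   # margin of sampling error
    print(m - margin, m + margin)          # interval containing mu with 0.95 probability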

2.2.3 The t-Statistic


Note that in the confidence interval statement given above for μ, we used the
population parameter σ. If σ is not known and has to be estimated, an additional
uncertainty is introduced. Also, the confidence interval for μ given by Eq. (2.2.2)
must be widened when one substitutes σ by its estimate s, in order to retain the
95% confidence. W.S. Gosset in 1908 found that the reliability of the estimate
m can be improved if, instead of stating the confidence interval using the standard
normal variate Z (= (m - μ)/(σ/√n)), one uses the t-statistic, which uses not σ
but s (the estimate of σ), given by

t = (m - μ)/(s/√n)          (2.2.3)

Notice that t is a statistic calculated from sampled data; the evaluation of t


requires first determining m and s from the observations. The distribution of t is
similar in shape to N[0, 1] except that it is slightly wider (Fig. 2.2), and it
depends upon sample size n.

Fig. 2.2 A t-distribution with 10 degrees of freedom and the
standard normal probability distribution.

The distribution of t depends not only on n, but also on how many
population parameters one has estimated from the sample data in order to
calculate t. Since m (an estimate of the population parameter μ) must be estimated
from the sample data in order to determine the statistic t from Eq. (2.2.3), t is
dependent on a fundamental parameter ν, known as the degrees of freedom (dof).
The dof of a statistic shows the number of independent or freely obtained
observations employed in calculating that statistic. One defines dof as

dof = No. of total observations used in calculating the statistic
      - No. of parameters estimated from the observed data that are
        also used in calculating the statistic

Since m must be estimated from the observations before one finds the statistic t,
the t-statistic has (n - 1) degrees of freedom.
As the sample size n → ∞, the distribution of t approaches N[0, 1]. Most
textbooks on statistics provide probability tables for the t-distribution for various
degrees of freedom.

EXAMPLE 2.2: Using the data of Example 2.1, establish a 95% confidence
interval for μ, the average count of words per line.
Solution: Since one does not know the standard deviation σ, it must be estimated
by s. Since s² = 22.9933, one obtains s = 4.795. The 95% confidence interval for
μ, using t(n-1, 0.025), is given by

Prob [m - t(n-1, 0.025)(s/√n) < μ < m + t(n-1, 0.025)(s/√n)] = 0.95

Since n = 25, m (= xbar from Example 2.1) = 9.08, and t(24, 0.025) (from Appendix A)
= 2.064, one obtains

7.1006 < μ < 11.0594

as the 95% confidence interval for μ, the average count of words per line.
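
The calculation in Example 2.2 can be checked with the Python sketch below (not part of the original text); the critical value 2.064 for t with 24 degrees of freedom at the 0.025 tail is taken from Appendix A rather than computed.

    import math

    # 95% confidence interval for mu using the t-statistic (Example 2.2).
    n, xbar, s2 = 25, 9.08, 22.9933
    s = math.sqrt(s2)                          # s = 4.795
    t_crit = 2.064                             # t(24, 0.025), read from a t-table

    margin = t_crit * s / math.sqrt(n)
    print(xbar - margin, xbar + margin)        # approximately (7.10, 11.06) words per line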

2.3 SOME USEFUL FORMULAS


A random variable is some function of the outcome of an experiment. For instance,
when we toss two dice, the sum of the two dice is a random variable. Because the
value of a random variable is dependent on an experimental outcome influenced
by some factors that we control and some that we do not, we often assign
probabilities to the possible values of the random variable. A random variable may
take continuous values, or discrete values.
The cumulative distribution function or simply the distribution function
F(b) of a random variable X denotes the probability that the random variable X,
when observed, takes on a value less than or equal to b, i.e.

F(b) = Prob [X ≤ b]          (2.3.1)


F(b) is nondecreasing: if a < b, then F(a) ≤ F(b). Also,

lim_{b→∞} F(b) = 1,     lim_{b→-∞} F(b) = 0          (2.3.2)

One finds the expected value E[X] (also known as the average) of a random
variable X, if X is discrete, having a probability density function p(x), given
by

E[X] = Σ_{x: p(x) > 0} x p(x)

If X is continuous, then

E[X] = ∫ x p(x) dx

If a and b are constants, then


E [aX + b] = aE[X] + b (2.3.3)
If E[X_i] is finite for all i, then

E[Σ_i X_i] = Σ_i E[X_i]          (2.3.4)

This provides us a way of finding the expected value of the sum of several
random variables.
Two random variables X and Y are said to be independent if the knowledge
of the value of one does not change the distribution of the other. Many random
variables in real life are independent of each other while many others are not.
Mathematically, X_1, X_2, ..., X_n are independent if, for all sets of real values
A_1, A_2, ..., A_n,

P[X_1 ∈ A_1, X_2 ∈ A_2, X_3 ∈ A_3, ..., X_n ∈ A_n] = Π_{i=1}^{n} P[X_i ∈ A_i]          (2.3.5)

The variance of a random variable X shows the extent of dispersion in the values
of X, measured about the average E[X]. The variance is

Var [X] = E[(X - E[X])²]          (2.3.6)

Notice that variance is the expected value of the square of the deviation of X from
its average E[X]. For normally distributed random variables, the square root of
variance, known as the standard deviation, forms an important parameter
describing the variability in the distribution of these random variables.
If a and b are constants, then

Var [aX + b] = a² Var [X]          (2.3.7)

If there exist two random variables X and Y, then one expresses how they influence
each other's values by their covariance, given by

Cov [XY] = E[(X - E(X))(Y - E(Y))]          (2.3.8)

If the random variables are independent of each other, then

Cov [XY] = 0

Note, however, that Cov [XY] = 0 does not imply that X and Y are independent.
In general,

Var [X + Y] = Var [X] + Var [Y] + 2 Cov [XY]



If X_1, X_2, X_3, ..., X_n are pairwise independent, then

Var [Σ_{i=1}^{n} X_i] = Σ_{i=1}^{n} Var [X_i]          (2.3.9)

This formula helps us in finding the variance of the sum of independent random
variables.
Equation (2.3.9) leads to another useful result. Suppose X_1, X_2, X_3, ..., X_n
are independent and identically distributed random variables with identical
variance σ². Then the variance of the sample mean Xbar (= Σ X_i / n) may be found
as follows:

Var [Xbar] = Var [(1/n) Σ_{i=1}^{n} X_i]

Using Eq. (2.3.7), we get

Var [Xbar] = (1/n²) Var [Σ_{i=1}^{n} X_i]

Using Eq. (2.3.9), we obtain

Var [Xbar] = (1/n²) Σ_{i=1}^{n} Var [X_i] = (1/n²) Σ_{i=1}^{n} σ² = σ²/n          (2.3.10)

Some other useful formulas are as follows:
If one takes two samples of sizes n1 and n2 respectively from two normally
distributed populations and estimates their respective averages and variances as
xbar1, xbar2 and s1², s2², then their weighted or pooled variance is expressed as

s²pooled = [Σ_{i=1}^{n1} (x_i - xbar1)² + Σ_{i=1}^{n2} (x_i - xbar2)²] / (n1 + n2 - 2)          (2.3.11)

The pooled variance is useful, for instance, in obtaining the confidence interval
for the difference between μ1 and μ2, the two respective population means.
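
As a small illustration of Eq. (2.3.11), the Python sketch below (not part of the original text, and using made-up data) pools the variances of two samples drawn from populations assumed to share a common variance.

    # Pooled variance of two samples, Eq. (2.3.11); the data are made up.
    def pooled_variance(sample1, sample2):
        m1 = sum(sample1) / len(sample1)
        m2 = sum(sample2) / len(sample2)
        ss1 = sum((x - m1) ** 2 for x in sample1)   # sum of squares about xbar1
        ss2 = sum((x - m2) ** 2 for x in sample2)   # sum of squares about xbar2
        return (ss1 + ss2) / (len(sample1) + len(sample2) - 2)

    a = [12, 15, 11, 14]
    b = [10, 13, 12, 9, 11]
    print(pooled_variance(a, b))       # about 2.86, on (4 + 5 - 2) = 7 degrees of freedom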

2.4 HYPOTHESIS TESTING: A SCIENTIFIC METHOD TO VALIDATE OR REFUTE SPECULATIONS
"Vitamin C prevents colds" and "aspirin reduces the risk of heart attacks" are
assertions originally stated as speculations or hypotheses that the experts tested
by statistical inference (reaching some conclusions using statistical reasoning).
Statistical inferencing uses data to confirm or refute a hypothesis. Such reasoning
may use product or process performance data to test whether Design X results in
performance which is superior to Design Y. All such tests start as the expression
of a belief or opinion, or theory, known as the hypothesis.
Sometimes it may be impossible to determine if the theory is true or false.

However, if a sufficient amount of real data can be obtained and statistically


analyzed, then a statement of the following type may be postulated: Based on the
available data, we are 95% certain that this theory is true. Rather than saying
that a theory is 95% true, generally the investigator calculates the following
conditional probability in such tests:
Prob [observed data | theory is true]
This statement says that, i f the theory is true, then the probability of the data
being as observed is so much. The test procedure used here (known as hypothesis
testing) is an attempted proof by deducing that if the theory (e.g., aspirin reduces
heart attacks) were true, then a sample of data is likely to look like this and
unlikely to look like that. Thus, a theory will be rejected if it can be shown
statistically that the data we observe would be very unlikely to occur if the theory
were in fact true. Again, one cannot be 100% sure here about the conclusion drawn
because one has based the deductions on only the sampled observations and it is
possible that what one observed occurred by chance (the influence of other factors)
even if the theory were true.
The general framework for testing a hypothesis (the speculation or theory)
that needs to be either verified or refuted is as follows: At the outset one states the
two competing theories. One labels the original speculation (the first theory) the
null hypothesis or H0, while one labels the second the alternative hypothesis, H1.
An example of a pair of null and alternative hypotheses would be
H0: Fraction defective in production is 75%
H1: Fraction defective is 85%.
Given the two hypotheses expressed as above, one proceeds to collect randomly
a sufficient amount of relevant data. The randomness here is rather significant.
If one is testing a hypothesis about average product quality, then one would
collect data in such a manner as to provide a representative sample of actual
production without probable bias in the process. (For instance, if the overall
product quality is important, one should not collect data only on the morning
shift.) Next, this data would be analyzed statistically to help decide whether to
reject the null hypothesis.

2.4.1 Type I and Type II Errors


Owing to the limits imposed by practical feasibility and cost, an unlimited amount
of data cannot be examined in hypothesis testing. Not all items produced by a
factory can be tested or appraised. One bases the test instead on sampled data.
Thus, there is some chance that the sample will not truly represent the total
production; sampling may even lead to a false conclusion.
Two types of errors are possible in testing a hypothesis using sampled data.
A Type I error attributable to sampling rejects a null hypothesis when it is in fact
true. A Type II error, on the other hand, does not reject a null hypothesis that is
false. Both are undesirable. Hence, in statistical hypothesis testing we constantly
attempt to devise tests that would lead to low probabilities of Type I and Type II
errors.

In statistical tests the probabilities of committing these two errors, α and β,
are labelled as

α = Prob [Type I error in testing]
  = Prob [reject H0 | H0 is true]

β = Prob [Type II error in testing]
  = Prob [do not reject H0 | H0 is false]

respectively. The probability of Type I error, α, is called the significance level of
a statistical test. It is common to limit this probability to 5%, which is equivalent
to being wrong one in twenty applications of the test by drawing samples.

2.4.2 Meaning of Statistically Significant Observations


In hypothesis testing, if the observed data lead to the rejection of the null
hypothesis, then the observations are known as statistically significant. The term
statistically significant implies that the observed difference between the sample
statistic (the data summary calculated from the observations) and its expected
value if the theory were true is large enough to cause the investigator to reject
the null hypothesis. In plain words, the observed difference is too large to be
plausibly attributed to chance alone, and therefore it is statistically significant in
the sense that the improbability of what we observe persuades us to reject H 0.
To summarize, a hypothesis test seeks the answer to the question: If the null
hypothesis is true, what is the probability (based on the assumption that H0 is correct)
that a random sample will yield a statistic whose value is far from its expected value?
If on calculation we find this probability to be very small, but we actually observe
such a 'far away' statistic, then we reject the null hypothesis. The difference between
the observed and the expected values of the statistic here is statistically significant.
If, on the other hand, the projected probability based on H0 is not small, we
do not reject H0, because the observed difference between the statistic and its
expected value may be due to the appreciable chance that exists.

2.4.3 Three Common Statistics Used in Hypothesis Testing


As already mentioned, hypothesis testing uses observed data to accept or refute
a speculation, theory, or hypothesis. Instead of directly using all the original
data, however, it is common practice to summarize the data in some manner that
suffices for the test procedure. One calls such a summary calculated from raw
observed data a statistic.
The Z-statistic introduced in Section 2.2 is a common statistic used in
hypothesis testing, defined as

Z-statistic = [observed sample mean - (expected mean under H0)] / (standard deviation of the sample mean)
            = (Xbar - μ) / (σ/√n)

where μ and σ are the known population mean and standard deviation respectively,
n the size of the sample of observations taken, and Xbar the sample average.

The Z-statistic may be used in testing whether the population mean of data
values {x1, x2, x3, ...} is equal to some speculated quantity μ, if the characteristic
X of the population is distributed normally. Also, as mentioned in Section 2.2,
even if one does not know σ, one may still perform a test for the population
mean, using the t-statistic instead of Z; t is then defined as

t = (Xbar - μ) / (s/√n)

where s, the calculated sample standard deviation, replaces σ, Xbar being the
sample mean given by Eq. (2.1.1).
In Section 2.8 we shall introduce another important statistic known as
the F-statistic that is highly useful in testing the equality of variance using
observed data and also in reaching robust designs.

2.5 COMPARING TWO POPULATION MEANS USING OBSERVED DATA
Two groups of individuals, brokers and investors, predicted the New York
Stock Exchange's Dow Jones Industrial Average as of June 30, 1991. Their
predictions were as follows:

                Number             Forecast           Standard Deviation (σ)
                Participating      Average (Xbar)     (known)

Brokers         41                 3050               125
Investors       75                 2700                80

Based on the above data, can one deduce that the difference in the average forecasts
(Xbar1 - Xbar2) by the two groups is statistically significant? In order to test this
theory, we propose the following hypotheses:

H0: μ_brokers = μ_investors
H1: μ_brokers ≠ μ_investors

The null hypothesis H0 states that there is no difference between the brokers' and
the investors' forecasts. H0 may be re-phrased as the statement "The parameter
(μ_brokers - μ_investors) is zero." The test procedure will try to determine if the
observed difference (Xbar1 - Xbar2) is statistically significant or insignificant,
if we assume H0 to be true.
The acceptability of H0 (or that the two population means μ_brokers and μ_investors
are equal) can be tested by examining the difference between the respective
estimates, the sample means Xbar1 and Xbar2. This test (using Xbar1 and Xbar2) is
elementary, since by the central limit theorem, both Xbar1 and Xbar2 are distributed
normally when the sample sizes (n1 and n2) are reasonably large. Thus

Xbar1 ~ N[μ1, σ1/√n1]

Xbar2 ~ N[μ2, σ2/√n2]

In this test one makes use of a standard result in statistics. If the samples are independent,
then the difference between the sample means is also distributed normally,
with an expected mean equal to (μ1 - μ2) and a standard deviation equal
to the square root of the sum of the two respective variances of Xbar1 and Xbar2.
Therefore,

Xbar1 - Xbar2 ~ N[μ1 - μ2, √(σ1²/n1 + σ2²/n2)]

Since (Xbar1 - Xbar2) is distributed normally, one may now check to see if the
observed difference (Xbar1 - Xbar2) is a high probability or a low probability
event. Further, since the normal distribution governs the probabilities and the
needed standard deviations σ1 and σ2 are known, one uses here a Z-test. In the
Z-test one calculates a Z-statistic as

Z = [(Xbar1 - Xbar2) - (μ1 - μ2)] / √(σ1²/n1 + σ2²/n2)          (2.5.1)

Using the data from the Dow Jones example, if we hypothesize that μ_brokers =
μ_investors, then we find

Z = [(3050 - 2700) - 0] / √(381.1 + 85.3) = 350/21.6 = 16.2

It may be verified from a Z-table that the observed 350 point forecast difference
is more than three standard deviations away from zero (see Appendix A at the
end of the book). A reference to the Z-table also points out that the Z-value
calculated above (16.2) has only a < 0.01 probability of occurrence. It is improbable
that such a high Z value occurred only by chance and therefore it must be
concluded that the forecasts made by the two groups are different.
If the samples are small and one assumes that both samples are random
samples from normal distributions with the same but unknown standard deviation,
then the t-statistic should be used to assess the difference between two means.
The statistic used will be

t = [(Xbar1 - Xbar2) - (μ1 - μ2)] / √(sp²/n1 + sp²/n2)          (2.5.2)

where sp² is a pooled estimate (see Eq. (2.3.11)) of the variance determined as

sp² = [(n1 - 1)s1² + (n2 - 1)s2²] / (n1 + n2 - 2)          (2.5.3)

with (n1 + n2 - 2) degrees of freedom.


In the Dow Jones example, the pooled variance sp² (if the two standard
deviations shown were estimated from the observations) equals 9636.84 and the
t-value under the null hypothesis that the averages are equal is about 18.4, far
larger than the 1.98 cutoff for a two-tailed 5% t-test with (41 + 75 - 2) or 114
degrees of freedom (see Appendix A). Again, therefore, we cannot accept the
speculation H0 that the two forecasts are not different.
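
Both test statistics for the Dow Jones example follow directly from the summary figures given above; the Python sketch below (not part of the original text) reproduces them.

    import math

    # Two-sample comparison of the mean forecasts (Dow Jones example).
    n1, xbar1, s1 = 41, 3050.0, 125.0      # brokers
    n2, xbar2, s2 = 75, 2700.0, 80.0       # investors
    diff = xbar1 - xbar2                   # 350 points

    # Z-test, treating the standard deviations as known:
    z = diff / math.sqrt(s1**2 / n1 + s2**2 / n2)

    # t-test with the pooled variance of Eqs. (2.5.2) and (2.5.3):
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    t = diff / math.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))

    print(z, sp2, t)    # about 16.2, 9636.8, and 18.4 (114 degrees of freedom)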
One need not restrict tests of hypothesis to speculations about averages only.
One may also verify, for example, whether a set of sample data is distributed
normally (with certain mean and variance). The statistic used here is called the
chi-square statistic.
The chi-square distribution (Fig. 2.3), with m degrees of freedom, is the
distribution of the sum of m independent, squared standard normal variables,
Z1² + Z2² + Z3² + ... + Zm².

Fig. 2.3 Three chi-square distributions.

2.6 CAUSE-EFFECT MODELS AND REGRESSION


Engineers, social and physical scientists, economists, and many others often use
models (or mathematical representations) to explain various phenomena. These
explanations are usually cause-effect theories in which the models embody the
assumed causal relationship. The investigator here hypothesizes that changes in
the explanatory (or independent) variable (the cause) produce corresponding
changes in the dependent variable (the effect).
Simple cause-effect models may express the investigator's speculation in the
form

Y = α + βX          (2.6.1)

where Y is the dependent variable (such as consumer spending) and X is the
quantifiable explanatory or independent variable (such as the level of income),
and α and β are the two parameters of this simple linear model. Statistics provides
a large body of powerful methods that can use observed data to establish and
quantify such relationships. Linear cause-effect models such as that shown by
Eq. (2.6.1) are often quite satisfactory as long as one does not extrapolate the
relationship to situations where the validity of the relationship is unknown.
Since one often develops cause-effect models by using actual observations to
estimate the parameters (α, β, etc.), it is possible that the quality (and value) of the
model will be affected by random (uncertain) inaccuracy in the measurements of
Y and X, by the ignored influences of, and relationships to, factors (besides X) not
considered in Eq. (2.6.1), and by sampling error (as one is not observing the
full population of related Y and X values). Therefore, one modifies the basic
model of Eq. (2.6.1) to include an error term ε, a random variable representing
the influence of all other factors (Z1, Z2, etc.) not included in this model. Thus
we have

Y = α + βX + ε          (2.6.2)
As we shall soon see, Taguchi has used cause-effect models in a very profitable
way to arrive at robust product and process designs. Cause-effect representations
such as that shown by Eq. (2.6.2), and some considerably more complex,
form the basis of countless decisions made every day by people in various fields such as
the physical and social sciences, medicine, engineering, and business.
However, some assumptions are inherent in the model formulation of Eq. (2.6.2).
The expected value of ε is zero, and its standard deviation is σ, which does not
vary from observation (Y_i, X_i) to observation (Y_j, X_j). Further, the values of ε are
independent of X and of each other. Such assumptions permit estimation of α and
β and hence the relationship between Y and X. Known as regression, the development
of such models using observed {(x_i, y_i)} data constitutes a significant part
of engineering and scientific experimentation. Once developed, regression models
enable the investigator to predict the value of the dependent variable (Y) given a
value of the independent variable (X). A caution, however, must be exercised in
the use of regression models. Since almost any set of data values of one variable
may be theoretically regressed against an equal number of data values of some
other variable by applying certain mathematical formulas, experts urge that for
such regression to have meaning, one should establish a cause-effect relationship
between the variables in question by ANOVA or some other similar method
before one attempts model building by regression.
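
A least-squares fit of the model in Eq. (2.6.2) needs only a few lines of arithmetic. The Python sketch below (not part of the original text, with made-up (x, y) data) estimates α and β from observed pairs.

    # Least-squares estimates of alpha and beta in Y = alpha + beta*X + error.
    # The (x, y) values below are made up for illustration.
    xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
    ys = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2]

    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)

    beta = sxy / sxx                 # slope estimate
    alpha = ybar - beta * xbar       # intercept estimate
    print(alpha, beta)               # roughly 0.0 and 2.0 for this data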

2.7 EVALUATING A SUSPECTED CAUSE FACTOR


A manufacturing engineer may have some reason to suspect that a certain
equipment, material, procedure, or person would make a significant difference, for instance,
in the quality of a product which the plant produces. To confirm or deny such
suspicions, one needs procedures that reliably analyze the observed data. Such
analysis should separate the effect of the factor of interest from all other

influences and variabilities. The factor of interest here may be quantifiable (i.e.,
measurable in numbers) or merely observable as a condition (morning shift vs.
afternoon shift, or steel vs. copper, etc.).
There is a question here of sufficiency of data. Obviously, a single secretary
typing only business letters cannot confirm whether wordprocessor A is easier
to use than wordprocessor B. In order to establish that A is easier or harder to
use (than B), one would be well-advised to test several secretaries on several
different machines, working on various assignments.
In hypothesis testing, when the response (the output of the process one is
studying) is observed or the performance measurements are taken, one must
ensure that data collected would allow the investigator to compare, for instance,
machine-to-machine, person-to-person, assignment-to-assignment and word-
processor-to-wordprocessor variations in performance. These factors can affect
the secretarys experience, other than any effect that changing the wordprocessor
alone may produce.
In making such a comparison, one must also decide what summaries
(statistics) would be calculated from the observed data, and what tests should be
applied on these statistics. Fisher [2] in 1926 established that the analysis o f
variance (ANOVA) procedure provides one of the best procedures to conduct such
comparisons.
Why is it necessary that several different machines, assignments, and
secretaries be involved in such tests? Perhaps one feels that such elaboration adds
needlessly to the complexity of the study and is perhaps wasteful of time and
resources. If the same person is going to use always the same machine and type
only business memos, one may perhaps get away by doing the convenient
investigation. However, after the investigator has made his recommendation, the
typing assignments would perhaps differ some requiring text work, numbers,
columns, and tables, or even flow charts. It would be desirable then to use a method
of comparison that is valid under less restrictive and perhaps more realistic conditions.
Further, as we shall see later, some influencing factors might be beyond the
investigators control. One here needs randomizing, a procedure that attempts to
average out the influence of the uncontrolled factors on all observations.
The scientific approach of evaluating or comparing the effects of various
factors uses statistically designed experiments, a systematic procedure of drawing
observations after setting the factors at certain desired levels, and then analyzes
the observed data using ANOVA procedure.

2.7.1 An Illustration of the ANOVA Method


Suppose one speculates that crops grow better with the application of the Miragro
brand fertilizer. This speculation could be stated as the null hypothesis H0 (see
Section 2.4): "Miragro is better than plain water." To test the acceptability of this
hypothesis, one plans an investigation. The investigator decides to measure plant
growth as the height achieved in inches after 12 weeks of planting healthy
seedlings, with and without the application of Miragro.
The other factors that also could influence growth (or the lack of it) are soil

quality, amount of sunlight, seed quality, moisture, etc. To reach a valid conclusion
in this investigation, therefore, one would have to neutralize these influences by
randomizing the plant growth trials with respect to these factors.
If this randomizing is without any plan or logic, it is possible that by the
luck of the draw, most plain water-fed plants would end up growing, for instance,
in shade. In order to avoid this, some deliberate balancing would have to be
planned. If 16 plants are to be grown, eight would be given plain water, while the
other eight would be given Miragro. However, randomizing would decide which
plants would receive Miragro, and which plain water, regardless of where one
plants them.
Suppose that one obtains the following height measurements after 12 weeks
of planting, beginning with 16 equally healthy seedlings:

                        Treatment
                 Miragro      Plain Water

                   26             25
                   28             27
                   30             29
                   33             30
                   22             21
                   24             23
                   26             25
                   27             24

Sample Mean        27             25.5
Variance           10.25          8

The calculated means and variances under the two treatments immediately show
that the mean heights of the plants grown under the two treatments differ. Also,
notice the considerable difference in height from plant to plant under each of
the two treatments. This suggests that one cannot be certain that the fertilizer
treatment caused the difference in means, and not chance (chance here includes
all the factors the investigator did not or could not control).
The observed difference between the two sample means, 27 and 25.5 inches,
under the two treatments could be either because of a true difference influenced
by these two treatments, or the large variance of a single distribution of plant
heights under various influences. Therefore, to probe the hypothesis H0 further,
we begin by assuming that the effects of plain water and Miragro are unequal and
set up two simple cause-effect models:
Height with plain water:  y = β1 + ε
Height with Miragro:      y = β2 + ε

In these models the parameter β_i is the expected effect on height (caused by
Miragro or by water), and ε is the unexplained deviation or error, a random
(chance-influenced) variable representing the influence of all other uncontrolled
factors (sunshine, moisture, soil condition, etc.).

Comparing Miragro with plain water. In reality, if Miragro makes no
difference in growth over plain water, then β1 and β2 would be equal and the
observed difference between average heights (27 - 25.5) would be attributable
only to the random error ε.
How can the investigator determine whether there exists a difference
between Miragro and plain water treatment? The key to this question is the plant-
to-plant height variations. If the plants vary very little within a given treatment,
then (27 - 25.5) is persuasive enough to conclude that Miragro affects growth
over plain water. On the other hand, if the heights within a given treatment vary
considerably from plant to plant, the (27 - 25.5) difference noted is not persuasive
enough. Therefore, we proceed next to compare the observed common variance
across all plants grown to the observed difference in sample means under the
two treatments.
The overall pooled or common variance (σ²) of plant heights reflects the
variable influence of all factors, controlled and uncontrolled, that influence
plant growth. This common variance may be estimated by the formula

σ² = (variance across plants with Miragro + variance across plants with plain water) / 2          (2.7.1)
   = (10.25 + 8)/2
   = 9.125

The reader should verify Eq. (2.7.1) using Eq. (2.5.3). With this common
variance of individual plant heights known, we can, given the sample size n, next
estimate the variance of sample averages. This will equal (σ²/n) (see Eq. (2.3.10)).
In the present example, the averages 27 and 25.5 are sample averages, each with
sample size 8. Therefore, the variance of the sample averages is

9.125/8 = 1.140625          (2.7.2)

Since two sample means (27 and 25.5) were estimated with their average being
26.25, one could directly calculate the variance of sample means, using the
definition of variance (Eq. (2.3.6)), as

Variance of sample means = [(27 - 26.25)² + (25.5 - 26.25)²] / (2 - 1)          (2.7.3)
                         = 1.125
One may now analyze the two variances calculated from Eq. (2.7.2) and
Eq. (2.7.3) for the sample means. The observed plant-to-plant variance of 9.125
across all plants implies a variance of 9.125/8 ( = 1.140625) for the sample means
with sample size of 8. If mean growth did get affected by Miragro, then that would
cause the two sample means to be significantly different from each other or, in

other words, to result in a directly calculated variance of sample means greater


than 1.140625, and not 1.125 as calculated above. Since 1.125 < 1.140625, this
suggests that the observations obtained do not support acceptance of the
hypothesis (H0) that Miragro application is better than plain water.
To summarize, in the foregoing discussion, we obtained two estimates of the
variance of mean plant height, the first from the overall variance of the plants
grown, and the second based directly on the average heights of plants grown
under Miragro and under plain water treatment. If Miragro treatment affected
mean plant height more (or less) than did plain water, the average heights under
the two treatments would differ and thus produce a larger variance of mean plant
height than that found by the overall variance of the plants grown. Thus, we base
the test of the hypothesis here on a comparison of variances of sample means
observed under different conditions.

2.8 THE F-STATISTIC


In Section 2.5 the Z-, the t-, and the chi-square statistics have been described.
Another useful data summary, known as the F-statistic, is calculated as the ratio
of observed sample variances. The F-statistic is particularly helpful in the comparison
of variances, as attempted above in the Miragro fertilizer example.
In the Miragro fertilizer example, we first estimated the variance of plant
height across all plants, σ², by averaging the two variances in the two 8-plant
samples. Dividing this by n (= 8, the sample size), we produced one estimate of
the variance of sample means. Next, we computed the observed variance of sample
means directly from the two sample means (27 and 25.5) estimated under the two
treatment conditions. We are now in a position to calculate the F-statistic for this
problem, which is the ratio of the two variance estimates for the sample means:

F = (directly calculated variance of sample means) / (average variance of individual plant heights / n)

A calculated ratio (of variance estimates) near 1.0, as one would expect, would
suggest that all the plant growth data came from a single large population,
suggesting that application of Miragro fertilizer made no difference. If, on the
other hand, the ratio (F) is much larger than 1.0, this suggests that the variance of
sample means, directly calculated using the observed sample means obtained at
different treatments, is large. This may also suggest that the sample means
obtained under different treatment conditions (here Miragro feeding and plain
water) differ considerably from each other, or are too large to be explained away
by the sampling variation from one plant to the next.
In the Miragro fertilizer example, F = 0.986, which is not convincing
evidence that adding Miragro makes a difference.
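
The whole calculation for the Miragro example can be verified with the Python sketch below (not part of the original text); it follows the steps of Sections 2.7.1 and 2.8, including the division by the sample size used in the text for the within-treatment variances.

    # F-statistic for the Miragro example (Sections 2.7.1 and 2.8).
    miragro = [26, 28, 30, 33, 22, 24, 26, 27]
    water = [25, 27, 29, 30, 21, 23, 25, 24]
    n = len(miragro)                                  # 8 plants per treatment

    def mean(xs):
        return sum(xs) / len(xs)

    def var_n(xs):                                    # variance with divisor n, as in the text
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    m1, m2 = mean(miragro), mean(water)               # 27 and 25.5
    common_var = (var_n(miragro) + var_n(water)) / 2  # Eq. (2.7.1): 9.125
    indirect = common_var / n                         # Eq. (2.7.2): 1.140625

    grand = (m1 + m2) / 2
    direct = ((m1 - grand) ** 2 + (m2 - grand) ** 2) / (2 - 1)   # Eq. (2.7.3): 1.125

    print(direct / indirect)                          # F = about 0.986: no real difference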
The general character of the F-statistic is as follows: The distribution of the
F-statistic depends on two factors: the number of distinct treatments (k), and the
number of observations (n) in each treatment. In the above example, k is 2 and
n is 8. The numerator of the F-statistic is the directly calculated variance of the
sample mean, determined from the k sample means obtained under k different

treatments. The numerator has (k - 1) degrees of freedom, one dof being used in
estimating the average of sample means for the variance of sample means to be
calculated. The denominator, the estimate of the variance of sample means for a
sample size n based on the averaged variance of individual observations, uses a
total of k x n observations. The calculation of the mean of k sample variances (in
the k treatments) from these kn observations, however, requires that one first calculate
k sample means (one each in the k treatments used). Thus, the denominator of F
will have k(n - 1 ) degrees of freedom.
The two degrees of freedom (that of the numerator and the denominator)
determine the exact distribution of the F-statistic. Various F-distributions appear
on Fig. 2.4. One should remember that an F value near 1.0 indicates that the
effects of the treatments do not differ. On the other hand, if the F-statistic is
significantly larger, it would suggest that the mean treatment effects vary
significantly from each other.

Fig. 2.4 The F-distribution: (a) with critical values F(0.975) and F(0.025); and
(b) three F-distributions with different degrees of freedom.

2.9 AN ALTERNATIVE APPROACH TO FINDING F: THE MEAN SUM OF SQUARES
As shown above, the F-statistic forms an important basis in the test of hypotheses
about means and variances using experimental data. The F-statistic can be found
by an alternative method that is arithmetically simpler, because only certain sums of squares
rather than actual variances have to be calculated in this second approach. This
method hinges on the fact, first pointed out by Fisher [2] in 1926, that the total sum
of squares of deviations of individual observations {Y_j} from their grand mean
(Ybar) can be split into two parts. That is, let there be k treatments (Treatment 1,
Treatment 2, ..., Treatment k) in an experiment, with n observations obtained
with each of these different treatments. The total set of observations is then
Y_1, Y_2, Y_3, ..., Y_kn, k times n in number. Let the average of the effects
in Treatment 1 be Ybar_1 (= (Y_1 + Y_2 + ... + Y_n)/n), the average of the effects
in Treatment 2 be Ybar_2 (= (Y_{n+1} + Y_{n+2} + ... + Y_{2n})/n), and so on.
The grand mean (average of all observations), Ybar, is given by

Ybar = (Y_1 + Y_2 + ... + Y_kn) / (kn)
If one now calculates the total sum of squares of the deviation of each observation
Y_j from Ybar, one obtains

Total sum of squares = Σ_{j=1}^{kn} (Y_j - Ybar)²          (2.9.1)

Now, for treatment i, which consists of the n observations {Y_j, j = (i - 1)n + 1,
(i - 1)n + 2, (i - 1)n + 3, ..., in}, the term (Y_j - Ybar)² may be expanded
as follows:

(Y_j - Ybar)² = [(Y_j - Ybar_i) + (Ybar_i - Ybar)]²
              = (Y_j - Ybar_i)² + (Ybar_i - Ybar)² + 2(Y_j - Ybar_i)(Ybar_i - Ybar)

Now

Σ_{j=(i-1)n+1}^{in} (Y_j - Ybar_i)(Ybar_i - Ybar)
      = (Ybar_i - Ybar) Σ_{j=(i-1)n+1}^{in} (Y_j - Ybar_i)
      = (Ybar_i - Ybar)(n Ybar_i - n Ybar_i) = 0

since Ybar_i = (Y_{(i-1)n+1} + Y_{(i-1)n+2} + ... + Y_{in})/n. Therefore,

Total sum of squares of deviations

      = Σ_{j=1}^{kn} (Y_j - Ybar)²

      = Σ_{j=1}^{kn} (Y_j - Ybar_i)² + Σ_{j=1}^{kn} (Ybar_i - Ybar)²          (2.9.2)

where Ybar_i denotes the mean of the treatment to which observation Y_j belongs.

The first term on the right-hand side of Eq. (2.9.2) is the sum of squares of
deviations of the individual observations {Y_j, j = 1, 2, ..., kn} about the respective
treatment means {Ybar_i, i = 1, 2, 3, ..., k}. The second term is the sum of the
squares of the difference of each Ybar_i (at treatment level i) from the grand mean,
Ybar.
Recall that the total sum of squares is a measure of variation among the
individual observations {Y_j}. The above decomposition shows that this total
variation is the sum of (a) how much each observation varies about the mean
of each treatment and (b) how much the average value of Y varies from one
treatment to the next. This important result can also be expressed as
Total variation in observations = variation within treatments
+ variation between treatments (2.9.3)
The purpose of decomposing the observation-to-observation variation in an
experiment as mentioned above is to clarify that the effect observed, Y, varies for
the following reasons:
1. Each (controlled) treatment may have a different effect on Y.
2. For any given treatment, there are other uncontrolled factors that also
affect Y and cause it to vary about its expected value.
The uncontrolled factors lead to the within treatment variability in observed data.
If there is no difference in the effect attributable to the different treatments, the total
variation observed would only equal the within-treatment variability. If there is a
treatment-to-treatment difference, the total variation observed and then quantified
by the total sum of squares will be significantly larger than the within-treatment
sum of squares. This, as we shall see in Section 3.4, may be detected by the F-test.
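
The identity in Eq. (2.9.3) is easy to confirm numerically. The Python sketch below (not part of the original text) splits the total sum of squares for the Miragro plant heights into its within-treatment and between-treatment parts.

    # Verifying Eq. (2.9.3): total SS = within-treatment SS + between-treatment SS.
    treatments = {
        "Miragro": [26, 28, 30, 33, 22, 24, 26, 27],
        "Plain water": [25, 27, 29, 30, 21, 23, 25, 24],
    }

    all_obs = [y for ys in treatments.values() for y in ys]
    grand_mean = sum(all_obs) / len(all_obs)

    total_ss = sum((y - grand_mean) ** 2 for y in all_obs)

    within_ss = sum(sum((y - sum(ys) / len(ys)) ** 2 for y in ys)
                    for ys in treatments.values())

    between_ss = sum(len(ys) * (sum(ys) / len(ys) - grand_mean) ** 2
                     for ys in treatments.values())

    print(total_ss, within_ss + between_ss)    # both equal 155.0 for this data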

EXERCISES
1. An additional set of 25 lines was randomly picked from the book used in
Example 2.1, with the data summarized as follows:
Word Count o f 25 Randomly Selected Lines
9 7 11 5 13
14 13 8 5 2
11 12 4 2 13
8 14 7 13 9
14 2 13 7 10

Combine the above data with the data of Table 2.1 and confirm that the new 95%
confidence interval for μ using the 50 random observations will be the narrower interval

7.8131 < μ < 10.3069

[Hint: Use t(0.025, 49) from Appendix A.]
2. Conduct an F-test to accept or refute the hypothesis that the variance of the
two sets of data presented in Table 2.1 and Exercise 1 above are equal. If there
are 435 pages in the book in question, give estimates for the total word count
for this book and the variance of this count.
3   Design of Experiments

3.1 TESTING FACTORS ONE-AT-A-TIME IS UNSCIENTIFIC

Disputes over why quality is lacking or why a factory can't produce acceptable
goods often last for months and even years. Even experts sometimes don't seem
to agree on the remedy: a switchover of material, loading methods, operator
skills, tools, or QA practices. For want of irrefutable evidence, the blame may
subsequently fall on manufacturing, R&D, the design office, suppliers, and even
the customer.
This chapter elaborates the F-test, a highly precise data analysis method
that ranks among the best known methods for empirically exploring what factors
influence which features. Establishing the existence of cause-effect relationships
scientifically is pivotal in resolving disputes and questions such as those cited
above, and in guiding later decisions and actions. As we shall see, the F-test plays
a key role in identifying design features that have significant influence on
performance and robustness.
In the study of physical processes aimed at predicting the course of these
processes, one often explores cause-effect relationships using regression analysis.
Strictly speaking, however, regression should be attempted only after one has
established the presence of a cause-effect relationship, and the variables involved
are measurable. When one has not already established the cause-effect relationship,
or when the variables are functional or all influenced by a third factor, regression
or correlation studies can be misleading. Further, regression is decidedly not useful
when the independent factors are attributive (e.g., steel vs. plastic). By contrast,
precise and reliable insight into any cause-effect relationships existing in such
cases can be obtained from statistically designed experiments.
Design is defined as the selection of parameters and specification of features
that would help the creation of a product or process with a pre-defined, expected
performance. When complete, a design improves our capability to fulfill needs
through the creation of physical or informational structures, including products,
machines, organizations, and even software. Except in the most trivial cases, however,
the designer faces the joint optimization of all design features keeping in view
objective aspects that may include functionality, manufacturability, maintainability,
serviceability and reliability. Often this cannot be done in one step because
design as a process involves a continual interplay between the characteristics the
design should deliver and how this is to be achieved. Producing a robust design,
in particular, is a complex task. As mentioned in Chapter 1, robust design aims at
finding parameter settings which would ensure that performance is on target,
minimizing simultaneously the influence of any adverse factors (the noise) that the
product user may be unable to control economically or eliminate. Robust design
aims specifically at delivering quality performance throughout the lifetime of
service and use of the product.
Finding how the various design and environmental factors affect performance
can become extremely difficult for the designer. Sometimes the direct application
of scientific and engineering knowledge can lead to (mathematical) cause-effect
models, and when such a model is available, the designer may attempt a direct
optimization. When this is not possible, well-controlled experimentation with the
suspect factors may enable one to evaluate empirically how performance depends
on these various factors. Experimentation here would be a systematic learning
process: the act of observing in order to improve the understanding of certain physical
processes.
In physical, social, and sometimes in behavioural sciences, experimentation
is a common experience. Use of litmus paper to test acidity was perhaps the
first scientific experiment to which the reader was exposed. Observing reaction
times of pilots under contrived stress, or polling viewers to understand their TV
viewing habits and preferences are also experiments. In the improvement of quality
also, controlled trials and tests with prototype products and processes followed by
scientific analysis of the outcomes may produce valuable information. Such
experiments may explore the effect of material choices, design features, or process
conditions on the performance of products and processes. Experimentation is
perhaps the only way for finding the average breaking strength of new mixes of
concrete or for confirming that certain curing temperature and time are the best
settings in moulding plastic parts from polypropylene resins.
As mentioned above, statistically designed experiments are among the best
known approaches for empirically discovering cause-effect relationships. These
experiments also require the smallest number of trials, thereby providing the best
economy.
Statistical experiments are certainly not mere observations of an uncontrolled,
random process. Rather, these are well-planned, controlled experiments in which
certain factors are systematically set and modified and their effect on the results
(the response) observed. Statistical experimental designs specify the procedure of
drawing a sample (certain special observations) with the intention of reaching a
decision (about whether certain factors cause a certain effect or that they do not
cause the effect).
Statistical experiments provide many advantages over the popular one-
factor-at-a-time studies for the following reasons:
1. Statistical experiments secure a larger amount of appropriate data (than
do experiments conducted otherwise) for drawing sound conclusions about cause-
effect relationships.
2. The data from a statistical experiment yield more information per
observation. Statistical experiments routinely allow all-at-once experimentation,
yet their precise data analysis procedure is able to separate the individual factor
effects and any interaction effect due to the different factors influencing each
other's effects. Interactions cannot be uncovered by one-factor-at-a-time experiments.
3. In statistical experiments using Orthogonal Arrays or OAs (many design
optimization experiments are of this type), the data are obtained in a form that
makes the prediction of the output for some specified settings of the input variables
easy. Furthermore, OAs greatly simplify the estimation of individual factor effects
even when several factors are varied simultaneously.
The study of interaction is clearly one area in which statistical experiments
continue to be the only procedure known to us. An illustration of the significance
of interaction effects is provided by the lithograph printing example [5] in
Table 3.1.
TABLE 3.1
LITHOGRAPH PRINTING EXPERIMENTAL DATA

Experiment    Exposure Time    Development Time    Yield (%)

1             Low              Low                 40
2             High             Low                 75
3             Low              High                75
4             High             High                40

The table above shows the typical observed effects of exposure and development
times on yield (per cent of prints in acceptable range) in lithography. Note
the large fall in yield when one sets both exposure and development times high.
Such an effect (an interaction between exposure time and development time) could
at most be suspected, but not established, by varying only one of these factors at a
time. If the study involves more factors, interactions would be untraceable in one-
factor-at-a-time experiments.
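To see numerically why varying one factor at a time misses this, the short sketch below (an illustration added here, not part of the original text) computes the main effects and the interaction from the Table 3.1 yields; both main effects come out to zero while the interaction is large.

yields = {            # (exposure time, development time) -> yield %
    ("Low", "Low"): 40,
    ("High", "Low"): 75,
    ("Low", "High"): 75,
    ("High", "High"): 40,
}

def main_effect(factor_index):
    # Average yield at "High" minus average yield at "Low" for one factor.
    high = [y for levels, y in yields.items() if levels[factor_index] == "High"]
    low = [y for levels, y in yields.items() if levels[factor_index] == "Low"]
    return sum(high) / len(high) - sum(low) / len(low)

# Interaction: half the difference between the effect of exposure time at high
# development time and its effect at low development time.
effect_exposure_at_high_dev = yields[("High", "High")] - yields[("Low", "High")]
effect_exposure_at_low_dev = yields[("High", "Low")] - yields[("Low", "Low")]
interaction = (effect_exposure_at_high_dev - effect_exposure_at_low_dev) / 2

print("Main effect of exposure time      :", main_effect(0))   # 0.0
print("Main effect of development time   :", main_effect(1))   # 0.0
print("Exposure x development interaction:", interaction)      # -35.0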
Statistical experiments consist of several well-planned individual experiments
conducted together. The setting up of a statistical experiment (also known
as designing) involves several steps such as the following:
1. Selection of responses (performance characteristics of interest) that will
be observed
2. Identification of the factors (the independent or influencing conditions)
to be studied
3. The different treatments (or levels) at which these factors will be set in
the different individual experiments
4. Consideration of blocks (the observable noise factors that may influence
the experiments as a source of error or variability).
In the lithography example above, yield % is the response, and exposure time
and development time are the process design or influencing factors. Each of these
factors has two possible treatment levels (high and low) at which the
lithographer would set them as needed. Non-uniformity of processing temperature
and that of the concentration of chemicals would constitute the noise factors here.
Before the investigator plans statistical experiments, he must clearly know
the objective of conducting the experiments. The clarity in this objective is of
enormous value. For example, when one states the experiment's objective as "Select
the optimal values for resistance R1 and inductance L2 in the design of a power
conditioner unit to minimize sensitivity to input voltage and frequency variations",
it has the required clarity. The domain in which the results of a set of designed
experiments are applicable is called the influence space. It is important that one
makes this influence space sufficiently wide by selecting well-spread factor settings
without the concern for off-quality production during the conduct of the
experimental investigation. During such experimentation, the investigator should
uncover the range of the input variables over which performance improves, as well as
the range of input settings over which performance deteriorates. Only then can
appropriate countermeasures be identified and devised.
The elements of this domain on which the experiments are conducted are
called experimental units. The experimental units are the objects, prototypes,
mathematical models, or materials to which the investigator applies the different
experimental conditions and then observes the response.
In statistical experimentation, one distributes the experimental units randomly
in the backdrop of noise factors to represent correctly the character of the overall
influence space. This minimizes the chances of any biasing effect caused by the
uncontrolled factors. For example, in testing the productive utility of fertilizers,
one takes care to distribute the planting of seedlings so that the effects of sun/shade,
soil differences, depth of tilling, planting, etc. average out. These are the factors
that the investigator does not control during the trials.
In design optimization experiments as proposed by Taguchi, the investigator
changes the settings of the parameters under study from trial to trial in a systematic
manner. Special matrices, called OAs (Fig. 1.5), guide these settings. OAs are
matrices that specify the exact combinations of factor treatments with which one
conducts the different experimental trials. It is common to symbolically represent
or code the distinct levels of each design or noise parameter by (1, 0, -1), or
(1, 2, 3), etc. (Fig. 1.5) to distinguish the different combinations of parameter
settings from each other. The foremost reason for using OAs rather than other
possible arrangements in robust design experiments is that OAs allow rapid estimation
of the individual factor (also known as main) effects, without the fear of distortion
of results by the effect of other factors.
In design optimization, one uses OAs to the maximum extent possible to
achieve efficiency and economy. Orthogonal arrays also simplify the simultaneous
study of the effects of several parameters.

3.2 THE ONE-FACTOR DESIGNED EXPERIMENT


It rarely suffices to study how a certain response variable depends on only one
independent variable. However, sometimes one may have strong reasons to determine
the effect of only a single influencing factor on a process. One treats here all
other factors (process conditions, operator actions, machine used, etc.) as noise
or uncontrollable. Such a study constitutes a one-factor investigation. After he has
reliably understood the influence of the single chosen factor, the investigator may
expand the scope of the investigation to include some other factors, and conduct
further experiments.
Even the one-factor statistical investigation has a formal, statistical
structure. The investigator starts with some stated hypothesis or speculation (e.g.,
material choice has no influence on quality). The investigator then proceeds to
obtain experimental evidence to confirm or reject this hypothesis.
Since most real processes involve more than one influencing factor, the one-
factor statistical experiment is not a very common practice. The purpose of our
discussing the one-factor experiment in some detail here is to describe the steps
and the methods involved in experimental data analysis. These methods apply to
the larger and more complex experiments also. The data analysis procedure uses
ANOVA and the F-test, mentioned in Sections 2.7 and 2.8. One plans one-factor
statistical experiments only when there is sufficient reason to study the effect of
only one independent factor; one treats here all remaining factors as uncontrolled
or noise. One uses blocking and randomization here to minimize bias effects due
to the factors not controlled, for otherwise factors like time of day, humidity, the
load on the system, etc. may affect the conclusions. Randomization with respect to
the uncontrolled factors is one important reason that the one-factor designed
experiment is fundamentally different from the change-one-factor-at-a-time mode
of experimentation. The other key difference arises from the application of
ANOVA, the manner in which the investigator analyses the observed data.
Sensing any effect (or signal) due to the factor under study in the
presence of uncontrolled disturbances is the primary challenge in the one-factor
experimental design. Besides randomization, replication is a common technique
employed in one-factor design. Replication implies the repeating of experiments
under identical controlled conditions, aimed at averaging out any extreme effect of
the uncontrolled factors on a single experimental run. The goal of randomization
and replication is to attempt to spread the uncontrolled disturbances evenly over
the different observations. In statistical terminology such an arrangement is
called the completely randomized one-factor experiment.
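Randomization itself is easy to mechanize. The sketch below is a minimal illustration (the treatments and replicate counts are assumed, not taken from the text) of generating a completely randomized run order for a one-factor experiment.

import random

# A completely randomized one-factor experiment: every (treatment, replicate)
# combination is run in random order so that uncontrolled factors such as time
# of day or ambient conditions spread evenly over the observations.
replicates = {"Steel": 4, "Alloy X": 3, "Alloy Y": 3}   # assumed replicate counts

runs = [(material, rep) for material, n in replicates.items() for rep in range(1, n + 1)]
random.shuffle(runs)

for order, (material, rep) in enumerate(runs, start=1):
    print(f"Run {order:2d}: treatment = {material}, replicate {rep}")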
The investigator contemplating a one-factor experiment starts his work by
speculating or hypothesizing that the influence of the factor to be studied can be
described by a simple cause-effect model. He assumes that if he sets the independent
factor at k different levels, with n_i replicate observations of the response taken
when the independent factor is set at its ith level, the effect on the response
variable Y can be modelled simply as

     Y_ij = μ_i + ε_ij,   i = 1, 2, . . ., k;  j = 1, 2, . . ., n_i          (3.2.1)

where Y_ij is the observed value of Y in the jth replicate run when the independent
factor was set at its ith level, and μ_i is the mean effect on Y of the independent
factor being set at this level. ε_ij designates an independent, normally distributed
random variable with zero mean and variance σ², representing the random
contribution to Y_ij of all the uncontrolled factors. Thus one speculates the effects
on Y due to the different treatments to be μ_1, μ_2, μ_3, . . . .
The one-factor model cannot, however, be set up arbitrarily. In adopting
this model the investigator assumes that the effects of all uncontrolled factors
uniformly affect each of the observations and cause random disturbances {ε_ij} that
are representable by σ². This effect is shown in Eq. (3.2.1), which links the
observations {Y_ij} to the influence (μ_i) of the control factor; Eq. (3.2.1) also links
{Y_ij} to {ε_ij}, the effect of the uncontrolled factors.
The statistical analysis of the results of the one-factor investigation depends
strongly on three assumptions: linearity, additivity, and separability of effects of
the control factor and the uncontrolled factors. Only under these assumptions is the
simple model of Eq. (3.2.1) a valid description of the factor-response relationship.
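To make the model concrete, the following sketch (with invented treatment means and error variance) simulates observations from Eq. (3.2.1): each observation is a treatment mean μ_i plus an independent, normally distributed disturbance.

import random

# Simulating the one-factor model of Eq. (3.2.1): Y_ij = mu_i + eps_ij, with
# eps_ij drawn independently from N(0, sigma^2) for the uncontrolled factors.
mu = {1: 84.0, 2: 77.0, 3: 79.0}   # assumed treatment means (k = 3 levels)
n = {1: 8, 2: 6, 3: 6}             # replicate observations per treatment
sigma = 2.5                        # assumed common error standard deviation

random.seed(1)
observations = {i: [mu[i] + random.gauss(0.0, sigma) for _ in range(n[i])] for i in mu}

for i, values in observations.items():
    ybar_i = sum(values) / len(values)
    print(f"Treatment {i}: Ybar = {ybar_i:.2f} (true mu_{i} = {mu[i]})")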
The one-factor statistical investigation can be useful in situations such as
the following: A bridge designer may speculate that the material chosen (steel,
Alloy X, or Alloy Y) to fabricate a structural beam has an influence on the beam's
deflection characteristics under a standard load, independent of other factors.
Subsequently, the deflections observed of prototype beams built using these materials
may be used to (a) establish whether material choice has any influence on deflection,
and (b) identify the material with the maximum or minimum flexibility.
As mentioned earlier, for the conclusions to be valid, it is critical, even in
this apparently straightforward investigation, that one randomizes the runs and
their replications with respect to the uncontrolled factors.
When one uses k treatment levels (say k different materials to construct the
beam), it is common to summarize the observed data in a table such as Table 3.2,
which shows that the investigator obtained n_i replicated observations at Treatment
i, and a total of n_1 + n_2 + n_3 + . . . + n_k (= N) observations.

TABLE 3.2
OBSERVATIONS AND SAMPLE MEANS FOR A ONE-FACTOR EXPERIMENT

Factor Level    Observations                       Sample Mean

1               Y_11, Y_12, . . ., Y_1n_1          Ybar_1 = (1/n_1) Σ_j Y_1j

i               Y_i1, Y_i2, . . ., Y_in_i          Ybar_i = (1/n_i) Σ_j Y_ij

k               Y_k1, Y_k2, . . ., Y_kn_k          Ybar_k = (1/n_k) Σ_j Y_kj

Often the focus of the one-factor statistical experiment is on determining
whether it is reasonable to accept that the average effect μ_i caused by level i is
identical for all the factor level settings (i = 1, 2, 3, . . . , k). Alternatively, one
may hypothesize that the effect is different at least at one level, which causes the
response at this level to be noticeably different from the overall average. Note
that one uses the terms level and treatment interchangeably; both represent the
distinct levels at which one sets the factor in control.
The observation averages Ybar_1, Ybar_2, etc. shown under Sample Mean in
Table 3.2 estimate the treatment effects μ_i, i = 1, 2, 3, . . . , k. To determine now
whether the treatment effects are unequal, one would statistically compare the two
following sources that may cause the {Ybar_i} averages to differ from each other
(see Eq. 2.9.3):
1. Within-factor (also called within-treatment) variability.
2. Between-factor (also called between-treatment) variability.
If between-treatment variability is (statistically speaking) larger than what one
expects from variation that occurs within a typical treatment when one replicates
observations, one would question whether the effects μ_i, i = 1, 2, 3, . . . , k, are all
the same. Perhaps the reader can see that the approach here parallels the ideas
that led to the illustration of ANOVA in Section 2.7.
One key measure of variability in a set of observations is how far a single
observation deviates from its expected average. For a group of observations, one
determines variability collectively by summing up the squares of the differences
of the individual observations from the average. One calls this sum the sum of
squares of deviations or, more explicitly, the error sum of squares. This quantity
measures the experimental error (resulting from the influence of uncontrolled
factors) in replicating or repeating observations (n_i times) when treatment i is
held constant.
One computes the experimental error, which reflects the variability caused
by all factors not in control or not deliberately set, as the sum of squares of the
deviation of individual observations from their respective expected averages.
Thus, if the observations resulting from replicating the experiment at treatment
i are Y_ij, j = 1, 2, 3, . . . , n_i, and their average is Ybar_i, then the experimental
error accumulated by replicated runs at treatment level i is

     Σ_{j=1}^{n_i} (Y_ij − Ybar_i)²

The average variability among the observations is called the mean sum of squares
or mean square error. The mean square error at treatment i is

     [1/(n_i − 1)] Σ_{j=1}^{n_i} (Y_ij − Ybar_i)²

In the above, the quantity (n_i − 1) is called the degrees of freedom (or dof) of the
mean sum of squares at treatment i. The dof acknowledges that of the n_i observations
obtained, if one calculates a statistic (Ybar_i) using these data values, then this
statistic (Ybar_i) and (n_i − 1) observations together can determine the value of the
one remaining (i.e., the n_i-th) observation.

EXAMPLE 3.1: A one-factor investigation (the beam deflection problem).


An investigator constructed several beams of identical geometry using each of
the three available materials, steel, Alloy X, and Alloy Y (thus k = 3). The
deflections {Y_ij} observed under a standard load were as in Table 3.3.
TABLE 3.3
BEAM DEFLECTION TEST RESULTS

Material     Observations {Y_ij}                          Ybar_i    Sum of Squares of the
(i)          (deflection measurements of beams made                 Difference of Individual
             of each material, under standard load,                 Observations from Ybar_i
             in 1/1000 in.)

Steel        82  86  79  83  85  84  86  87               84        48
Alloy X      74  82  78  75  76  77                       77        40
Alloy Y      79  79  77  78  82  79                       79        14

What can one infer from these test results? Note the following salient aspects of
these observations:
1. The number of observations (i.e., replications done by building several
beams and measuring their deflection under standard load) for the three different
materials (i.e., the treatments) is unequal. This is an important fact about one-
factor investigations in general. In these investigations it is not necessary that an
equal number of observations be obtained for each treatment.
2. It is not readily apparent from the average deflections {Ybar_i} calculated
for each material type that under standard load a material-to-material difference
in flexibility (manifested by deflection) exists.
3. One cannot yet comment on the Sum of Square values (a measure of
variability from the respective expected average) in the table, for these contain
contributions from an unequal number of data points (replications).
With the help of these observations, an important data summary (statistic)
can be calculated. If one pools all the three Sums of Squares and then averages
them by dividing by the total dof for this pooled sum (as done to define the
estimated pooled variance in Eq. (2.5.3)), one produces the overall observation-
to-observation (or error) variability, known as the within-treatment variability
(see Section 2.6). This variability equals

     Mean SS_error = [1/(N − k)] Σ_{i=1}^{k} Σ_{j=1}^{n_i} (Y_ij − Ybar_i)²

Mean SS_error, often called the Mean Sum of Squares due to error, reflects, as already
mentioned, the typical observation-to-observation variability when any particular
treatment is held constant and replicate observations are made. In the beam deflection
example, replications are made by fabricating several identical beams using the
same material and observing their respective deflections under the standard load.
Note that the averaging to get Mean SS_error uses all N observations and it spans
across each of the k treatments used.
The other variability, the one that is closer to the objective of the one-factor
experimental investigation, manifests the impact on the observations caused by
different treatments. One calculates this variability as the between-treatment sum
of squares. One determines the between-treatment sum of squares by setting a
reference average value equal to the grand average of all observations, or ybar,

     ybar = (1/N) Σ_{i=1}^{k} Σ_{j=1}^{n_i} Y_ij

Recall that we have used a total of k treatments here and n_i represents the number
of observations obtained at treatment level i. A total of N observations were
originally obtained. One finds the between-treatment sum of squares as

     SS_treatment = n_1(Ybar_1 − ybar)² + n_2(Ybar_2 − ybar)² + . . . + n_k(Ybar_k − ybar)²
Since one uses up one dof in the k treatments in calculating the grand average ybar,
one calculates the mean between-treatment sum of squares statistic as

     Mean SS_treatment = [1/(k − 1)] Σ_{i=1}^{k} n_i (Ybar_i − ybar)²

Mean SS_treatment reflects the treatment-to-treatment variability, each treatment being
a distinct level at which one has set the factor whose effect on Y is being investigated.
Thus the above procedure leads to the estimation of two average variabilities,
the mean within-treatment variability (Mean SS_error) and the mean treatment-to-
treatment or between-treatment variability, Mean SS_treatment.
One may now use the data of the beam deflection experiment to find these
two variabilities (variation among deflections caused by material-to-material
differences, and replication of the experiment with the same material). One finds that

     ybar = 80.4

     Mean SS_error = (48 + 40 + 14) / [(8 − 1) + (6 − 1) + (6 − 1)] = 6.0

     Mean SS_treatment = [8(84 − 80.4)² + 6(77 − 80.4)² + 6(79 − 80.4)²] / (3 − 1) = 92.4

By these calculations we have separated the two sources of variabilities (within-
and between-treatment variability), the prime objective in statistically designing
the one-factor experiment to study treatment effects. In the numerical example
above, the average material-to-material (between-treatment) variability appears
to be large (it equals 92.4) when compared with observation-to-observation or
within-treatment variability (which equals 6.0).
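The two averages just computed are easy to reproduce; the sketch below (plain Python, added for illustration) recomputes them directly from the Table 3.3 data.

# Recomputing the within- and between-treatment mean sums of squares
# for the beam deflection data of Table 3.3.
data = {
    "Steel":   [82, 86, 79, 83, 85, 84, 86, 87],
    "Alloy X": [74, 82, 78, 75, 76, 77],
    "Alloy Y": [79, 79, 77, 78, 82, 79],
}

N = sum(len(v) for v in data.values())
k = len(data)
grand_mean = sum(sum(v) for v in data.values()) / N            # ybar = 80.4

ss_error = sum(sum((y - sum(v) / len(v)) ** 2 for y in v) for v in data.values())
ss_treatment = sum(len(v) * (sum(v) / len(v) - grand_mean) ** 2 for v in data.values())

print("Mean SS_error     =", round(ss_error / (N - k), 2))      # 102.0 / 17 = 6.0
print("Mean SS_treatment =", round(ss_treatment / (k - 1), 2))  # 184.8 / 2  = 92.4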
Without further analysis, though, one cannot yet say that the effect due to
materials is significant (see Section 2.4) in the backdrop of the noise (observation-
to-observation variability). Such analysis requires a statistical comparison of the
two types of variabilities, within- and between-treatment. The comparison
procedure to be used here is again ANOVA, introduced in Section 2.7.

3.3 ANOVA HELPS COMPARE VARIABILITIES


We elaborate some aspects of the ANOVA procedure in this section. As described
in the section above, two types of variations may be present in the one-factor
experimental data, namely, the between- and the within-treatment variability. The
purpose of ANOVA, which one performs with the mean sums of squares, is to
separate and then compare such variabilities. Also, as we will see later, ANOVA
applies not only to one-factor investigations, but also to multiple factor studies.
This is a considerable capability for a test because variabilities may be caused by
one or several independently influencing factors, and by their interaction.
Recall first that if one squares the deviation of each observed data value {Y_ij}
from the grand average and then sums these squares to a total, one ends up with
the result (due to Fisher [13]) derived in Section 2.9:
Total Sum of Squares = Sum of Squares due to error
+ Sum of Squares due to treatment

This decomposition of total variability shows that in one-factor experiments
there is no other variability in the data {Y_ij} except those caused by within-treatment
variability (the errors in repeating or replicating the observations) and
between-treatment variability. The k different treatments at which the investigator
sets the factor under investigation cause the between-treatment variability.
One may now use a standard result from probability theory to assist in the
study of these variabilities. If random variables X_1, X_2, X_3, . . . , X_k are distributed
normally (each with variance σ²), then the quantity

     Σ_{i=1}^{k} (X_i − Xbar)²/σ²

represents a sum of squares of standard normal variates and therefore has a chi-square
distribution, Xbar being Σ X_i/k. As mentioned in Section 2.5, the chi-square
distribution is also a standard probability distribution like the normal distribution.
The chi-square distribution has only one parameter, its dof. With the squares of
deviations of k observations {X_i, i = 1, 2, 3, . . . , k} from their mean Xbar summed,
the chi-square variable written above will have (k − 1) degrees of freedom.
In the one-factor investigation, if the mean effects μ_1, μ_2, μ_3, . . . , μ_k
due to the k different treatments are all equal, then the total N observations taken
in the experiments would all belong to the same normal population with
variance σ², because due to randomization the influence of the uncontrolled
factors may be assumed to be identical in all the observations. This suggests that
the quantity (a random variable)

     (Total Sum of Squares)/σ²

will also have a chi-square distribution, with (N − 1) dof. (In this context, the
reader may review the material in Section 2.9.)
The quantity Error Sum of Squares (or the sum of squares of deviations or
errors caused by the uncontrolled factors), computed as

     Error Sum of Squares = Σ_{i=1}^{k} Σ_{j=1}^{n_i} (Y_ij − Ybar_i)²

is the sum of k independent sums of squares of deviations (from the respective
expected averages). Because the chi-square random variable is the sum of
squares of standard normal random variables (Section 2.5), the error sum of
squares above, when divided by σ², also becomes a chi-square random variable,
with (N − k) degrees of freedom. (The k averages {Ybar_i, i = 1, 2, 3, . . . , k} use
up k degrees of freedom here.)
Now, since Eq. (2.9.3) leads to

     Total Sum of Squares = Error Sum of Squares + SS_treatment

we have

     (Total Sum of Squares)/σ² = (Error Sum of Squares)/σ² + SS_treatment/σ²

Hence, SS_treatment/σ² is a chi-square variable with (k − 1) degrees of freedom. This
helps in establishing whether a change in treatment level has an effect on (i.e.,
whether it causes a large deviation of) the average response μ. The F-test (discussed in
the next section) applies here.
It is customary to present the various computations of sums of squares, etc. in
a compact tabular form, known as the ANOVA table, which is illustrated in Table 3.4.

TABLE 3.4
THE ANOVA TABLE

Source of      Sum of Squares (SS)                             Degrees of    Mean Sum of
Variability                                                    Freedom       Squares

Treatment      Σ_{i=1}^{k} n_i (Ybar_i − ybar)²                k − 1         SS_treatment/(k − 1)

Error          Σ_{i=1}^{k} Σ_{j=1}^{n_i} (Y_ij − Ybar_i)²      N − k         SS_error/(N − k)

Total          Σ_{i=1}^{k} Σ_{j=1}^{n_i} (Y_ij − ybar)²        N − 1

In constructing the ANOVA table, one often uses a computer. In manual
operation, one may simplify the calculations by using the following two formulas
provided by algebra:

     Total Sum of Squares = Σ_{i=1}^{k} Σ_{j=1}^{n_i} (Y_ij − ybar)²
                          = Σ_{i=1}^{k} Σ_{j=1}^{n_i} Y_ij² − [Σ_{i=1}^{k} Σ_{j=1}^{n_i} Y_ij]²/N          (3.3.1)

     SS_treatment = Σ_{i=1}^{k} n_i (Ybar_i − ybar)²
                  = Σ_{i=1}^{k} [Σ_{j=1}^{n_i} Y_ij]²/n_i − [Σ_{i=1}^{k} Σ_{j=1}^{n_i} Y_ij]²/N          (3.3.2)

Looking at the imposing appearance of Eqs. (3.3.1) and (3.3.2), one might wonder
if this is any simplification! However, the truth is that both these final relations
(the Total Sum of Squares and SS_treatment) involve only the squares of observed
values {Y_ij} and the squares of certain sums of these observations, both being
easier to calculate manually. Thus, the use of Eqs. (3.3.1) and (3.3.2) avoids
calculating by hand the N individual deviations from the averages and their squares.
The following example uses this modified procedure to evaluate SS_treatment.
EXAMPLE 3.2: Sum of squares for beam deflection. Returning to the beam
deflection problem, one finds that k = 3, n_1 = 8, n_2 = n_3 = 6, and thus N = 20. Also,

     Σ_i Σ_j Y_ij = 82 + 86 + . . . + 82 + 79 = 1608

     Σ_i Σ_j Y_ij² = 82² + 86² + . . . + 79² = 129,570

     Σ_j Y_1j = 82 + 86 + . . . + 87 = 672

     Σ_j Y_2j = 74 + 82 + . . . + 77 = 462

     Σ_j Y_3j = 79 + 79 + . . . + 79 = 474

Therefore,

     Total Sum of Squares = 129,570 − 1608²/20 = 286.8

     SS_treatment = 672²/8 + 462²/6 + 474²/6 − 1608²/20 = 184.8

     SS_error = 286.8 − 184.8 = 102.0

One may now show the ANOVA table as in Table 3.5.

TABLE 3.5
ANOVA FOR THE BEAM DEFLECTION EXAMPLE

Source of Variability              Sum of Squares (SS)    Degrees of Freedom    Mean Sum of Squares

Treatment (different materials)    184.8                  2                     92.4
Error                              102.0                  17                    6.0
Total                              286.8                  19

Notice that the mean sum of squares of deviations (92.4) caused by changing
materials to construct the beams is considerably larger than the average or mean
variability that occurs because of experimental error (6.0) from measurements
repeated with the same material.
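As a cross-check, the shortcut formulas (3.3.1) and (3.3.2) can be applied in a few lines; the sketch below (added for illustration) rebuilds the entries of Table 3.5 from the raw observations.

# ANOVA for the beam data using the shortcut formulas (3.3.1) and (3.3.2).
data = {
    "Steel":   [82, 86, 79, 83, 85, 84, 86, 87],
    "Alloy X": [74, 82, 78, 75, 76, 77],
    "Alloy Y": [79, 79, 77, 78, 82, 79],
}

N = sum(len(v) for v in data.values())                       # 20 observations
k = len(data)                                                # 3 treatments
T = sum(sum(v) for v in data.values())                       # grand total 1608

ss_total = sum(y ** 2 for v in data.values() for y in v) - T ** 2 / N          # Eq. (3.3.1)
ss_treatment = sum(sum(v) ** 2 / len(v) for v in data.values()) - T ** 2 / N   # Eq. (3.3.2)
ss_error = ss_total - ss_treatment

print(f"Treatment: SS = {ss_treatment:.1f}, dof = {k - 1}, Mean SS = {ss_treatment / (k - 1):.1f}")
print(f"Error    : SS = {ss_error:.1f}, dof = {N - k}, Mean SS = {ss_error / (N - k):.1f}")
print(f"Total    : SS = {ss_total:.1f}, dof = {N - 1}")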

One may be tempted to conclude at this point that the effect due to materials
is significant (over background noise). The proper analysis procedure to apply here,
however, is the F-Test, described in Section 3.4.

3.4 THE F-TEST TELLS IF FACTOR EFFECTS ARE STATISTICALLY SIGNIFICANT

The purpose of the F-test is to appraise the different treatment effects represented
by μ_1, μ_2, . . . , μ_k in the response model (3.2.1) to see if it is reasonable to assume
that there is no difference due to treatments. If this is the case with the beams, we
would find that deflection is independent of beam material. The test begins by
speculating (hypothesizing) with H_0 that

     μ_1 = μ_2 = μ_3 = . . . = μ_k

against the alternative hypothesis (H_1) that at least one effect μ_i is different.
The F-test is so precise that it is able to detect a difference even if only one μ_i differs
from the overall average effect μ.
Recall that we estimated the experimental error (the average observation-
to-observation variability at a given treatment level, or the within-treatment
variability) by the mean square error (Mean SS_error). Mean SS_error reflects the
influence of the uncontrolled factors and thus estimates σ² (the variance of {ε_ij}
defined in Section 3.2). This is true irrespective of whether changing the treatments
has an effect on response Y or not.
The between-treatment mean sum of squares also would be an estimate of
(because it will not be different from) the experimental error variance σ² provided
all treatment effects (μ_1, μ_2, μ_3, . . . , μ_k) are the same. However, if these effects
are different, the between-treatment mean sum of squares would be affected by
this difference also (these are the differences among μ_1, μ_2, μ_3, . . . , μ_k) and,
therefore, be generally much larger than σ².
If the treatment effects μ_1, μ_2, μ_3, etc. are each different from the average
overall effect of treatments (i.e., μ), then it can be shown that

     Expected Mean SS_treatment = σ² + [Σ_{i=1}^{k} n_i (μ_i − μ)²] / (k − 1)
We mentioned in the section above that both SS_treatment/σ² and SS_error/σ² are chi-
square variables. The fact that the ratio of two chi-square random variables (each
divided by its degrees of freedom) is a random variable that has the F-distribution
(Section 2.8) enables us to determine if the effect on Y varies with treatment. Thus
the F-test answers the question: Based on the experimental observations would it be
reasonable to assume that

     μ_1 = μ_2 = μ_3 = . . . = μ_k

or is it that at least one treatment effect is different? One might guess that since the
ratio of variances is the basis of the F-test, this test may enable one to compare the
variability caused by treatment effects to the noise or within-treatment variability.
The special way in which one sets up the one-factor statistical experiment causes the
noise variability to occur only because of the error in replicating an experiment when
a given factor treatment (e.g., the material used in the beam) is held unchanged.
We are not aware of any other arrangement of experimenting with a factor
and observing the response to evaluate the effect of the factor that exceeds the
precision of the F-test either in theory or in practice. The F-test procedure
transforms observed data suitably to calculate the experimentally realized value of
a (standard) F-variable, called the tests F-statistic. If this value turns out to be a
rare F-value (one that will be observed or realized only rarely, i.e., with low
probability), then one concludes that the initial hypothesis that treatment effects are
all equal is not tenable.
Mathematically speaking, if μ_1 = μ_2 = . . . = μ_k (= μ), then the quantity

     F_calc = Mean SS_treatment / Mean SS_error

will have an F-distribution with (k − 1) and (N − k) degrees of freedom. This
result is based on the mathematical definition of the F-distribution and it leads
to the following test procedure: If F_calc > F(α; k − 1, N − k), then accept the
hypothesis that treatment effects are significant, i.e., some μ_i may be different
from the rest. On the other hand, if

     F_calc < F(α; k − 1, N − k)

then there is not enough evidence in the observed data to reject the hypothesis that

     μ_1 = μ_2 = . . . = μ_k

In the above test, F(α; k − 1, N − k) is the upper α percentage point of the F-
distribution with (k − 1, N − k) degrees of freedom (Section 2.8). The percentage
α is the probability of the test suggesting that at least one of the μ_i's is different
when they are all equal (a Type I error, see Section 2.4). One selects this
probability to be 5% or 1% by convention, thus limiting the mistaken chance of
rejecting the hypothesis (H_0) that treatment differences have no effect to 5% or 1%.
An alternative test also usable here calculates the actual probability

     Prob [F(k − 1, N − k) > F_calc]

where F_calc is the observed F-ratio, Mean SS_treatment/Mean SS_error. This is the
probability of obtaining a realization from the F(k − 1, N − k) distribution that is
at least as large as the observed F-ratio. If this probability is < α, the test suggests
that one should accept the alternative hypothesis H_1 that at least the effect of one
treatment is not equal to that of the other treatments. Otherwise, one accepts the
null hypothesis H_0 that there is no effect because of treatment changes, i.e.,

     μ_1 = μ_2 = . . . = μ_k = μ

EXAMPLE 3.3: The F-test for the beam deflection data. In the beam design
problem, the F-ratio calculated, i.e., F_calc, is 92.4/6.0 = 15.4. The critical value of
F with α = 0.05 is F(0.05; 2, 17) = 3.59 (Appendix A). The critical value with
α = 0.01 is 6.11. This strongly suggests that one should reject the hypothesis
that μ_1 = μ_2 = μ_3, or that the material with which the beam is fabricated does
not affect deflection under the standard load.
It should be noted that the one- or single-factor ANOVA model assumes that
errors are all independent and normally distributed with an identical variance
(σ²) because of the identical influence of the uncontrolled factors on each treatment
group. It is possible to check whether the residuals {ε_ij} are distributed as N[0, σ]
by calculating and examining these residuals. The residuals will be the differences
between the observations Y_ij and their group average Ybar_i at the respective treatment
setting i, written as {e_ij (= Y_ij − Ybar_i)}. No regularity or patterns in a plot of the
residuals should appear. Rather, if the influence of the uncontrolled factors is
uniform and proper randomization has occurred, the residuals should display a
random scatter about zero.
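Both the F-test and the residual check are easy to carry out numerically. The sketch below uses scipy.stats.f for the critical values and the p-value; the library dependency is an assumption of this illustration, not something prescribed by the text.

from scipy.stats import f

# F-test for the beam deflection ANOVA of Table 3.5, plus a residual check.
data = {
    "Steel":   [82, 86, 79, 83, 85, 84, 86, 87],
    "Alloy X": [74, 82, 78, 75, 76, 77],
    "Alloy Y": [79, 79, 77, 78, 82, 79],
}
k, N = 3, 20
f_calc = 92.4 / 6.0                                        # Mean SS_treatment / Mean SS_error

print("F_calc          :", round(f_calc, 2))                      # 15.4
print("F(0.05; 2, 17)  :", round(f.ppf(0.95, k - 1, N - k), 2))   # about 3.59
print("F(0.01; 2, 17)  :", round(f.ppf(0.99, k - 1, N - k), 2))   # about 6.11
print("p-value         :", f.sf(f_calc, k - 1, N - k))            # far below 0.01

# Residuals e_ij = Y_ij - Ybar_i should scatter randomly about zero.
for material, values in data.items():
    ybar_i = sum(values) / len(values)
    print(material, [round(y - ybar_i, 2) for y in values])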

3.5 FORMULAS FOR SUM OF SQUARES AND THE F-TEST

This section provides the formulas (without proof) for an experiment aimed at
investigating the main effects of five 2-level factors (A, B, C, D, and E) and two
2-factor interactions (A × C and B × C). The L8 OA (Appendix B) guided the
trials, without replicating any of the experiments. Table 3.6 shows the column
assignments and the observations {y_i, i = 1, 2, 3, . . . , 8}, with each factor set
at one of the two levels coded as 1 or 2, as shown.

TABLE 3.6
AN EXPERIMENTAL DESIGN TO INVESTIGATE FIVE MAIN FACTOR EFFECTS AND
TWO 2-FACTOR INTERACTIONS

Factor Assigned:    A    C    A×C    B    D    B×C    E
L8 Column:          1    2    3      4    5    6      7

Experiment                                               Observation

1                   1    1    1      1    1    1      1      y_1
2                   1    1    1      2    2    2      2      y_2
3                   1    2    2      1    1    2      2      y_3
4                   1    2    2      2    2    1      1      y_4
5                   2    1    2      1    2    1      2      y_5
6                   2    1    2      2    1    2      1      y_6
7                   2    2    1      1    2    2      1      y_7
8                   2    2    1      2    1    1      2      y_8

Let the following terms represent the quantities as indicated:

     A_1 = Σ observations (where one sets Factor A at level 1)
         = y_1 + y_2 + y_3 + y_4
     N_A1 = number of observations with Factor A set at level 1
     Abar_1 = average of observations with Factor A set at level 1
            = A_1/N_A1
            = (y_1 + y_2 + y_3 + y_4)/4

Similarly, one defines Abar_2, Bbar_1, Bbar_2, Cbar_1, etc. One may evaluate the main
factor effects or the main effect dependencies as follows:

     Average Effect_A = Abar_2 − Abar_1
     Average Effect_B = Bbar_2 − Bbar_1
     Average Effect_C = Cbar_2 − Cbar_1
     Average Effect_D = Dbar_2 − Dbar_1
     Average Effect_E = Ebar_2 − Ebar_1
The two-factor interactions are calculated as follows: Let

     A_1C_1 = sum of observations with factor A set at 1 and factor C set at 1
            = y_1 + y_2
     A_1C_2 = sum of observations with factor A set at 1 and factor C set at 2
            = y_3 + y_4
     A_2C_1 = sum of observations with factor A set at 2 and factor C set at 1
            = y_5 + y_6
     A_2C_2 = sum of observations with factor A set at 2 and factor C set at 2
            = y_7 + y_8

Then, if we define A_iC_j bar as A_iC_j/2, we have

     Interaction_A×C = [(A_1C_1 bar + A_2C_2 bar) − (A_1C_2 bar + A_2C_1 bar)]/4
To carry out ANOVA of the observations, the sums of squares of certain
deviations are required. One determines these sums of squares as follows: Let

     T = Σ all observations = y_1 + y_2 + y_3 + y_4 + y_5 + y_6 + y_7 + y_8

The correction factor (CF) is defined as

     CF = T²/N

where N = total number of observations obtained. The Total Sum of Squares (S_T) is

     S_T = Σ_{i=1}^{N} y_i² − CF
The Factor Sums of Squares are

     S_A = [A_1]²/N_A1 + [A_2]²/N_A2 − CF
     S_B = [B_1]²/N_B1 + [B_2]²/N_B2 − CF
     S_C = [C_1]²/N_C1 + [C_2]²/N_C2 − CF
     S_D = [D_1]²/N_D1 + [D_2]²/N_D2 − CF
     S_E = [E_1]²/N_E1 + [E_2]²/N_E2 − CF

Hence,

     S_A = [A_1]²/N_A1 + [A_2]²/N_A2 − CF
         = (y_1 + y_2 + y_3 + y_4)²/4 + (y_5 + y_6 + y_7 + y_8)²/4
           − (y_1 + y_2 + y_3 + y_4 + y_5 + y_6 + y_7 + y_8)²/8

And the Interaction Sums of Squares are

     S_A×C = [A_1C_1 + A_2C_2]²/(N_A1C1 + N_A2C2) + [A_1C_2 + A_2C_1]²/(N_A1C2 + N_A2C1) − CF

where N_AiCj = number of observations with factor A set at level i and factor C set
at level j. Substituting the appropriate quantities, we obtain

     S_A×C = (y_1 + y_2 + y_7 + y_8)²/(2 + 2) + (y_3 + y_4 + y_5 + y_6)²/(2 + 2) − CF

Similarly,

     S_B×C = (y_1 + y_4 + y_5 + y_8)²/(2 + 2) + (y_2 + y_3 + y_6 + y_7)²/(2 + 2) − CF

We can find the Sum of Squares for Error (S_e) as follows:

     S_e = S_T − S_A − S_B − S_C − S_D − S_E − S_A×C − S_B×C

We then determine the respective dof. The total dof f_T is given by
f_T = total number of observations − 1, or (N − 1).
The other dof are as follows:

     f_A = (number of distinct levels of A) − 1 = 2 − 1 = 1
     f_B = (number of distinct levels of B) − 1 = 2 − 1 = 1
     f_C = (number of distinct levels of C) − 1 = 2 − 1 = 1
     f_D = (number of distinct levels of D) − 1 = 2 − 1 = 1
     f_E = (number of distinct levels of E) − 1 = 2 − 1 = 1
     f_A×C = f_A × f_C = 1 × 1 = 1
     f_B×C = f_B × f_C = 1 × 1 = 1

The dof for error (the influence of uncontrolled factors influencing the response)
may be found as

     f_error = f_total − (f_A + f_B + f_C + f_D + f_E + f_A×C + f_B×C)

which for the data given in Table 3.6 equals

     7 − (1 + 1 + 1 + 1 + 1 + 1 + 1) = 0

The Mean Sums of Squares are given by the general formula

     Mean Sum of Squares = Sum of Squares/dof

Accordingly,

     Mean SS_A = Sum of Squares_A/dof_A
               = S_A/f_A = {[A_1]²/N_A1 + [A_2]²/N_A2 − CF}/f_A

     Mean SS_A×C = Sum of Squares_A×C/dof_A×C

Therefore,

     Mean SS_A×C = {[y_1 + y_2 + y_7 + y_8]²/(2 + 2) + [y_3 + y_4 + y_5 + y_6]²/(2 + 2) − CF}/f_A×C

One uses the Mean Sum of Squares in the evaluation of significance of the factor
and interaction effects on the response y. The F-test accomplishes this.
We should point out that the F-test requires evaluation of the F-statistic,
determined as the ratio

     F-statistic = Mean SS_factor / Mean SS_error
The Mean Sum of Squares for error may be evaluated if dof_error > 0. This is always
possible if one replicates some or all the experiments. However, if one obtains
only one observation per experimental setting (as in the 5-factor example in
Table 3.6), dof_error may equal 0, making it impossible to find the denominator of
the F-statistic. In such cases one pools certain sums of squares, as follows.
If there are reasons to believe that certain main factors and interactions
have no or little effect on the response y, then the sums of squares of these
factors and interactions, and the corresponding dofs, are pooled to construct
the Error Sum of Squares, S_e, and the dof for error, f_error. For instance, if factors
A and D have little effect on y and if the interaction A × C may be ignored, then

     S_e = S_T − S_B − S_C − S_E − S_B×C
     f_error = f_T − (f_B + f_C + f_E + f_B×C)

This provides

     Mean SS_error = S_e/f_error

for substitution into the formula for the F-statistic given above.
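Pulling the formulas of this section together, the sketch below works through the Table 3.6 design; the eight response values are invented purely for the demonstration, and factors A, D and the interaction A × C are pooled into error as in the text's illustration.

# Sums of squares for the L8 design of Table 3.6.
# Columns, in order: A, C, AxC, B, D, BxC, E.
L8 = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
]
columns = ["A", "C", "AxC", "B", "D", "BxC", "E"]
y = [50, 57, 57, 56, 52, 57, 55, 60]         # invented observations, one per trial

N = len(y)
CF = sum(y) ** 2 / N                         # correction factor T^2 / N
S_T = sum(v ** 2 for v in y) - CF            # total sum of squares

S = {}
for col, name in enumerate(columns):
    level1 = sum(y[i] for i in range(N) if L8[i][col] == 1)
    level2 = sum(y[i] for i in range(N) if L8[i][col] == 2)
    S[name] = level1 ** 2 / 4 + level2 ** 2 / 4 - CF      # each level appears in 4 runs

# One observation per trial gives zero error dof, so pool A, D and AxC
# (assumed to have little effect) into the error term, as in the text.
pooled = ["A", "D", "AxC"]
S_e = sum(S[name] for name in pooled)
mean_ss_error = S_e / len(pooled)            # each pooled 2-level effect carries 1 dof

for name in columns:
    if name not in pooled:
        print(f"{name:3s}: S = {S[name]:6.2f}, F = {S[name] / mean_ss_error:6.2f}")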

3.6 SUMMARY
A key objective in Taguchi methods is to uncover how the various design parameters
and environmental factors affect the ultimate performance of the product or
process being designed. Performance may be affected not only individually by the
design parameters and by some factors in the environment, but also by possible
interactions among design factors and the interaction between the design and
environmental factors. Specifically, a robust design cannot be evolved without
uncovering these effects and interactions [32].
The traditional "vary one factor at a time" experiments are intrinsically
incapable of uncovering interaction among factors. Statistically designed
experiments conducted with a prototype product or process constitute the only
sound and scientific approach available today to study such phenomena.
Statistical experiments are well-planned, both with respect to the combination
of settings of the different independent factors at which the experimental trials
have to be run, and with respect to the manner in which the response data (the
outcome of these experiments) are analyzed. The objective of this well-planned
effort is to uncover (a) which design and environmental factors significantly
affect performance; and (b) what countermeasures can be devised to minimize the
effect of the adverse factors and conditions such that the final performance will be
least sensitive particularly to factors that the user of the product/process is unable
to economically or physically control.
ANOVA and the F-test provide some of the mathematical machinery needed here.
EXERCISES
1. Design optimization experiments to help prolong the life of router bits in printed circuit
board manufacture, conducted by the AT&T Company, reported the following
average router life data [14, p. 194]:
                  x-y feed (in/min)         stack height (in)
Speed             60         80             3/16       1/4

30,000 rpm        5.75       1.375          3.875      3.25
40,000 rpm        9.75       6.875          13.25      5.5625

Develop appropriate graphical displays to evaluate the extent of interaction
between (a) speed and x-y feed, and (b) speed and stack height.
2. Mixing synthetic fibre with cotton is being contemplated for producing tarpaulin
material with increased tensile strength. Designers speculate that the strength of the
cloth is affected by the percentage of cotton in the fibre. Twenty-five experiments
were randomly conducted with % cotton mix in fibre as shown. Perform the
appropriate ANOVA and comment on the acceptability of the designers' suggestions.
Develop a cause-effect diagram showing the various factors other than
cotton (%) that might affect the results and discuss why repetition of the trials is
necessary here.
TABLE E 3.1
RESULTS OF CLOTH STRENGTH TESTS

% Cotton in Fibre    Tensile Strength of Cloth (lb/sq. in)

15                   8    8    16   11   10
20                   13   18   12   19   19
25                   15   18   19   20   19
30                   20   26   23   19   23
35                   8    10   12   15   11

3. Three different nozzle designs are available for assembling fire extinguishers.
Five test runs made using each nozzle type under identical inlet conditions produced
the discharge velocity observations shown in Table E 3.2. Confirm that
at the significance level α = 0.05, the performance difference among the nozzle
designs cannot be ignored.

TABLE E 3.2
RESULTS OF NOZZLE DISCHARGE TESTS

Nozzle Design Discharge Velocity (cm/sec)

A 97.6 98.2 97.4 98.4 98.8


B 98.5 97.4 98.0 97.2 97.8
C 98.0 97.0 96.6 96.8 98.0

What factors might affect the above observations? Describe a scheme for
randomizing the experimental trials.
The Foundation of Taguchi Methods:
The Additive Cause-Effect Model
4.1 WHAT IS ADDITIVITY?
An experienced plant engineer is hardly surprised at finding that product or
process performance Y depends on several different influencing parameters P, Q,
R, S, etc. These dependencies, in general, can be quite complicated. As a result,
the empirical studies to determine them can become large and even difficult to
run. Fortunately, as pointed out by Taguchi, in many practical situations these
studies can be restricted to the main-effect dependencies (Section 3.5). In these
cases the dependencies are additive and can be satisfactorily represented by what
one calls the additive (or main factor) cause-effect model. The additive model
has the form
     y = μ + p_i + q_j + r_k + s_l + e          (4.1.1)

where μ is the mean value of y in the region of experiment, p_i, q_j, etc. are the
individual or main effects of the influencing factors P, Q, etc., and e is an error
term.
The term main effect designates the effect on the response y that one can
trace to a single process or design parameter (DP), such as P. In an additive model
such as the one given by Eq. (4.1.1), one assumes that interaction effects are
absent. In this model, p_i represents the portion of the deviation of y (or the effect
on y) caused by setting the factor P at treatment P_i, q_j that due to setting the factor Q
at Q_j, r_k that due to setting R at R_k, and so on. The term e represents the
combined errors resulting from the additive approximation (i.e., the omission of
interactions) and the limited repeatability of an experiment run with experimental
factor P set at P_i, Q at Q_j, R at R_k, and S at S_l. Repeated experiments usually show
some variability, which reflects the influence of factors the investigator does not
control.
The additivity assumption also implies that the individual effects of the
factors P, Q, R, etc. on performance Y are separable. Under this assumption the
effect of each factor can be linear, quadratic, or of higher order, but the additive
model assumes that there exist no cross product effects (interactions) among the
individual factors. (Recall the instance of interaction of effects seen between
exposure time and development time in the lithography example, Table 3.1.)
If we assume that the respective effects (α and β) of two influencing factors
A and B on the response variable Y are additive, we are then effectively saying
that the model

     Y_ij (= μ_ij + e_ij) = μ + α_i + β_j + e_ij          (4.1.2)

represents the total effect of the factors A and B on Y. Note again that this
representation assumes that there is no interaction between factors A and B, i.e.,

the effect of factor A does not depend on the level-of factor B and vice versa.
Interactions make the effects of the individual factors non-additive. If at any
time n }j is different from ( jli + a, + /3;), where a, and fy are the individual (or the
main) effects of the respective factors, then one says that the additivity (or
separability) of main factor effects does not hold, and the effects interact. The
chemical process model shown below provides an example of an interaction
among the two process factors:

Utilization (%) = K (mixing HP/1000 g)L (superficial velocity)**

For this process, the effect on the response variable Utilization (%) is multiplicative
rather than additive. Here, the effect of mixing HP/1000 g depends on the level
of the second process factor, superficial velocity, and vice versa. This effect may
be modelled by

ldtJ = (4.1.3)

Sometimes one is able to convert the multiplicative (or some other non-additive)
model into an additive model by mathematically transforming the response Y
into log [Y], or 1/Y, or √Y, etc. Such a conversion greatly helps in planning and
running multi-factor experiments using OAs. (We shall see in the next section
that OAs impart much efficiency and economy to statistical experiments.) The
presence of additivity also simplifies the analysis of experimental data. The
transformation that would convert the above chemical process model (which
involves the interaction of factors mixing HP per 1000 g and superficial velocity)
is the taking of logarithms on both sides. This gives

     log (% utilization) = log (K) + L log (HP per 1000 g) + M log (superficial velocity)

The model equation (4.1.3) then becomes additive, and is written equivalently as

     μ_ij = μ + α_i + β_j          (4.1.4)

To remind the reader, because the interaction terms are absent in it, one often
calls the additive model the main effects model.
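The effect of the log transformation can be verified numerically. In the sketch below (the constants K, L, M and the factor levels are invented), the raw-scale effect of mixing HP depends on the velocity level, while the log-scale effect is identical at both velocity levels, i.e., additive.

import math

# A multiplicative model: utilization = K * (mixing_hp ** L) * (velocity ** M).
K, L, M = 2.0, 0.6, 0.3                  # invented constants
mixing_levels = (5.0, 10.0)              # invented factor levels
velocity_levels = (1.0, 4.0)

def utilization(hp, velocity):
    return K * hp ** L * velocity ** M

for velocity in velocity_levels:
    raw_effect = utilization(mixing_levels[1], velocity) - utilization(mixing_levels[0], velocity)
    log_effect = math.log(utilization(mixing_levels[1], velocity)) - math.log(utilization(mixing_levels[0], velocity))
    # The raw effect of mixing HP changes with velocity (an interaction), while
    # the log-scale effect equals L*log(2) at every velocity level (additivity).
    print(f"velocity = {velocity}: raw effect = {raw_effect:.3f}, log-scale effect = {log_effect:.3f}")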

4.2 WHY IS ACHIEVING ADDITIVITY SO IMPORTANT?


In Taguchi's robust design procedure, one earnestly seeks to confine attention to the main
effects model or, equivalently, the additivity of effects. This permits use of
certain special partial factorial designs and simple arithmetic, as we see below,
in reaching the optimum settings for each of the product or process design
parameters. Additivity of effects also leads to a major reduction in the number of
experiments that need to be run. These benefits of additivity may be visualized as
follows:
Suppose that a designer wishes to investigate whether four potential design
factors, P, Q, R, and S, have an influence on performance Y. Also suppose that
the designer has the choice of setting each factor at any one of three distinct
treatment levels. If the respective effects of these factors are additive, then
performance Y may be modelled by

     Y = μ + p_i + q_j + r_k + s_l + e          (4.2.1)

Since this model contains no interaction terms, it is an additive or main factor
model. Now, since each of the four factors (P, Q, R, and S) may be set at three
distinct treatment levels, there will be 3⁴ or 81 ways of combining these different
treatments. It may then appear that to investigate the effects of the four factors,
one has to run each one of these 81 experiments. We now show that if
additivity of main effects is present, then only a small subset (shown in Table 4.1)
of the possible 81 experiments needs to be run to evaluate the effect of the four
design factors. This subset is called the orthogonal matrix experiment.

TABLE 4.1
AN ORTHOGONAL MATRIX EXPERIMENT AND ITS RESULTS

              The Orthogonal Matrix of Treatments
Experiment    P     Q     R     S      By Additivity Assumption, y_i

1             P1    Q1    R1    S1     y_1 = μ + p_1 + q_1 + r_1 + s_1 + e_1
2             P1    Q2    R2    S2     y_2 = μ + p_1 + q_2 + r_2 + s_2 + e_2
3             P1    Q3    R3    S3     y_3 = μ + p_1 + q_3 + r_3 + s_3 + e_3
4             P2    Q1    R2    S3     y_4 = μ + p_2 + q_1 + r_2 + s_3 + e_4
5             P2    Q2    R3    S1     y_5 = μ + p_2 + q_2 + r_3 + s_1 + e_5
6             P2    Q3    R1    S2     y_6 = μ + p_2 + q_3 + r_1 + s_2 + e_6
7             P3    Q1    R3    S2     y_7 = μ + p_3 + q_1 + r_3 + s_2 + e_7
8             P3    Q2    R1    S3     y_8 = μ + p_3 + q_2 + r_1 + s_3 + e_8
9             P3    Q3    R2    S1     y_9 = μ + p_3 + q_3 + r_2 + s_1 + e_9

Table 4.1 contains an example of an orthogonal matrix of treatments. Note
the two special aspects of the nine experiments shown in Table 4.1:
1. The total number of experiments to be run above equals 3 × 3 or 9,
only a fraction of 81. The number 9 reflects the total number of combinations
possible of the three levels of any two factors among P, Q, R, and S. Note also that
no experiment here is a repeat of any other experiment. (A question, nonetheless,
remains: Will these nine experiments suffice?)
2. The combination of the treatments of the four factors in any of the nine
experiments is not arbitrary. One constructs these combinations carefully in
order to permit quick estimation of each factor's main effect, if such effect exists,
from observations {y_1, y_2, y_3, . . . , y_9}.
We now show how one may rapidly estimate the effects of factors P, Q, R,
and S from the observations {y_i}. In the additive model (4.1.1), μ represents the
overall mean value of y in the region of experimentation in which one varies the
factors P, Q, R, and S. Further, p_1, p_2 and p_3 are the deviations of y from μ caused
by factor settings (treatments) P_1, P_2 and P_3, respectively. Then, since each factor
has its own (positive or negative) effect on y and one assumes the factor effects to
be additive and hence separable from the overall mean μ and from each other, one
must have

     p_1 + p_2 + p_3 = 0          (4.2.2)

Similarly,

     q_1 + q_2 + q_3 = 0          (4.2.3)
     r_1 + r_2 + r_3 = 0          (4.2.4)
     s_1 + s_2 + s_3 = 0          (4.2.5)
Therefore, to find the effect of setting P at P_3 on Y, one simply computes an
arithmetic average of certain {y_i}, as follows: First, note what happens when one
adds the three observations (y_7, y_8, and y_9), in which the P treatment equals P_3,
and then averages them:

     (y_7 + y_8 + y_9)/3 = [μ + p_3 + q_1 + r_3 + s_2 + e_7 + μ + p_3 + q_2 + r_1
                            + s_3 + e_8 + μ + p_3 + q_3 + r_2 + s_1 + e_9]/3
                         = (3μ + 3p_3)/3 + (q_1 + q_2 + q_3)/3 + (r_3 + r_1 + r_2)/3
                            + (s_2 + s_3 + s_1)/3 + (e_7 + e_8 + e_9)/3
                         = μ + p_3 + (e_7 + e_8 + e_9)/3          (4.2.6)

It is not difficult to see that this equation equals

     μ + [estimated effect of setting P = P_3] + the average error in one experimental trial

This deduction underscores an important point. If one plans and conducts the
experiments using the special matrix in Table 4.1 as the guide, one is able to
estimate the factor effects by performing only certain simple averaging arithmetic
on {y_i}. The caveat is that one may use such simple analysis only when additivity
as well as separability holds, and when one runs the experiments using treatment
combinations planned in accordance with the orthogonal matrix of treatments
(Table 4.1). The special matrix shown in Table 4.1 is called the L9 orthogonal
matrix or the L9 OA. Also note the magnitude of the average error term
(e_7 + e_8 + e_9)/3 in Eq. (4.2.6) above. If the variance of the error e_i in a
single experiment is (σ_e)², then the average error (e_7 + e_8 + e_9)/3 will have the
variance (1/3)(σ_e)² (see Section 2.3). Thus, in addition to simplifying the arithmetic,
the orthogonal experiment scheme (Table 4.1) reduces the variance in estimating
the effect of factor P over the variance of the error in a single experiment by a
factor of 3.
Similarly, by selecting certain other observations from {y_i}, we would be
able to estimate the effect of each of the three other factors Q, R, and S. If the
additivity assumption is not valid, however, then the error terms {e_i} will not be
independent of each other and will not be random variables with zero mean
and (σ_e)² variance.
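The averaging argument of Eq. (4.2.6) can be checked directly. In the sketch below (the factor effects, overall mean, and noise level are invented for the illustration), averaging y_7, y_8 and y_9 and subtracting the grand average recovers p_3 to within the averaged noise.

import random

# The L9 orthogonal matrix of treatments (levels 1..3 of P, Q, R, S), as in Table 4.1.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

mu = 50.0                                # invented overall mean
p = {1: -3.0, 2: 1.0, 3: 2.0}            # invented main effects; each set sums to zero
q = {1: 0.5, 2: -1.5, 3: 1.0}
r = {1: 2.0, 2: -2.0, 3: 0.0}
s = {1: -1.0, 2: 0.0, 3: 1.0}

random.seed(3)
y = [mu + p[pi] + q[qi] + r[ri] + s[si] + random.gauss(0.0, 0.5)
     for (pi, qi, ri, si) in L9]

grand_average = sum(y) / 9               # estimates mu
p3_estimate = sum(y[6:9]) / 3 - grand_average   # (y7 + y8 + y9)/3 - ybar, per Eq. (4.2.6)

print("True p3      :", p[3])
print("Estimated p3 :", round(p3_estimate, 2))  # close to 2.0, off only by averaged noise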
As an alternative to the orthogonal matrix for planning experimental
studies, one may consider the full factorial statistical design.

4.2.1 The Full Factorial Design


The general approach to empirically uncovering the effects of the factors P, Q, R,
and S would require running all possible combinations of the three levels for
each of the four factors, or running 3⁴ or 81 total experiments. This will be a full
factorial experimental design (see an example in Fig. 6.3). When the additivity
assumption holds, however, the arrangement with the L9 matrix (which uses only
nine experiments) will be sufficient for estimating the main factor effects. This is
decidedly a major reduction (from 81 experiments to 9) in the total experimental
effort.

4.3 THE VERIFICATION OF ADDITIVITY


One may not know in advance whether the additivity of main effects holds in a
given investigation. Additivity implies that the main effect model (4.1.1) is an
adequate representation of how response y depends on factors P, Q, R, and S. One
practical approach recommended by Taguchi to verify this is to run a verification
experiment with treatments set at known (usually the optimum) values and
observing the outcome. By doing the verification experiment one compares the
observed value of the response variable and the predicted response based on the
main effect model, and thus verifies if the additive model is adequate. A close
agreement of the observed and predicted responses suggests that the additivity
assumption is a reasonable one.
Once the investigator has established additivity, he may use the model to
predict the effect of the independent factors on the response Y for any treatment
combination Pi, Qj, Rk, and Sl, etc. within the influence space. If the verification
fails, the experiments should be expanded to include two-factor or higher
order interactions using a larger orthogonal or some other experimental
design.

4.4 THE RESPONSE TABLE: A TOOL THAT HELPS FIND MAIN EFFECTS QUICKLY
A manual procedure is available that quickly completes the calculation of
effects from the orthogonally designed experimental observations. This method
uses a special tabular format, known as the Response Table [15], for recording
and manipulating the observed data. We illustrate this method with a 3-factor
design example where the investigator uses only two treatments for each
factor.
In the design example shown in Table 4.2, the investigation requires only eight experiments, guided by the L8 OA of Appendix B (given at the end of the book). (In Chapters 6 and 8 the method for selecting the right OA for a given problem is discussed.)
Recall that the effect of some factor A on the response y is the average change in the response it produces when the setting of factor A goes from its low level (symbolically represented by -1) to its high level (+1). Suppose now that the factors A, B, and C produced the responses {y_i} in experiments run with the different treatments of A, B, and C as shown in Table 4.2. For each experiment (represented by a row in the table) the symbols +1 and -1 show the particular coded combination of treatments used in that experiment.

TABLE 4.2
THE L8 ORTHOGONAL ARRAY

                 Treatments for           Observed
Experiment      A       B       C         Response

    1          -1      -1      -1             y1
    2          -1      -1      +1             y2
    3          -1      +1      -1             y3
    4          -1      +1      +1             y4
    5          +1      -1      -1             y5
    6          +1      -1      +1             y6
    7          +1      +1      -1             y7
    8          +1      +1      +1             y8

The response table method assumes additivity of factor effects at the outset. Hence, in order to estimate the effect, for instance, of factor A on response Y, one would first add together the four responses at treatment +1, the high setting of factor A. As shown in Section 4.2, because of orthogonality, such summing (which produces y5 + y6 + y7 + y8) cancels out the effects of factors B and C, and accumulates only the effect of setting A at +1 (and noise). Therefore, by dividing this sum by 4, one may find the average value of Y, the response, at treatment A = +1.
Let Abar1 represent the average [y5 + y6 + y7 + y8]/4. Similarly, let Abar2 represent the average value of response Y calculated for the low treatment, -1. The effect of factor A on Y is then (Abar1 - Abar2), which is equal to

    (y5 + y6 + y7 + y8)/4 - (y1 + y2 + y3 + y4)/4

Similarly, we find that

    Effect of B = Bbar1 - Bbar2 = (y3 + y4 + y7 + y8)/4 - (y1 + y2 + y5 + y6)/4

    Effect of C = Cbar1 - Cbar2 = (y2 + y4 + y6 + y8)/4 - (y1 + y3 + y5 + y7)/4

The response table allows the above computations to be quickly completed, by



hand. As one may observe, the layout of the response table (Table 4.3) is quite straightforward. For most OAs such a table may be constructed [15].
Note that the response table shown (Table 4.3) includes a Random Order column. This column is a reminder to the investigator that he must randomize the experimental trials to minimize any biases in results that may develop if the trials are run in some systematic order, such as Trial 1 to Trial 8 in sequence [11]. Such biases are due to uncontrolled factors. For example, the ambient temperature may rise as the experiments are run, or Operator X may run the first few experiments with Operator Y running the later ones.

TABLE 4.3
THE RESPONSE TABLE FOR A THREE-FACTOR EXPERIMENT

        Random   Observed        A                B                C
Trial   Order    Response     +1     -1        +1     -1        +1     -1

  1                 y1               y1               y1               y1
  2                 y2               y2               y2        y2
  3                 y3               y3        y3                      y3
  4                 y4               y4        y4                y4
  5                 y5        y5                      y5               y5
  6                 y6        y6                      y6         y6
  7                 y7        y7               y7                      y7
  8                 y8        y8               y8                y8

Total         (sum of observations in columns above goes here)
No. of data values    8
Average            ybar     Abar1  Abar2    Bbar1  Bbar2     Cbar1  Cbar2

Estimated main effect        Abar1 - Abar2   Bbar1 - Bbar2    Cbar1 - Cbar2

Table 4.4 shows the hand calculations done on a response table. The
calculations shown are for a process optimization investigation conducted with
three process design factors F, S, and T, and a response called Yield. The table
shows the treatments used. The bottom row of the response table shows main
effects calculated.
Another well-known calculation method is due to Yates [11].

TABLE 4.4
A COMPLETED RESPONSE TABLE

        Random   Observed        F                S                T
Trial   Order    Yield        A      B        60     80        70     82

  1       4        164       164              164              164
  2       1        166       166              166                     166
  3       8        161       161                     161       161
  4       5        160       160                     160              160
  5       6        184              184       184              184
  6       3        187              187       187                     187
  7       2        179              179              179       179
  8       7        182              182              182              182

Total             1383       651    732       701    682       688    695
No. of data values   8
Average          172.9     162.8  183.0     175.3  170.5     172.0  173.8
Estimated main effect          20.2            -4.8             1.8
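The arithmetic behind Table 4.4 is easy to reproduce. The minimal Python sketch below uses the observed yields and coded treatment columns for the design discussed above (F at A/B, S at 60/80, T at 70/82), and recovers the main effects shown in the bottom row of the table.

    import numpy as np

    yields = np.array([164, 166, 161, 160, 184, 187, 179, 182])

    # Coded treatment columns for the eight trials (-1 = first level, +1 = second level)
    F = np.array([-1, -1, -1, -1, +1, +1, +1, +1])   # A / B
    S = np.array([-1, -1, +1, +1, -1, -1, +1, +1])   # 60 / 80 rpm
    T = np.array([-1, +1, -1, +1, -1, +1, -1, +1])   # 70 / 82

    for name, col in [("F", F), ("S", S), ("T", T)]:
        high = yields[col == +1].mean()
        low = yields[col == -1].mean()
        # effects come out as 20.2, -4.8 and 1.8 (up to rounding), as in Table 4.4
        print(f"{name}: effect = {high - low:+.2f}")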

4.5 GRAPHIC EVALUATION OF MAIN EFFECTS


Taguchi methods often use a graphic technique (see Fig. 4.1) to convey rapidly
the relative magnitudes of the different factor effects [2]. This technique results
in a visual display of the relative effects of each of the individual design factors.
The technique plots the numerical values of the factor effects on the Y (vertical)
axis, visually highlighting the relative magnitude of the effects for quick
comprehension. Implemented easily with spreadsheet software with graphics, such
a plot quickly identifies the optimum setting for each factor under study.
Generally speaking, optimization in Taguchi experiments aims at maximizing the signal-to-noise ratio. In Chapter 5, we shall show how maximizing the signal-to-noise (S/N) ratio directly minimizes the variability due to noise. Besides robustness, Taguchi's optimization also seeks to adjust the final performance to the desired target. One may sometimes reach both these goals quickly with the aid of graphic displays.
Figure 4.1 suggests that in order to maximize yield, one should set factor F
at treatment B and factor S at 60 rpm. The factor T displays little effect on yield,
hence it may be set at either 70 or 82. We emphasize that this simple approach
is valid only if no interactions are present!
Fig. 4.1 Graphic display of factor effects on yield evaluated in Response Table 4.4. (The figure plots the effects of factors F, S, and T against their treatments; the horizontal axis shows the factors and treatments.)

Since one assumes additivity of effects for control factors in most Taguchi experiments, one is also able to predict the value of the optimized performance from these experiments. In the above example, one would find the maximum predicted yield by adding the effects of factor F at treatment B and factor S at 60, with factor T set at 82. These settings would maximize yield. The projected maximum yield is, simply,

    ybarmax = ybar + (Fbar2 - ybar) + (Sbar1 - ybar) + (Tbar2 - ybar)
            = 172.9 + (183.0 - 172.9) + (175.3 - 172.9) + (173.8 - 172.9)
            = 186.3

When one has obtained the projected optimum performance as above, one should
run the verification experiment. This is to be done by setting factor F at B, factor
S at 60 rpm and factor T at 82, to confirm that the actual yield is indeed close to
this projection. (As mentioned in Section 4.2, the verification experiment alone
puts the additivity assumption to test, and hence the acceptability of the main
factor model as the basis for performance optimization.)

4.6 OPTIMIZATION OF RESPONSE LEVEL AND VARIABILITY


When the additivity assumption holds, it is possible to estimate the main factor
effects using a single set of experiments based on the orthogonal design. However,
if the process has excessive variability because of the effect of factors not included
in the design, the main factor estimates produced by only one application of the OA
can be far from their true values. These estimates may be improved by replicating
the trials. The averages found by replication have less variability and thus improved
precision. Also, with replication, a single erroneous sample cannot distort the
results much.
Replication of the orthogonal experiments can also help us see (a) factors that affect the average performance, and (b) factors that affect the variability of performance. These two pieces of information, as we shall see soon, are of critical value in producing the robust product/process design, one of the most important goals of Taguchi methods.
The response table provides a quick way to estimate the main effects. A more
formal and precise assessment of the factor effects can be made, however, if we
compare the variability or variance effects (caused by the different factors and
those due to the different levels at which the investigator set a given factor during
experimentation).
The example we now give illustrates how orthogonally designed experiments
lead rapidly to the optimization of process design parameter settings to improve
manufacturing processes.

EXAMPLE 4.1: Ina Seito Company's Tile Manufacturing Experiment [4]. The Ina Seito Tile Company of Japan, in the late 1950s, faced the problem of high variability in the dimensions of the ceramic floor tiles it produced. Such
variability made many of the tiles unacceptable and reduced process yield. Analysis
of rejected tiles showed that tiles in the centre of the pile fired inside the kiln
experienced lower temperature than those near the periphery. Brainstorming by
Ina employees led to the listing of many process factors whose effects, they felt,
should be investigated.
Seven factors (designated A, B, C, D, E, F and G), each of which could be
set at two distinct levels in practice, were identified. The investigator assumed
initially that the effect of each of these factors was independent of the presence of
other factors (i.e., there were no interactions) and that the effects were additive.
Since the study involved seven factors, each of which could be set at two possible
treatments, the investigator selected an L8 orthogonal array (shown in Table 4.5) to
guide the statistical experiments. (In Chapter 6 we shall describe how one made
this choice.) All seven factors in these experiments concerned the apportionment

of materials or the tile making recipe. Table 4.5 shows the results of the eight
orthogonal experiments run.

TABLE 4.5
RESULTS OF RUNNING THE TILE-MAKING EXPERIMENTS

                    Orthogonal Array Columns                 No. of Tiles
Experiment     1     2     3     4     5     6     7         Found Defective

    1         A1    B1    C1    D1    E1    F1    G1             16/100
    2         A1    B1    C1    D2    E2    F2    G2             17/100
    3         A1    B2    C2    D1    E1    F2    G2             12/100
    4         A1    B2    C2    D2    E2    F1    G1              6/100
    5         A2    B1    C2    D1    E2    F1    G2              6/100
    6         A2    B1    C2    D2    E1    F2    G1             68/100
    7         A2    B2    C1    D1    E2    F2    G1             42/100
    8         A2    B2    C1    D2    E1    F1    G2             26/100

As mentioned earlier, a major advantage of using the OA is the simplicity


and efficiency in data analysis. The additivity assumption helps simplify data
analysis greatly, thereby reducing the need to do ANOVA and the F-test before
useful practical deductions from the experimental data can be made [6]. For a
full factorial design though, ANOVA and F-test are essential.
The effect of setting parameter A at the two treatments A1 and A2 would
be assessed as done in the yield maximization example in Section 4.5. Following
that procedure, the total number of tiles found defective with factor A set at A1
(because the effects of the other factors cancel out in this summation) would be
16 + 17 + 12 + 6 = 51
and the corresponding total with A set at A2 would be

    6 + 68 + 42 + 26 = 142

Thus the average per cent defectives with A set at A1 and A2 were (51/4) or 12.75% and (142/4) or 35.50%, respectively. Owing to the additivity assumption, this effect would be independent of the effect of the other process parameters B, C, D, etc. Table 4.6 shows the % defective calculations.
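The level-by-level totals that lead to Table 4.6 can be reproduced directly from the data of Table 4.5. A minimal Python sketch follows; the level assignments are simply read off the orthogonal-array columns printed in Table 4.5.

    import numpy as np

    defective = np.array([16, 17, 12, 6, 6, 68, 42, 26])   # tiles defective per 100, Table 4.5

    # Level (1 or 2) of each factor in each of the eight trials, read off Table 4.5
    levels = {
        "A": [1, 1, 1, 1, 2, 2, 2, 2],
        "B": [1, 1, 2, 2, 1, 1, 2, 2],
        "C": [1, 1, 2, 2, 2, 2, 1, 1],
        "D": [1, 2, 1, 2, 1, 2, 1, 2],
        "E": [1, 2, 1, 2, 2, 1, 2, 1],
        "F": [1, 2, 2, 1, 1, 2, 2, 1],
        "G": [1, 2, 2, 1, 2, 1, 1, 2],
    }

    for factor, assignment in levels.items():
        assignment = np.array(assignment)
        for level in (1, 2):
            total = int(defective[assignment == level].sum())   # e.g. A1: 51, A2: 142
            print(f"{factor}{level}: total = {total}, per cent defective = {total / 4:.2f}")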
We may now summarize the approach used here. Since one is aiming here
at minimizing the variability in tile size, one uses size variability under different
production conditions as the performance measure. The empirical study begins by
assuming additivity. This permits the direct use of an appropriate OA to guide the setting of the process parameters A, B, C, D, . . . in the study. The use of the OA later aids in the rapid identification of the optimum levels for the process factors. The optimum treatments (identified from Table 4.6) are:

    A1 B2 C2 D1 E2 F1 G2
A confirmation experiment run with these settings could verify if these settings

TABLE 4.6
SUMMARY OF ESTIMATED FACTOR EFFECTS

Factor          Total No. of      Per cent
Treatment       Defectives        Defective

   A1               51              12.75
   A2              142              35.50
   B1              107              26.75
   B2               86              21.50
   C1              101              25.25
   C2               92              23.00
   D1               76              19.00
   D2              117              29.25
   E1              122              30.50
   E2               71              17.75
   F1               54              13.50
   F2              139              34.75
   G1              132              33.00
   G2               61              15.25

indeed minimized the tile size variability. The actual results confirmed this [4]. Figure 4.2 shows the factor effects graphically.

Fig. 4.2 Graphic display of factor effects on % defective in tiles produced. (The factors and treatments appear on the horizontal axis.)

4.7 ORTHOGONAL ARRAYS vs. CLASSICAL STATISTICAL EXPERIMENTS
As remarked repeatedly in this chapter, a key assumption in Taguchi methods
is that the factor effects are additive. To determine the reasonableness of the
additivity assumption in a given application of these methods, Taguchi suggested
that one must always run the verification experiment. One major criticism of
Taguchi methods, however, is about the statistical purity of these methods when
viewed in the framework of classical statistical experiments [6 , 10, 16, 17].
In 1925, Fisher [2] proposed the classical statistical experiments for agri
cultural yield research. With classical experiments one can find out:
1. Main effects caused on the response (dependent) variable by the indi
vidual factors.
2. Interactions in which the effect due to one factor is dependent not
only on the level of that factor, but also on the level(s) of one (or more) other
interacting factor(s), as in the lithography example in Section 3.1.
3. The nature of the dependency between the factors and the response
variable.
Fisher suggested that one should use ANOVA to assess these effects. Undoubtedly,
the ideal investigation involving several factors would be a full factorial experiment
that includes all combinations of the treatments of the different factors. However,
full factorial experiments involve the largest number of individual trials for a given
number of factors and treatments. Since the publication of Fishers work, many
statisticians have proposed special experimental designs (combination of factors
and treatments) to study factor effects with fewer trials [3, 17, 18].
Each such special design has a rational relationship to the purpose of
experimentation, the needs of the investigator, and the physical limitations of the
experiments. All such designs begin with the statement of the investigator's objective and the identification of the factors that have the greatest potential influence upon response. Some common statistical designs are:

Completely randomized
Orthogonal
Factorial
Blocked factorial
Randomized factorial
Randomized block
Balanced incomplete block
Latin square, etc.
In contrast to such formal designs, some statisticians feel that parametric
design (or robust design) using OAs and the analysis proposed by Taguchi do not
form a formal statistical methodology [10, 16, 17]. They note specifically that
orthogonal arrays can overlook particular effects (e.g., a confounded interaction)
in exchange for general effects. The Taguchi school responds here by saying that
for the sake of a substantial reduction in experimental effort, one may initially
overlook interactions and run a verification experiment to assess later whether
such an omission was reasonable.
Provided no serious non-additive effects or interactions are present in the
relationship between performance and the design parameters, many studies
completed since the publication of Taguchi methods suggest these methods can be
quite useful in practice [5, 7, 19]. Advocates of Taguchi methods suggest that
even if the methods lack statistical sophistication, if one runs the experiments using
OAs and then verifies the conclusions by a verification experiment, the outcome
can be quite effective, useful, and efficient in leading to the rapid empirical
optimization of designs [5, 6, 7, 14].
However, interactions play a central role in seeking out the robust design.
The novel idea behind parameter design (page 13) is to minimize the effect of the
variation in the noise factors by choosing the settings of the design factors judiciously
to exploit the interactions between design and noise factors [32], rather than by
reaching for high precision and expensive parts, components and materials and
manufacturing control schemes.
In Section 4.6, we showed that Taguchi methods may estimate the factor
effects from the simple averaging of certain observations. Nonetheless, Taguchi
methods may also use ANOVA when appropriate to determine if the effect of a
particular factor on the response or its variability is significant. In particular,
F-tests on S/N ratios are common in robust design studies. In such studies one uses
ANOVA to compare the relative magnitudes of certain sums of squares, as one
does in classical statistical experiments.
The two examples we now give illustrate the application of ANOVA in
Taguchi methods. The first example shows how one determines the dof in a
design optimization problem. This count is a key parameter that guides the selection
of the appropriate OA on which the statistical experiments are to be based. The
second example illustrates the ANOVA steps in a multi-factor design optimization
investigation.
The notion of dof is an important one in statistical analysis. The number of
independent aspects associated with an experimental design (or a factor, or a
sum of squares) is called its dof. A statistical experiment with nine rows in the
THE FOUNDATION OF TAGUCHI METHODS: THE ADDITIVE CAUSE-EFFECT MODEL 75

matrix has nine dof. The proper counting of the dof is essential in the correct
analysis of the results obtained in statistical experiments.
In any ANOVA table, one computes the mean sum of squares for each
factor by dividing the factor's sum of squares by its dof. One computes the
experimental error variance, which equals the error mean square (Section 3.3), by
dividing the error sum of squares (Sections 3.3 and 3.5) by the dof for error.
Since each factor in the experiment discussed in Section 4.2 has three
treatments and the overall effect of each factor must satisfy Eqs. (4.1.2-4.1.5)
effectively, each factor has only two dof. Generally speaking, the dof associated
with a factor is one less than the number of different treatment levels at which
the investigator sets that factor during experimentation. One finds the dof for
error as follows:

    dof for error = (total no. of trials x number of repetitions) - 1
                    - total dof for factors and interactions

EXAMPLE 4.2: Finding the dof in an investigation [5]. An investigator plans to conduct experiments to study the main effects of factors A, B, C, D, E, and F, and the interaction effect A x B. Factor A is to be set at two levels, and the factors {B, C, D, E, and F} are each to be set at three levels. The dof in this experiment will be

    Overall mean            1 dof
    A                       2 - 1 = 1 dof
    B, C, D, E, F           5 x (3 - 1) = 10 dof
    A x B                   (2 - 1) x (3 - 1) = 2 dof
    Total                   14 dof

Hence, at least 14 experiments should be run. (In Chapter 8 we shall show that one would select here the L18 OA to plan and run 18 experiments. The four unused columns in the array do not hurt the analysis or the optimization procedure in any way. Rather, the extra four experiments improve the precision of the study.)
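This kind of dof bookkeeping is easy to script. The sketch below is a minimal Python rendering (the function name is ours, purely illustrative) applied to the factor list of Example 4.2.

    def total_dof(factor_levels, interactions=()):
        # 1 dof for the overall mean, (levels - 1) per factor, and the product of
        # the two factors' dofs for each two-factor interaction listed
        dof = 1 + sum(n - 1 for n in factor_levels.values())
        dof += sum((factor_levels[a] - 1) * (factor_levels[b] - 1) for a, b in interactions)
        return dof

    levels = {"A": 2, "B": 3, "C": 3, "D": 3, "E": 3, "F": 3}
    print(total_dof(levels, interactions=[("A", "B")]))   # 14, hence at least 14 trials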

EXAMPLE 4.3: A manufacturer experienced considerable variation in the hardness of car seats produced by injecting liquid ingredients plus some additives into a pre-heated mould. The investigator identified seven factors (each operable at 2 distinct levels) for investigation so that the non-significant factors could be weeded out and the significant factors could be set at their best levels [7].
Since the study involved seven independent factors, each operable at two distinct levels, the investigator selected the L8 OA to guide the experimental trials (see Fig. 6.1). Also, the investigator randomized the sequence in which these tests were run to minimize any systematic adverse influence of the uncontrolled factors. Table 4.7 shows the process parameters (factors) and their respective available settings. The investigator assigned the seven factors and their levels (the treatments) to the L8 array as done in Example 4.1. The experiments took three days to complete. During each trial 20 car cushions were made, cured, and checked for hardness in

TABLE 4.7
THE PROCESS FACTORS AND THE AVAILABLE TREATMENTS

    Process Factor                        Level 1      Level 2

A   DBTL additive/100 kg polyol              7            10
B   Mould temperature (°C)                  50            43
C   Isocyanate temperature (°C)             25            30
D   Ventilation control                  Current          New
E   Polyol temperature (°C)                 23            26
F   Ratio                                Current         Lower
G   Shot time (s)                            a             b

newtons, effectively replicating each of the eight different process settings 20 times. Reference [7] contains the original data for this problem.
The next step was the construction of the ANOVA tables using the observed
results both for variability of the process (by calculating first the S/N ratio, discussed
in Chapter 5), and for the mean hardness (the signal) produced. These different
steps produced the information required to lead to a robust process design. The
next task involved identifying process parameter levels that led (a) to minimum variability of hardness in the cushions produced, and (b) to the target hardness desired of the cushions. (The reader should review Sections 1.9 and 3.2 at this point.)
It was discovered during experimentation that three of the process factors (B, D, and F) produced very little process variability. They were therefore eliminated from consideration. The investigator tested the remaining factors using ANOVA to
determine how strongly these affected variability. Table 4.8 shows the ANOVA
results for process variability.
TABLE 4.8
THE ANOVA FOR PROCESS VARIABILITY (S/N RATIO)

    Process Factor              dof     Sum of Squares (SS)    Mean SS    F-Value

A   DBTL additive                1            11.15             11.15      11.42*
C   Isocyanate temperature       1             6.55              6.55       6.71
E   Polyol temperature           1             7.80              7.80       7.99
G   Shot time                    1            11.84             11.84      12.12*
    Error                        3             2.93              0.98
    Total                        7            40.27

* Significant at 95% confidence; F0.05(1, 3) = 10.13, as found from the F-table in Appendix A.

The F-Value column in Table 4.8 suggests that two process parameters, A and G, have significant influence on variability. Consequently, A and G should be set at their lower intensity levels, 7 and a, respectively. This completes the first basic step of Taguchi's two-step optimization procedure (see Section 6.8).
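These F-tests are easy to reproduce. A minimal sketch follows (Python, assuming SciPy is available); the sums of squares are those of Table 4.8, and the critical value agrees with the F-table figure quoted below the table.

    from scipy.stats import f

    error_ms = 2.93 / 3                 # error mean square from Table 4.8 (SS / dof)
    f_critical = f.ppf(0.95, 1, 3)      # about 10.13, i.e. F(1, 3) at 95% confidence

    for factor, ss in [("A", 11.15), ("C", 6.55), ("E", 7.80), ("G", 11.84)]:
        F_value = (ss / 1) / error_ms   # each factor has 1 dof, so mean SS = SS
        verdict = "significant" if F_value > f_critical else "not significant"
        print(f"{factor}: F = {F_value:.2f} ({verdict} at 95% confidence)")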
The second ANOVA focussed on the mean hardness produced by the
experimental trials. Table 4.9 displays the ANOVA data for the signal (mean
hardness).

TABLE 4.9
ANOVA FOR MEAN HARDNESS

    Process Factor              dof     Sum of Squares (SS)     Mean SS     F-Value

C   Isocyanate temperature       1          10416.75            10416.75    320.71*
D   Vent control                 1            384.51              384.51     11.93*
E   Polyol temperature           1            204.75              204.75      6.30*
F   Ratio                        1           5487.31             5487.31    168.94*
    Error                      155           5034.83               32.48
    Total                      159          21531.15

* Significant at 95% confidence

In later chapters (in particular, Chapter 5) we shall explain in greater detail


how we could use the information in the two above ANOVA tables in selecting
the optimum levels for the process parameters. We only remark now that, based
on the data in Tables 4.8 and 4.9, the process factor C (Isocyanate temperature)
would serve the best as the adjustment parameter (see Section 5.9). In the Taguchi methodology, an adjustment parameter is a design factor that has little effect on variability but a major effect on average performance. As described in [7], after optimization, the cushions built showed hardness in the range 221 ± 3.1 N. Hardness variability before this optimization was 220 ± 10 N.

4.8 SUMMARY
Contrary to what many engineers and scientists believe and practice, one cannot
obtain reliable and reproducible results from empirical investigations by changing
the variables one-at-a-time and observing the effect while holding the other factors
constant. The one-factor-at-a-time study misses interaction effects completely.
One runs a statistically designed experiment 'all-factors-at-once'. Yet, because of the soundness of the ANOVA theory behind it, such experimentation produces highly reliable and reproducible results. In statistical experiments one varies
several influencing factors together from trial to trial in a pre-planned, systematic
fashion. The special design, or structure, or plan used in a statistical experiment
adjusts the factor settings in the different trials such that maximum information
can be generated from a minimum number of trials.
In empirical optimization, ANOVA (combined with the F-test) identifies
which influencing factors have the largest impact on (a) the average level of
performance, and (b) the variability of the response variable. ANOVA also identifies
factors that do not influence either the performance, or its variability. The statistically designed experiments help in efficiently separating the 'trivial many' design parameters or process variables from the 'vital few' that the designer should set optimally, to make the design robust.
Taguchi's robust design procedure makes particular and extensive use of additive (or main effect) models and OAs rather than the classical full-factorial designs, a valuable shortcut.

EXERCISES
1. Using Eqs. (2.3.9) and (2.3.7), prove that the average error (e7 + e8 + e9)/3 in Section 4.2 will have the variance (1/3)(σ_e)².
2. A study involved 32 experiments with nine control factors to help optimize
the routing process referred to in Exercise 3.1 (see [14]). Table E4.1 shows the factor
settings used and the resulting average router life observed in each experiment.
By summing and averaging appropriate observations, estimate the main
factor effect for each control factor and identify the optimum setting for each
factor (ignoring any interaction effects) to maximize router bit life.

TABLE E4.1
RESULTS OF EXPERIMENTS CONDUCTED TO STUDY ROUTER LIFE

                        xy-    In-           Spin-  Suction  Stack                  Observed
Experiment   Suction   Feed    Feed    Bit   dle    Foot     Height   Depth  Speed  Life

     1          1        1       1      1      1      1        1        1      1      3.5
     2          1        1       1      2      2      2        2        1      1      0.5
     3          1        1       1      3      3      1        2        2      1      0.5
     4          1        1       1      4      4      2        1        2      1     17.5
     5          1        2       2      3      1      2        2        1      1      0.5
     6          1        2       2      4      2      1        1        1      1      2.5
     7          1        2       2      1      3      2        1        2      1      0.5
     8          1        2       2      2      4      1        2        2      1      0.5
     9          2        1       2      4      1      1        2        2      1     17.5
    10          2        1       2      3      2      2        1        2      1      2.5
    11          2        1       2      2      3      1        1        1      1      0.5
    12          2        1       2      1      4      2        2        1      1      3.5
    13          2        2       1      2      1      2        1        2      1      0.5
    14          2        2       1      1      2      1        2        2      1      2.5
    15          2        2       1      4      3      2        2        1      1      0.5
    16          2        2       1      3      4      1        1        1      1      3.5
    17          1        1       1      1      1      1        1        1      2     17.5
    18          1        1       1      2      2      2        2        1      2      0.5
    19          1        1       1      3      3      1        2        2      2      0.5
    20          1        1       1      4      4      2        1        2      2     17.5
    21          1        2       2      3      1      2        2        1      2      0.5
    22          1        2       2      4      2      1        1        1      2     17.5
    23          1        2       2      1      3      2        1        2      2     14.5
    24          1        2       2      2      4      1        2        2      2      0.5
    25          2        1       2      4      1      1        2        2      2     17.5
    26          2        1       2      3      2      2        1        2      2      3.5
    27          2        1       2      2      3      1        1        1      2     17.5
    28          2        1       2      1      4      2        2        1      2      3.5
    29          2        2       1      2      1      2        1        2      2      0.5
    30          2        2       1      1      2      1        2        2      2      3.5
    31          2        2       1      4      3      2        2        1      2      0.5
    32          2        2       1      3      4      1        1        1      2     17.5

3. Use the F-table in Appendix A to verify that all process factors (C, D, E,
and F) shown in Table 4.9 significantly affect mean cushion hardness. What dof
for the F-statistic would be used here? How confident are you of the assertion
that all factors affect hardness?
Optimization Using Signal-to-Noise Ratios
5.1 SELECTING FACTORS FOR TAGUCHI EXPERIMENTS
With product and process features rapidly growing, it is nearly impossible today
to design a soundly performing product using only the first principles of science.
These principles often help the designer in the selection of the DPs to create a
product with the desired performance. However, the designer rarely has control
over factors beyond the DPs such as voltage fluctuation, raw material variations
during manufacturing, load variations in service, corrosion, etc. known as noise,
these factors often have large effect on performance. To produce a quality design
the designer must be aware of these effects also, besides the first principles of
science.
When a mathematical model expressing performance as a function of the
different DPs is available, one is often able to optimize the settings of these parameters. However, such optimization quickly becomes unwieldy and even
impossible when the model must include also the environmental and other sources
of disturbance (Table 1.4). In particular, a robust design (for which the output must
stay at or very near target performance) cannot be reached when the designer has
only limited knowledge of the effect of the factors outside his control.
According to Taguchi, variability, which sums up the effect of all factors not
in the designers control, is the primary obstacle in achieving robust performance.
A rise in a vehicle's fuel consumption, or an undesirable fluctuation in the thickness of sheets rolled by a rolling mill, is perhaps typical rather than an exception. Many
factors that the user of the product/process does not control may cause performance
to vary. Thus, quality design cannot be complete when the designer succeeds in
reaching only the functional design (Section 1.7). Quality design, Taguchi suggested,
should include performance variability reduction and hence aim at robustness.
Taguchi's methods may not be statistically pure [6, 17]. However, the engineering insight Taguchi has shown is perhaps rare. His methods enable designs
to achieve (a) minimum dispersion in performance about target, (b) minimum
sensitivity to variations transmitted from components, and (c) minimum sensitivity
to environmental noise [16]. Taguchi aimed at making the design robust first,
followed by an adjustment to put performance at the desired target. The task begins
by recognizing that the different factors influencing performance belong to two
distinct categories: Design parameters and Noise factors.
Design parameters (DPs) are the distinct and intrinsic features of the process
or the product that influence and determine its performance. The designer selects
the nominal settings of these parameters such that the resulting performance is on
target. These settings also define the design specification for the product or process
in question.

Noise factors are those factors that are either too hard or uneconomical to
control, even though these may cause unwanted variation in performance.
Table 1.2 summarizes some typical noise factors commonly encountered.
In order to achieve on-target performance with minimum variability, the designer should find ways to minimize the disturbing influence of the hard-to-control factors among these. Taguchi proposed that whenever one does not completely know the
effect due to the different factors, one should empirically identify the optimum
settings of the DPs by doing certain special experiments. The best settings, Taguchi
showed, may be discovered by systematically varying the DPs in experiments.
One conducts these experiments directly on the prototype product or process to seek
out values for the parameters that minimize its sensitivity to the uncontrolled factors
by judiciously exploiting any DP-noise interactions present [32]. Taguchi suggested that this should be done after the functional design is complete (Section 1.7).
The plan that guides these experiments is statistical in nature, the experimental
design being fractional factorial rather than full factorial (Section 4.2). The first
step in these experiments involves developing the appropriate design parameter
matrix (the control OA, Fig. 5.1). This array shows the test settings for each DP to be used in the experimental trials. A similarly developed noise factor array specifies the test levels at which one would deliberately set and study some of the noise factors. Again, the source of noise may be the catalyst used, the operators, machine differences, batch-to-batch variation in parts purchased, or material quality. After one has set up the control and noise arrays, one does the actual experiments and observes the output (the resulting response). From the observed responses one computes the performance statistic values {μ_i, σ_i², S/N_i}. The analysis of the statistics obtained from the different experiments then follows. This analysis predicts the optimal settings of the design parameters and the improved level of the design's performance that would result from these settings. The final step in optimization involves an empirical verification that the optimum design parameter settings thus identified would actually deliver a performance close to the projected improved performance.

Fig. 5.1 The parametric experiment plan. (The figure shows the control orthogonal array, whose rows give the control-factor settings for each experiment; the noise orthogonal array, whose rows give the noise-factor test settings; the observed performance Z_ij obtained by running each control setting i under each noise condition j; and the performance statistics μ_i, σ_i² and S/N_i computed for each control-array row.)
The control and noise arrays play a key role in ensuring that one runs only
the necessary experimental trials and nothing more. As Fig. 5.1 shows, the
columns of the control array represent the different parameters changed in
experimentation. The rows represent the different combinations of the settings of
these parameters used in the particular experimental trials. As shown, for each
combination of design parameter (control factor) setting i (e.g., 3 2 3 ... 2 ) and
noise factor setting j (e.g., 1 1 2 . . . ) , one obtains an observed performance Ztj.
Later, one summarizes the different {Z;)} into performance statistics (//,, a 2), and
a special metric known as the S/N ratio.
One may need several iterations of such experiments to exploit any DP-noise
interactions to identify precisely the DP setting at which the effect of noise factors
will be sufficiently small. This final setting identifies the robust design (Section 1.9).
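The crossed layout of Fig. 5.1 can be sketched in a few lines of Python. The control and noise arrays below, and the observe() function, are hypothetical stand-ins (in a real study each Z_ij is a measurement on the prototype); the sketch only shows how every control-array row is run against every noise-array row and then summarized into a mean, a variance, and an S/N ratio.

    import numpy as np

    # Hypothetical inner (control) array: four trials, three design parameters at two levels
    control_oa = np.array([[1, 1, 1],
                           [1, 2, 2],
                           [2, 1, 2],
                           [2, 2, 1]])

    # Hypothetical outer (noise) array: four deliberate settings of two noise factors
    noise_oa = np.array([[1, 1], [1, 2], [2, 1], [2, 2]])

    def observe(control_row, noise_row):
        # Stand-in for the measured performance Z_ij (illustrative numbers only)
        return 50.0 + 5.0 * control_row[0] - 1.5 * noise_row[0] + 0.8 * noise_row[1]

    for i, c in enumerate(control_oa, start=1):
        z = np.array([observe(c, n) for n in noise_oa])   # run every noise condition
        mu, var = z.mean(), z.var(ddof=1)
        sn = 10 * np.log10(mu ** 2 / var)                 # S/N ratio for this control setting
        print(f"Expt {i}: mean = {mu:.1f}, variance = {var:.2f}, S/N = {sn:.1f} dB")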

5.2 TO SEEK ROBUSTNESS ONE SHOULD MEASURE PERFORMANCE BY S/N RATIOS
In recent years the features of many products that seem to bring the highest satisfaction to their users have been carefully studied. These studies reveal that on-target performance usually satisfies the user best, and that target ± δ type tolerance specifications to represent the acceptable range of product quality are often inadequate. Products falling within the ± δ tolerances continue to cause a quality loss, experienced by the user as adjustments he must make and even repairs. This is usually so when performance is near the edge of the customer's tolerance limit. Also, parts and components that just meet tolerance specifications may cause problems because of the catastrophic stacking of tolerances [1]. Not surprisingly, such events add to in-house costs for the manufacturer and also affect his sales and reputation. The Sony-Japan vs. Sony-U.S.A. case (Fig. 1.2) revealed graphically that the best way of meeting customer expectations is to make products with minimum variability and close to the target, instead of merely meeting specification tolerances. A similar well-documented study comes from Mazda [1]. All such experiences suggest that it is inadequate to quantify the loss due to quality by the traditional dichotomous model

    L(y) = 0 if |y - target| < δ0;  A0 otherwise

where A0 is the cost of repair or replacement. The quadratic loss function

    L(y) = k(y - target)²

appears more appropriate.
Since one may seek to maximize a performance aspect or minimize it, variations to the above loss function form are available. If the performance characteristic y happens to be such that the smaller it is the better (as in pollution generated per MW of electricity produced), then one expresses the loss best by the expression

    L(y) = k y²

On the other hand, if the performance characteristic is such that the larger it is the better, as with the bonding strength (y) of adhesives formulated, then

    L(y) = k (1/y²)

In general, loss functions can be asymmetric. (An illustration of asymmetric loss functions appears in Example 11.2.)
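A minimal Python sketch of these three loss forms follows; the loss coefficient k and the sample values are purely illustrative.

    def loss_nominal_is_best(y, target, k=1.0):
        # quadratic loss: grows with the squared deviation from target
        return k * (y - target) ** 2

    def loss_smaller_is_better(y, k=1.0):
        return k * y ** 2

    def loss_larger_is_better(y, k=1.0):
        return k / y ** 2

    print(loss_nominal_is_best(10.5, target=10.0, k=50.0))   # 12.5
    print(loss_smaller_is_better(0.5, k=50.0))               # 12.5
    print(loss_larger_is_better(20.0, k=50.0))               # 0.125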
Market research with customer experiences may help quantify the true loss
function, which can then become the basis for design optimization [5, 16]. Instead
of directly using the loss function, however, Taguchi recommended several
special forms to which experimental data on product/process performance should
be transformed before optimization. Taguchi called these special forms S/N ratios.
The rationale for this switchover (to S/N ratios) instead of working directly with the quality characteristic measurements is given in the following:
The S/N ratio is a concurrent statistic, a special kind of data summary. A concurrent statistic is able to look at two characteristics of a distribution and roll these characteristics into a single number, or figure of merit. An example can illustrate this well.
The objective of robust design is specific: robust design seeks optimum settings of DPs to achieve a particular target performance value under most noise conditions. Suppose that in a set of statistical experiments one finds the average quality characteristic to be μ and the standard deviation (caused by the noise factors) to be σ. Let the desired performance be μ0. Then one must make an adjustment in the design to get performance on target, by adjusting the value of a control factor so as to scale performance by the factor (μ0/μ). However, this also affects the standard deviation, which becomes (μ0/μ)σ (using Eq. (2.3.7)). Since delivering on-target performance is the goal, the loss after one has adjusted the process is now due only to the variability remaining from the new standard deviation (of performance) (see Section 1.4). This equals

    Loss after adjustment = k (μ0/μ)² σ² = k μ0² σ²/μ²
                          = constant/(μ²/σ²)

The factor (μ²/σ²) reflects the ratio of the average performance μ² (which is the signal) and σ² (the variance in performance), the noise.
Maximizing μ²/σ², or the S/N ratio, therefore becomes equivalent to minimizing the loss after adjustment. Additivity of design parameter effects is a primary requirement that permits use of the economical orthogonal statistical experiments in design optimization. For improving additivity (see [14], p. 297) one often takes the logarithm of (μ²/σ²) and expresses the S/N ratio in decibels, as

    S/N = 10 log10 (μ²/σ²)     (5.2.1)



It takes about half a decibel gain to obtain a 10% improvement in (μ/σ). The range of values of μ²/σ² is (0, ∞), while the range of values of S/N is (-∞, +∞). The maximization of the S/N ratio by a suitable selection of the DPs makes the design robust, a major goal of quality engineering.
Let y1, y2, . . . , yn represent multiple values of a performance characteristic Y observed in the parameter experiments. Then the following respective S/N ratios (denoted by S/N(θ)) become the most appropriate choices in guiding the optimization of design parameter settings for the cases stated [5].
If the nominal value for a characteristic Y is the best for the customer, then the designer should maximize the S/N ratio

    S/N(θ) = 10 log10 (ybar²/s²)     (5.2.2)

where

    s² = Σ (y_i - ybar)²/(n - 1)

In the above procedure, one repeats observations (under the diverse settings of the noise factors in the noise OA) n times at each selected combination of DP settings in the control OA. The idea that the nominal response is the best implies that if all observations {y_i} were exactly at the average (i.e., at ybar) and thus the variability in {y_i} was nil (i.e., s² was zero), the design would be the best. If being on target T is the best, then one should maximize

    S/N(θ) = 10 log10 (T²/s²)     (5.2.3)

where

    s² = Σ (y_i - T)²/(n - 1)

If the diminishing characteristic of response Y results in improved product/process performance, one should use

    S/N(θ) = -10 log10 [Σ y_i²/n]     (5.2.4)

If the larger the characteristic Y, the better it is, then

    S/N(θ) = -10 log10 [Σ (1/y_i²)/n]     (5.2.5)

If one measures product performance on a binary (GO/NO GO) scale, Taguchi recommends that one should use the following performance statistic:

    S/N(θ) = 10 log10 [p/(1 - p)]     (5.2.6)

where p is the proportion of products found good in the parametric experiments.
In design optimization one attempts always to maximize the S/N ratio.
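For reference, the S/N forms of Eqs. (5.2.2)-(5.2.6) are straightforward to code; the sketch below is a minimal Python rendering (the replicate data in the last line are illustrative, not from the text).

    import numpy as np

    def sn_nominal_is_best(y):                    # Eq. (5.2.2)
        y = np.asarray(y, dtype=float)
        return 10 * np.log10(y.mean() ** 2 / y.var(ddof=1))

    def sn_target_is_best(y, target):             # Eq. (5.2.3)
        y = np.asarray(y, dtype=float)
        s2 = np.sum((y - target) ** 2) / (len(y) - 1)
        return 10 * np.log10(target ** 2 / s2)

    def sn_smaller_is_better(y):                  # Eq. (5.2.4)
        y = np.asarray(y, dtype=float)
        return -10 * np.log10(np.mean(y ** 2))

    def sn_larger_is_better(y):                   # Eq. (5.2.5)
        y = np.asarray(y, dtype=float)
        return -10 * np.log10(np.mean(1.0 / y ** 2))

    def sn_binary(p):                             # Eq. (5.2.6), p = proportion found good
        return 10 * np.log10(p / (1.0 - p))

    print(sn_nominal_is_best([10.1, 9.8, 10.3, 9.9, 10.0]))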
In summary, the S/N ratio is a predictor of quality loss that isolates the sensitivity of the product's function to noise factors. In robust design one minimizes
this sensitivity to noise by seeking combinations of the DP settings that maximize
the S/N ratio. Further, use of the S/N ratio attains robustness independent of target

setting. (As pointed out in Section 5.1, frequently a designer is unable to adjust
performance to target without also affecting variability.)
The additivity of the DP effects also becomes maximum when one uses the
most appropriate S/N ratio. Without the presence of additivity one may have to
conduct more experiments in order to consider the effects of DP-DP interaction
and to achieve good predictability of performance at the optimized parameter
settings.
One is usually able to select the most appropriate S/N ratio from among several candidate S/N ratios by experimenting with special OAs [5]. When one does ANOVA for the S/N ratio, one reserves, by convention, a few dof for estimating the error variance. A smaller error variance of the S/N ratio (compared to the mean square for the control factor effects) signifies that the additivity of the chosen S/N ratio is better (or interaction effects are minimal); hence the chosen S/N ratio is more suitable [5].
Parameter optimization using the S/N ratio with DP-DP interaction minimized often becomes a simple two-step procedure [8]. First, one maximizes the S/N ratio without being concerned about the mean performance. This results in robustness with respect to the uncontrolled variables. Next, one adjusts the mean performance by using an adjustment (control) factor to bring the mean on target. (In Section 6.8 we show that control factors having little or no effect on S/N ratios but a high influence on mean performance serve the best as adjustment parameters.) Equations (5.2.1)-(5.2.6) show the different useful S/N ratios.

5.3 S/N RATIO IN OPTIMIZATION: AN EXAMPLE


A set of nine welding experiments aimed at minimizing void volume (measured
in mm3) at the weld interface produced the observations shown in Table 5.1.

TABLE 5.1
WELDING SETTING OPTIMIZATION TESTS

Welding          Replicated Observations             S/N
Setting        y1      y2      y3      y4           Ratio

   1          100      95     125      85          -40.2
   2          110     105     130      90          -40.8
   3          125     115     139      99          -41.6
   4           84      79     104      72          -38.6 (maximum)
   5           92      85     110      80          -39.3
   6           99      95     121      94          -40.2
   7           99      96     120      87          -40.1
   8          106     105     131      97          -40.9
   9          118     117     140     105          -41.6

Minimization of voids is obviously the smaller-the-better case (Eq. 5.2.4). The investigator here obtained four replicates at each welding setting to help find the required S/N statistic. (Note that replication lets the investigator simulate the effect of the uncontrolled noise factors on performance. All S/N ratios shown in this example have a negative sign. One may think of this as the case when there is more noise than signal, which is true, as one may verify by computing the row averages (Σ y_i/4) and the standard deviations.)
One may now find the best welding setting; at this setting the S/N ratio is maximum. The S/N ratios tabulated suggest that Welding Setting 4 (S/N ratio = -38.6) is the setting expected to produce the smallest volume of welding voids.
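The tabulated S/N values are easy to check against Eq. (5.2.4). A minimal Python sketch, using three of the rows of Table 5.1, follows.

    import numpy as np

    # Replicated void-volume observations (mm^3) for three of the welding settings in Table 5.1
    observations = {1: [100, 95, 125, 85],
                    4: [84, 79, 104, 72],
                    9: [118, 117, 140, 105]}

    for setting, y in observations.items():
        y = np.asarray(y, dtype=float)
        sn = -10 * np.log10(np.mean(y ** 2))             # smaller-the-better S/N, Eq. (5.2.4)
        print(f"Setting {setting}: S/N = {sn:.1f} dB")   # -40.2, -38.6 and -41.6, as tabulated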

5.4 NOT ALL PERFORMANCE CHARACTERISTICS DISPLAY ADDITIVITY
As already mentioned, the goal of running robust design experiments is to be able
to uncover the effect control and noise factors have on performance, in order to
predict and optimize the product's performance and robustness. If the effects of
the control factors are not additive [i.e., they do not follow the superposition or
separability principle (see Section 4.1.7)], however, such prediction would involve
considerably more experimentation, especially in the evaluation of DP-DP
interactions.
Additivity of effects exists if performance is influenced only by the main
effects of the control factors (DPs) and no DP-DP interaction effects are present.
When only main effects need to be discovered, it suffices to run only the small
number of experiments required by the appropriate control OA. Such experiments
(though often run without the full appreciation of the underlying assumptions)
are not uncommon in mechanical, chemical, and metallurgical engineering. If
interaction effects are also present, to obtain predictability of results more experiments
with a more elaborate combination of factor settings will have to be run. This is
often an expensive undertaking.
If strong DP-DP interactions are present, it may be difficult, as Taguchi
pointed out, to achieve optimum process performance in the plant when the process
is taken out of the laboratory and scaled up. Similarly when design work is finished
and the product is fabricated and delivered to the field, its performance would
probably not be robust, without careful optimization.
The complexities introduced by interactions underscore Taguchi's argument for identifying a quality (performance) characteristic and an associated S/N ratio
such that the DP effects are additive. One should identify this quality characteristic
before running the optimization experiments.
However, the identification of such a quality characteristic is not easy.
Experts provide the following guidelines for selecting the right quality
characteristics; these guidelines maximize the chances for additivity [5].
1. The quality characteristic (y) should directly be related to the energy
transfer associated with the basic mechanism of the process or the product. For
example, in the study of lithography, instead of studying yield (which showed
strong interaction, Section 3.1), experts would recommend the measurement of
line width under different experimental conditions. Here, the technical knowledge
specific to lithography would be most useful in identifying that line width rather
than yield is a better energy-affected quality characteristic. Similarly, to prevent
sagging in spray paint design, it is best to measure the diameter of drops created
rather than the distance of sagging. In chemical processes, tracking the

concentration of the reactants and the products resulting from the experiment is
far more beneficial than the yield of only the desired product.
2. As far as possible, the measured quality characteristic should be a
continuous variable.
3. The quality characteristic should be monotonic, monotonically rising (or
falling) with respect to the control factors. This may be difficult to judge before
the experiments are actually run. However, the lack of monotonicity makes the study of interactions, and hence the conduct of a larger number of experiments, critically important, which is a difficult task when seven or eight control factors are involved.
4. The quality characteristics selected should be easy to measure and complete; they should cover all (performance) dimensions of interest.
If the effort to eliminate interactions still fails, optimization can be achieved
by expanding the experimental design to explicitly include interactions. In
Section 8.4 and Chapter 10 we discuss the methods that apply in such cases.
The S/N ratio that best suits a particular problem may be determined by
performing ANOVA of the S/N ratios. The most appropriate S/N ratio will result
in the smallest relative error mean square or the smallest ratio of error mean
square and the mean square for the factor effects. (This results from the lower
level of interactions or improved additivity of main factor effects. See [5], p. 208.)
Besides measuring the quality or performance characteristic, while the
experiments are being run, one should also measure productivity and/or cost
factors in order to eventually achieve economic trade-offs in design decisions.

5.5 THE OA AS THE EXPERIMENT MATRIX


Many conventional experimental investigations study the influence of factors
one at a time [12]. This is so primarily because real products and processes
usually involve a large number of design features, manufacturing conditions,
operating conditions, environmental factors, etc. and studying them one at a time
provides apparent convenience. However, as pointed out in Chapter 3, appropriate
statistical experiments alone can provide the precision and reliability required in
such studies. Also, perhaps paradoxically, statistical experiments cost the enterprise
less, not more, by requiring only the fewest trials to complete such studies.
Further efficiency is possible. If one assumes additivity, it is often possible
to group together the factors or variables into OAs and then to study their effects
with further savings in effort. Orthogonally designed statistical experiments
allow several variables to be studied simultaneously and also economically. Also,
OAs have a special structure, which enables the experimenter to extract rapidly more precise information [i.e., estimate the effects with a smaller variance (see Section 4.2)] than if he used the one-factor-at-a-time approach.
Dehnad ([14], p. 292) has noted that the outer OA is more efficient in
simulating the effect of noise than is Monte Carlo simulation. The limitations of
the outer arrays as noise simulators are discussed in [16].
In Section 3.5 it was pointed out that in the successive experiments run using

an OA, the investigator changes values (settings) of the variables under study only as specified in that OA. For instance, the row entries in the array in Table 4.1 (e.g., P2 Q2 R3 S1 in Experiment 5) rigidly indicate how these settings should be changed on an experiment-to-experiment basis. Subsequently, the orthogonal structure of the array makes it possible for the main effect of each variable to separate mathematically from the main effect of the other variables.
Once the investigator has run a complete set of orthogonally designed
experiments and analyzed the data obtained, he runs the confirmation or verification
experiment. The confirmation runs aim at seeking verification that
1. the assumptions made in setting up the original product/process
performance model especially additivity and the absence of DP-DP interaction
effects are valid and reasonable; and
2. when one sets the parameters at their optimum values as suggested by
the analysis of experimental results, one actually achieves the predicted target
performance.

5.6 THE AXIOMATIC APPROACH TO DESIGN
So far the focus of our discussion has been on a single performance characteristic,
although one has also used the Taguchi method in situations in which more than
one quality feature is to be optimized [5]. A recently proposed design formalism
called the axiomatic approach to design (see Suh [13]) makes the simultaneous
handling of more than one quality characteristic a stated goal. In this approach,
one states the design objectives as specific Functional Requirements (FRs),
function here implying a performance characteristic desired of the product. For
instance, passive electronic filters help measure displacement signals generated
by strain gauge transducers. It is necessary that the filter designer minimizes any distortion in the cutoff frequency output (FR1), and also achieves a full-scale deflection of the galvanometer beam (FR2). The task of design then becomes
successfully mapping these two FRs to the real, physical entity (the design)
characterized in terms of DPs. Certain principles, termed design axioms, guide the
process that produces a good design.
The axiomatic approach recommends that the designer should attempt to
satisfy the perceived customer needs with a minimal set of independent FRs. As the
number of FRs increases, the design becomes more and more complex. Therefore,
one should satisfy only the absolutely essential FRs and not overdesign. Thus the
axiomatic approach formally states as the objective of design what value engineering
attempts to achieve. The design process begins with the articulation of the FRs
that satisfy a given set of needs. The design ends with the creation of a physical
entity satisfying these FRs. The case study discussed in Chapter 8 illustrates how
some real product designs may involve optimization of more than one FR.
The axiomatic approach emphasizes that the different FRs the designer aims
at satisfying should be independent of each other. If the design proposed (by
specifying values of the DPs) makes the FRs interdependent, then aspects or parts
of the design should be separated to decouple the FRs (see [13]).

Additionally, the axiomatic approach requires that only a minimum measure


of knowledge, called information, should be required to satisfy a given FR.
Information here relates to the success probability of achieving the specified FRs in product design; it is defined as the logarithm of the inverse of this probability. In
axiomatic design, one interprets information as

range of performance delivered by design


Information = log 10
tolerance as required

Suh shows that if one minimizes information during design, one minimizes also the
variability due to noise or the uncontrolled factors, as attempted in Taguchi's robust design. The undesirability of the effect of noise (on performance) emphasized
in both the Taguchi methods and the axiomatic approach is noteworthy.

5.7 SUMMARY
The objective of robust design is specific: by judiciously exploiting DP-noise
interactions it seeks optimum settings of DPs to achieve a pre-specified target
performance under most noise conditions. Experiments aimed at reaching the robust
design obtain measurements of S/N ratios to discover the effect control and noise factors have on performance, in order to predict and optimize the product's
performance and robustness. The S/N ratio is a concurrent statistic. It is able to
look at two characteristics (here the deviation of performance from the target and
its variability) of a distribution and roll these characteristics into a single number,
or figure of merit.

EXERCISES
An investigator used the L8 orthogonal design (Appendix B) to conduct replicated
experiments involving four parameters (adhesive type, conductor material, curing
time, and integrated circuit post coating) to maximize bonding of mounted ICs on
a metallized glass substrate [15].
The parameter settings and observations were as in Table E5.1.

TABLE E5.1
IC BONDING TEST RESULTS

Set Adhesive Conductor Time Coating Replicated Observations

1 D Cu 90 Sn 73.0 73.2 72.8 72.2 76.2


2 D Cu 120 Ag 87.7 86.4 86.9 87.9 86.4
3 D Ni 90 Ag 80.5 81.4 82.6 81.3 82.1
4 D Ni 120 Sn 79.8 77.8 81.3 79.8 78.2
5 H Cu 90 Ag 85.2 85.0 80.4 85.2 83.6
6 H Cu 120 Sn 78.0 75.5 83.1 81.2 79.9
7 H Ni 90 Sn 78.4 72.8 80.5 78.4 67.9
8 H Ni 120 Ag 90.2 87.4 92.9 90.0 91.1

1. Confirm that the factor effects are as follows:

Factor:             Adhesive   Conductor    Curing
                    Type (1)   Material (2) Time (3)   (1)x(2)   (1)x(3)   (2)x(3)   Coating

Average strength      1.96       0.73         5.44       0.52     -0.25      0.82      8.71
log10 (s)             0.423      0.059       -0.090      0.057    -0.041    -0.289     0.006

We have expressed variability here as log10(s), where s is given by

    s² = Σ (y_i - ybar)²/(n - 1)

2. Develop a graphical display of the factor effects on average bond strength.


Identify the factor settings that would maximize bonding.
3. Use the larger-the-better S/N ratio to confirm that in order to maximize bonding,
adhesive type should be set at H, cure time should be 120, and post coating
should be Ag.
4. How do these settings compare with the parameter settings necessary to
minimize variability (log10 (s))? (A computational sketch for these exercises follows below.)
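The following is a minimal computational sketch, mine rather than part of the exercise solution, showing one way to tabulate the per-run statistics and the factor-level averages from Table E5.1. The data arrays are transcribed from the table; the larger-the-better S/N formula and the level-averaging of a statistic are standard, but the function names are my own.

import math

# Replicated bond-strength observations from Table E5.1 (one row per L8 run)
runs = [
    ("D", "Cu",  90, "Sn", [73.0, 73.2, 72.8, 72.2, 76.2]),
    ("D", "Cu", 120, "Ag", [87.7, 86.4, 86.9, 87.9, 86.4]),
    ("D", "Ni",  90, "Ag", [80.5, 81.4, 82.6, 81.3, 82.1]),
    ("D", "Ni", 120, "Sn", [79.8, 77.8, 81.3, 79.8, 78.2]),
    ("H", "Cu",  90, "Ag", [85.2, 85.0, 80.4, 85.2, 83.6]),
    ("H", "Cu", 120, "Sn", [78.0, 75.5, 83.1, 81.2, 79.9]),
    ("H", "Ni",  90, "Sn", [78.4, 72.8, 80.5, 78.4, 67.9]),
    ("H", "Ni", 120, "Ag", [90.2, 87.4, 92.9, 90.0, 91.1]),
]

def mean(y):
    return sum(y) / len(y)

def log10_s(y):
    # log10 of the sample standard deviation of one run
    ybar = mean(y)
    s2 = sum((v - ybar) ** 2 for v in y) / (len(y) - 1)
    return math.log10(math.sqrt(s2))

def sn_larger_the_better(y):
    # S/N = -10 log10( (1/n) * sum(1 / y_i^2) )
    return -10 * math.log10(sum(1.0 / v ** 2 for v in y) / len(y))

def level_averages(col, stat):
    # Average the chosen statistic over the runs at each level of one factor
    levels = sorted({r[col] for r in runs})
    return {lv: mean([stat(r[4]) for r in runs if r[col] == lv]) for lv in levels}

for name, col in [("Adhesive", 0), ("Conductor", 1), ("Time", 2), ("Coating", 3)]:
    avg = level_averages(col, sn_larger_the_better)
    print(name, {lv: round(v, 2) for lv, v in avg.items()})

Comparing the level averages of the S/N ratio (and, analogously, of log10(s)) for each factor is what Exercises 3 and 4 ask the reader to do by hand.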
Use of Orthogonal Arrays
6.1 WHAT ARE ORTHOGONAL ARRAYS?
Orthogonal arrays have already been discussed in several places in the preceding
chapters. They are special experimental designs that require only a small number
of experimental trials to help discover main factor effects (see Section 4.1). OAs
are fractional factorial designs and symmetrical subsets of all combinations of
treatments in the corresponding full factorial designs. In this chapter we provide a
comprehensive discussion on the use of OAs in design optimization studies. We
first position OAs appropriately in the larger framework of statistically planned
experiments.
We briefly introduced, in Section 4.2, certain special statistical experiments
known as matrix experiments. Matrix experiments are a set of statistical experiments
in which the investigator varies the settings of the different factors from
experiment to experiment. When all the experiments are complete, one
analyzes the observations to determine if changing the various factors had any
effect on the response. In some cases the factor effects are additive, linear, and
separable. In such cases special matrix experiments known as OA designs allow
us to study the main factor effects of several design parameters at once and
efficiently.
Table 4.1 shows a typical OA (experimental) design. This design uses the
L9 standard OA (see Appendix B).
The experiment guided by an OA may not use all columns, but it must use
every row of the array. The number of rows in an OA determines the total number
of experiments to be run in the investigation. Therefore, Table 4.1 shows nine
experiments that must be run there.
Statisticians construct the OAs such that the vertical columns of these
arrays acquire a special combinatorial property: in any pair of columns in an
OA, all combinations of the treatments (of the two factors assigned to this pair)
occur and they do so an equal number of times. In Table 4.1, for instance, the
two columns assigned to factors Q and R together contain all 9 combinations
possible between treatments {Q1, Q2, Q3} and {R1, R2, R3}. Observe also that
any treatment pair (e.g. Q3R1) occurs once and only once between the Q and R
columns. And this is true for every pair of the orthogonal matrix columns in
Table 4.1.
The above property is called the balancing property of OAs. This balancing
property permits the use of simple arithmetic to find the effect of the experimental
factors (P, Q, R, etc.) on the response under study, as explained in Section 4.2.
Clearly, not every arbitrarily made up treatment array can have the above
properties and hence be orthogonal. The label orthogonality implies that the
entries in the array satisfy a special mathematical condition. Suppose we define
Y_i, a weighted sum of nine experimental observations x_1, x_2, . . . , x_9, as

    Y_i = w_i1 x_1 + w_i2 x_2 + w_i3 x_3 + . . . + w_i9 x_9

such that the weight factors {w_ij} satisfy the condition

    w_i1 + w_i2 + w_i3 + . . . + w_i9 = 0

Then, in mathematical terminology, one calls Y_i a contrast. One calls the two
contrasts Y_1 and Y_2 orthogonal if the inner product of the vectors corresponding to
the weights {w_1j} and {w_2j} is zero. Thus, Y_1 and Y_2 are orthogonal if

    w_11 w_21 + w_12 w_22 + w_13 w_23 + . . . + w_19 w_29 = 0

We have no direct use of contrasts anywhere in this book. One should only remember
that one does not set up the columns in an OA arbitrarily. Any two OA columns
are mutually orthogonal.
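As a small illustration (mine, not from the book), the sketch below checks the balancing property numerically for an L9 array typed in by hand: in every pair of columns, each of the nine level combinations should occur exactly once. The listing used here is the commonly published L9(3^4) layout and should be cross-checked against Appendix B.

from itertools import combinations
from collections import Counter

# A commonly published L9(3^4) orthogonal array (levels coded 1, 2, 3)
L9 = [
    [1, 1, 1, 1],
    [1, 2, 2, 2],
    [1, 3, 3, 3],
    [2, 1, 2, 3],
    [2, 2, 3, 1],
    [2, 3, 1, 2],
    [3, 1, 3, 2],
    [3, 2, 1, 3],
    [3, 3, 2, 1],
]

# Balancing property: in every pair of columns, each of the 9 level
# combinations occurs the same number of times (here, exactly once).
for c1, c2 in combinations(range(4), 2):
    counts = Counter((row[c1], row[c2]) for row in L9)
    assert all(n == 1 for n in counts.values()), (c1, c2, counts)

print("Every column pair of L9 is balanced.")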
Sometimes OAs can be particularly useful. Suppose that one has used
OAs to design and guide the running of experiments and the additive model
(Section 4.1) is a valid representation of the cause-effect relationship of the
process under study. Then, the simple averaging of certain observations
obtained can estimate the main effect of the individual factors under study
(Section 4.2). Use of OAs to plan matrix experiments also ensures that if the errors
in each experiment are independent and have zero mean and equal variance,
then the estimated factor effects are mutually uncorrelated. This improves the
predictive value of the cause-effect model to predict the response for treatment
combinations not directly observed experimentally. However, one may gain these
benefits of using the OAs only if one does all the experiments specified by the
orthogonal matrix.

When all experimental factors have only 2 levels:

    No. of Factors    OA to be used
    2-3               L4
    4-7               L8
    8-11              L12
    12-15             L16

When all experimental factors have only 3 levels:

    No. of Factors    OA to be used
    2-4               L9
    5-7               L27

Fig. 6.1 Rules for selecting standard OAs.

Rather than constructing an OA anew for every design optimization problem one
faces, in practice one uses one of the many standard OAs provided in statistical
texts. Appendix B contains the commonly used standard OAs. As we shall see later,
each such standard array applies to and is most appropriate for investigating
certain specific factor effects.
To conserve resources, the investigator generally attempts to employ the
smallest OA that meets the purpose at hand. However, to test the validity of
the additivity assumption (Section 4.1), sometimes one uses a larger OA,
which allows the evaluation of between-factor interactions, in addition to the
main effects.

[Figure 6.2 in the original is a lookup table: its rows give the number of 2-level factors (0 to 5), its columns the number of 3-level factors (0 to 5), and each cell names the orthogonal array (L4, L8, L9, L16, or L18) recommended for that mix of factors.]

Fig. 6.2 Sample rules for selecting orthogonal arrays.

Experts recommend that a beginner should consider running experiments with


all factors set either at two or three levels (treatments) each. One may then choose
the required OA using the rules shown in Figure 6.1. In References [4] and [5], one
can find OAs useful for advanced-level experiments. Figure 6.2 shows a small
subset of the rules to be used in advanced robust design studies.
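For readers who want the Figure 6.1 rules in executable form, here is a small helper of my own (the function name and structure are assumptions, not the book's); it simply encodes the two lookup tables above for the all-2-level and all-3-level cases.

def select_standard_oa(n_factors: int, levels: int) -> str:
    """Return the standard OA suggested by Fig. 6.1 when every factor
    in the experiment has the same number of levels (2 or 3)."""
    if levels == 2:
        rules = [(3, "L4"), (7, "L8"), (11, "L12"), (15, "L16")]
    elif levels == 3:
        rules = [(4, "L9"), (7, "L27")]
    else:
        raise ValueError("Fig. 6.1 covers only all-2-level or all-3-level plans")
    for max_factors, oa in rules:
        if n_factors <= max_factors:
            return oa
    raise ValueError("Too many factors for the arrays listed in Fig. 6.1")

print(select_standard_oa(8, 2))   # -> L12
print(select_standard_oa(5, 3))   # -> L27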

6.2 OAs ARE FRACTIONAL FACTORIAL DESIGNS


Orthogonal arrays can lead to substantial savings in investigative effort, but running
experiments using OAs may not always suffice. Suppose we have a manufacturing
process involving three factors, each of which may be set at two possible levels,
and six factors, each of which may be set at three possible levels. Then the
optimization of the process by exhaustive experimentation would require 2^3 x 3^6,
or a total of 5832 experiments to be conducted. Such an approach of running
experiments is called a full factorial design.
The gains from conducting experiments using a full factorial design are also
substantial. A full factorial design (and this is how most classical statistical
experiments are run) can estimate all the main factor effects and all possible
interactions among these factors. Figure 6.3 displays a 2^4 full factorial design.
This design can help one investigate all the main effects, two-factor interactions,
three-factor interactions, and the four-factor interaction among factors A, B, C, and
D. This design uses 16 experiments.
Fortunately, in many practical situations (and these include the Taguchi-type
main-factor-only investigations) it is sufficient to run only a fraction of
these full factorial experiments. This helps conserve both time and other valuable resources.
Fig. 6.3 A full factorial (2^4) design with four factors (A, B, C, and D), each with two treatments; the original figure shows the 16 treatment combinations as a grid of A1/A2 and B1/B2 rows against C1/C2 and D1/D2 columns.

For the nine-factor manufacturing problem introduced at the beginning


of this section, one may obtain much information by conducting only 18 experiments
using the L18 OA (Appendix B), rather than the 5832 full factorial experiments.
Similarly, Fig. 6.1 suggests that one may find the main factor effects of the four
factors A, B, C, and D in Fig. 6.3 from only eight experiments, using the L8 array.
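As an aside (my illustration, not the book's), the run-count arithmetic for the nine-factor example can be checked by brute-force enumeration of the full factorial, to be contrasted with the 18 rows of an L18 plan; the factor names here are placeholders.

from itertools import product

# Three 2-level factors and six 3-level factors, as in the example above
two_level = [("A", [1, 2]), ("B", [1, 2]), ("C", [1, 2])]
three_level = [(name, [1, 2, 3]) for name in "DEFGHJ"]

level_sets = [levels for _, levels in two_level + three_level]
full_factorial = list(product(*level_sets))

print(len(full_factorial))        # 2**3 * 3**6 = 5832 runs
print(len(full_factorial) / 18)   # about 324 times the 18 runs of an L18 plan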
An experimental design scheme of statistical experiments that uses OAs,
however, entails the following considerations and consequences:
1. The OA leads only to a main effect design. Use of an OA forces the
investigator to assume that the response one observes can be approximated by an
additive function, separable into the effects of the individual (main) control factors
under study. One assumes no other effects, in particular no interactions, to be
present. A verification experiment can later verify whether this approximation is
a satisfactory and a valid one.
2. The columns of the OAs are pairwise orthogonal. In every pair of columns,
all combinations of the levels of each (independent) factor under study occur and
they do so equal number of times.
3. It follows from point 2 that the main effect estimates of all factors
and their associated sum of squares are independent under the assumption of
normality and equality of observation variance. (The data analysis procedure
used here assumes that factors not in the investigator's control cause comparable
observation-to-observation variance; see Section 3.3.) Hence the significance tests
(ANOVA and F, see Sections 3.3 and 3.4) for these factors are independent.
4. When OAs guide the experiments, one computes the main factor effects
easily. These computed effects may be then used to predict the response for any
combination of factor treatments, because one assumes that these effects are separable
and additive. The variance of the prediction error (caused by factors not controlled
in the experiments and the exclusion of interactions) is the same for all such
treatment combinations.
5. Factors which are studied may be discrete or continuous. For continuous
factors it is possible to break down the main effects of three-level factors into
linear and quadratic terms. A non-linear effect may sometimes be useful in fine-tuning
and improving the initial design [5].
6. In the initial stages of optimization, one may limit the investigation to
the study of main effects. Later on, it is possible to run larger orthogonally
designed experiments to study interaction effects also, if necessary.
An engineer may be motivated to seek the improved settings of DPs for two
reasons: He may seek settings that will improve some performance characteristic
(the response) to some optimum value. Alternatively, he may seek to find a less
expensive alternative design, material, or method that will provide equivalent
performance. Orthogonal arrays often provide an efficient, empirical approach to
achieve both these goals.
Practising Japanese engineers have very effectively integrated the use of
statistical experiments in their studies and made it extensive [34]. Even as far back
as 1976, for example, the Nippon Denso Company, a world class manufacturer of
electrical parts, conducted over 2700 experiments using OAs [5].

6.3 NOT ALL FACTORS AFFECT PERFORMANCE THE SAME WAY


Factors that may influence the final performance of a design are not all alike.
Control parameters (also called DPs) are those factors whose operating standards
(settings) the designer or the process engineer may specify. Sources of noise, on
the other hand, include all factors that are impossible or expensive to control,
though these too may cause variations in the product's functional characteristics,
i.e., the features of the product that represent the basic, measurable quantities showing how
well the product meets user expectations.
Robust design as a procedure aims at finding the settings for the control
parameters such that the noise factors would then have minimal effect on the
product's functional characteristics. The central idea here is to reduce the noise sensitivity
of the functional characteristics. One accomplishes this by making the process
robust with respect to noise, by judiciously exploiting DP-noise interactions,
rather than by controlling the sources of noise (as attempted, for instance, by reaching
for high-precision and expensive parts and components or by plotting control
charts) to achieve quality performance.
Taguchi suggested further that, instead of focussing on specific performance
aspects, one should optimize a design to minimize the total societal loss that may
result when one puts the product or process to use. According to him the optimization
of DPs should consist of the empirical maximization of S/N ratios. (As we pointed
out in Section 4.2, S/N ratios are inversely proportional to societal loss. Thus,
maximizing the S/N ratio minimizes societal loss.)
In robust design the empirical effort itself may be physical experimentation, or computer
simulation conducted with a mathematical model of the process or product, if such
a model is available. Taguchi suggested that whenever possible, these investigations
should include the use of noise arrays. These arrays permit explicit experimentation
with (or simulation of) noise rather than relying only on replicated runs to show
the effect of noise. Recently Kackar [35] has suggested that one may stratify the
replications to improve the representativeness of different noise strata.
In optimization studies, one puts noise into two broad categories: Factors
external to a product such as ambient temperature, humidity, dust, vibration, and
human variations in using the product, or in operating the process, etc., comprise
the external sources of noise. Internal sources of noise, on the other hand, are
factors that cause manufacturing imperfections and product deterioration.
Noise factors that one can observe to be at distinct levels (e.g. humid vs. dry
weather when one is testing a vehicle for fuel economy) are included in the noise
OA. If possible, each noise factor should be studied at several rather than only two
distinct levels, to improve the detection and exploitation of DP-noise interactions.
The noise OA is called the outer array of orthogonal experiments (see Fig. 5.1).
To the maximum extent possible, the outer array should include the distinctly
observable yet not-to-be-controlled noise factors that might influence the design's
performance in the field. However, not all noise factors can be thus included in the
outer array of a parameter design experiment. One may have physical limitations
or lack the exact knowledge of these factors.

6.4 IDENTIFYING CONTROL AND NOISE FACTORS: THE ISHIKAWA DIAGRAM
The product or process characteristics that one is attempting to improve should
determine which control and noise factors should be included in a design optimization
study. Process engineers, manufacturing engineers, design engineers, technicians,
R&D specialists, customer service personnel, and others knowledgeable about the
product/process are often able to enumerate, with little difficulty, the various factors
that one should study. In this initial enumeration, one should exercise care so as not
to exclude any factor from the experiments merely because its effect seems obvious, or because it is
already well-understood. Such a factor may still interact with the other factors and
produce unexpected effects.
The following three approaches prove to be very effective in identifying
factors for inclusion in optimization studies.
Brainstorming is fast becoming a familiar experience in many organizations.
It is a process that brings together people associated with the product/process or its
performance problems, with the objective of soliciting suggestions or ideas on, for
instance, which factors should be studied to improve performance. Before
brainstorming begins, the leader (called the facilitator) must ensure that the
participants understand that the objective here is the identification of potentially
influential factors, rather than solving the problem. Also, the brainstorming team
should work without indulging in criticism of the ideas or suggestions put forth
to maximize chances of catching all potentially important factors. The team should
include operators and technicians; they are usually intimately familiar with the
process. Many teams deliberately include the client, usually the manufacturing
group or the persons who will directly receive and will have to act on the basis of the
output of the investigation. Every good investigation should give this client ownership
of the investigation, minimizing the need to sell the results the study brings
about (for instance, a key change in manufacturing methods).
Flowcharting is the next useful approach, particularly for determining
factors that might influence process results. A flowchart (Fig. 6.4) adds structure to
the thought process, thus avoiding possible omission of potentially significant factors.

Pour casting -> Cool in mould -> Shake-out casting -> Air cool -> Shot blast -> Good casting

The Casting Process Flowchart

Process Step          Factor Identified

Pour casting          Temperature of metal
                      Speed of pouring
                      Chemistry of metal

Cool in mould         Time in mould
                      Ambient temperature of mould

Shake-out casting     Intensity of vibration
                      Time of vibration

Air cool              Ambient temperature
                      Rate of air flow

Shot blast            Intensity of shot blast
                      Time of shot blast

Fig. 6.4 Identification of factors by process flowcharting.

Most engineers probably would appreciate how quickly the factors listed in
Fig. 6.4 could be identified by flowcharting the casting problem. Note that the
objective in flowcharting is also not to solve the problem yet. That is the job
of the experimental investigation that will follow.
The Cause-Effect (also known as the Ishikawa) Diagram is perhaps the
most comprehensive tool; it enables one to systematically speculate about, record,
and clarify the potential causes (factors) that might lead to performance deviation
or poor quality.
The development of the cause-effect diagram begins with the statement of the
basic effect of interest. It then progresses to a systematic listing of causes that may
produce this effect (Fig. 6.5). Ishikawa [20] has personally given an excellent
account of how one should develop cause-effect diagrams.
Also called the fish-bone diagram because of its appearance, the cause-effect
diagram has a cause side written on the left-hand side (of the diagram) made up
of a spine or tree trunk and its branches, and an effect side on the right. One writes
the effect (cracked castings in Fig. 6.5) directly on the diagram. The tree trunk
shows the causes leading to this effect with primary, secondary, and possibly tertiary
causes of the effect branching off the main trunk of the cause tree. One adds to this
diagram any factor or cause that might possibly affect the response. This is how
one gradually develops the diagram.
Sometimes one begins cause-effect diagrams by thinking about the broad
categories of causes: materials, machinery and equipment, operating methods,
[Fig. 6.5 in the original is a fish-bone diagram: branches labelled Metal, Mould, Shot blast, Shake-out, and Air cool lead to the effect, cracked castings.]

Fig. 6.5 The cause-effect diagram for cracked castings.

operator actions, or the environment. The participation of those knowledgeable


about the process or product can make this effort highly productive. The participants
should add factors, based on their knowledge, by repeatedly asking the question
"Why the effect?" until the diagram appears to include all causes that one could
regard as the possible root causes.
Finding those factors that truly affect a design's performance and quantifying
their respective effects require further study. As already mentioned, such a study
involves statistical experiments using OAs. The task of locating the factors that
should be included in these experiments can be very effectively guided by the
Ishikawa diagram.
Note that all three factor identification approaches discussed above would
ideally lead to the same factors as the potential sources of performance deviation.
However, one should not view these three approaches as alternatives. Many
manufacturing processes involve a large number of factors, some of which are in
the manufacturer's control and some are not. Usually, the more complex a process,
the larger the number of factors that must be controlled. The investigator should
freely move between the three techniques described in this section and even mix
them till he feels that he has identified all design and noise factors that one should
statistically investigate.
After identifying the causes tentatively as done above, one should decide the
levels at which one should empirically investigate each of these factors.

6.5 AT WHAT LEVELS SHOULD ONE STUDY EACH FACTOR?


The cause-effect diagram would suggest perhaps many factors that need investigation.
In the screening round of statistical experiments, the investigator should include
most of these factors. However, he should confine his study of these factors to two
levels only so as to keep the initial number of orthogonal experiments small. The
resulting investigation will eliminate several factors from contention. The investigator
may then study the remaining factors with multiple (usually three) treatment levels.
This stepwise approach helps in keeping the cost of running the experiments within
reasonable limits.
Some design factors are continuous variables such as curing time or % ratio,
while others are discrete or attributive, as with Alloy X vs. steel, or Shift A vs.
Shift B in a plant. For continuous factors, one should set the levels far enough
apart, at sufficiently high and low values, so that the effect, if present, has a good
chance to show up in the observations. In the later round of optimization experiments,
one sets the continuous variables typically at three different levels so as to reveal
any non-linear effects on response. (A special statistical procedure known as
the Response Surface Method [RSM] allows one to explore the shape of the
response surface when several factors may influence the response simultaneously.
Taguchi did not use RSM in his presentation of the robust design methodology,
to keep the approach simple. We review the utility of RSM in robust design in
Chapter 10.)
At what specific treatment levels should one test the individual design factors?
Experts say that many engineers tend to select levels not too far from some
conventional level for fear of producing off-specification material. However, the
production of inferior quality products at the investigative stage often reveals much
about the sensitivity of the process and provides important informative data that
might be of critical value in optimization. Experts say that good experiments do not
always make good products, but good experiments will provide important
information.
Setting the levels well apart can also help identify non-linear effects when present.
Further, well-separated settings may reduce the need for repeating experiments
when a high degree of background (uncontrolled) noise is present.
The goal of Taguchi experiments is to evaluate and then optimize robustness
to assure that performance stays close to target. This requires that one be able to predict
performance consistently at different factor settings. In order to
rapidly and reliably reach this predictability, Taguchi proposed additivity as a key
requirement. Recall from Section 4.2 that additivity of effects, if present, requires
that one study only the main effects, thus reducing the total cost and number of
experiments needed.
There is no way to foresee additivity in every case, though the proper choice
of the S/N ratio (Section 5.3) may help. Matrix experiments using OAs followed
by a verification experiment are a must if one wishes to assure predictability of
performance at different combinations of parameter settings.
Decidedly, interaction among design factors makes the challenge of
optimization by experimentation tougher. If an interaction is present, then one must
include it in the input-response or cause-effect model (e.g., one should expand the
model of Eq. (4.1.1) by adding the appropriate interaction term) to improve the
predictability of the model. Also, when an interaction is present, the verification
experiment (Section 4.3) would show a poor fit for the main-factors-only additive
model, requiring further study. Then, one should include DP-DP interactions in
the optimization studies. Taguchi suggests, however, that the investigator should
strive to keep the cause-effect model main-factor-only to minimize experimentation,
and use an S/N ratio that displays additivity of main factor effects.
In spite of Taguchi's urging us to minimize/avoid DP-DP interactions, it
should be noted that interactions play a central role in seeking out the robust
design. The novel idea behind parameter design, as Nair [32] points out, is to
minimize the effect of the variation in the noise factors by choosing the settings
of the DPs judiciously to exploit the interactions between design and noise factors.
If such interaction is minimal or absent, one will have to reach for high quality
and expensive parts and components to achieve robustness.

6.6 REACHING THE OPTIMIZED DESIGN


This section summarizes the five basic steps in achieving a robust design, empirically.
Step 1. Identify initial and other possible (competing) settings of each
design parameter; identify the important noise factors and their ranges.
The prototype functional model of the product/process obtained from system
design (Sections 1.7 and 1.8) can often provide the technical reasons for the selection
of the DPs and their levels. Next, identify, using the cause-effect diagram or
some similar device, the noise factors that might cause variation in performance.
Step 2. Construct the design and noise orthogonal matrices and plan the
parameter design experiments (Fig. 5.1).
As already mentioned, statistical experiments provide an empirical, yet
reliable procedure in design optimization studies. One generally desires that the
number of experiments in such studies be not too large. Orthogonal arrays help
greatly in keeping the number of experiments to a minimum.
Again, in Taguchi methods, verification experiments alone can establish
whether the additivity assumption (fundamental in assuring the applicability of
OAs in planning the experiments) is tenable.
Step 3. Conduct the orthogonal experiments and compute the performance
statistic for each test run.
For a given combination of design parameter settings (a row in the control
array), one repeats the experiments once for each row in the noise (outer) array
to compute the performance statistic (Fig. 5.1). The designer's objective is to
seek out the DP settings that maximize the S/N ratio (see Example 3 in Section 4.3).
There may be several such combinations that would suffice, provided the final
alternative designs do not differ in cost.
Step 4. Using the performance statistic values, predict the new setting of
each design parameter which, when incorporated in the final design, is expected
to yield the optimum performance.
The goal here is to predict design parameter values that maximize the S/N
ratio, which in turn makes performance robust. Taguchi's approach makes this
prediction straightforward because here one keeps the underlying cause-effect
model main-factor-only and additive. The starting point is the assumption of
additivity, written as

    Performance = μ0 + μθ1 + μθ2 + . . . + e

In the above equation, μ0 represents the base performance of the system (the
process or the product design one is studying) and μθ1, μθ2, . . . represent the
effect of the individual design factors θ1, θ2, . . . , respectively, on performance.
Here e is the residual effect caused by all other factors (interaction, environmental,
etc.) ignored in the study. One assumes that e has a mean of zero and a variance
that includes DP-noise interactions.
If the assumption of additivity is valid, the predicted performance at the
optimized combination of parameter settings will be close to what one will observe
by actually running an experiment with factors θ1, θ2, etc. set at their respective
(empirically found) optimum levels. (A small computational sketch of this additive prediction appears at the end of this section.)
In rare, complex situations, one may require special transformations of
observed data as shown in Section 3.5 [5, 17, 21]. These transformations may help
one to achieve additivity of DP effects so that one may still use an orthogonal
experimental plan to conduct and complete the investigation.
Step 5. Confirm that the new (optimum) settings truly improve the
performance statistic.
It should be noted that even though the outer array provides efficiency [14,
p. 92], one is advised to replace the outer array application in Step 3 above by
Monte Carlo simulation [5, 16] when noise does not have a symmetric distribution
and the prototype functional model is a mathematical model (see Sections 1.8
and 10.5).
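The following minimal sketch (mine, with made-up factor names and effect values purely for illustration) shows the additive prediction of Step 4: the predicted performance at a chosen combination of settings is the base performance plus the sum of the estimated main effects of the chosen levels.

# Hypothetical estimated effects (deviations from the overall mean) for
# three design parameters, each studied at two levels; all values are invented.
mu0 = 12.0  # overall (base) performance observed across all runs
effects = {
    "theta1": {"level1": -0.8, "level2": +0.8},
    "theta2": {"level1": +0.3, "level2": -0.3},
    "theta3": {"level1": -1.1, "level2": +1.1},
}

def predict(settings):
    """Additive, main-factor-only prediction: mu0 + sum of the chosen effects."""
    return mu0 + sum(effects[factor][level] for factor, level in settings.items())

optimum = {"theta1": "level2", "theta2": "level1", "theta3": "level2"}
print(predict(optimum))  # 12.0 + 0.8 + 0.3 + 1.1 = 14.2

The verification experiment of Step 5 then checks whether the performance actually observed at this combination is close to the predicted value.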

6.7 TESTING FOR ADDITIVITY


Additivity implies that the effect of each cause factor on the response is separable
from the effect of the other cause factors. Additivity also implies the absence of
all interactions, viz. 2-factor, 3-factor, etc. [5]. Taguchi reasoned that achieving
additivity of DP effects in robust design is very necessary, not only because it
simplifies the necessary analysis, but also because if large interactions are present,
the optimum conditions obtained through experimentation may prove non-optimum
when the levels of some control factors change. As pointed out in Section 5.4,
some quality or performance characteristics or S/N ratios may not have additivity.
The transformations given in Section 4.1 might ease one's attaining additivity in
such cases.
Taguchi suggested the use of the verification test after the completion and
analysis of the orthogonal matrix experiments as the means to determine whether
a chosen performance characteristic or S/N ratio has additivity. If additivity is poor,
further orthogonal experiments that include some interaction terms may be needed
till one obtains a good fit, showing that one has found a satisfactory representation
(model) of the cause-effect relationship.

6.8 THE OPTIMIZATION STRATEGY


This section summarizes Taguchi's (design) optimization strategy. If the reader is
new to Taguchi's methods, he should review this section more than once, as some
of the ideas discussed here may be perplexing at the first reading.
One may view a product or a process as a system that is influenced by many
factors, some in its design and some in the environment of its use. The outcome
of all these influences is the response. For a product, these factors are contained in
its design, and in the manner and the environment in which one uses the product.
How the product responds (functions) is its performance. For a process, the
influencing factors are the different process parameter settings, the environmental
factors, and the inputs to the process. The response here is the quality of the
product delivered by the process.
Taguchi categorizes systems as being static or dynamic [5]. What
distinguishes these two categories is the nature of the target (performance) one
is seeking. For a static system, this target is fixed. The designer may wish to
maximize it, minimize it, or take it to some fixed value desired by the system's
final user.
For a dynamic system, the target is a function of the setting of a signal
factor, which the user adjusts dynamically during the use of the system to obtain
a desired performance from the system. An example of a dynamic system is the
steering mechanism of an automobile. The signal factor here is the rotation of the
steering wheel, the target at some instant of the vehicle's use being a desired
turning radius. A servo system is another example of a dynamic system.
The Taguchi strategy for optimizing a design may be given as follows [5]:
One begins by visualizing that all influencing factors belong to one of the
following four categories:

1. Signal factors (M). These are the factors selectively set by the product
user or the process operator to attain the target system performance. Signal factors
have the special property that a change in their setting influences the average of the
system's response, but not its variability. As already mentioned, the angular position
of an automobile steering wheel is a signal factor, the turning radius of the vehicle
being the vehicle's performance. One selects signal factors based on the engineering
knowledge of the system under design. For easy operation, it is desirable that the
product performance be very sensitive to the signal factor(s).
2. Control factors (z). These are the design features or parameters that the
designer sets at set points. In general, control factors may influence both the
average and the variability of response. The objective of product design is to
determine judiciously the levels for these parameters so that one achieves the
best possible performance. A multitude of objectives may determine this best
performance, such as maximum stability and robustness of the product while
keeping the cost minimum. (Robustness here implies insensitivity of performance
to noise factors.)
3. Scaling or Levelling factors (R). Scaling factors are a subclass of control
factors that the designer can adjust easily during product/process design to achieve
a desired functional relationship between the signal factor (M) and the response
variable for a dynamic system. For a static system, scaling factors can help adjust
the system's average performance to some desired, fixed target value. The gearing
ratio in the steering mechanism of a vehicle is an example of R, for one can adjust
it during design to achieve the desired sensitivity of turning radius (the response
variable) to a change in steering angle (a signal factor).
4. Noise factors (x). These include all uncontrollable factors. Generally only
the statistical influence (average value, variance, distribution, etc.) of noise factors
can be known rather than their specific values. Variations in materials or instability
in the manufacturing process are examples of noise. Sometimes it is possible
to include some of the noise factors in the noise orthogonal or outer array (see
Fig. 1.5). The inclusion of noise factors in the optimization experiments allows one
to investigate z-x interactions in order to reach a final design that will be robust as
far as these noise factors are concerned.
Given the above four categories of influencing factors, one may represent the
system response, y, by

    y = f(x, M, z, R)

The function f consists of two parts: one part, g(M, z, R), is predictable; this is
the desirable part of the response. The other part, e(x, M, z, R), includes the influence
of noise and hence is unpredictable and usually undesirable. Further, if we desire
the predictable part to be linear, then the non-linear effect will be included in
e(x, M, z, R).
The degree of predictability (this should be as large as possible to maximize
robustness or minimize the effect of noise on response) of response y of a
system is

    Variance of g / Variance of e

Instead of directly using the degree of predictability ratio as above, Taguchi has
suggested that

    10 log10 (Variance of g / Variance of e)

is a more appropriate basis for appraising the effect of factors on a system's
robustness. Taguchi calls this transformed expression the (S/N) ratio.
Generally, the S/N ratio will be a function of z (the control factors), the levels
of which the designer selects and sets [5]. Chapter 7 presents an illustration of
how one may identify the control and scaling factors in practice and then
optimize them.
Among the many approaches possible here, Taguchi recommended a two-step
procedure as the effective strategy, which may be briefly stated as follows:
1. After conducting the orthogonally designed experiments, identify those
control factors {z} (among {θ1, θ2, θ3} in Fig. 1.5, or those in the control array
of Fig. 5.1) that show high S/N ratios. Set these factors at values that correspond
to the highest S/N values.
2. Identify the control factor that has no or little effect on the S/N ratio but
has the highest effect on mean performance. This control factor should be
treated as the scaling factor (R). Adjust the value of R such that performance is on
target.
All remaining variables (these influence neither the S/N ratio, a measure
of robustness, nor the mean performance) may be regarded as belonging to category
x. The designer should leave these at their nominal levels.
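To make the two-step strategy concrete, here is a small sketch of my own; the factor names and the per-level mean and S/N summaries are invented, not taken from the book. Step 1 picks, for each factor with a sizeable S/N effect, the level with the highest average S/N; Step 2 nominates as the scaling factor the remaining factor with the largest effect on the mean.

# Hypothetical per-level summaries: factor -> level -> (mean response, mean S/N in dB)
summary = {
    "A": {"1": (14.2, 22.0), "2": (14.3, 26.5)},   # strong S/N effect
    "B": {"1": (13.1, 24.1), "2": (15.4, 24.3)},   # strong mean effect, weak S/N effect
    "C": {"1": (14.1, 24.0), "2": (14.4, 24.4)},   # weak effect on both
}
SN_EFFECT_THRESHOLD = 1.0  # dB; a judgment call (formally, one would use ANOVA on the S/N ratios)

def level_range(levels, idx):
    vals = [v[idx] for v in levels.values()]
    return max(vals) - min(vals)

# Step 1: for factors that clearly move the S/N ratio, pick the best-S/N level
robust_settings = {
    f: max(levels, key=lambda lv: levels[lv][1])
    for f, levels in summary.items()
    if level_range(levels, 1) > SN_EFFECT_THRESHOLD
}

# Step 2: among the remaining factors, the one with the largest mean effect
# becomes the scaling factor R, used to pull the mean onto target
remaining = {f: levels for f, levels in summary.items() if f not in robust_settings}
scaling_factor = max(remaining, key=lambda f: level_range(remaining[f], 0))

print(robust_settings)   # {'A': '2'}
print(scaling_factor)    # 'B'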
6.9 TAGUCHI'S TWO STEPS TO ON-TARGET PERFORMANCE WITH MINIMUM VARIABILITY
The purpose of running orthogonal experiments is to identify the factor treatment
combination that minimizes the standard deviation of performance (a fundamental quantity showing
the design's sensitivity to noise, or its lack of robustness) while
keeping the mean performance on target. We show below how one achieves this.
To keep the explanation simple, we assume that only one performance variable,
a response called y, is of interest.
Note first that minimizing standard deviation of y is equivalent to minimizing
the influence of noise factors. Unfortunately, factors such as the exact temperature
of a baking oven, the voltage of power supply, operator performance variations,
material quality changes, etc. cannot be economically manipulated or controlled in
the field or in a factory.
By definition, response y will be optimized if we are able to determine the
most favourable levels for each of the DPs such that the S/N ratio based on y is
maximum while the mean of y remains on target. This requires the exploitation of
any DP-noise interactions judiciously. One accomplishes this in two steps: In the
first step, we determine which control factors in the experiments have a significant
effect on the S/N ratio. One may do this with precision by doing the ANOVA of
the S/N ratios (Section 5.2), or informally, by examining a graphical display (for
example, Fig. 6.6) of the S/N ratios computed from observations. This step finds
those factors that control the variability in process (or product) performance. For
each control factor thus located, we choose the level that gives the highest S/N
ratio. In the next step, from all the control factors that have a significant effect on

Fig. 6.6 Effect of the controllable factors (interference, wall thickness, insertion depth, and % adhesive) on average pull-off force and S/N ratio.
the mean (performance), we select the factor that has the smallest effect on the S/N ratio. Such
a factor can act as the adjustment (or scaling) parameter (R).
To arrive at the optimized design, set the remaining factors at the
nominal levels at which they were before the conduct of the optimization experiments.
Finally, set the level of the adjustment (or scaling) factor R such that the
mean response y is on target. Chapter 7 presents the actual application of these
steps to a real design problem.
Taguchi empirically showed that in many situations this two-step process
leads effectively and efficiently to optimum DP levels (treatments). At these
parameter levels the standard deviation of response (performance variability) is
minimum while mean response is on target.

6.10 SUMMARY
Orthogonal arrays are special and efficient arrangements of factor settings that
guide the planning of design optimization experiments to achieve robust process/
product performance.
The initial round of orthogonal experiments aims at arriving at an additive
cause-effect model, which shows how the system's performance depends on
the different DPs. Next, one optimizes the design by closely examining the effect
of each of the design factors on (a) the mean performance, and (b) the S/N ratio,
a metric or measure showing the robustness of performance. One calculates the
S/N ratio from the actual observed performance measurements. One obtains the
optimum design by adjusting the settings of the design factors by a two-step process.
To achieve optimization, one first sets the control factors that show a high
degree of influence on the S/N ratio at values that correspond to the highest S/N
values. This makes the design robust with respect to the noise factors. The objective of this
step is to judiciously exploit any DP-noise interactions to minimize the adverse
effect of noise on performance.
Next, one further examines those control factors that have little influence on the
S/N ratio, to see which one among them has the maximum influence on
the mean performance. One then adjusts this factor, called the scaling factor, such
that the system's response (performance) after this adjustment is exactly on target.
The remaining factors may be left at their nominal levels. These neither influence
the S/N ratio, nor the mean of performance.
Agriculturists have used statistically designed experiments to improve farm
yield, grade fertilizers, check seed performance, etc. for over 50 years. Industry
also uses factorial experiments regularly [9, 11, 12, 18, 22]. However, most of these
classical applications of statistical experimentation aim at optimizing only the mean
value of the response variable. Parameter optimization experiments using OAs as
professed by Taguchi aim additionally at reducing the variability of response caused
by noise, the factors not in the designer's or the product user's control.

EXERCISES
1. In an automotive parts manufacturing operation, a certain part was to be assembled
by gluing an elastomeric sleeve onto a nylon tube. The objective was to maximize
the pull-off force of the assembly by appropriately manipulating the four key
mechanical assembly factors: the interference between the sleeve and the tube,
sleeve wall thickness, insertion depth, and per cent adhesive dip of the sleeve. The
factors not to be controlled in the routine assembly operation were drying time,
temperature, and humidity.
The different factors and their respective levels available for experimentation
are shown in Table E6.1. Suggest appropriate inner and outer arrays for conducting
optimization experiments.
TABLE E6.1
DESIGN AND NOISE FACTOR LEVELS

Factor Levels
Interference Low Medium High
Sleeve wall thickness Thin Medium Thick
Insertion depth Shallow Medium Deep
Per cent adhesive dip Low Medium High
Drying time 24 hr. 120 hr.
Temperature 72F 150F
Relative humidity 25% 75%

2. Given the fact that the manufacturing engineer is interested in maximizing


the pull-off force, which S/N ratio from Section 5.2 would you choose for the
proposed experiments?
3. The observed pull-off forces (in lb) in an L9 (inner) x L8 (outer) orthogonal
experiment are shown in Table E6.2. Calculate the mean response and the S/N ratio
of your choice toward eventually determining the optimum settings of the four
mechanical assembly factors. (A computational sketch follows Table E6.2.)
TABLE E6.2
PULL-OFF FORCE TEST RESULTS

(Inner Array)                                  (Outer Array)
Inter-    Sleeve Wall  Insertion  % Adhesive   Drying time (hr):    120   120   120   120    24    24    24    24
ference   Thickness    Depth      Dip          Temperature (F):     150   150    72    72   150   150    72    72
                                               Relative humidity:   75%   25%   75%   25%   75%   25%   75%   25%

Low       Thin         Shallow    Low           19    20    20    20    20    17    10    16
Low       Medium       Medium     Medium        22    24    20    20    20    19    16    15
Low       Thick        Deep       High          20    23    18    23    16    19    17    16
Medium    Thin         Medium     High          25    23    19    21    19    19    17    18
Medium    Medium       Deep       Low           25    28    21    26    25    19    19    20
Medium    Thick        Shallow    Medium        25    23    20    15    19    20    16    16
High      Thin         Deep       Medium        22    24    19    17    24    18    19    16
High      Medium       Shallow    High          24    23    20    18    16    15    16    14
High      Thick        Medium     Low           29    23    23    23    17    19    20    16
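The sketch below (mine, not part of the book) shows one way to carry out the calculation asked for in Exercise 3, assuming the larger-the-better S/N ratio of Section 5.2 is the appropriate choice; the observation matrix is transcribed from Table E6.2.

import math

# Pull-off force observations (lb), one row per inner-array run (Table E6.2)
data = [
    [19, 20, 20, 20, 20, 17, 10, 16],
    [22, 24, 20, 20, 20, 19, 16, 15],
    [20, 23, 18, 23, 16, 19, 17, 16],
    [25, 23, 19, 21, 19, 19, 17, 18],
    [25, 28, 21, 26, 25, 19, 19, 20],
    [25, 23, 20, 15, 19, 20, 16, 16],
    [22, 24, 19, 17, 24, 18, 19, 16],
    [24, 23, 20, 18, 16, 15, 16, 14],
    [29, 23, 23, 23, 17, 19, 20, 16],
]

def larger_the_better_sn(y):
    # S/N = -10 log10( (1/n) * sum(1 / y_i^2) ), in dB
    return -10 * math.log10(sum(1.0 / v ** 2 for v in y) / len(y))

for i, row in enumerate(data, start=1):
    mean = sum(row) / len(row)
    print(f"run {i}: mean = {mean:5.2f} lb, S/N = {larger_the_better_sn(row):5.2f} dB")

Averaging these per-run values by the level of each inner-array factor leads to the optimum settings that Exercise 4 asks the reader to confirm.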
4. Check your findings with Fig. 6.6 and confirm that the following settings
maximize the assembly's pull-off force performance while minimizing the variability
tracked by Eq. (5.2.5):

Mechanical Assembly Factor     Optimum Setting

Interference                   Medium
Sleeve wall thickness          Medium
Insertion depth                Deep
Per cent adhesive dip          Low
Case Study 1: Process Optimization
Optical Filter Manufacture

7.1 THE PROCESS FOR MANUFACTURING OPTICAL FILTERS
Published literature reports over 400 successful industrial applications of Taguchi
methods in western industries since 1985 [5, 6, 7, 14, 19, 23], excluding those
from Japan [1, 4, 8]. One typical application of the Taguchi method to optimize a
manufacturing process is given in [14]. This chapter presents the background of
this application, the procedure used, and the reported improvement in robustness
thereby achieved.
Optical filters are devices that transmit only a narrow band of visual wavelengths,
suppressing other wavelengths. The manufacture of optical filters consists
of coating a quartz substrate (the underlying base layer) with thin crystallized layers
of titanium dioxide and silicon dioxide. A filter's index of refraction and its index
of absorption are the two key characteristics that determine how well the filter
functions, i.e., how well it separates lights of certain wavelengths. A major problem
faced by the manufacturers of optical filters is the high variability of the filter's
refractive index, caused mainly by the variability in the thickness of the coating
layer. Even if this example is somewhat remote from processes the reader might be
interested in or is familiar with, the reasons for discussing it here are three-fold:
First, this is a typical real problem, affected by all the nuances of many factors
that evade the manufacturer's control and whose effects are not readily apparent. Second,
this study uses almost all the steps of robust design described in Chapter 6; hence
it serves well to illustrate those steps. Third, the background of this problem is
well-documented in [14], and therefore, those seeking more details may refer to it.
The process design (or control) parameters in the manufacture of optical
filters include the method of cleaning the quartz substrates before coating, the
temperature at which one holds the substrates, coating vapour nozzle position, etc.
By brainstorming with the manufacturing technicians, the investigators were able
to identify eight such parameters (Table 7.1). Typical robust design experiments, the
reader should note, include five to ten control parameters that one studies together.
TABLE 7.1
EXISTING AND EXPERIMENTAL SETTINGS FOR PROCESS PARAMETERS

Control Parameter              Existing Setting   Setting 1     Setting 2
A  Rotation method             Oscillating        Continuous    Oscillating
B  Wafer code                                     668G4         678D4
C  Deposition temperature      1215°C             1210°C        1220°C
D  Deposition time             Low                High          Low
E  Arsenic flow rate           57%                55%           59%
F  HCl etch temperature        1200°C             1180°C        1215°C
G  HCl flow rate               12%                10%           14%
H  Nozzle position             4                  2             6

In this example, the investigators observed that the sources of noise factors
that were either expensive or impossible to control included ambient temperature
and humidity, drifts occurring in the settings of control parameters, and variation
in raw materials. The principal sources of noise, as the investigators could identify,
included uneven temperature, uneven Ti/Si dioxide vapour concentration, and uneven
vapour composition profiles in the chamber in which one kept the substrate for
coating. Other factors could be substrate location effects, variation in deposit
thickness across the face of the substrate, etc. The investigators chose to obtain
multiple measurements of the performance characteristic at several points on
each filter experimentally produced to assess the effect of these principal
uncontrolled (noise) factors.

7.2 TEST SETTINGS OF CONTROL PARAMETERS AND THE OA


The investigators found that one could set the different control parameters at any
of the several possible settings shown in Table 7.1. Clearly, these different settings
would merit inclusion in the study. However, before proceeding to design the
orthogonal experiments, the investigators verified that these settings reflected the
reasonable range over which these process control parameters could be varied.
Notice that the investigators chose two test settings for each control parameter such
that these settings bracketed the setting routinely employed by the manufacturing
technicians.
With two test settings for each of the eight parameters chosen, at first
glance it would appear that one would need here 2^8 or 256 different runs to take
care of all the possible combinations of these settings. Under the conditions given
in Section 6.1 (with which by now the reader should be well acquainted), however,
an OA would drastically reduce the total experimental effort needed here.
A control array is picked (see Fig. 5.1) for the initial experiments from the
standard available OAs (Appendix B). (Note that at this stage one would not know
if further experiments beyond these initial ones would be necessary.) Since one
wished here to study eight factors at two treatments each, one could select the L12
or the L16 array (Fig. 6.1). In fact, one could use the L16 OA to study up to 15
control parameters, each with two settings. The investigators selected the L16 for
the present problem to increase the error degrees of freedom (Section 3.5); the L16
provided a few extra experiments useful in improving the precision of the
analysis. Note that even the L16 array would require only 16 experiments, a
considerable reduction from 256.
However, the selection of the L 16 array to guide the experiments in this
investigation did impose some constraints. It forced the investigators to assume that
each control factor had an independent effect on the response variable (the variability
of the refractive index here), and that only the main effects were important and
one could ignore any interactions. Whether these were valid assumptions would be
verified later with a confirmation experiment after the present set of experiments
were over and one had identified the optimal setting for each control parameter.
Next, the investigators examined the L16's linear graphs (see Section 8.2
and Appendix B) to decide upon the assignment of the different control factors
(A, B, C, etc.) to the OA's columns. Table 7.2 shows the final assignment. The
entries (1 and -1) in the array designate the two coded settings for each factor
that would constitute the different individual experiments. Notice that the
investigators left some array columns unassigned.

TABLE 7.2
THE L16 ORTHOGONAL ARRAY

[The original table lists the 16 experiments as rows against the 15 columns of the L16 array, with the eight control factors A to H assigned to eight of the columns; each entry is one of the two coded settings, 1 or -1.]

In practice, it is immaterial which control factor is assigned to which
column of the OA, as long as these assignments are consistent with the OA's linear
graph. Once done, however, the column assignments would remain unchanged for
all 16 experiments.
Note also that once planned as shown in Table 7.2, the orthogonal experiments
would produce filters without any regard to whether the product thus produced
would be acceptable (i.e., meet some specification) or not. As already mentioned,
the aim of optimization experiments is to uncover parameter settings that lead to
quality production rather than to produce acceptable items in each experiment. The
effort expended in parametric optimization experiments is an investment aimed
at engineering quality into products [1].
Several obvious sources of noise (additional factors that might cause the
refractive index to vary) could not be economically included in the present study.
This is typical of real-life optimization experiments. In the present investigation one
assessed the effect of noise by making multiple measurements of the functional
characteristic at different positions on each filter experimentally produced, and
on several filters produced under identical control parameter settings. The
investigators made this measurement at five places on each filter produced. The
total number of filters produced under each test setting combination (as shown in
Table 7.2) was 14.
7.3 PERFORMANCE MEASUREMENTS AND THE S/N RATIO


In this investigation one made a total of 5 x 14 or 70 (five different positions on
each of the 14 filters manufactured per experimental run) measurements of the
performance parameter (the epitaxial coating thickness of the deposit obtained on
the quartz substrate).
The average thickness values {ȳ_i} shown in Table 7.3 produced an estimate
of the thickness of crystals grown under the stated experimental conditions. The values {(s²)_i}

TABLE 7.3
MEAN COATING THICKNESS AND log OF VARIANCE BY RUN

Experiment (i)    ȳ_i (μm)    log10 (s²)_i

 1                14.821      -0.4425
 2                14.888      -1.1989
 3                14.037      -1.4307
 4                13.880      -0.6505
 5                14.165      -1.4230
 6                13.860      -0.4969
 7                14.757      -0.3267
 8                14.921      -0.6270
 9                13.972      -0.3467
10                14.032      -0.8563
11                14.843      -0.4369
12                14.415      -0.3131
13                14.878      -0.6154
14                14.932      -0.2292
15                13.907      -0.1190
16                13.914      -0.8625

showed the variability in the resulting thickness. One calculated ȳ_i and (s²)_i
using the formulas

    ȳ_i = (1/70) Σ y_j                 (sum over the 70 measurements j)

    (s²)_i = (1/69) Σ (y_j - ȳ_i)²

where y_j represents a single measurement of thickness. For improved statistical
properties [14], the actual analysis employed log10 (s²)_i rather than (s²)_i directly.
Table 7.3 shows the values.
Recall that the objective of this study was to reduce the variations in refractive
index (the performance characteristic of interest). This translated to making the
thickness variance (s²)_i (an equivalent engineering property of the filter) small.
The study also aimed at achieving a mean thickness of 14.5 μm. The procedure
followed the two-step process given in Section 6.9.
7.4 MINIMIZING log10 (s²), THE VARIABILITY OF THICKNESS


Because of the special structure of the OA used here and the manner in which one
assigned the control factors A to H to the OA columns, it was not difficult to
calculate the effects of each of the control factors, assuming that additivity
(separability) of the main effects held and there were no factor-factor interaction
effects (Section 4.2). The effect on log10 (s²) caused by the control factor A (rotation
method) could be estimated as follows:

    Effect of A on log10 (s²) = avg. (log10 (s²) with factor A at Setting 2)
                                - avg. (log10 (s²) with factor A at Setting 1)
                              = [Σ over runs i = 9 to 16 of log10 (s²)_i] / 8
                                - [Σ over runs i = 1 to 8 of log10 (s²)_i] / 8
                              = -0.4724 - (-0.8245)        (from Table 7.3)
                              = 0.3521

One similarly estimated the effects of the other control factors (B to H) on
log10 (s²). Table 7.4 shows these.
TABLE 7.4
AVERAGE VALUE OF log10 (s²) OBSERVED AT EACH FACTOR SETTING

Control Parameter             Setting 1    Setting 2    Difference

A  Rotation method            -0.8245      -0.4724       0.3521
B  Wafer code                 -0.7095      -0.5875       0.1220
C  Deposition temperature     -0.7011      -0.5958       0.1053
D  Deposition time            -0.5237      -0.7732      -0.2495
E  Arsenic flow rate          -0.6426      -0.6543      -0.0117
F  HCl etch temperature       -0.6126      -0.6843      -0.0717
G  HCl flow rate              -0.5980      -0.6989      -0.1008
H  Nozzle position            -0.3656      -0.9313      -0.5658

The entries in the Difference column in Table 7.4 show the effect of the
corresponding factors on log10 (s²), the variability in thickness. One notices that
control factors A (rotation method) and H (nozzle position) have a greater absolute
effect than the six other factors. To minimize variability, therefore, one should
set the factors A and H as follows:
Rotation method at Setting 1, or continuous.
Nozzle position at Setting 2, or 6.
If additivity of effects existed, these settings would presumably lead to a reduction
in the process log10 (s²).

7.5 THE CONFIRMATION EXPERIMENT


In setting the two control factors A and H (rotation method and nozzle position)
at their optimum levels as above, one made several assumptions. These were:
1. The effects of the (main) factors are additive.

2. There are no control factor-control factor interactions.

3. The relationship between each control factor and log10(s²) is linear.
The investigators made three independent confirmation runs after setting the rotation
method to continuous and the nozzle position at 6.
The average log10(s²) value observed was −1.244. The new variance in
thickness, s², therefore became 0.057, about 40% of the original variance of 0.143
(a reduction of roughly 60%) obtained before one set the two control factors at the
new settings. The investigators considered this reduction sufficient as the confirmation.
For the purpose at hand, the experiments had succeeded in reducing the variability
in thickness, thus achieving the first objective of this investigation.

7.6 ADJUSTING MEAN CRYSTAL THICKNESS TO TARGET



As mentioned in Section 6.8, some control factors affect only the mean performance;
they affect the variability of performance under noise minimally, or not at all. One
may identify such factors from the experimental results by focussing primarily
on the mean of each run rather than on its variability. One may use these
factors to fine-tune the manufacturing process to get its output on target
without affecting or increasing variability.
One could estimate the average values of ȳ (epitaxial thickness) resulting
from a given setting of a control factor from the experimental results summarized
under ȳ_i in Table 7.3. As for log10(s²) in Section 7.4, these estimates indicate
the effect of each control factor on average thickness. The calculations are similar
to those used in finding factor effects on log10(s²). Table 7.5 shows the results.
TABLE 7.5
AVERAGE EPITAXIAL THICKNESS AT THE TWO SETTINGS OF EACH FACTOR

Control Parameter            Setting 1    Setting 2    Difference
A  Rotation method           14.4161      14.3616      -0.0545
B  Wafer code                14.3610      14.4167       0.0556
C  Deposition temperature    14.4435      14.3342      -0.1094
D  Deposition time           14.8069      13.9709      -0.8359
E  Arsenic flow rate         14.4225      14.3552      -0.0674
F  HCl etch temperature      14.3589      14.4189       0.0600
G  HCl flow rate             14.4376      14.3401      -0.0975
H  Nozzle position           14.3180      14.4597       0.1417

One notices from Table 7.5 that factor D (deposition time) has the largest
effect on average thickness (ȳ), the other factors having relatively little influence.
From the data in Table 7.4 one finds that deposition time has a relatively small
effect on variability, so changing deposition time to adjust the average thickness
produced by the manufacturing process would not add variability. In other words,
deposition time has little interaction with the noise factors; hence it would serve
effectively as the thickness adjustment or scaling parameter (Section 6.8). Adjusting
the deposition time to make ȳ = 14.5 μm, assuming that its effect on ȳ (the
average thickness) is linear, accomplishes Step 2 of Section 6.9.
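A minimal sketch of this Step 2 adjustment follows. It interpolates linearly between the two level means of deposition time (Table 7.5) to find how far to move from Setting 1 toward Setting 2 so that the mean thickness lands on the 14.5 μm target; the actual deposition times are not given in the text, so they remain symbolic placeholders.

```python
def interpolate_setting(mean_at_1, mean_at_2, target):
    """Fraction of the way from Setting 1 to Setting 2 (assumes a linear effect)."""
    return (mean_at_1 - target) / (mean_at_1 - mean_at_2)

# Level means for deposition time (factor D) from Table 7.5
mean_setting1, mean_setting2 = 14.8069, 13.9709
frac = interpolate_setting(mean_setting1, mean_setting2, target=14.5)
print(f"Move {frac:.2f} of the way from Setting 1 toward Setting 2")
# With hypothetical deposition times t1 and t2 (not stated in the text),
# the adjusted time would be t1 + frac * (t2 - t1).
```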

Thus one has optimized the filter manufacturing process. Figures 7.1 and 7.2
illustrate the factor effects graphically.

F a c to r s and treatm en ts
Fig. 7.1 Effect of factors on epitaxial thickness of crystals.

F a c to r s and treatm en ts
Fig. 7.2 Effect of factors on log 1 0 (s2)
Selecting Orthogonal Arrays and Linear Graphs
8.1 SIZING UP THE DESIGN OPTIMIZATION PROBLEM
Before one attempts to select an OA to guide the design optimization experiments,
one must answer the following critical questions:
1. How many factors are to be studied?
2. How many treatment levels are possible for each factor?
3. What specific 2-factor interactions are to be investigated?
4. Would one encounter any particular difficulty during the runs (e.g.
some factors may not permit frequent treatment changes)?
Except in unusual circumstances, the investigator will be able to locate a
standard OA fitting his needs. If a standard array does not meet the objectives
of the investigation, the investigator should refer to an advanced text on Taguchi
methods, such as references [4, 5, or 8].
The first step in selecting the correct standard OA involves counting the
total degrees o f freedom (dof) present in the study. This count fixes the minimum
number of experiments that must be run to study the factors involved.
In counting the total dof, the investigator commits 1 dof to the overall mean
of the response under study. This begins the dof count at 1.
The number of dof associated with each factor under study equals one less
than the number of treatment levels available for that factor. Following this, the
investigator considers the 2-factor interactions of interest.
One determines the total dof in the study as follows: If n_A and n_B represent
the number of treatments available for two factors A and B respectively, (n_A × n_B)
would equal the total combinations of treatments. Then
    1 = dof to be used by the overall mean
    n_A − 1 = dof for A
    n_B − 1 = dof for B
    (n_A − 1) × (n_B − 1) = dof required to study the A × B 2-factor interaction

An example will illustrate this procedure. If a design study involves one 2-level
factor (A), four 3-level factors (B, C, D, E), and one wishes to investigate also the
A × D interaction, then the dof would be as follows:

Source of dof           Required dof
Overall mean            1
A                       2 − 1 = 1
B, C, D, E              4(3 − 1) = 8
A × D interaction       (2 − 1) × (3 − 1) = 2

Hence,
total dof = 1 + 1 + 8 + 2 = 12

Therefore, in this example, one must conduct at least 12 experiments to be able
to estimate the desired five main effects and the one 2-factor interaction effect. The
corresponding OA must therefore have at least 12 rows.
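The dof bookkeeping can be captured in a small helper. The sketch below is a generic Python function, not something from the text, and it reproduces the count of 12 for the example above.

```python
def total_dof(factor_levels, interactions=()):
    """Count degrees of freedom: 1 for the mean, (levels - 1) per factor,
    and the product of (levels - 1) terms for each listed 2-factor interaction."""
    dof = 1                                                 # overall mean
    dof += sum(levels - 1 for levels in factor_levels.values())
    for f1, f2 in interactions:
        dof += (factor_levels[f1] - 1) * (factor_levels[f2] - 1)
    return dof

levels = {"A": 2, "B": 3, "C": 3, "D": 3, "E": 3}
print(total_dof(levels, interactions=[("A", "D")]))         # 12
```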
Taguchi identified several basic OAs, which he called standard OAs. These
OAs usually suffice for the needs of most design optimization studies. Appendix B
provides a summary of the often used standard OAs.
After finding the dof, the selection of the appropriate standard OA becomes
reasonably straightforward.
In the example given above, the investigation involves one 2-level factor,
four 3-level factors, and only one 2-factor interaction. If one could drop the 2-factor
interaction from the study, then one would look for a standard array with a
number of rows satisfying the required dof. The dof count (without the 2-factor
interaction) suggests that one needs here a minimum of 10 experiments (or at least
10 rows). The OA one selects should also have at least one 2-level column and four
3-level columns. Figure 6.2 shows that the L18 standard array meets these conditions.
Note, however, that not all the eight columns of this array would be assigned to a
factor; the study would use only five of the orthogonal columns, leaving the remaining
columns unassigned. (We will explain shortly which factor would be assigned to
which column.)
The structure of this OA and the reasons (of balancing the factor effects)
mentioned in Section 4.2 require that all 18 experiments indicated by the L18 OA
must be run. If one runs fewer than 18 experiments here, then it would not be
possible to complete the analysis necessary to evaluate the desired effects. One
assigns the factors A, B, etc. to the L18 OA columns after referring to an appropriate
linear graph (a characteristic of OAs, explained in Section 8.2). Briefly, linear
graphs form the starting point in column assignments. The L18 linear graph
(shown in Appendix B) shows that Columns 1 and 2 in this OA interact, but this
interaction does not affect the rest of the columns. Therefore, one may assign the
single 2-level factor (A) to Column 1. One may assign the four 3-level factors
to Columns 3-6, respectively.
Suppose now that one wishes to study also the interaction between the single
2-level factor (here A) and one 3-level factor (here D). Then one would combine
Columns 1 and 2 to form a higher order (6-level) column and estimate the interaction
by a 2-way table. However, such modifications and advanced designs are beyond
the scope of this introductory text. The interested reader may refer to [6].
Experts on Taguchi methods recommend that, as far as possible, a beginner
should stick to the direct use of one of the standard OAs. Also, a beginner should
restrict the number of experiments to a maximum of 18. This causes the choice of
OAs to remain between L4 and L18. Further, a beginner should restrict his
investigation to either all 2-level or preferably all 3-level factors, and he should
not in the initial stage of study attempt to estimate the interactions. One
should use the guidelines given in Figs. 6.1 and 6.2 to pick the appropriate array.
Some general guidelines for the assignment of the different factors to the
array columns are also available. Taguchi suggested, for instance, that factors that
are rather difficult to change from experiment to experiment should be assigned
to the columns toward the left in the array.
It is desirable that one randomly orders (i.e., randomizes) the sequencing of
the individual experiments to be run, to minimize any biasing effect of the
uncontrolled factors. Such bias may develop, for instance, from cyclic variation of
ambient temperature over 24 hours, or a shift change in the plant, or a switch in
the batch of raw material used.
Again, we suggest that the reader should refer to an advanced text on
Taguchi methods [4, 5, or 8] in order to find arrays appropriate for sophisticated
design optimization problems. The L18 array is the most commonly used array
because it can study up to seven 3-level factors and one 2-level factor.
Inclusion of 2-factor (A × B type) and higher order interactions in optimization
studies leads to a slightly different procedure to identify the appropriate
standard array. This procedure involves a technique called 'linear graphs'.

8.2 LINEAR GRAPHS AND INTERACTIONS


To retain simplicity in analysis and the rapid discovery of the optimum settings for
the design parameters, Taguchi emphasized that one should use the additive (main
factors only) model in robust design experiments. The additive model deliberately
avoids the study of interaction effects, focussing instead on identifying the main
effects of design factors. From the main factor effects one determines the treatments
at which one should set the control factors and the signal factor to deliver robustness.
One employs the verification experiment to determine whether the main-factors-only
model is adequate for the optimization being attempted.
Sometimes interactions may be important and significant. To this end, Taguchi
suggested inclusion of a small number of 2-factor interactions and estimation of
the interactions using certain special unassigned OA columns. In order to identify
which columns of an OA should be used to study certain 2-factor interactions,
Taguchi devised a special method, known as the linear graph technique.
Linear graphs are graphic representations of interaction information in a
matrix experiment. They make handy tools in the assignment of the different main
factors and their interactions to the different columns of an OA. References [4] and
[5] contain a good collection of the standard OAs and their respective linear
graphs (see Appendix B for OAs in frequent use).
In a linear graph, one represents the columns of the array by dots and lines
connecting the dots (Fig. 8.1). Each dot on the linear graph represents a main
factor. A line connecting two dots represents the interaction between the two
corresponding factors.
Because of the particular combination of factor treatments within it, an OA
may have several associated linear graphs. For instance, the L8 OA displayed in
Appendix B has at least six distinct linear graphs. Figure 8.1 shows one standard
linear graph for the L8 array. Table 8.1 shows the corresponding factor-to-column
assignments.
Note an aspect of critical importance in the linear graph shown in Fig. 8.1.
This graph shows that Columns 3, 5, and 6 of the L8 array correspond to the interactions
between Columns (1, 2), (1, 4), and (2, 4) respectively. One can use Column 7 to evaluate
only a main effect, and not an interaction. Table 8.1 shows these column
assignments.

Fig. 8.1  The standard linear graph for the L8 array.

TABLE 8.1
L8 COLUMN ASSIGNMENTS TO STUDY THREE 2-FACTOR INTERACTIONS

Column    Assignment of Factor or Interaction
1         A
2         B
3         A × B
4         C
5         A × C
6         B × C
7         D

We made direct use of the linear graph representation of the L8 array
shown in Fig. 8.1 to construct Table 8.1. The column assignment as shown
would allow the study of the main effects of the 2-level factors A, B, C, and D as
also the 2-factor interactions A × B, B × C, and A × C.
Because the construction of a standard OA is not arbitrary, each standard OA
is capable of identifying certain particular main effects and only certain 2-factor
interactions. However, some OAs can be used in more than one way. For instance,
as Taguchi showed, the L8 array also possesses an alternative linear graph, displayed
in Fig. 8.2. This second linear graph for L8 also has four dots representing four
columns, but the lines have a different topology.

Fig. 8.2  An alternative linear graph for the L8 array.

Column 3 represents the interaction between Columns 1 and 2; Column 6, the
interaction between Columns 1 and 7; and Column 5, the interaction between
Columns 1 and 4. No other interactions or main effects can be studied with this
arrangement without a modification of this linear graph. However, suppose the
objective is to study four 2-level factors A, B, C, and D and estimate their main
effects as also the interactions A × B, B × C, and B × D.
Then the column assignments shown in Table 8.2 would accomplish that goal.

TABLE 8.2
AN ALTERNATIVE COLUMN ASSIGNMENT FOR L8 TO STUDY THREE 2-FACTOR INTERACTIONS

OA Column    Assignment of Factor or Interaction
1            B
2            A
3            A × B
4            C
5            B × C
6            B × D
7            D

With only modest effort it is usually possible to locate a standard OA and the
applicable linear graph in many design optimization studies. We emphasize again
that the selection of the OA and the linear graph must both be correct. This alone can
lead to the correct experiments to study the main and interaction effects of interest.

8.3 MODIFICATION OF STANDARD LINEAR GRAPHS


It may be necessary to modify a standard linear graph to make it fit the special
needs of a real problem, if the standard assignment of columns does not evaluate
all the main and interaction effects of interest. The following modifications to a
standard linear graph are possible:
1. Breaking a link and replacing this broken link with a free-standing dot, to
free an (interaction) column of little interest so that it may be assigned to an additional main factor.
2. Forming a new link between two dots by removing a third free dot from
the graph. The new line would now represent the interaction between the columns
assigned to the two dots at the ends of the newly formed link.
3. Moving a link between a pair of dots to go between another pair of dots,
provided the column (link) being moved still represents the interaction of the
second pair of dots (columns).
The following example illustrates the use of such modifications:

EXAMPLE 8.1: A design optimization problem involves five 2-level factors
A, B, C, D, and E. The designer wishes to estimate all five main effects, and also
the interactions A × B and B × C.

Since five 2-level factors are involved, a reference to Fig. 6.1 helps in the
preliminary selection of the L8 array as the experimental OA. A reference to
Appendix B, however, suggests that one available standard linear graph for the
L8 OA would lead to the evaluation of four main factors and three 2-factor
interactions (Fig. 8.3(a)).

Fig. 8.3  Modification of a standard linear graph: (a) original graph; and
(b) removing an interaction to create two main-effect columns (6 and 7).

One may use Modification Rule 1 above to remove one interaction from
Fig. 8.3(a) and create the modified linear graph shown in Fig. 8.3(b). This
modification frees up the interaction Column 6 between Columns 1 and 7 (the
interaction being of no interest to the designer) and produces two free columns, 6
and 7, which may be assigned to factors D and E respectively.
Some additional methods for modifying linear graphs to create special
OAs to suit special real needs are also available [6, 8].

8.4 ESTIMATION OF FACTOR INTERACTIONS USING OAs


Two major advantages result from using OAs rather than full factorial designs
(Fig. 6.3) to plan design optimization studies. These are (a) the ease and (b) the
speed with which one may estimate the main and the interaction effects of interest.
Section 4.4 showed the method for estimating main effects using an OA. This
section shows how one may calculate 2-factor interaction effects from observations
obtained in a set of orthogonally designed experiments. We should say a word
about interactions before we proceed. An interaction in a statistical experiment
is a difference between differences in effects, as noted in the lithography example
cited in Section 3.1. One may think of the interaction model as a generalization of
the simple additive model

    Y_ij = μ + α_i + β_j + ε_ij

which is the same as Eq. (4.1.2). Here, the effect due to one factor is affected by
the level of the other factor present. This may be represented by

    Y_ij = μ + α_i + β_j + (αβ)_ij + ε_ij     (8.4.1)

in which the term (αβ)_ij represents the effect due to the two independent factors
interacting to produce an additional effect, besides their individual main effects α_i
and β_j.

Some examples of interaction are commonplace in real life. Many physicians
believe that the higher a patient's blood pressure, the greater is the negative effect
of overweight on life expectancy. This belief specifies an interaction between
having high blood pressure and being overweight in influencing a person's life
expectancy. The chemical engineering process model

    Utilization (%) = K (mixing HP per 1000 gal)^L (superficial velocity)^M

similarly shows an interaction between the level of agitation (mixing HP per
1000 gal) present in the reactor and the superficial velocity in producing an
effect on the utilization percent.

Even though Taguchi himself and later several others [5, 14, 17] have
urged use of suitable mathematical transformations or other means (Section 4.1)
to minimize the effect of interactions on design optimization experiments, in some
real life experiments it is simply not possible to ignore DP-DP interactions.
Fortunately, most interactions when present tend to be between two factors. Three
and higher order interactions are infrequent and usually small in comparison with
2-factor interactions [18, 24].
Interaction between two dichotomous (2-level) factors is not difficult to
estimate, provided the interaction column in the pertinent OA is unused (i.e.,
unassigned to another factor). We illustrate the procedure with two examples.
EXAMPLE 8.2: The columns of the standard L8 array have been assigned to
four 2-level factors A, B, C, D, and their interactions A × B, B × C, and B × D,
according to Table 8.2. The eight experiments run have resulted respectively in
the observations y1, y2, y3, y4, y5, y6, y7, and y8.
Recall from Table 8.2 that we assigned the A × B interaction to Column 3.
Keeping in view the structure of the L8 array (specifically the entries in Column 3
of L8 in Appendix B), one may construct the two-way table shown below. This
allows one to estimate the A × B interaction. The 2-level (Hi/Lo, or 1/2)
settings for factors A and B are arranged such that one obtains the
following average responses because of the particular combinations of the factor
levels present in the eight experiments that produced {y_i, i = 1, 2, ..., 8}:

            B Hi             B Lo
A Hi    (y1 + y2)/2      (y7 + y8)/2
A Lo    (y3 + y4)/2      (y5 + y6)/2

This leads to the estimate of the A × B 2-factor interaction as

A × B interaction = [(y1 + y2)/2 + (y5 + y6)/2] − [(y3 + y4)/2 + (y7 + y8)/2]
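The cell-mean arithmetic is shown below as a short Python sketch. The eight response values are hypothetical; the grouping of observations into cells follows the two-way table above.

```python
# Hypothetical L8 observations y1..y8 (index 0 is y1)
y = [12.1, 11.8, 9.4, 9.9, 10.2, 10.6, 13.0, 12.7]

cell_AhiBhi = (y[0] + y[1]) / 2   # experiments 1, 2
cell_AhiBlo = (y[6] + y[7]) / 2   # experiments 7, 8
cell_AloBhi = (y[2] + y[3]) / 2   # experiments 3, 4
cell_AloBlo = (y[4] + y[5]) / 2   # experiments 5, 6

# Difference between differences, as in the expression above
AxB = (cell_AhiBhi + cell_AloBlo) - (cell_AloBhi + cell_AhiBlo)
print(round(AxB, 3))
```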

EXAMPLE 8.3: Pressure and temperature are considered to be the key factors in
controlling microvoids in powder metallurgy processing. Two levels of pressure
and two levels of temperature are used in an L4 experiment, resulting in the following
four observations:

                        Temperature
                        T1       T2
Pressure     P1         9        5
             P2         8        7

The estimated main effect (on microvoids) caused by temperature going from T1
to T2 equals
    (5 + 7)/2 − (9 + 8)/2 = 6.0 − 8.5 = −2.5
The main effect on voids caused by pressure going from P1 to P2 equals
    (8 + 7)/2 − (9 + 5)/2 = 7.5 − 7.0 = 0.5
The (temperature × pressure) interaction is the difference between differences.
The within-row difference between differences equals
    (9 − 5) − (8 − 7) = 3
This, one should note, is identical to the within-column difference between
differences, which is
    (9 − 8) − (5 − 7) = 3
Therefore, the estimated interaction effect is 3, and not zero. This estimate
suggests that an interaction possibly exists between the two factors and that it is
perhaps larger in magnitude than the main effects! An ANOVA (not shown
here) done with replicated observations (to result in a nonzero error dof) suggested
that all three effects (the main effects of temperature and pressure, and the
interaction between them) are significant.
The mere indication of the presence of interaction, however, is not enough.
One needs here to develop the underlying prediction model that will relate (within
the influence space) the levels of temperature and pressure to the level of resulting
voids. This may be done as follows:
Since one finds only the two main effects and the (temperature × pressure)
interaction to be present, one may assume that the total effect (the average volume
of microvoids) is given by the model

    Y = C + C_t δ_t + C_p δ_p + C_tp (δ_t × δ_p)

where the subscripts t and p represent temperature and pressure. There being four
constants (C, C_t, C_p, and C_tp) in this model, it is not difficult to estimate them from
the four observations already obtained. In the model above, δ_t denotes a change
in temperature from T1, and δ_p a change in pressure from P1. Accordingly,
substituting the appropriate terms in the expression for Y above, we get

    9 = C + C_t × 0 + C_p × 0 + C_tp × 0
    5 = C + C_t (T2 − T1) + C_p × 0 + C_tp (T2 − T1) × 0
    8 = C + C_t × 0 + C_p (P2 − P1) + C_tp × 0 × (P2 − P1)
    7 = C + C_t (T2 − T1) + C_p (P2 − P1) + C_tp (T2 − T1)(P2 − P1)

This gives

    C = 9,  C_t = −4/(T2 − T1),  C_p = −1/(P2 − P1),  C_tp = 3/[(T2 − T1)(P2 − P1)]

These coefficients lead us to a prediction model (albeit a crude one), valid within
the influence space, as

    Y = 9 − 4 δ_t/(T2 − T1) − δ_p/(P2 − P1) + 3 δ_t δ_p/[(T2 − T1)(P2 − P1)]
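The same four coefficients can be recovered numerically. The sketch below solves the four equations with numpy, using 0/1 indicator variables for the temperature and pressure changes, so the solution comes out as C = 9, Ct(T2 − T1) = −4, Cp(P2 − P1) = −1, and Ctp(T2 − T1)(P2 − P1) = 3, matching the hand calculation.

```python
import numpy as np

# Columns: constant, temperature change indicator, pressure change indicator, interaction
X = np.array([[1, 0, 0, 0],    # (T1, P1) -> observed 9
              [1, 1, 0, 0],    # (T2, P1) -> observed 5
              [1, 0, 1, 0],    # (T1, P2) -> observed 8
              [1, 1, 1, 1]])   # (T2, P2) -> observed 7
y = np.array([9.0, 5.0, 8.0, 7.0])

coeffs = np.linalg.solve(X, y)
print(coeffs)   # [ 9. -4. -1.  3.]
```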

A rigorous approach to building predictive models, after one has established


certain cause-effect relationships using ANOVA, is regression analysis [18] (see
Sections 10.7 and 10.8).

8.5 SUMMARY
A design optimization study begins with the identification of the factor effects
to be investigated, the performance response to be optimized, and the applicable
S/N ratio. Once the task has been sized up, one determines the degrees of
freedom in the study. The dof count leads to the identification of an OA suitable
for guiding the statistical experiments. To the maximum extent possible, the
investigator should attempt to use a standard OA in his investigations, based on the
guidelines provided in Figs. 6.1 and 6.2.
If certain 2-factor interactions have to be included in the study, the investigator
should examine the linear graphs (such as those shown in Appendix B)
associated with the standard OA selected. Generally speaking, it is possible to
decide on array column assignments to main and 2-factor interaction effects with
the help of the standard linear graphs.
an existing linear graph to complete the assignment of columns and then conduct
experiments to estimate effects that are unusual but are of particular interest.

EXERCISE
1. Refer to Tables 7.2 and 7.3, and the appropriate linear graph for the L16 OA
in Appendix B, to produce the estimated 2-factor interaction effects on crystal
thickness within each of the following pairs of factors:
(a) Rotation method (A) and wafer code (B).
(b) Rotation method (A) and arsenic flow rate (E).
(c) Deposition temperature (C) and HCl etch temperature (F).
9  Case Study 2: Product Optimization
Passive Network Filter Design
9.1 THE PASSIVE NETWORK FILTER
Chapter 7 presented an illustration of applying Taguchi's two-step procedure
(Section 6.9) to optimize the control parameters of a manufacturing process. This
chapter explores a more complex design problem, one requiring the simultaneous
optimization of two performance features. Filippone [23] applied Taguchi's
design optimization method to this problem. Suh [13] provides a discussion of
Filippone's results. The present problem highlights the following points:
1. The problem involves two (rather than a single) performance characteristics,
both of which the designer must simultaneously optimize.
2. This example uses a mathematical model rather than a physical prototype
of the product to conduct the experiments.
3. The source of noise here is the uncertainty in the quality of the components
to be used in fabricating the product. This uncertainty is a factor beyond the designer's
control. The designer would attempt to deliver a design that provides satisfactory
performance in spite of the presence of this noise. The designer would thus seek
a goal central to robust design, one that Taguchi has given particular emphasis
in his writings [4]. Achieving it makes it possible to use less expensive
components in order to reduce manufacturing cost, without compromising
the performance of the product.
4. This example illustrates the complex character of some real design
problems. The choices available to the designer are neither very distinct nor clear.
In fact, as we shall see, the 2-step optimization procedure appears to fall somewhat
short of the final goal. However, Taguchi's overall philosophy does clarify the
nature of the decisions facing the designer in such problems.

9.1.1 The Problem


A passive network filter is an electronic circuit device that constitutes a key
component of instrumentation systems designed to record small mechanical
displacements, such as those sensed by strain gauges. The complete instrumentation
system consists of a transducer, a demodulator, the passive filter, and a recorder
(Fig. 9.1). In normal use, the transducer produces an amplitude-modulated
equivalent of the displacement signal on a carrier frequency and passes this signal to the
demodulator. The demodulator in turn transforms the signal into a demodulated full
wave and then transfers this wave to the filter. The filter appropriately attenuates
the wave signal that arrives, and feeds it to a galvanometer-recorder. The total
process thus converts the original (small) mechanical displacement sensed by the
transducer into a recordable deflection of the galvanometer.

Fig. 9.1  A passive filter interfacing strain gauge output (V_s) with recording
instrument output (V_o).

The present discussion confines itself to the design of the passive filter, which
should perform two distinct functions. First, it must re-create the original mechanical
deflection pattern with minimum effect on the output; this implies that the filter must
effectively filter out the carrier frequency from the output. Second, the filter must
attenuate the (filtered) output signal to a proper scale.
Typical instrumentation system design projects would consider and evaluate
several alternative configurations for the components. For instance, two different
types of displacement transducers (or a strain gauge bridge) might be considered,
alternative filter circuit designs consisting of resistors and capacitors might be
employed, and different types of galvanometers might be used. The designer
facing the present task focussed specifically on the optimum design of the filter's
network circuitry. The network consisted of a capacitor C and two resistors R2 and
R3, as shown in Fig. 9.1.
One may summarize the challenge facing the designer as follows: Since the
parts and components to fabricate the filter and the complete instrumentation system
are to be of industrial rather than precision quality, when one actually assembles
the system and puts it to work, these parts and components may not always have
the exact characteristics specified by the designer. Generally the price of such
parts and components goes up with the precision of their manufacture. Therefore,
the designer must treat the uncertainty in the parts and component characteristics
as a source of noise (aspects of a design or system not in the designer's control,
see Section 5.1), and still design a filter with satisfactory performance.
In the present case, the designer anticipated that each purchased component
(Rs, Vs, Rg, and Gsen [the galvanometer's sensitivity]) would vary 0.15% from its
specified nominal (catalogue) value (Table 9.1). Also, the store-purchased parts
used in fabricating the filter (resistors R2 and R3 and the capacitor C) could
similarly vary from their marked values (Table 9.4). The traditional design of
electrical devices uses sensitivity analysis to help design decisions in such cases,
if the system is amenable to direct analysis [5, 19]. Filippone [23] demonstrated the
use of the Taguchi method in such situations. We recall again that the motivation

here was to permit the eventual use, wherever possible, of inexpensive components
in fabricating the complete instrumentation system.

TABLE 9.1
CHARACTERISTIC VALUES AND SUPPLIER TOLERANCES FOR MARKET-PURCHASED COMPONENTS

Characteristic      Nominal Value    Tolerance Value
Rs (Ω)              120              0.15%
Vs (mV)             15               0.15%
Rg (Ω)              98               0.15%
Gsen (μV/in)        657.58           0.15%

9.2 FORMAL STATEMENT OF THE DESIGN PROBLEM


A passive filter network to measure displacement signals generated by a strain-gauge
transducer is to be designed and then fabricated using commercially available
components. Figure 9.1 shows the circuit. The filter acts as the interface between
the strain-gauge transducer/demodulator and a light-beam deflection indicator/
recorder. It conditions the signal from the transducer appropriately, to provide a
demodulated and measurable output [13].
The user has specified two functional performance requirements (FRs) for
the filter, as follows:
FR1: Minimum distortion of output. (This is satisfied if one places the
filter's pole or cutoff frequency at 6.84 Hz.)
FR2: A full-scale beam deflection (with appropriate dc gain) of 3 in.
Following Taguchi's approach, the designer should attempt to identify the adjustment
and control factors among the three DPs R2, R3, and C. Next, he should seek
the optimum settings of these factors in order to satisfy the two functional
requirements above (FR1, FR2).

9.3 THE ROBUST DESIGN FORMULATION OF THE PROBLEM


Developing a mathematical representation of the functional design of passive filter
type devices is a common enough activity in electrical engineering. Using Kirchhoff's
laws one may derive the transfer function (V_o/V_s) for the circuit shown in Fig. 9.1 as

    V_o/V_s = R_g R_3 / [(R_2 + R_g)(R_s + R_3) + R_3 R_s + (R_2 + R_g) R_3 R_s C s']     (9.3.1)

where s' is the Laplace variable. From this transfer function one finds the filter
cutoff frequency ωc and the galvanometer full-scale deflection D respectively as

    ωc = [(R_2 + R_g)(R_s + R_3) + R_3 R_s] / [2π (R_2 + R_g) R_3 R_s C]     (9.3.2)

    D = |V_s| R_g R_3 / {G_sen [(R_2 + R_g)(R_s + R_3) + R_3 R_s]}     (9.3.3)
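Since the "experiments" in this case study are evaluations of Eqs. (9.3.2) and (9.3.3), the surrogate prototype is straightforward to code. The sketch below implements the two equations as reconstructed above; the numerical inputs in the example calls are merely an illustrative combination, not a row from the study's arrays.

```python
import math

def cutoff_frequency(R2, R3, C, Rs, Rg):
    """Filter cutoff frequency (Hz), Eq. (9.3.2); C in farads, resistances in ohms."""
    num = (R2 + Rg) * (Rs + R3) + R3 * Rs
    return num / (2 * math.pi * (R2 + Rg) * R3 * Rs * C)

def deflection(R2, R3, Vs, Rs, Rg, Gsen):
    """Full-scale galvanometer deflection (in), Eq. (9.3.3); Vs and Gsen in microvolts."""
    num = abs(Vs) * Rg * R3
    return num / (Gsen * ((R2 + Rg) * (Rs + R3) + R3 * Rs))

# Illustrative values (ohms, farads, microvolts, microvolts per inch)
print(cutoff_frequency(R2=0.01, R3=20, C=1400e-6, Rs=120, Rg=98))
print(deflection(R2=0.01, R3=20, Vs=15000, Rs=120, Rg=98, Gsen=657.58))
```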
The DPs that the designer is free to specify are R2, R3, and C. Therefore, one must
first determine which DPs among these have the most influence on the two FRs.
Next, one must find the values of those DPs that will minimize variability in
performances FR1 and FR2.
There are two FRs (ωc and D) to be optimized here. It would indeed be
fortunate if we found two independent DPs, each influencing only one functional
requirement (FR1 or FR2). We could then adjust each FR independently to its own
target value. (Suh [13] treats this desirability as an axiom for ideal design and
presents a formalism towards achieving it.) In the present case the designer used
OA experiments as suggested by Taguchi.
Recall that DPs having little interaction with the noise factors (these DPs
have large S/N ratios, Section 5.2) and a linear relationship with the output response
work best as adjustment factors. Therefore, one has to earnestly seek here possibly
two independently adjustable DPs: one to adjust the cutoff frequency ωc, and the
other to adjust the maximum light-beam deflection D.
The designer should use the other DPs, those not used as adjustment factors, to
maximize the S/N ratio (Section 6.3), to help make the design robust. He should
attempt to find DP levels that make the system's response least sensitive to noise,
by statistically experimenting with different levels of the control factors. Before
concluding the task, the designer must verify noise sensitivity and linear dependence
over the entire range of possible DP values.
Based on engineering considerations, the designer selected three nominal levels
(treatments) for each of the three DPs (R2, R3, and C). Table 9.2 shows these levels.

TABLE 9.2
TREATMENT LEVELS FOR DESIGN PARAMETERS

              Treatment 1    Treatment 2    Treatment 3
R3 (Ω)        20             50,000         100,000
R2 (Ω)        0.01           265            525
C (μF)        1,400          815            231

The broad range of values considered here was intentional and in line with the
Taguchi philosophy (Section 6.5). The designer used two separate OAs in this
problem. He selected the first OA (the inner array, specifying the combinations of
the different control factor settings) based on the following considerations: If one
had to test all possible combinations of the treatment levels shown in Table 9.2, that
would require running 3³ or a total of 27 full factorial statistical experiments. If one
assumed the additivity of effects (Section 4.2), the task involved running a much
smaller number of experiments. Since the investigation involved three factors at
three treatments each with no interactions assumed, one found the total dof
(Section 8.1) as follows:

1 for the overall mean.
3(3 − 1) or 6 for the three main factors.
This showed that one must run here a minimum of 7 experiments. The nearest
standard OA (Appendix B) that accommodates three factors at three levels each is
the L9 OA. (The reader should verify that Fig. 6.1 would lead to this answer.)
Since there were more columns available in L9 than what one needed in
assigning the control factors (the three DPs R2, R3, and C) to the columns of
the OA, one could omit here the first column of L9 without influencing the study
or its results. (Note that one could modify the linear graph for L9 shown in
Appendix B to study up to four main factors at three levels each.) Table 9.3 shows
the resultant array of combinations for the three DPs, each DP having three
treatments. This array became the Control Array (see Fig. 5.1) in the study. For
each of the nine combinations of DP levels shown in Table 9.3, one would obtain
separate observations for each of the two FRs, ωc and D.

TABLE 9.3
COMBINATIONS OF CONTROL FACTOR TREATMENTS IN L9

Experiment    R3 (Ω)     R2 (Ω)    C (μF)
1             20         0.01      1400
2             50,000     265       815
3             100,000    525       231
4             20         265       231
5             50,000     525       1400
6             100,000    0.01      815
7             20         525       815
8             50,000     0.01      231
9             100,000    265       1400

The designer defined a second OA, the outer array (or the Noise Array in
Fig. 5.1), for each row in the L9 OA of Table 9.3, to measure the variation in
output response that occurred due to the anticipated uncontrolled variation caused
by the industrial quality tolerances of the system components and parts (Table 9.1).
The application of the outer array thus would simulate the noise due to the
imprecision of the commercial parts used.
The reader should note that this study did not undertake any physical
experimentation. The designer employed the mathematical models given by
Eqs. (9.3.2) and (9.3.3) to simulate the experimental trials.
A total of seven sources of noise existed in the instrumentation system using
the filter network. Table 9.4 shows the three noise-inflicted levels for each of the
seven sources (the components/parts in the system). Note that the inexactness of
the values of the parts (R2, R3, and C) used in fabricating the filter and that of the
system components (Rs, Rg, Gsen, and Vs) would affect their nominal values.
Therefore, each inner array experiment in Table 9.3 was combined with the outer
array to yield a set of noise-affected observations {y1, y2, y3, ..., y27}. This produced
a mean response value m and an S/N ratio value for each of the two output

responses (ωc and D). One calculated the applicable 'nominal is the best' S/N ratio
using the equations

    E(μ²) = (n m² − V)/n

    E(σ²) = V = [Σ y² − (Σ y)²/n]/(n − 1)

    S/N ratio = 10 log10 [E(μ²)/E(σ²)]     (9.3.4)

where m = Σ y/n, with n noise-affected (outer) experiments run for each given
inner experiment. The goal was to determine how the output response varied when
R3, R2, and C, and the components Rs, Rg, Gsen, and Vs, varied over their respective
tolerance ranges (shown in Table 9.4).
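A compact implementation of Eq. (9.3.4) is sketched below. Fed the 27 noise-affected deflection values of Table 9.5, it should reproduce, to rounding, the S/N of about 30.39 dB and mean of about 3.156 reported later in Table 9.6; the short list in the example call is hypothetical.

```python
import math

def nominal_is_best_sn(y):
    """'Nominal is the best' S/N ratio (dB) and mean, per Eq. (9.3.4)."""
    n = len(y)
    Sm = sum(y) ** 2 / n                            # (sum of y)^2 / n
    V = (sum(v * v for v in y) - Sm) / (n - 1)      # sample variance
    sn = 10 * math.log10((Sm - V) / (n * V))
    return sn, sum(y) / n

# Example with a few hypothetical noise-affected observations
sn, m = nominal_is_best_sn([3.27, 3.16, 3.05, 3.27, 3.15, 3.05])
print(round(sn, 2), round(m, 3))
```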

TABLE 9.4
LEVELS FOR NOISE FACTORS

Component          Level 1      Level 2    Level 3
R3 (Ω)             R3 − 5%      R3         R3 + 5%
R2 (Ω)             R2 − 5%      R2         R2 + 5%
C (μF)             C − 5%       C          C + 5%
Rs (Ω)             119.82       120        120.18
Rg (Ω)             97.853       98         98.147
Gsen (μV/in)       656.594      657.58     658.566
Vs (V)             0.014978     0.015      0.015023

Table 9.5 shows the effect of the outer array on Experiment 1 of the inner
array (Table 9.3). The variation in output (the D and ωc columns in Table 9.5) shows
the sensitivity of the two output responses to noise.
Again, an exhaustive test of every possible combination of noise (i.e., a full
factorial design) would require 3⁷ = 2,187 experiments, an expensive undertaking.
The use of the outer OA here minimized the number of combinations necessary to
investigate, while maintaining a representative sample [5]. Since there are seven
factors with three levels each, the L9 OA described earlier would not suffice.
Instead, one used an L27 array, which can accommodate up to 13 factors at three
levels each.
As mentioned above, an appropriate outer array would simulate the tolerance
noise (an aspect beyond the designer's control) for each inner array experiment.
Table 9.5 shows the L27 outer array for the first experiment of Table 9.3. The
investigator calculated the two output responses ωc and D for each of the
27 combinations of noise levels, using Eqs. (9.3.2) and (9.3.3) respectively. The
experimental design used the last seven columns of the OA. For each experiment
shown in Table 9.3, one also calculated the S/N values for D and ωc. For example,
for R3 = 20 Ω, R2 = 0.01 Ω, and C = 1400 μF, one used Eq. (9.3.4) and the last
two columns of Table 9.5. The next section shows these calculations.

TABLE 9.5
27 COMBINATIONS OF NOISE FACTORS TESTED FOR EXPERIMENT 1 VALUES FROM
TABLE 9.3, WITH OUTPUT RESPONSES (D AND ωc) CALCULATED

Datum No.   R3 (Ω)   R2 (Ω)   C (μF)   Rg (Ω)   Rs (Ω)   Gsen (μV/in)   Vs (mV)   D (in)   ωc (Hz)

1 21 0.0105 1,470 84.42 120.18 566.4582 15.075 3.266 7.339


2 20 0.01 1,400 84 120 563.64 15 3.157 7.985
3 19 0.0095 1,330 83.58 119.82 560.8218 14.925 3.045 8.728
4 21 0.01 1,400 84 119.82 560.8218 14.925 3.272 7.715
5 20 0.0095 1,330 83.58 120.18 566.4582 15.075 3.150 8.411
6 19 0.0105 1,470 84.42 120 563.64 15 3.046 7.883
7 21 0.0095 1,330 83.58 120 563.64 15 3.265 8.127
8 20 0.0105 1,470 84.42 119.82 560.8218 14.925 3.164 7.599
9 19 0.01 1,400 84 120.18 566.4582 15.075 3.039 8.282
10 19 0.0105 1,400 83.58 120.18 563.64 14.925 3.021 8.289
11 21 0.01 1,330 84.42 120 560.8218 15.075 3.304 8.113
12 20 0.0095 1,470 84 119.82 566.4582 15 3.146 7.606
13 19 0.01 1,330 84.42 119.82 566.4582 15 3.034 8.714
14 21 0.0095 1,470 84 120.18 563.64 14.925 3.247 7.345
15 20 0.0105 1,400 83.58 120 560.8218 15.075 3.186 7.991
16 19 0.0095 1,470 84 120 560.8218 15.075 3.074 7.889
17 21 0.0105 1,400 83.58 119.82 566.4582 15 3.253 7.722
18 20 0.01 1,330 84.42 120.18 563.64 14.925 3.140 8.396
19 20 0.0105 1,330 84 120.18 560.8218 15 3.169 8.403
20 19 0.01 1,470 83.58 120 566.4582 14.925 3.010 7.896
21 21 0.0095 1,400 84.42 119.82 563.64 15.075 3.291 7.709
22 20 0.01 1,470 83.58 119.82 563.64 15.075 3.174 7.612
23 19 0.0095 1,400 84.42 120.18 560.8218 15 3.057 8.276
24 21 0.0105 1,330 84 120 566.4582 14.925 3.235 8.120
25 20 0.0095 1,400 84.42 120 566.4582 14.925 3.128 7.978
26 19 0.0105 1,330 84 119.82 563.64 15.075 3.062 8.721
27 21 0.01 1,470 83.58 120.18 560.8218 15 3.277 7.352

Similarly, one calculated the sample mean values for ωc and D by summing
the outputs and dividing by the number of data points in Table 9.5. One repeated
the procedure for each of the nine experiments shown in Table 9.3 (see Fig. 5.1).
Table 9.6 shows the S/N ratio and mean output response m for the output responses
ωc and D.

9.4 DATA ANALYSIS AND ESTIMATION OF EFFECTS


From the above observations one may estimate the absolute main effect (MSS) of
a DP. This requires summing the squares of the effect (the effect being the deviation
from the expected mean) on response Y of the different treatments of this DP and
averaging this sum over the number of dof (= number of treatments minus one) for
this DP. The necessary calculations are as follows:
Suppose that one observes the response Y to be y1, y2, and y3 when one sets
the DP R3 at level R3 and replicates the observations three times. During replication
all noise factors remain active. Similarly, suppose that by replication one observes
y4, y5, and y6 when the DP R3 is set at level (R3 − 5%), and y7, y8, and y9 when R3
is set at level (R3 + 5%). The effect of the DP R3 on output Y is then the sum of
the following three variations: 1. the total variation in output while R3 is at level
R3, which is [(y1 − m) + (y2 − m) + (y3 − m)]; 2. the total variation while
R3 is at level (R3 − 5%); and 3. the total variation while R3 is at level (R3 + 5%). Here,
m is the grand average, Σ y_i/9.
The absolute main effect of R3 on output Y is found by squaring the total
variation caused at each level of R3, dividing each squared total variation by its
respective dof, and then summing the results. Thus, the absolute effect of R3 on
the output response (evaluated as the mean sum of squares or MSS) is

    [(y1 − m) + (y2 − m) + (y3 − m)]²/2 + [(y4 − m) + (y5 − m) + (y6 − m)]²/2
        + [(y7 − m) + (y8 − m) + (y9 − m)]²/2
The S/N ratio metric represents a measure of the sensitivity of the response to
noise. The S/N ratio for each experiment in Table 9.3 was calculated by using
Eq. (9.3.4). Table 9.6 shows, for illustration, the calculations of the S/N ratio and
mean m for Experiment 1.
From Eq. (9.3.4),

    S/N ratio = 10 log10 [(Sm − V)/(nV)]

where

    Sm = (Σ y_i)²/n

    V = (Σ y_i² − Sm)/(n − 1)

the observations being y1, y2, y3, ..., yn. One may summarize the calculations
as in Table 9.6.

TABLE 9.6
S/N RATIO AND m CALCULATED FOR EXPERIMENT 1 IN TABLE 9.3

             D            ωc
n            27           27
Σ y_i        85.212       216.201
Σ y_i²       269.1657     1735.579
Sm           268.9290     1731.217
V            0.009102     0.167770
S/N          30.39093     25.82230
m            3.156        8.007444

The sample mean m shown above estimates the magnitude of the output response
for Experiment 1 in Table 9.3. The S/N value reflects the sensitivity of the response
of this output to noise (the higher the S/N ratio, the lower the noise sensitivity).
Table 9.7 shows the completed calculations of the S/N ratios and the mean
performances (ωc and D) for each of the nine experiments in Table 9.3.

TABLE 9.7
S/N AND MEAN OUTPUT RESPONSE FOR EACH CONTROL FACTOR COMBINATION GIVEN IN TABLE 9.3

                       ωc                      D
Experiment   S/N (dB)   m (Hz)      S/N (dB)   m (in)
1            25.822     8.007       30.391     3.156
2            27.443     2.195       32.218     4.761
3            27.517     6.893       30.234     3.066
4            25.378     42.295      26.756     0.873
5            27.517     1.138       30.233     3.063
6            27.591     3.960       43.857     10.952
7            25.309     11.748      26.069     0.511
8            27.591     13.980      43.858     10.947
9            27.443     1.277       32.220     4.765

From Tables 9.3 and 9.7 one may estimate the relative influence that each of the
DPs (R2, R3, and C) had on both the S/N ratio and the mean value for each output
response (ωc and D). For example, one may find the effect of control factor
R3 on the output S/N ratio by taking the average S/N ratio at each level of R3
(i.e., 20 Ω, 50 kΩ, and 100 kΩ). Similarly, one may calculate the effects of the
other two DPs.
Thus, one may estimate the relative influence of each DP on the mean value
by looking at the average value of the sample mean m at the three treatment values
of that DP. DPs showing a constant S/N ratio response over the entire range of
levels while showing a linear relationship to the mean output response would best
serve as the adjustment factors (Section 6.8).

9.5 EFFECTS OF THE DESIGN PARAMETERS


As mentioned in Section 9.4, one may measure the absolute effect of a design
parameter on an output response by using the summation of squares method.
The sum of squares metric provides a direct measure of the variance (the effect)
caused by the source of variability, here a DP. We recall that the sum of squares
metric also forms the basis for ANOVA (Section 3.3).
One determines the effect of each DP by beginning with the calculation of
its summation of squares. For example, to find the effect of the factor R3 on the
S/N ratio of response ωc, one uses the S/N values in the second column of
Table 9.7. Thus, if one denotes the entries in the second column of Table 9.7 by
SN1, SN2, SN3, ..., SN9, then, using the information in the OA in Table 9.3 and a
relationship similar to Eq. (3.3.2), one obtains

    SS_R3 = (1/2)[(SN1 − msn) + (SN4 − msn) + (SN7 − msn)]²
          + (1/2)[(SN2 − msn) + (SN5 − msn) + (SN8 − msn)]²
          + (1/2)[(SN3 − msn) + (SN6 − msn) + (SN9 − msn)]² − CF     (9.5.1)

where msn is the sample mean of the calculated S/N values in column 2 of Table
9.7 and CF is a correction factor, defined as

    CF = (1/9) [Σ_{i=1}^{9} (SN_i − msn)]²

One repeats these calculations for each DP, with both the S/N ratio and the mean
response of each output (ωc and D).
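The sum-of-squares arithmetic of Eq. (9.5.1) is sketched below in Python. It uses the nine ωc S/N values of Table 9.7 and the R3 level grouping of Table 9.3 (level 1 in Experiments 1, 4, 7; level 2 in 2, 5, 8; level 3 in 3, 6, 9), and it essentially reproduces the 12.170 entry of Table 9.8.

```python
# S/N values (dB) for omega_c, Experiments 1-9 of Table 9.7
sn = [25.822, 27.443, 27.517, 25.378, 27.517, 27.591, 25.309, 27.591, 27.443]
m_sn = sum(sn) / len(sn)

# Experiments at each R3 level, per the inner array of Table 9.3 (0-based indices)
levels = [[0, 3, 6], [1, 4, 7], [2, 5, 8]]

CF = sum(v - m_sn for v in sn) ** 2 / len(sn)             # correction factor, effectively zero
SS_R3 = sum(sum(sn[i] - m_sn for i in grp) ** 2 / 2 for grp in levels) - CF
print(round(SS_R3, 3))   # ~ 12.17 (cf. Table 9.8: 12.170)
```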
Table 9.3 shows that the value of 20 Ω for R3 was used during Experiments 1, 4,
and 7, which yielded S/N ratios of 25.822, 25.378, and 25.309 dB respectively
for the cutoff frequency output ωc. Table 9.7 shows these S/N ratios. The mean
(25.503 dB) of these three S/N ratio values is the estimated measure of the
sensitivity of the average ωc output to the (tolerance-caused) noise at R3 = 20 Ω,
as shown in Table 9.8. Similarly, one calculates the mean value for ωc when R3
is 20 Ω by averaging the appropriate data points in Table 9.7. This yields an
average of 20.684 Hz for ωc, as shown in Table 9.9. One repeats this procedure
for all three control factors (the DPs R2, R3, and C) and for both output responses
ωc and D. The results are shown in Tables 9.8-9.11.

TABLE 9.8
ANOVA FOR THE S/N RATIO OF ωc

Factor    Level 1 Mean    Level 2 Mean    Level 3 Mean    Sum of Squares
R3        25.503          27.517          27.517          12.170
R2        27.000          26.755          26.781          0.164
C         26.927          26.781          26.829          0.050

TABLE 9.9
ANOVA FOR THE MEAN OF ωc

Factor    Level 1 Mean    Level 2 Mean    Level 3 Mean    Sum of Squares
R3        20.684          5.771           4.043           753.402
R2        8.649           15.256          6.593           184.358
C         3.474           5.968           21.056          814.464

TABLE 9.10
ANOVA FOR THE S/N RATIO OF D

Factor    Level 1 Mean    Level 2 Mean    Level 3 Mean    Sum of Squares
R3        27.738          35.436          35.437          177.818
R2        39.368          30.398          28.846          290.378
C         30.947          34.048          33.616          25.392

TABLE 9.11
ANOVA FOR THE MEAN OF D

Factor    Level 1 Mean    Level 2 Mean    Level 3 Mean    Sum of Squares
R3        1.513           6.257           6.261           67.569
R2        8.352           3.466           2.213           94.684
C         3.661           5.408           4.962           7.411

Given the results in Tables 9.8-9.11, one should be able to select the principal
control factors (i.e., DPs) to adjust the mean performance of the passive filter.
Recall that adjustment factors must be insensitive to noise (i.e., they should have
large and constant S/N ratios) over a broad range of treatment values, and they
must have a linear relationship to the output response (FR). (Suh [13]
suggests that, additionally, the selection of a control factor as a DP, an adjustment
factor in Taguchi's terminology, should be such that the independence of the FRs
pertaining to their respective DPs is maintained.)
Based on the above considerations, one may choose, as the adjustment factor
for ωc, the DP that has a clear, monotonic effect on the mean value of the cutoff
frequency ωc while contributing a minimum to the mean response of D
(Table 9.11). The DP that best fits this requirement is the capacitor C.
Figures 9.2 and 9.3 graphically illustrate the data shown in Tables 9.8-9.11.
These figures easily convey how all three parameters R2, R3, and C affect the
mean values and S/N ratios of the two output responses.
Figures 9.2 and 9.3 show that C would serve satisfactorily as an adjustment
factor for ωc, since its S/N ratio is constant over the broad range of its treatment
levels, and since ωc shows the desirable dependency on C. The mean output
response for ωc does not vary exactly linearly with respect to C, but it does have
a monotonic and almost linear relationship with C. For these reasons the designer
selected C to be the adjustment factor for the cutoff frequency ωc.
The results in Table 9.9 show that the contribution to the variation (indicated
by the sum of squares) of the cutoff frequency response ωc is significantly greater
for R3 than it is for R2. The contribution of R2 to the variation (again suggested
by the sum of squares) in the mean response of D is greater than the contribution
of R3, as indicated in Table 9.11. This suggests that R2 is coupled to a lesser
degree to the output ωc than is R3, while also contributing more toward the
variation in the deflection output D. The effect of R2 on mean D is also the more
prominent (Fig. 9.3). It would appear therefore that R2 would serve slightly better
as the adjustment factor for the deflection response D of this filter network than
would R3. (However, it should be apparent that the preference for R2 here is only
modest.)

Fig. 9.2  Effect of design parameter settings on ωc and S/N of ωc.

Fig. 9.3  Effect of design parameter settings on D and S/N of D.
From the sum-of-squares columns of Tables 9.8 and 9.10 (as also from
Figs. 9.2 and 9.3) the DP that contributes the most to variation in the S/N ratio can
be identified. Tables 9.8 and 9.10 show that the DP R3 has a significant influence on
the S/N ratio of both D and ωc. To achieve robustness, the tolerance of the DP R3
must be reduced so as to increase the S/N ratio to some desirable maximum.

9.6 DISCUSSION ON RESULTS


The case study we have discussed presents a rather difficult robust design
problem. The designer had to seek robustness of two performance aspects (the
cutoff frequency ωc and the maximum galvanometer deflection D) while these
two output responses also had to be adjusted to their respective target levels. In
Section 9.5 we have seen that the answers were not clear-cut, and in the last steps
of choosing the appropriate DPs one had to apply some 'muddy' judgment. It
should be clear, however, that the application of the OAs made the key issues and
the required tradeoffs visible. Such visibility is impossible to achieve by one-factor-at-a-time
studies, let alone conjecture.
The overall design progressed as follows:
1. One constructed an inner array using the limits of reasonably possible
design values for the three design parameters (R2, R3, and C) as treatments (see
Table 9.2).
2. Two analytical (exact) cause-effect relationships (Eqs. (9.3.2) and
(9.3.3)), linking the two performance parameters ωc and D to the design and other
influencing parameters (Rg, Rs, Vs, and Gsen), served as the prototypes in the
experiments. These equations describe the behaviour of the filter under different
design conditions, thus acting as surrogates for, or emulators of, a real physical
prototype.
3. If there had been no concern for noise (caused by the quality of store-purchased
parts and components), the designer could have determined the
optimum settings for R2, R3, and C using only the two equations, viz. Eqs. (9.3.2) and
(9.3.3), and the inner array. The seven possible sources of noise led to the use of
the L27 outer array.
4. One conducted the experiments by combining each row in the inner
array with all the 27 rows of the applicable outer array, as depicted in Fig. 5.1. One
evaluated the sensitivity to noise of the two responses (ωc and D) using the 'nominal
is the best' S/N ratio (Eq. 9.3.4). Table 9.6 shows the sample calculations.
5. One estimated the effect that each design parameter had on the average
level of the two performance responses and on their respective S/N ratios, using
response-table type arithmetic (Section 4.4). Figures 9.2 and 9.3 display the results
graphically.

6. The capacitor C was chosen to be the adjustment factor for ωc, based on
its relatively flat effect on the S/N ratio of ωc but its marked monotonic effect on
mean ωc (Fig. 9.2).
7. Since the effect of R2 on mean D was monotonic and also more prominent
than that of R3, the designer chose R2 to adjust D. The choice here was not clear-cut,
because R2 also showed a considerable effect on the S/N ratio of D.
8. Since the effect of R3 on the S/N ratio of D was considerable (R3 contributed
the highest sum of squares in the S/N data), the designer recommended that one
should tighten the tolerance of R3 (shown in Table 9.4) to improve the filter's
robustness.
Since the basic technology of the device being designed contained factor
interactions that one could not eliminate, Filippone [23] adopted a trial-and-error
approach to produce an acceptable solution for the design problem above. Perhaps
the reader can appreciate the complex nature of the optimization attempted here.
Many real design problems present similar difficulty on the way to quality design, in
particular on the way to robustness. We must point out again the basic difficulty one
faced above: the design decisions did not become trivial when one used the main-factor
model in order to apply OAs. Still, the insight into the inherent nature of the
problem one gained by applying the inner and outer OAs was decidedly valuable.
Based on conventional methods it would not be possible to reach the improvement
Filippone achieved. Even a method using the analysis of Taylor series type
sensitivities of Eqs. (9.3.2) and (9.3.3) directly would be less useful (see [14],
p. 203).

9.7 FILTER DESIGN OPTIMIZATION BY ADVANCED METHODS


The objective in Taguchi's robust design procedure is to seek out the setting of the design parameters θ that minimizes the average loss caused by deviations of the output from target. The foundation of the approach proposed by Taguchi is the additive or main-factor cause-effect model. Taguchi suggested that the DPs θ can generally be divided into two groups (d, a). Group d contains the main DPs, while group a contains the fine-tuning adjustment parameters. One executes the 2-step optimization as follows:

Step 1. Find the setting d = d* that maximizes the S/N ratio.

Step 2. Adjust the group a to a* while keeping d fixed at d*, to take the output to target.
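As a rough illustration of how these two steps are applied once the factor effects have been tabulated, the following sketch picks a main-DP setting from a hypothetical S/N response table and then an adjustment setting from a hypothetical mean-response table. The dictionaries and target are made-up values, not data from any case study in this book.

# Hypothetical response-table entries: average S/N ratio (dB) and average
# response at each level of the two design-parameter groups, d and a.
sn_by_d_level = {"d1": 18.2, "d2": 21.5, "d3": 20.1}     # assumed values
mean_by_a_level = {"a1": 42.0, "a2": 49.3, "a3": 57.8}   # assumed values
target = 50.0                                            # assumed target

# Step 1: pick the d setting that maximizes the S/N ratio.
d_star = max(sn_by_d_level, key=sn_by_d_level.get)

# Step 2: with d fixed at d_star, pick the adjustment setting a that
# brings the mean response closest to the target.
a_star = min(mean_by_a_level, key=lambda a: abs(mean_by_a_level[a] - target))

print(d_star, a_star)   # e.g. ('d2', 'a2')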
Several studies now suggest many variants of Taguchi's 2-step procedure [5, 6, 14, 17] because the separation of the adjustment DPs a and the main DPs d may sometimes be difficult and even impossible. In this context, some researchers have proposed certain special performance measures known as PerMIAs (see [14], p. 258, and [21]). Having tested Taguchi's two steps on numerous real problems, investigators now recognize that one may decompose the parameter optimization procedure into Taguchi's two sequential steps only for certain models of the product/process response [5, 16, 17, 25].

As is true for the passive filter design problem, one sometimes finds that technological constraints would prevent the 2-step optimization procedure from being rapidly and perhaps wholly effective. The difficulties develop because of the DP-DP interaction effects present, and because a clear separation between the parameter sets d and a may not exist. For the passive filter, an L8 OA with factor R3 assigned to column 1, C assigned to column 2, and R2 assigned to column 4, with the three factors set at two levels each, uncovers the 2-factor interactions easily. Table 9.12

TABLE 9.12
RESULTS OF L8 EXPERIMENTS FOR THE PASSIVE FILTER

Experiment   R3 (Ω)     C (μF)     R2 (Ω)       ωc       S/Nωc        D       S/ND
             (col. 1)   (col. 2)   (col. 4)

    1            20       1400       0.01      7.791    27.687    16.640    40.874
    2            20       1400        525      6.814    27.238     2.993    31.087
    3            20        231       0.01     47.220    27.687    16.640    40.874
    4            20        231        525     41.297    27.238     2.993    31.087
    5       100,000       1400       0.01      2.108    29.542     0.012    29.545
    6       100,000       1400        525      1.131    29.462     0.004    27.785
    7       100,000        231       0.01     12.778    29.462     0.012    29.545
    8       100,000        231        525      6.854    29.462     0.004    27.785

displays the results of a set of L8 experiments. The relevant linear graph for L8 (Appendix B) was used here to find which columns would contain the 2-factor interactions. Figures 9.4-9.7 show the estimated interactions.

Fig. 9.4 R3C, R3R2 and R2C interaction effects on cutoff frequency ωc.


Fig. 9.5 R3C, R3R2 and R2C interaction effects on S/Nωc.

Fig. 9.6 R3C, R3R2 and R2C interaction effects on deflection D.



Fig. 9.7 R3C, R3R2 and R2C interaction effects on S/ND.

These estimates suggest that one would be advised not to ignore the interactions R3C, R3R2, and R2C. (We suggest that the reader verify these estimates using the method shown in Section 8.4.) The additive or main-factors-only model, therefore, would be inappropriate to apply here, and one should search for robustness (minimal sensitivity of on-target performance to noise) by invoking methods beyond Taguchi's 2-step approach. Chapter 10 describes one such approach.
A Direct Method to Achieve Robust Design
A major simplification of the robust design problem was achieved by Taguchi by his invocation of the additive (main-factor-only) model. This chapter explores situations in which factor interactions are significant, in which multiple performance characteristics are involved, or in which the noise-caused standard deviation σ(x) and the mean μ(x) are related in complex ways. Here a constrained optimization approach is presented, which is also empirical in character as Taguchi's two-step method is. This approach, due to Bagchi and Kumar [33], limits the search for robustness to a feasible region in which all target performance requirements are met.

10.1 RE-STATEMENT OF THE MULTIPLE OBJECTIVE DESIGN OPTIMIZATION PROBLEM
In optimization one maximizes or minimizes a specific quantity, called the objective,
which depends on a finite number of decision variables. These decision variables
may be independent of one another, or they may be related through one or more
constraints [26]. As mentioned in several places in this text, a major simplification
of the design optimization problem was achieved by Taguchi by his use of the
additive or the main-factors-only model (see Sections 4.1 and 4.2). In fact, whenever additivity exists, the optimization problem should be approached using Taguchi's 2-step procedure (Section 6.9) because the task then greatly simplifies. This chapter
addresses situations in which interaction effects are also significant and in which
several different performance characteristics must be made robust simultaneously.
What can one do then to optimize the design toward maximum robustness?
Following Phadke [5], one may state the general robust design problem as follows: Let θ be the set of parameters (referred to as DPs in this text) specified by the designer and let x denote the set of noise factors. Let y(x, θ) denote the observed performance characteristic for certain particular values of the parameters (θ, x). Let μ(θ) and σ²(θ) respectively be the mean and the variance of y, the performance under the influence of noise x. One may then state the design optimization problem as

    minimize (over θ)   σ²(θ)

subject to the constraint

    μ(θ) = μ0
Phadke [5, p. 281] has remarked realistically that this is a constrained optimization
problem and is extremely difficult to solve experimentally. Box [17] points out that
this problem may be tackled as one in which one minimizes the logarithm of the

mean square error due to noise. Taguchi [5] had originally suggested that the above problem be solved as an unconstrained 2-step optimization problem, using a scaling or adjustment factor. In this procedure one design factor θ1 is postulated to be an adjustment factor. The response characteristic then splits up into (the product of) two parts: g(θ1)h(x, θ'), where h(x, θ') is independent of θ1. The optimization procedure then involves the following steps:
Step 1. Choose θ' such that it minimizes the variability of h(x, θ').

Step 2. Choose θ1 such that

    μ(θ) = μ0
Phadke [5] suggests that the adjustment factor θ1 should be identified by experimentally examining the effect of all design factors on the S/N ratio and the mean μ. Any factor that has no effect on the S/N ratio, but a significant effect on μ, can be used as the adjustment factor. However, as noted in several real-life design situations, this identification is not always clear cut, especially when the relationship between σ and μ is complex [8], or when the effects interact [15]. Resorting to trial and error is also then ineffective. Such difficulties have been highlighted in the recent literature (refer [18], p. 19.21). This chapter describes an alternative method, also empirical as Taguchi's 2-step optimization method is, to judiciously exploit DP-noise interactions to reduce variability and so make such designs robust.
This alternative method, applicable to design problems in which all performance characteristics and the DPs are quantitative, imposes no additional assumptions. The method is particularly effective when multiple performance characteristics (for instance, cutoff frequency ωc and deflection D in the passive filter design problem in Chapter 9) must each be brought to their respective targets and also made robust. This new method also cuts down the number of independent DPs in the inner array experiments.

10.2 TARGET PERFORMANCE REQUIREMENTS AS EXPLICIT CONSTRAINTS
One must realize that when one searches the design space for optimum values for the DPs (θ), not all parameters in θ are free to take any value in the real space. The permissible values for these parameters are constrained by the requirement that each performance characteristic y must be on target when the design is complete. For instance, in the design of a passive electronic filter (Fig. 9.1) described in Chapter 9, the designer was given the target performance requirements as

    Cutoff frequency = ωc = 6.84 Hz
    Galvanometer deflection = D = 3 in.
These are the performance characteristics that the user of the filter would like
the fabricated filter to deliver.

Multi-objective robust design problems such as the passive filter problem above may be re-stated and tackled explicitly as constrained optimization problems. There are some distinctions between this constrained approach and the 2-step (Taguchi's) design procedure, in which an adjustment parameter is employed to adjust performance to target after the appropriate S/N ratio has been maximized with the help of the control parameters. By contrast, the constrained approach does not require the designer to hunt for an adjustment parameter. This makes the design task easier because, in general, performance (the responses to be made robust) may depend on more than one design parameter and also perhaps on their interactions in complex ways. Such is the case with the passive filter, as indicated by Eqs. (9.3.2) and (9.3.3) for ωc and D, respectively, the ANOVA tables (Tables 9.8-9.10), and the DP-DP interaction effects detectable by an L8 experiment (see Table 9.11 and Figs. 9.4-9.7).
The constrained procedure would restrict the design parameter search space to appropriate contours (subsets) on which any point (combination of the DPs) assures performance on target. The search within these subsets would focus on identifying the design for which robustness is maximum. The optimum design thus found would also be the true robust design, without requiring the imposition of the 'main effects only' assumption, unlike the 2-step procedure. Further, the search for the optimum settings of the DPs would confine itself to working with the truly independent DPs, rather than exploring the effects of all the different DPs using the original (full) inner array.
First we illustrate the constrained optimization procedure by revisiting the passive filter example of Filippone [23], summarized in Chapter 9. We then provide the rationale for this procedure.

10.3 CONSTRAINTS PRESENT IN THE FILTER DESIGN PROBLEM


Two primary requirements stated as the objectives of the passive filter design problem are: (a) the cutoff frequency ωc should equal 6.84 Hz, and (b) the deflection D should equal 3.00 in. These requirements lead to the two following constraints:

    ωc (as given by Eq. (9.3.2)) = 6.84 Hz      (10.3.1)

    D (as given by Eq. (9.3.3)) = 3.00 in.      (10.3.2)

Following Taguchi, Filippone states that the DPs that the designer is supposed to specify are R2, R3, and C, and he sets up his inner array accordingly. However, due to the presence of the two constraints given by Eqs. (10.3.1) and (10.3.2), only one of these three DPs is truly determined at the discretion of the designer; the other two are subsequently determined by the simultaneous solution of the above two constraints expressed as equations. Suppose that one selects R3 as the independent DP, the values of C and R2 being then determined by Eqs. (10.3.1) and (10.3.2). A feasible design here would then be the one that satisfies Eqs. (10.3.1) and (10.3.2),

for these two equations (constraints) ensure that the performances delivered by such a design would remain on target.
To achieve robustness, one must next determine the value(s) of R3 that would maximize (a) the robustness of the cutoff frequency, and (b) that of the deflection. In general, of course, the optimum value of R3 that maximizes S/Nωc may not coincide with the value of R3 that maximizes S/ND.
Notice that with this improved procedure, the collective influence of all DPs on the robustness of the design is maximized, the search for the robust design being restricted to the points on the contour (or subset) of the total design space on which target performance is assured. This eliminates the chances of carrying out unconstrained maximization of the S/N ratio(s), and then finding that the adjustment parameters perhaps will not be able to restore performance to target(s); see [23, p. 135]. The constrained procedure also achieves the central objective of robust design: whenever adjustment is possible, we should minimize the quality loss after adjustment [5, p. 105]. In the present approach adjustment is a fait accompli, as one restricts the search for optimum DPs to the contour on which performance equals target performance.
Let us first illustrate these points using the passive filter example. Given ωc (= 6.84 Hz), D (= 3.00 in.), and a value of R3, the two remaining DPs, namely R2 and C, may be obtained by solving Eqs. (10.3.1) and (10.3.2) simultaneously. The resulting explicit expressions for R2 and C, written in terms of R3 (and of ωc, D, Vs, Rs, Rg, and Gsen), are referred to below as Eqs. (10.3.3) and (10.3.4) respectively.
The combination of R3, R2, and C thus obtained (with D = 3 in. and ωc = 6.84 Hz) is a feasible design (though not yet robust or optimum), for it satisfies the target ωc and D requirements given by Eqs. (10.3.1) and (10.3.2). The next step is to search for a feasible design (here a value of R3) which maximizes the relevant S/N ratios (or minimizes variability). Equations (10.3.3) and (10.3.4) together define the contour on which this search has to be conducted. Consideration of non-linear constraints in optimization problems is well known. Use of Lagrange's multipliers, when the analytical form of the objective function is known, constitutes a standard optimization procedure [26]. In the present situation, however, the exact mathematical dependency of the S/N ratios on the DPs is unknown; hence a (constrained) search would be an acceptable practical approach.

10.4 SEEKING PARETO-OPTIMAL DESIGNS


The passive filter problem falls in the category of multiple objective optimization problems [26, pp. 663-697]. Since the dependence of S/Nωc and S/ND on the design parameter R3 may be, in general, quite complex, a search or even an enumeration procedure may be adopted here. In general, the value of R3 at which S/Nωc is maximum may not coincide with the value of R3 at which S/ND is maximum. This implies that a design with maximum robustness with respect to ωc may not coincide

with the design that enjoys maximum robustness with respect to deflection D. One may seek here Pareto optimality, as shown in Fig. 10.1 [26], and by search generate an acceptable set of designs between the two designs giving either max (S/Nωc) or max (S/ND).

Fig. 10.1 Pareto optimality between two optimization objectives.
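The notion of Pareto optimality in Fig. 10.1 can be made concrete with a short sketch: among candidate designs scored on two S/N objectives (both to be maximized), a design is Pareto-optimal if no other candidate is at least as good on both objectives and strictly better on one. The (R3, S/Nωc, S/ND) triples below are illustrative numbers only, not values from the case study.

# Each candidate: (R3 value, S/N for cutoff frequency, S/N for deflection).
candidates = [(100, 30.1, 24.0), (200, 31.5, 27.2), (300, 32.0, 26.8), (400, 29.0, 25.5)]

def dominates(a, b):
    """True if design a is at least as good as b on both S/N objectives
    and strictly better on at least one (objectives in positions 1 and 2)."""
    return a[1] >= b[1] and a[2] >= b[2] and (a[1] > b[1] or a[2] > b[2])

pareto_set = [c for c in candidates if not any(dominates(other, c) for other in candidates)]
print(pareto_set)   # the designs not dominated by any other candidate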

We shall now make an additional departure from the conventional Taguchi methodology. The objective of the outer array experiments is to sample the domain of noise factors so that the mean and variance of performance responses arising from variations in the noise factors may be evaluated [5, 14]. Taguchi suggested the use of outer array-based simulation to sample the noise domain, the other possible approaches being the conduct of Monte Carlo simulations, or replication. Even though Monte Carlo simulations may generally require more computational effort than the outer array approach, certain advantages exist in adopting the former. In particular, if noise factors do not have symmetric distributions, use of the outer array approach may bias the mean and inflate variance estimates (see [16]). Therefore, when the mathematical relationships linking performance to the design parameters and noise are known or may be developed, the Monte Carlo approach should be followed.

10.5 MONTE CARLO EVALUATION OF S/N RATIOS


Table 9.4 gives the tolerances of the design components (R2, R3, and C), and the variability in the environment (the other interfacing devices and equipment) that would potentially generate noise, causing the performance of the filter to vary. The simulations would assume that each tolerance specified may be translated into a normal probability distribution, with the limits of each tolerance equalling 3σ for the factor involved. For each nominal setting of the parameters in the inner array (here only the component R3 is involved, the nominal values of R2 and C being determined by Eqs. (10.3.3) and (10.3.4)), a set of Monte Carlo trials would be performed, sampling random variables from the normal distributions determined by the tolerances (Table 9.4). One hundred trials were actually run with each nominal DP setting (a feasible design). The resulting data were summarized into the 'nominal performance is the best' S/N ratio statistic (Section 5.2).
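The mechanics of this Monte Carlo evaluation may be sketched as follows. The performance function, nominal values, and percentage tolerances below are placeholders (they are not Eq. (9.3.2) nor the entries of Table 9.4); the sketch only shows how each factor is sampled from a normal distribution whose 3σ limits equal its tolerance, and how the trials are summarized into the nominal-the-best S/N ratio.

import random, math

def nominal_the_best_sn(samples):
    """Nominal-the-best S/N ratio (Section 5.2): 10*log10(mean^2 / variance)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return 10 * math.log10(mean ** 2 / var)

def monte_carlo_sn(nominal, rel_tol_pct, performance, trials=100):
    """Sample each factor from a normal distribution whose 3-sigma limits equal
    the stated percentage tolerance, evaluate performance, and return the S/N."""
    samples = []
    for _ in range(trials):
        perturbed = {name: random.gauss(val, (rel_tol_pct[name] / 100.0) * val / 3.0)
                     for name, val in nominal.items()}
        samples.append(performance(perturbed))
    return nominal_the_best_sn(samples)

# Placeholder performance model and tolerances (illustrative only):
def cutoff_frequency(p):
    return 1.0 / (2 * math.pi * p["C"] * (p["R2"] + p["R3"]))

nominal = {"R2": 50.0, "R3": 300.0, "C": 450e-6}
tolerances = {"R2": 5.0, "R3": 5.0, "C": 10.0}     # percent, assumed
print(monte_carlo_sn(nominal, tolerances, cutoff_frequency))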

Figure 10.2 shows the strong dependence of S/Nωc on the value of R3, the sole DP at the discretion of the filter designer.

Fig. 10.2 Effect of R3 on S/Nωc and S/ND.

Note that this is so due to the strength of the interaction, demanded by Eq. (10.3.1), between R3 (the DP) and the various sources of noise (Table 9.4). Figure 10.2 also shows the dependence of S/ND on R3. The point-to-point fluctuations reflect the effect of the random trials in the Monte Carlo runs. In general, these fluctuations would reduce if larger samples (for instance, 500 instead of the 100 trials used here) were obtained at each candidate R3 value.
Finding the optimum R3 (and, therefore, the corresponding R2 and C values) would amount to finding where S/Nωc (or S/ND) is maximum. In a well-behaved case such as the present one, visual inspection provides sufficiently good estimates of the optimum R3. In problems involving multiple DPs, one could use empirical optimization methods such as the Response Surface Methodology (RSM). Figure 10.2 indicates that a choice of R3 in the range

    100 Ω < R3 < 350 Ω

would be sufficient to assure robustness of ωc. It also indicates that S/Nωc is relatively flat in this range. From the figure it can also be seen that R3 should be in the vicinity of 300 Ω in order that S/ND is maximum. Figure 10.3 shows the Monte Carlo estimates of the variance of ωc and the variance of D obtained as functions of R3. The final choice of R3 would depend on resolving the multiple (two) objective problem: maximize the robustness of the cutoff frequency ωc, and also maximize the robustness of the deflection D.

Fig. 10.3 Monte Carlo estimates of Var(ωc) and Var(D) as functions of R3.

Several standard methods are available here [26]. One practical approach is to identify a set of Pareto-optimal designs (each such design is better than any other possible design with respect to at least one performance criterion). Other criteria may then be applied to select one among these Pareto-optimal designs. The two extreme members of the Pareto-optimal set of designs here appear to be

    R3 = 300, R2 = 29, C = 454.4 μF

(this design maximizes S/Nωc) and

    R3 = 200, R2 = 106, C = 424.1 μF

(this design maximizes S/ND).
Both these designs are more robust than Filippone's 2-step solutions [23]. One may find even better designs, if necessary, by search, or by enumeration, between these two extreme designs.

10.6 CAN WE USE C (OR R2) AS THE INDEPENDENT DP INSTEAD OF R3?


One might speculate that perhaps the results would be different if, instead of selecting R3 as the independent design parameter, we had used C (or R2) as the DP. With C given, the expressions for R2 and R3 in terms of C (and of ωc, D, and the other influencing parameters) follow, as before, from the simultaneous solution of Eqs. (10.3.1) and (10.3.2).
The resulting values of S/Nωc and S/ND as functions of the (alternative) independent DP (now C) are plotted in Fig. 10.4.
Fig. 10.4 Effect of C on S/Nωc and S/ND.

The figure confirms that a value of C in the vicinity of 400-450 μF indeed provides maximum robustness both for ωc and D. One would appreciate that such confirmation is possible only when each S/N function has a global maximum.
One key practical advantage of being able to optimize the design by selecting any one of the three components (R2, R3, and C) of the filter as the independent DP is that this might facilitate fabricating the filter using particular components available at standard values in the market.

10.7 SOME NECESSARY MATHEMATICAL TOOLS


The design optimization method provided in this chapter makes extensive use of functional (Section 1.8) or empirically developed mathematical models that explicitly link a quantitative dependent variable (the performance) to certain independent variables (the DPs). The time-tested approach to building such models from observed quantitative data is known as regression analysis. In this section we summarize the regression procedure briefly but with enough detail to facilitate the understanding of our discussion. Further details may be found in references [18] and [24].

Regression is probably the most popular statistical tool among engineers. In any system in which the quantifiable characteristics (output, cost, productivity, level of defects, performance, etc.) change, one is generally interested in the effect that the independent variables exert on the dependent variables. If the values of all the variables are known exactly, and if no forces other than those explicitly considered are at work, then a deterministic model can precisely describe the system's behaviour. In practice, however, repeated measurements of most quantities under some given conditions produce different values. Thus, deterministic behaviour is rather rare.
As stated in Section 2.2, because of the many factors not in control in real processes, variability results, which is termed randomness. However, the cause-effect relationships between the independent and the dependent variables may still be of considerable interest. Regression analysis provides a method of linking the dependent variables (e.g., the performance of a product) with the independent ones (e.g., the design parameters) through a mathematical model, provided we have some other reason to believe that the underlying cause-effect dependency indeed exists. (As shown in Chapter 3, statistical experiments can help one establish whether such cause-effect dependencies exist.)
Doing regression of X against Y, therefore, does not in any way imply that X causes Y. Regression only forms a convenient prediction model (for predicting Y, given X) that one develops using empirical data, provided indeed there already is a cause-effect relationship between X and Y.
The simplest form of regression model is known as the simple linear regression model. The word 'simple' here means that there is a single independent variable (say, X), but the word 'linear' does not have the interpretation that might seem self-evident. Specifically, it does not (necessarily) mean that the relationship between the dependent variable (say, Y) and the single independent variable will be portrayed graphically as a straight line. Rather, it means that the regression equation is linear in the parameters. For example, the equation

    Y = β0 + β1X + ε                    (10.7.1)

is a linear regression equation. The equation

    Y = β0 + β1X + β11X² + ε            (10.7.2)

is also a linear regression equation: even though Eq. (10.7.2) contains the quadratic term X², the equation is still linear in the parameters (β0, β1, β11). Thus Eq. (10.7.2), too, is a linear regression equation.
If all the {Xi, Yi} points are close to the regression line plotted on the (X, Y) plane, the linear relationship between Y and X is strong. For a regression model to be a reliable one, i.e., a model using which one can confidently predict Y given X, such a strong relationship must exist.
The parameters β0 and β1 will generally be unknown. A major goal of regression analysis is to estimate these (regression) parameters. The most common method for estimating the parameters in a regression equation is the method of least squares. For advanced level regression applications, special methods are available [24].

The use of the least-squares estimators β̂0 and β̂1 produces the regression line such that the sum of the squared vertical deviations from each point to this line is minimized, and the sum of the individual vertical deviations is zero. Mathematically, for the ith observation (value of X = Xi, value of Y = Yi) in a sample of n observations, we have

    Yi = β0 + β1Xi + εi

Therefore,

    εi = Yi - β0 - β1Xi

In estimating the parameters β0 and β1, the sum of the squares of all errors {εi} is minimized. Now,

    Σ εi² = Σ (Yi - β0 - β1Xi)²                    (10.7.3)

Using calculus, one should differentiate the right-hand side of Eq. (10.7.3) first with respect to β0 and then with respect to β1. This gives two equations in β0 and β1, and their solution yields β̂0 and β̂1. The results, with n observed pairs of data (Xi, Yi), are

    β̂1 = [Σ XiYi - (Σ Xi)(Σ Yi)/n] / [Σ Xi² - (Σ Xi)²/n]

    β̂0 = (Σ Yi)/n - β̂1 (Σ Xi)/n
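These estimator formulas translate directly into a few lines of code; the small data set below is made up purely to exercise the formulas.

def simple_linear_fit(x, y):
    """Least-squares estimates of beta0 and beta1 for Y = beta0 + beta1*X + e."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    beta1 = (sxy - sx * sy / n) / (sxx - sx ** 2 / n)
    beta0 = sy / n - beta1 * sx / n
    return beta0, beta1

# Illustrative data only:
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
print(simple_linear_fit(x, y))   # roughly (0.05, 1.99) for this data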


In rigorous applications of regression modelling, a lack-of-fit test is used to determine whether the regression model estimated from certain {Xi, Yi} data is correct. Such a test may indicate that the true relationship between X and Y might involve higher order terms (X², X³, ...), etc.
In many practical design situations one often finds more than one design parameter affecting the response (or performance). Such situations may still use linear regression to mathematically link the dependent variable to the independent ones. Termed multiple regression, when p independent variables are involved, such a regression model becomes

    Y = β0 + β1X1 + β2X2 + ... + βpXp + ε           (10.7.4)

Note that the model (10.7.4) is a natural extension of the simple linear regression model (10.7.1). Here more than one independent variable {Xi, i = 1, 2, ..., p} influences the dependent variable Y.
Most of the principles of simple regression apply to multiple regression. With the involvement of p independent variables {Xi, i = 1, 2, ..., p}, however, the parameter estimation process becomes a drudgery. Specifically, hand calculation becomes inadvisable when p > 3 because of the amount of work involved.
An efficient method for estimating the model parameters (β0, β1, β2, etc.) uses linear algebra, a branch of algebra that concentrates on solving linear equations involving several variables. The actual computing process is often computerized. Many engineers write their own routines using tools such as spreadsheet software that produce multiple regression models very satisfactorily.

10.7.1 Response Surface Methods


The major goal in design optimization is to find the level of each of the p DPs at which the response reaches its optimum. Changing one variable at a time in such situations is not only inefficient, but also usually unsuccessful. A method that is very effective here, one based on experimenting to determine in which direction the response improves most rapidly, was developed in the 1960s. Known as the Response Surface Method (RSM), it requires an initial 2^p factorial experiment in a region (perhaps far from the optimum). The response is then approximated by a surface, a linear function (similar to Eq. (10.7.4)) of the DPs X1, X2, X3, ..., Xp. The next step involves finding the path of steepest ascent. Further experiments are conducted (i.e., the response evaluated) along this path until the response improves no further. At this tentative optimum point one conducts a more elaborate set of experiments to fit, usually, a second order model to the experimental data. The optimum point is the stationary point of this second order model. For further details on RSM see reference [18].
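A rough sketch of the steepest-ascent step may help: assume a first-order model has been fitted to an initial factorial experiment, and move the DPs in proportion to the fitted coefficients until the (experimentally evaluated) response stops improving. The coefficients, step size, and the stand-in response function below are assumptions for illustration only.

# Fitted first-order model coefficients from an initial 2^p experiment (assumed).
b = {"X1": 2.0, "X2": -1.0}
x = {"X1": 0.0, "X2": 0.0}          # centre of the initial experimental region
step = 0.5                           # step size along the steepest-ascent path

def response(x):
    """Stand-in for running an experiment at setting x (illustrative only)."""
    return 50 - (x["X1"] - 3) ** 2 - (x["X2"] + 1.5) ** 2

best = response(x)
while True:
    trial = {k: x[k] + step * b[k] for k in x}    # move along the steepest-ascent path
    y = response(trial)
    if y <= best:                                  # stop when no further improvement
        break
    x, best = trial, y
print(x, best)   # tentative optimum; fit a second-order model around this point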

10.8 DEVELOPING A MULTIPLE REGRESSION MODEL


Suppose that in a certain situation the applicable mathematical model involves two independent variables, X1 and X2, and one expresses the dependency relationship as

    Y = β0 + β1X1 + β2X2 + ε                    (10.8.1)

One may rewrite this relationship in matrix notation as

    Y = Xβ + ε                                   (10.8.2)

for n observations {Yi, X1i, X2i, i = 1, 2, ..., n} on the response Y and the independent variables X1 and X2. In Eq. (10.8.2) the four matrices Y, X, β, and ε are, specifically,
    Y = [Y1, Y2, ..., Yn]'
    X = [[1, X11, X12], [1, X21, X22], ..., [1, Xn1, Xn2]]
    β = [β0, β1, β2]'
    ε = [ε1, ε2, ..., εn]'

where Xij denotes the ith observation on the jth regression variable Xj. Therefore, Eq. (10.8.2) may be rewritten as

    Y1 = β0 + β1X11 + β2X12 + ε1
    Y2 = β0 + β1X21 + β2X22 + ε2
    ...
    Yn = β0 + β1Xn1 + β2Xn2 + εn

As in simple linear regression involving one independent variable, the method
As in simple linear regression involving one independent variable, the method

of least squares stipulates that one should select {β0, β1, and β2} such that the selection minimizes the sum of the squares of the errors Σ εi². For the model in Eq. (10.8.1) we have

    Σ εi² = Σ (Yi - β0 - β1Xi1 - β2Xi2)²            (10.8.3)

The differentiation of the right-hand side of Eq. (10.8.3) with respect to β0, β1, and β2 (separately) and equating the results to 0 gives three equations in the three unknown parameters β0, β1, and β2. These equations are called the normal equations, written in matrix notation as

    X'X β̂ = X'Y                                      (10.8.4)

The solution of this equation is

    β̂ = (X'X)⁻¹ X'Y                                   (10.8.5)

The use of spreadsheet software with matrix inversion capability makes the computing of {β̂} from observations {Yi, Xij} relatively straightforward. The example below, which uses the empirical data shown in Table 10.1, illustrates the computation steps.

TABLE 10.1
OBSERVED VALUES OF Y, X1, AND X2 (p = 2)

  Y       X1      X2
 30.1      8      22
 32.2      9      19
 34.3     11      19
 35.4     12      22
 34.4     10      18
 30.0      8      22
 31.4      9      19
 32.3     10      18
 34.0     12      22
 33.1     11      19
 33.2     11      19
 34.3     12      22
 31.1      9      19
 30.0      8      22
 32.3     10      18
 34.4     12      22
 30.3      8      22
 31.6      9      19
 33.3     11      19
 32.0     10      18

The data shown in Table 10.1 are the records of observed response values Yi of Y for the indicated values of the two independent variables X1 and X2. Based on prior knowledge of how X1 and X2 influence Y, the investigator speculates that this dependency relationship may be modelled by

    Y = β0 + β1X1 + β2X2 + ε                    (10.8.6)

One may use Eq. (10.8.5) directly to estimate the three model parameters β0, β1, and β2. A method that reduces the computing effort centres the data first, by subtracting from each variable the value of the mean (Ybar, X1bar, or X2bar) of that variable. With centring, one would need to invert only a 2 x 2 matrix rather than a 3 x 3 matrix in the present example. The resultant equation that can predict the response Y given values X1 and X2 becomes

    Y - Ybar = β1(X1 - X1bar) + β2(X2 - X2bar)             (10.8.7)

Rearranging Eq. (10.8.7), we get

    Y = β0 + β1X1 + β2X2                                    (10.8.8)

where β0 = Ybar - β1X1bar - β2X2bar. Using the data of Table 10.1, we have

    X (centred) = [[-2, 2], [-1, -1], [1, -1], [2, 2], ...],   Y (centred) = [-2.385, -0.285, 1.815, 2.915, ...]'

Therefore,

    X'X = [[40, 0], [0, 56]],   X'Y = [43.0, -5.2]'

This gives

    β̂ = (X'X)⁻¹ X'Y = [43.0/40, -5.2/56]' = [1.075, -0.093]'

The data given in Table 10.1 produce Ybar = 32.485, X1bar = 10, and X2bar = 20. Therefore,

    Y = 23.59 + 1.075 X1 - 0.093 X2

becomes the regression model that links the independent variables X1 and X2 to the response variable Y.
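The centred computation above is easily reproduced with a short matrix-algebra routine; the following sketch uses the Table 10.1 data and numpy, and should return estimates close to those just quoted.

import numpy as np

y = np.array([30.1, 32.2, 34.3, 35.4, 34.4, 30.0, 31.4, 32.3, 34.0, 33.1,
              33.2, 34.3, 31.1, 30.0, 32.3, 34.4, 30.3, 31.6, 33.3, 32.0])
x1 = np.array([8, 9, 11, 12, 10, 8, 9, 10, 12, 11, 11, 12, 9, 8, 10, 12, 8, 9, 11, 10], dtype=float)
x2 = np.array([22, 19, 19, 22, 18, 22, 19, 18, 22, 19, 19, 22, 19, 22, 18, 22, 22, 19, 19, 18], dtype=float)

# Centre each variable about its mean, so only a 2x2 system needs solving.
Xc = np.column_stack([x1 - x1.mean(), x2 - x2.mean()])
yc = y - y.mean()

beta = np.linalg.solve(Xc.T @ Xc, Xc.T @ yc)          # [beta1, beta2]
beta0 = y.mean() - beta[0] * x1.mean() - beta[1] * x2.mean()
print(beta, beta0)   # approximately [1.075, -0.093] and an intercept near 23.6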
To summarize, regression analysis provides a way to develop a mathematical
model from empirical data collected about input factors and the response . When
there is reason to believe that a cause-effect relationship between the response
variable and certain independent variables exists, the regression model links the
response variable quantitatively to the values of the independent variables, facilitating
the prediction of the value of the response, given certain values of the independent

variables. Another approach to developing a mathematical relationship among


quantitative variables is given by Box and Behnken [24].
To perform regression analysis properly, the investigator should proceed as
follows [18]:
1. When observations are to be obtained, one should ask whether the data have to come from an observational study (i.e., happenstance data) or from a designed experiment. If they are from a mere observational study, the analysis may not be very reliable. On the other hand, if they are to come from a designed experiment, the investigator should use an orthogonal or near-orthogonal design.
2. Determine the purpose for which the study is to be performed and focus on
the single dependent (response) variable and the one or more independent variables.
3. If there is only one independent variable, construct a scatter plot of Y vs.
X to see whether there is evidence of a linear relationship. When there is more than
one independent variable, a display of the correlation matrix is one possible
starting point. This will show the linear correlation of Y with each X , as well as the
pairwise correlations for the independent variables. It will show which independent
variables are most correlated with Y and which regressors, if any, are highly correlated
with other regressors (as in an observational study). The objective here is to subject
the candidate variables to an initial screening. This would be particularly
advantageous when there is a large number of potential independent variables.
4. Using perhaps a subset of the potential regressors obtained from the previous
step, the remaining candidate regressors could be examined through all possible
regression, provided that the number of such regressors is not exceedingly large.
5. You may select a particular subset at this point, using both statistical
and nonstatistical (e.g., engineering) considerations.
6. Do a thorough regression analysis for the chosen subset. This would
include residual plots and a lack-of-fit test with repeated observations.
7. Once you have obtained a particular regression equation, you may use
it for an indefinite period of time, provided you avoid inadvertent (extreme)
extrapolation. If conditions change (for example, a change in the correlation between
Y and a particular X), you may have to develop a new regression equation. A
reduction in R² (the proportion of variation in Y explained by the regression equation; see [9], p. 23.103) over time is one sign of the need for a new equation.

10.9 RATIONALE OF THE CONSTRAINED ROBUST DESIGN APPROACH


The primary difficulties in applying Taguchi's 2-step approach to certain design
problems are as follows:
1. DP-DP interactions are mostly ignored, to be included later in an expanded
inner array if the verification experiment (Section 4.3) fails to prove satisfactory.
Barker [25] has made some critical observations here.
2. When the design parameters are continuous, the approach does not fully
utilize the structure of the problem.
3. Also transformations aimed at stabilizing the variance of response when

the variance depends upon the mean, to permit adjustment of the mean to target (see [14], p. 291) after one has maximized the S/N ratio, do not necessarily make up for disadvantages 1 and 2.
Alternatively, if a procedure is adopted in which the designer restricts his
search for a robust design among possible alternative designs, in each of which
performance on-target is guaranteed, then disadvantage 3 disappears. Further, if
appropriate experimental designs and mechanistic or regression models are employed,
disadvantages 1 and 2 also disappear. We elaborate this as follows:
Kackar [6] showed that when a specific target τ0 is the best value for the performance Y of the system being designed, one has the following choices to reach a robust design: First, if the expected performance E(Y) and the variance of Y are functionally independent of each other, one may achieve the robust design by reducing the variance, and then adjusting E(Y) to τ0 by using one or more adjustment parameters. Alternatively, if the variance of Y and E(Y) depend on each other linearly and one can adjust E(Y) to τ0 independent of the coefficient of variation (√Var(Y)/E(Y)), one should attempt to reduce the coefficient of variation. The literature also proposes certain models under which, when an adjustment parameter exists, one may use appropriate PerMIAs [17, 21] to reach robustness.
Let us convert all observed performance values Y to Z, where Z = Y - τ0. Therefore, under the influence of noise, we have

    E(Z) = E(Y) - τ0

Now, if we restrict the search for the optimum (robust) design to those design parameter combinations at which performance (without the disturbing effect of noise) is exactly τ0, then the difference between the observed Y and τ0 would be due only to noise. Therefore, for these designs, E(Z) = 0. Hence, the task of searching for the optimum design converts to that of seeking out the design, within the restricted search space, for which E(Z²) under the influence of noise is minimum.
Note that this restricted search procedure avoids the second step (that of adjusting the bias to zero) necessary in Taguchi's 2-step optimization procedure. The 2-step procedure seems to work very well provided one is successful in identifying the adjustment factors, that is, DPs that mainly affect the level (location) of performance, but not its dispersion. Issues pertaining to the difficulties posed by any dependency of the performance variance (dispersion) upon the mean become non-issues in the restricted search method, because we are always considering only those designs which in the absence of noise would give performance at τ0. If some particular noise factor causes undue variability in performance beyond what may be tolerated, the designer should consider controlling it (i.e., treat it as a DP) to achieve improved robustness.
One may now summarize the constrained design approach. At the outset we assume that the designer knows what the different DPs are and the performance(s) he is attempting to optimize. We assume that the best performance desired for characteristic Yi is identified by a target value τi. The objective is to choose DP values to reduce the sensitivity of performance to the hard-to-control parameters, termed noise. The designer should proceed as follows:
Step 1. For each performance characteristic Yi to be made robust while also

to be set equal to some target value τi, establish a quantitative model relating all the DPs {DP1, DP2, DP3, ...} to the performance Yi, as in

    Yi = f(DP1, DP2, DP3, ...)
Step 2. Write the quantitative model obtained in Step 1 as a constraint:

    f(DP1, DP2, DP3, ...) = τi

If n different performances (Y1, Y2, Y3, ..., Yn) are being simultaneously targeted (to τ1, τ2, τ3, etc.), then one establishes here a set of n constraints given by equations such as

    f(DP1, DP2, DP3, ...) = τ1
    g(DP1, DP2, DP3, ...) = τ2
    h(DP1, DP2, DP3, ...) = τ3

and so on.
Step 3. Solve the equations formed in Step 2 to obtain certain dependent DPs in terms of the truly independent DPs. Clearly, if there are m DPs and n constraints with m > n, one has the choice of treating (m - n) DPs as truly independent (these can be used as the robustness-seeking variables in Step 5 below) while the others are dependent (their values are restricted so that the desired performance targets are achieved).
Step 4. Construct a special inner array using only the truly independent DPs. Also set up the appropriate outer array, or a Monte Carlo experimental set-up, or the physical arrangements for repeated observations of performance under the influence of real noise. Conduct experiments at each setting of the inner array rows to observe the performance(s) (Y1, Y2, Y3, ...) under the influence of noise.
Step 5. Apply search, RSM, or some other technique to find the combination of the independent DPs that minimizes the empirically estimated variance (of each performance Y1, Y2, Y3, etc.) under the influence of noise. At this point we are exploiting only DP-noise interactions to improve robustness. If a unique set of DP values does not optimize all performance characteristics, develop the Pareto-optimal set of candidate designs.
The quantitative models in Step 1 may be either mechanistic (based on physical laws relating the response Yi to DP1, DP2, ...) or regression models developed using the Box-Behnken experimental design schemes [11], or some other similar scheme that can lead to at least a second order regression model including significant 2-factor interactions. When the DPs are discrete or attributive rather than continuous, the method above may be modified to an enumerative search to identify the optimum (robust) design.
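A skeletal sketch of Steps 2 to 5 may make the procedure concrete. The constraint, performance model, noise levels, and search grid below are all illustrative placeholders; the point is only the structure: solve the constraint for the dependent DP, estimate the noise-caused variance by Monte Carlo for each feasible design, and search over the truly independent DP alone.

import random

TARGET = 10.0          # desired on-target performance (illustrative)

def dependent_dp(dp1):
    """Step 3: solve the assumed constraint dp1 * dp2 = TARGET for dp2,
    so that every candidate design is on target by construction."""
    return TARGET / dp1

def performance(dp1, dp2, noise1, noise2):
    """Placeholder performance model; noise enters multiplicatively and additively."""
    return dp1 * dp2 * (1.0 + noise1) + noise2

def noise_variance(dp1, trials=1000):
    """Steps 4-5: Monte Carlo estimate of Var(performance) for the feasible design
    defined by dp1. The noise spread is assumed to depend on dp1, which is what
    makes one dp1 setting more robust than another (a DP-noise interaction)."""
    dp2 = dependent_dp(dp1)
    samples = [performance(dp1, dp2,
                           random.gauss(0.0, 0.05 * dp1 / (1.0 + dp1)),
                           random.gauss(0.0, 0.1))
               for _ in range(trials)]
    mean = sum(samples) / trials
    return sum((s - mean) ** 2 for s in samples) / (trials - 1)

# Step 5: search over the truly independent DP only.
grid = [0.5 + 0.1 * i for i in range(50)]
best_dp1 = min(grid, key=noise_variance)
print(best_dp1, dependent_dp(best_dp1), noise_variance(best_dp1))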

10.10 APPLICATION OF THE CONSTRAINED APPROACH TO REAL PROBLEMS
We have already applied the constrained design approach to the filter design
problem in Sections 10.2-10.6 above. We now recount what we did and relate

it to the steps listed in Section 10.9. We then apply the approach to two other problems: one a process design problem solved earlier by Barker [25] and by Tribus and Szonyi [16], and another discussed by Box [17].
The primary objective of Step 1 in Section 10.9 is to establish mathematical relationship(s) relating the performance characteristic(s) of interest to the DPs. When mechanistic models are not available from the functional design of the product or process [5], one may establish the relationships empirically. In the case of the passive filter, the application of Kirchhoff's laws allows us to derive these relationships (Eqs. (10.3.1) and (10.3.2)). Therefore, for the passive filter problem one may directly go to Step 2.
Equations (10.3.1) and (10.3.2) show the results of Step 2. These two equations help constrain the total design space, consisting of all possible values of R3, R2, and C, to the feasible set of solutions for which on-target performance (ωc = 6.84 Hz and D = 3 in.) would be a fait accompli.
Step 3 is realized by writing Eqs. (10.3.3) and (10.3.4). Perhaps one can see that the filter design problem now has only one DP, viz. R3, left as the independent DP: once we select a value of R3, the meaningful (feasible) values of R2 and C become known (fixed) by Eqs. (10.3.3) and (10.3.4).
Since only one independent DP value now remains to be determined so as to optimize the design, one need not really worry about the inner array in Step 4 in this example. Step 5 is accomplished by performing Monte Carlo simulations using noise conditions consistent with Table 9.4. As shown in Fig. 10.2, the optimum value of R3 is near 300 Ω. The corresponding (optimum) values of R2 and C may now be determined from Eqs. (10.3.3) and (10.3.4).

Fig. 10.5 Comparison of outer array and Monte Carlo simulations of noise effects.

Figure 10.5 confirms that when every noise factor has a symmetric distribution (in the present example, normal with the nominal value as the average and one-third of the percentage tolerance as the standard deviation), outer array experiments and Monte Carlo trials produce comparable results.
We shall next apply the direct optimization to a second design optimization example, one initially tackled by Barker [25] and later by Tribus and Szonyi [16]. This problem required the optimization of the strength of castings produced by a screw moulding process. The six process factors involved are listed in Table 10.2. The optimum settings were to be found so as to yield minimum variance in the strength of the castings produced.

TABLE 10.2
WORKING RANGES AND LOW COST TOLERANCES OF DPs

Identifier   Design Parameter      Range               Tolerance

X1           Feed rate             1000-1400 g/min     20%
X2           First screw           400-800 rpm         10%
X3           Second screw          850-950 rpm         10%
X4           Gate size             Nominal             30 thou
X5           First temperature     280-360 °F          15%
X6           Second temperature    320-400 °F          15%

The target performance (strength) required was 160. Barker's solution, in which he centred the variables and then applied Taguchi's 2-step optimization, produced a design (coded as 2, 3, 3, 2, 3, 2) with a mean strength of 161.55 and a variance of 785. The Tribus-Szonyi solution used Monte Carlo simulation to simulate the influence of the uncontrolled noise factors. This solution, represented by the code (2, 3, 3, 2.5, 3, 2), predicted a mean strength of 154.7 and a variance of 683.
To apply the direct or constrained optimization method we first convert the Tribus-Szonyi process model for strength (developed by them using multiple regression; see Eq. (2) of [16]) into the constraint

    160 = Strength = 111.67 - 3.43(X1 - 2) - 3.68[3(X1 - 2)² - 2]
                     + 8.33(X2 - 2) + 9.52(X3 - 2) + 4.74(X4 - 2)
                     - 2.85[3(X4 - 2)² - 2] + 5.00(X5 - 2)
                     - 3.42[3(X6 - 2)² - 2]
Next, we randomly generate different feasible designs and compute their Monte Carlo estimates of the variance of strength (using Eq. (3) of [16]). A part of the simulation results is shown in Fig. 10.6. A 15-minute search for the optimum values of X1, X2, X3, X4, and X5 with an 80286-based microcomputer running at 12 MHz produced a best design, one with on-target strength (because the search was restricted only to those designs that would yield 160 as the strength). This design is representable by the code (2, 3.37, 3.5, 2.5, 2.5, 2) and has a variance of 703.

Fig. 10.6 Sample results of random search for a robust design with on-target casting strength.

The design might improve further if the random search were pursued further, or if one applied RSM constrained by the equation

    X2 = 2 + (1/8.33)[160 - {111.67 - 3.43(X1 - 2)
         - 3.68[3(X1 - 2)² - 2] + 9.52(X3 - 2) + 4.74(X4 - 2)
         - 2.85[3(X4 - 2)² - 2] + 5.00(X5 - 2)
         - 3.42[3(X6 - 2)² - 2]}]                          (10.10.1)
When the boundary conditions on the working range of the variables were relaxed somewhat (a consideration based on engineering knowledge) and RSM was applied, a significantly more robust design resulted. This design, representable by the code (1.83, 2.63, 4.47, 2.67, 2.03, 1.93), possesses a projected variance of 454 while maintaining the target strength at 160. The RSM objective here was to minimize the moulded part strength variance (see Eq. (3) of [16]) given by

    σ² = 193.74 + 47.56(X1 - 2) + 28.11[3(X1 - 2)² - 2]
         + 15.20(X2 - 2) + 2.95[3(X2 - 2)² - 2]
         + 19.82(X5 - 2) + 2.49[3(X5 - 2)² - 2]
         + 9.58(X6 - 2) + 29.32[3(X6 - 2)² - 2]
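The random search described above can be sketched directly from the two quoted regression models: Eq. (10.10.1) forces every candidate onto the 160-strength contour, and the variance model is then evaluated for that candidate. The coded working range (taken here as 1 to 3) and the admissible X2 interval are assumptions made for this sketch; it is an outline of the procedure, not a reproduction of the published search.

import random

def q(u):                      # the quadratic contrast 3*(u - 2)**2 - 2 used in both models
    return 3 * (u - 2) ** 2 - 2

def x2_on_target(x1, x3, x4, x5, x6, target=160.0):
    """Eq. (10.10.1): choose X2 so that the predicted strength equals the target."""
    rest = (111.67 - 3.43 * (x1 - 2) - 3.68 * q(x1) + 9.52 * (x3 - 2)
            + 4.74 * (x4 - 2) - 2.85 * q(x4) + 5.00 * (x5 - 2) - 3.42 * q(x6))
    return 2 + (target - rest) / 8.33

def strength_variance(x1, x2, x5, x6):
    """Tribus-Szonyi variance model (Eq. (3) of [16]) as quoted above."""
    return (193.74 + 47.56 * (x1 - 2) + 28.11 * q(x1) + 15.20 * (x2 - 2)
            + 2.95 * q(x2) + 19.82 * (x5 - 2) + 2.49 * q(x5)
            + 9.58 * (x6 - 2) + 29.32 * q(x6))

best = None
for _ in range(20000):
    x1, x3, x4, x5, x6 = (random.uniform(1, 3) for _ in range(5))   # coded range, assumed 1-3
    x2 = x2_on_target(x1, x3, x4, x5, x6)
    if not 1 <= x2 <= 5:            # keep X2 within a plausible coded range (assumption)
        continue
    var = strength_variance(x1, x2, x5, x6)
    if best is None or var < best[0]:
        best = (var, (x1, x2, x3, x4, x5, x6))
print(best)     # every candidate printed is on-target (strength = 160) by construction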
If one uses the total loss (Section 1.3) inflicted by a design as a measure, then it appears that the constrained approach can yield as much robustness as (if not more than) what the conventional 2-step design optimization method produces.

What is significant here is that the constrained approach assures zero pre-designed
bias (off-target performance) even with a multiplicity of target performance
characteristics.
Recall now the central focus of robust design as stated by Taguchi [4], Phadke ([5], p. 281), Box [17], and many others. As against the 2-step optimization method (Section 6.9), in the constrained optimization approach one looks for robustness (the minima of the noise-caused variance) only at points in the design space at which the different performance characteristics are each on their respective nominal-the-best targets. Therefore, one loses little by not working with the S/N ratios in the constrained optimization procedure.
We may give yet another example of applying the constrained design optimization approach. Box [17] lists a class of design problems seeking robustness about an operating target value T in which the noise-caused standard deviation σ(x) and the mean μ(x) are linked in certain special ways. He showed that, because of this special linkage, the DP set x cannot be separated into subsets x1 (the control parameters influencing only dispersion) and x2 (the vector of adjustment parameters) by use of the S/N ratio 10 log10(μ²/σ²). In one illustration with two DPs x1 and x2 in which the dispersion P(x1) equals [σ(x1, x2)/μ(x1, x2)]², Box suggests that one should first select x1 so as to minimize P(x1) and then iteratively vary x2 to adjust μ(x1, x2) to the desired target T (T = 10 in Box's illustration). In his paper Box provided a plot (Fig. 1a in [17], reproduced in Fig. 10.7) of the contour lines of σ(x1, x2) and μ(x1, x2). The availability of these contours makes the application of the constrained optimization approach to this problem straightforward. One searches for the optimum design only on the contour on which μ(x1, x2) = 10 (see Fig. 10.7). On this contour one would look for the (x1, x2) point at which σ(x1, x2) is minimum. With the contours being as shown, the design that meets these criteria is the point marked Q on the plot, coincident with Box's solution. We remark again that if mechanistic or mathematical models for μ(x) and σ(x) are not available, one should obtain these empirically with the help of appropriately designed experiments.

10.11 DISCUSSION OF THE CONSTRAINED DESIGN OPTIMIZATION APPROACH
One may criticize the constrained design approach for the extra computational effort it may require, unless one uses, for instance, genetic algorithm-based search [36], etc. However, when one considers the long term gains of a robust product or process, such additional effort would most probably be worth it. This would be true, for instance, for many complex mechanical and electronic devices, and for chemical and metallurgical processes that must deliver a target performance. However, one should note that the constrained method works only when the task involves performance and DPs that are all quantitative, to permit construction of the mathematical models for μ and σ needed, either from first principles or by multiple regression.
Fig. 10.7 Contour lines of σ(x1, x2) and μ(x1, x2), showing the robust design's location at Q.

The chief advantage of the constrained approach described in this chapter is that it significantly cuts down the size of the design search space by focussing the designer's attention on the feasible solutions (those delivering on-target performance) to the problem only. The approach works particularly well with multiple performance characteristics: each characteristic to be made robust is assured here to be on its respective target. By contrast, application of Taguchi's 2-step method requires fine-tuning adjustments to reach targets [5, 13, 14, 23], frequently cumbersome with multiple performance characteristics. Further, since the constrained approach uses models containing higher order terms and interactions, it explicitly considers all significant interaction and higher order terms.
This is the third advantage of constraining the optimization as presented here. The fourth advantage, which results from the preferred use of Monte Carlo experiments (replacing the use of outer OAs) to study the S/N ratios, is the removal of the chance of biasing due to distortion of the variance and biasing of the mean when the noise factors may not have a symmetric distribution. Such biasing and distortion may occur when outer OAs are used to simulate noise (as recommended by Taguchi) in spite of the presence of asymmetric distributions (for example, see [16]).
We have described an unconventional yet effective method to empirically

seek robustness in this chapter. Several refinements to this basic approach, including extending the experiments to tolerance design, are possible.
Figure 10.8 pictorially conveys the central philosophy of the method presented in this chapter for successfully seeking the robustness of several performance features simultaneously.

Fig. 10.8 Pareto-optimal robust designs are sought on the contour on which performances f and g are both on target.
Loss Functions and Manufacturing Tolerances
11.1 LOSS TO SOCIETY IS MORE THAN DEFECTIVE GOODS
This chapter presents loss functions, a quantitative statement developed by Taguchi of how off-target production economically affects consumers and manufacturers. Loss functions provide the justification, missing in conventional QA, of why a manufacturer should minimize the variability in the performance of a product or a process. Loss functions also guide the setting of manufacturing tolerances and the allocation of part tolerances between interacting work centres in a factory, and between suppliers and the consumer. As we show in this chapter, loss functions can play a major role in minimizing the burden on society of off-target production and services.
Loss functions communicate an important reality that Taguchi was the first to articulate: producing goods that merely meet specifications is not enough for an enterprise (Section 1.4). Any product that fails to perform on target inflicts a loss on society. This loss may be in the form of an inconvenience, a material loss, a production stoppage, repair, an adjustment cost, or a complete scrapping of the product. Ill-fitting shoes, a train that leaves late, a reactor hatch that does not completely close, a chemical reaction with low yield, or a gun barrel short of the required finish: all these inflict such losses on society.
When a product fails to deliver its required characteristics or it performs below the expected standard, it must be re-machined, adjusted, re-processed, or, if all these actions fail, discarded. When sold, if it fails to function as the customer expects, the customer must have it adjusted or re-stitched, return it to the store, or throw it into the waste basket. In addition, with off-target performance, the user often incurs an extra cost due to countermeasures that he must apply to compensate for the off-target performance.
According to Taguchi, the mere fact that a product meets specifications, or the traditional use of specifications to communicate user requirements, is often not enough. Performance is ideal, best, or most desirable when it is exactly on target. Being off target has real adverse consequences, which are very serious in certain situations. For instance, even if the components making up a complete system are individually within specifications, if each of them just meets its tolerances, many trivial deviations can stack up, leading to catastrophic system failures.
Taguchi showed that whenever product or process performance deviates from the target, the loss occurring to society may be quantified. If L(y) represents the loss caused by a small performance deviation (y - m) from the target performance m, then using a Taylor series expansion, it is possible to write

    L(y) = L[m + (y - m)] = L(m) + L'(m)(y - m)/1! + L''(m)(y - m)²/2! + L'''(m)(y - m)³/3! + ...



Taguchi reasoned that at y = m, i.e., when performance is exactly on target, one does not require any corrective action or countermeasure to obtain satisfactory performance. Therefore, at this point the loss to society is zero, and this is the minimum loss society experiences with the perfectly performing product. At y = m, the first derivative L'(m) is also zero because L(m) is minimum. Therefore, ignoring the higher order terms, since one is considering only small changes in performance y near the target m, one may write

    L(y) = k(y - m)²                                  (11.1.1)
This defines the loss function. According to Taguchi, the loss function quantifies the impact on society of the product not performing on target. This loss is all-inclusive: it includes the cost of re-work, return, repair, adjustment, the on-going extra cost of operation and maintenance, or the scrapping of an unsatisfactorily performing product. The consumer, or the manufacturer, or some other section of society may incur this loss; one single loss function given by Eq. (11.1.1) expresses this all-inclusive loss to society.
Taguchi went on to say that the loss the product imparts from the moment its producer ships it out may fall into several categories. A product causes some loss due to its functional variation. It may cause separate, additional losses due to its harmful effects, perhaps not even on its direct users, such as that due to automobile exhaust emission. We will confine the present discussion of loss functions to the loss a product causes due to its functional performance varying from its target performance. Generally, the loss function will have a parabolic shape (see Fig. 1.3).
There is only one unknown (the constant k) in the expression for L(y) given
by Eq. (11.1.1). k may be estimated if we know the coordinates of any one point on the loss function curve. k may be found, for instance, if we know what it
would cost the customer to rectify a known amount of off-target performance so
that one restores the product to its desired (target) functionality.
Taguchi made a special point that the value of k should be determined by
calculating the loss function as close as possible to the target m [4, p. 33]. The
consumer tolerance points are usually the only points available for this purpose.
The tolerance range may vary from customer to customer. Then, for customer
tolerance one should take the points on either side of m at which 50% of customers
would repair or replace what they have bought, or lodge a complaint.
The following example illustrates the estimation of k, and hence the loss
function: If a buyer discovers a store-purchased shirt to be too tight (short by,
say, 1 cm from true neck size) when he takes it home, he would have no alternative
but to have it adjusted at a cost, or return it to the store. If it is too loose, it will
have to be similarly adjusted or returned to the store. If it costs the customer $10.00
to get the shirt collar that is 1 cm off adjusted at a local tailor's shop, then from
Eq. (11.1.1), we have

k = 10.00/(1 cm)² = $ 10/cm²

Therefore, the equation

L(y) = 10(y - m)²          (11.1.2)
defines the loss function for ill-fitting shirts near the true neck size (or target
characteristic) m.
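For readers who like to check such calculations with a short program, the following Python sketch shows the estimation of k from one known rectification cost and the evaluation of the resulting loss function. The function names and the numbers are merely illustrative; they are not part of any standard package.

def estimate_k(known_loss, known_deviation):
    # Loss-function constant k from one known point on the curve:
    # a deviation (y - m) that is known to cost society `known_loss`.
    return known_loss / known_deviation ** 2

def quadratic_loss(y, m, k):
    # Taguchi quadratic loss L(y) = k(y - m)^2 for performance y and target m.
    return k * (y - m) ** 2

# Shirt-collar illustration: a collar 1 cm off the true neck size costs $10.00 to adjust.
k = estimate_k(known_loss=10.00, known_deviation=1.0)    # k = 10 $/cm^2
print(quadratic_loss(y=40.5, m=40.0, k=k))               # loss for a 0.5 cm deviation: 2.5 ($)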
A word needs to be said about tolerances at this point. Unlike how designers
commonly mark them on engineering drawings, tolerances need not always be
symmetric. A shirt neck that is 1.5 cm too tight very likely needs an adjustment, whereas one that is 1.5 cm too wide does not. Thus, loss functions need not always be symmetric.
What use is the loss function? One major use of loss functions is in determining
manufacturing tolerances. Taguchi used cost reasons to show that in most cases
manufacturing tolerances should be not equal to but tighter (narrower) than customer
tolerances.

11.2 DETERMINING MANUFACTURING TOLERANCES


Determining manufacturing tolerances involves two steps: The first step entails determining the (society) loss function. Based on the manufacturer's costs, one then determines
the manufacturing tolerances. We shall illustrate the procedure with an example.
Let a garment manufacturer's factory cost of setting and stitching a shirt's collar to bring the collar within some target neck size tolerance be $ 2.50 per shirt.
Generally this cost will be a function of the technology applied to achieve control
on tolerance (perhaps an extra measurement and adjustment before the garment
maker finally stitches the collars). This cost will be independent of the actual
tolerance one sets, within the reach of that technology. Where should the shirt
manufacturer now set his own stitching size tolerance? This tolerance is the maximum
allowable deviation in manufacturing from the designated (target) collar size to
be marked on the shirt.
If the customer's adjustment cost (at the local tailor's shop) equals $ 10.00, as stated in Section 11.1, then the loss to society caused by shirts that deviate from target size m is given by

L(y) = 10(y - m)²

This loss function equation represents the loss imparted to society (customer, manufacturer, or anyone else affected by the product). Therefore, whenever collar size deviates from the target size m, the manufacturer's tolerance will be given by the relation

2.50 = 10 × (manufacturer's tolerance)²

which leads to

Manufacturer's tolerance = √(2.50/10.00) = 0.5 cm

Observe that this tolerance is tighter than the customer's tolerance, because it is
cheaper here to adjust collar size while one is manufacturing the shirt (this costs
$ 2.50 per shirt in the factory) than re-stitching by a customer's local tailor (such re-stitching costs $ 10.00 per adjustment).
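The same relation can be turned around to compute the manufacturing tolerance once k and the factory's own rectification cost are known. A minimal Python sketch, assuming the k = 10 $/cm² and $ 2.50 factory cost used above (the function name is ours):

import math

def manufacturing_tolerance(factory_cost, k):
    # Deviation at which the factory's in-house rectification cost equals the
    # societal loss k * (deviation)^2; beyond it, fixing in the factory is cheaper.
    return math.sqrt(factory_cost / k)

k = 10.0              # $/cm^2, from the customer's $10.00 adjustment for a 1 cm deviation
factory_cost = 2.50   # $ per shirt to re-set and re-stitch the collar in the factory
print(manufacturing_tolerance(factory_cost, k))   # 0.5 (cm), tighter than the 1 cm customer point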
One can see that if one is interested in the broader objective of minimizing the total cost society incurs because of ill-fitting collars, the above procedure for setting the manufacturer's tolerance helps achieve this minimum. Such a procedure for setting manufacturing tolerances equitably balances the cost society incurs in restoring off-target product performance to target performance.
However, manufacturing tolerances need not always be tighter. What
magnitude they should be depends on the relative adjustment costs of the
manufacturer and the customer.
One may use similar procedures to set engineering tolerances across
departments for dimensions, hardness (and other similar characteristics), impurity
in raw materials, magnetic strength, vibration control, balancing of vehicle wheels
and tires, etc. to keep unit costs minimum.

EXAMPLE 11.1: Determining Manufacturing Tolerance for Microphone Cable Impedance. Customer specifications for voltage drop in standard length microphone cables sold are 120 ± 10 mV. The factory's cost to adjust the impedance of a cable is $ 50.00, which effects a deviation of ±4 mV in voltage drop. What should be the manufacturing specification limits (around the target 120 mV) on voltage drop?
Solution. The loss function about the target specification of 120 mV needs to be determined first. Since the factory's costs to adjust impedance to produce a drop of ±4 mV are given as $ 50.00, the equation

L(y) = k(y - m)²

gives

k = L(y)/(y - m)² = 50/4² = $ 3.125/(mV)²

Therefore, L(y) = 3.125(y - 120)² describes society's losses. Hence, when the manufacturer delivers a cable just within the edge of the customer specification limits (120 ± 10 mV), he inflicts effectively a loss of 3.125(10)² or $ 312.50 on society (this time the customer). Recall that the loss function is the quantitative statement of the loss society incurs whenever performance deviates from target. Thus the cable being just within the customer specifications does not mean zero loss to the customer, as can be seen from Fig. 11.1. Reducing manufacturing variability to ±4 mV would save society $ (312.50 - 50.00) or $ 262.50 per cable sold.
On the other hand, one cannot justify reducing manufacturing tolerances to narrower limits below ±4 mV, because then the manufacturer's adjustment cost ($ 50.00/adjustment) would exceed the consumer's new loss (which one finds again from the loss function curve) at the edge of the new limits. Above the ±4 mV manufacturing tolerance, the consumer's loss becomes higher than the manufacturer's cost. Hence, manufacturing tolerances should be ±4 mV.
Note again the fundamental distinction between the traditional perception of customer specifications and the notion based on loss functions put forth by Taguchi. In the traditional viewpoint, meeting specifications implies that everything is OK. By contrast, in the loss function viewpoint, society incurs a loss of $ 312.50 even if the sold cable falls just within the edge of the 120 ± 10 mV specification limit (see Fig. 11.1).
Fig. 11.1 Contrast between the conventional view and Taguchi's view of society's losses when performance deviates from the target. (Horizontal axis: performance characteristic.)
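The arithmetic of Example 11.1 can be checked with a few lines of Python; the variable names below are our own and the numbers are those given in the example.

k = 50.00 / 4 ** 2                           # $/(mV)^2, from the $50 factory adjustment at 4 mV
loss_at_customer_limit = k * 10 ** 2         # cable delivered at the 120 +/- 10 mV edge
loss_at_factory_limit = k * 4 ** 2           # cable held within +/- 4 mV of target
print(loss_at_customer_limit)                           # 312.5 ($ per cable)
print(loss_at_customer_limit - loss_at_factory_limit)   # 262.5 ($ saved per cable)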

EXAMPLE 11.2: Determining Optimum Neck Size Interval for Offering Shirts for Sale. A manufacturer provides shirts with neck sizes in intervals of 1 cm (37, 38, 39, 40, 41, etc.). Should he change the size interval?
Solution. Before proceeding to the solution we note an important point here. A customer with a 40.5 cm neck size at the existing size offerings is forced to purchase a shirt that is 0.5 cm too loose or 0.5 cm too tight. Not getting a shirt of his exact neck size causes an inconvenience to the customer, perhaps a stiff neck and the associated headache. Otherwise he incurs the added cost of having the collar adjusted.
If the exact, required size can be bought, the loss (discomfort or monetary
loss) caused by size difference is 0 as one needs no tailoring or any other adjustments.
Clearly, the greater the deviation from the required size, the greater is the loss.
Further, market research conducted by the manufacturer indicated that customers
feel that buying a shirt 1.00 cm too tight is worse than one 1.00 cm too loose. This
suggested that customer specifications are asymmetric.
The solution consists of the following two steps:
Step 1: Finding the loss function. First, one needs to define customer specifications (m + δ₂, m - δ₁), where m is some customer's exact neck size, δ₁ his lower tolerance, and δ₂ his upper tolerance. One sets δ₁ and δ₂ at values at which market research says 50% of people with neck size m will refuse to buy the shirt marked size m.
Let y be the actual neck size of the shirt that the manufacturer marks size m. Then one gives the loss due to the customer's not receiving the exact size m by

L(y) = k(y - m)²
Since one has two different (asymmetric) tolerances here (δ₁ and δ₂), one also will have two different loss functions, one for loose shirts and the other for tight shirts. Let the cost of re-stitching, delay, and the travel to get a loose shirt adjusted, etc., together be D₁, and that for a tight shirt be D₂. For simplicity one may assume D₁ = D₂ = $ 4.00 per adjustment. Hence the two sides of the loss function L(y) involve constants k₁ (= 4.00/(δ₁)²) and k₂ (= 4.00/(δ₂)²), see Fig. 11.2.

Fig. 11.2 Loss function for loose and tight shirts. (Horizontal axis: the customer's actual neck size, with the target at the centre.)

Figure 11.2 shows that k₁ and k₂ define the two sides of the asymmetric loss function. If market research establishes that δ₁ = 0.5 cm and δ₂ = 1.0 cm, then the loss function will be

L(y) = 16(y - m)²,  y < m
     = 4(y - m)²,   y > m          (11.2.1)
Note here that a customer with neck size (mᵢ + δ₂) is free to buy the shirt of neck size marked mᵢ₊₁ (the next higher size). Here the difference between mᵢ₊₁ (the size of the next higher size shirt) and (mᵢ₊₁ - δ₁) (the customer's actual neck size) causes his loss. Figure 11.3 shows the appearance the loss function L(y) takes here. At customer neck size (mᵢ + δ₂) [which also equals (mᵢ₊₁ - δ₁)], the loss with the tight shirt is 16(δ₂)², and the loss with the loose shirt is 4(δ₁)².
The transition relationship between δ₁ and δ₂ may be found as follows: Since these two losses should be equal at the transition size (mᵢ + δ₂), if one produces shirts at 1 cm intervals, say 39, 40, 41, 42, then the losses for the customer who has the neck size 40.33 cm, regardless of whether he buys a shirt of 40 cm size or one of 41 cm size, would be equal. This evaluation of δ₁ and δ₂ provides us a way to determine at what actual neck size customers would move up to the next higher stamped neck size (see Fig. 11.3).
Fig. 11.3 The shape of the loss functions determines the transition point r. (Horizontal axis: neck sizes mᵢ and mᵢ₊₁.)

Step 2: Determination of the manufacturing size interval. Before one proceeds, one needs to make here the rationality assumption. This assumption implies that every customer will (a) choose to reduce his inconvenience to a minimum, and (b) have the purchased shirt's collar adjusted when the size deviation exceeds the δ₁ or δ₂ tolerance.
If customer neck sizes are distributed uniformly between, say, 40 cm and 41 cm, then with a 1-cm size interval, the average loss per shirt purchased is

16 ∫[40.00 to 40.33] (y - 40)² dy + 4 ∫[40.33 to 41.00] (41 - y)² dy = 16(0.33)³/3 + 4(0.67)³/3
= $ 0.593

In the above calculations one assumed shirts to be available at sizes 40.00 cm and at 41.00 cm. Note that for all shirts not at the exact customer neck size y, one assumes a loss equal to the quantity given by the loss function equation (11.2.1).
With the availability of shirts reduced to 2-cm size intervals (e.g. 40, 42, 44), the average loss per shirt purchased (using new δ₁ and δ₂) changes to

(1/2) 16 ∫[40.00 to 40.67] (y - 40)² dy + (1/2) 4 ∫[40.67 to 42.00] (42 - y)² dy = $ 2.37

The multiplier (1/2) comes from the density of people at different neck sizes spread now over 2-cm intervals.
Suppose now that the manufacturer's extra cost of stitching and selling one extra neck size is $ 1.80/shirt. (This extra cost consists of the additional inventory and distribution cost per extra size.) Therefore, the loss to society per shirt sold at 1-cm intervals would equal (0.593 + 1.80) or $ 2.393, and the loss with 2-cm intervals would equal $ 2.37.
By trial and error with other size intervals, it may be shown that the ideal manufacturing interval is 1.80 cm. Note that this interval (1.80 cm) exceeds the customer tolerance span (δ₁ + δ₂). Comfort guides decisions about customer tolerance. Therefore, some shirts purchased by customers will now need re-tailoring. But, at the 1.80-cm size interval, society's total cost will be minimum.
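The average-loss integrals of Example 11.2 are easy to reproduce in closed form. The Python sketch below assumes, as in the example, a uniform spread of customer neck sizes over one size interval, the asymmetric constants 16 and 4, and the rationality assumption; the function name is ours.

import math

def average_loss_per_shirt(interval, k_tight=16.0, k_loose=4.0):
    # Customers are uniform over one size interval; each buys whichever adjacent
    # stamped size gives the smaller quadratic loss.  The transition point r
    # (measured from the lower stamped size) satisfies k_tight*r^2 = k_loose*(interval - r)^2.
    r = interval * math.sqrt(k_loose) / (math.sqrt(k_tight) + math.sqrt(k_loose))
    tight_part = k_tight * r ** 3 / 3                  # integral of k_tight*(y - m_i)^2
    loose_part = k_loose * (interval - r) ** 3 / 3     # integral of k_loose*(m_(i+1) - y)^2
    return (tight_part + loose_part) / interval        # divide by interval: uniform density

print(round(average_loss_per_shirt(1.0), 3))   # about 0.593 for 1-cm intervals
print(round(average_loss_per_shirt(2.0), 2))   # about 2.37 for 2-cm intervals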

EXAMPLE 11.3: Consumers buy custom-cut replacement window glass panes from a store. How close (precise) should the store's cutting tolerance be?
Solution. Since a glass cutter normally cuts a glass pane to length and to the required width in independent operations, we will consider the sizing decisions one dimension at a time.
Let m₁ be the fall-out dimension (glass too small) as it concerns length. Similarly, let m₂ be the too-large dimension for length (Fig. 11.4).

Fig. 11.4 Glass pane size variations. (Horizontal axis: glass length, showing m₁ and m₂.)

To simplify the problem, it will be assumed that customer tolerances are symmetrical around the ideal size m required by the customer. We will define m = (m₁ + m₂)/2 and δ₀ = (m₂ - m₁)/2, which will imply that the customer will find a glass cut in length to size m ± δ₀ satisfactory for his use. Thus δ₀ is the customer's tolerance for variations in length.
The approach for specifying a tolerance for the width of the glass would
be similar.
Except by coincidence, the glass cutting tolerance of the store will be different from δ₀. The costs of the store related to its cutting tolerance include transporting extra raw sheets to take care of glass found unfit and returned by customers, keeping the extra sheets in storage, and the technology applied in cutting glass, etc.
The costs a customer incurs in handling an unfit glass include the extra trip to the store, getting the glass re-cut, inconvenience, etc. If A₀ is the customer's loss per glass if he finds the store-cut glass brought home too large or too small, then

L(y) = k(y - m)²,

where k = A₀/δ₀², specifies the loss function. Now if the store incurs a loss equal to A in replacing a glass that is returned and re-cut or replaced, then the store's cutting tolerance may be found as follows:

L(y) = [A₀/δ₀²](y - m)² = A
This gives the cutting tolerance (y - m) = δ₀√(A/A₀).
If A₀ = $ 15, A = $ 3, and δ₀ = 3 mm, then the cutting tolerance of the store equals 1.34 mm. Therefore, to minimize loss to society, the cutter should not cut glasses he calls length = m mm outside the tolerance

m ± 1.34 mm

This tolerance shows the quality of supply from the store.


A similar analysis would help in establishing the cutting tolerance for
width.
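Example 11.3 reduces to the single relation tolerance = δ₀√(A/A₀). A small Python sketch with the numbers of the example (the function name is ours):

import math

def supplier_tolerance(customer_tolerance, customer_loss, supplier_loss):
    # Deviation at which the supplier's replacement cost A equals the customer's
    # loss, given the customer's tolerance delta0 and loss A0 at that tolerance.
    return customer_tolerance * math.sqrt(supplier_loss / customer_loss)

print(round(supplier_tolerance(customer_tolerance=3.0,    # delta0, mm
                               customer_loss=15.0,        # A0, $
                               supplier_loss=3.0), 2))    # A, $  ->  1.34 mm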
The loss function approach thus helps in equitably distributing the costs of
adjustment and in keeping the total cost of the product to society at its minimum
possible level. Perhaps the reader can also see how one may equitably distribute
tolerances between two interacting work centres in a factory, or between a supplier
and an industrial buyer. The buyer's plant may have to be shut down if the incoming
material is unacceptable.

11.3 LOSS FUNCTIONS FOR MASS-PRODUCED ITEMS


When performance varies, one obtains the average loss to customers caused by
mass produced items by statistically averaging the quadratic loss. The average loss
is proportional to the mean squared error of Y (the performance characteristic)
about its target value τ.
If one produces n units of a product at performance levels y₁, y₂, ..., yₙ respectively, then the average loss caused by these units due to their not being exactly on target τ is

(1/n)[L(y₁) + L(y₂) + ... + L(yₙ)]
= (k/n)[(y₁ - τ)² + (y₂ - τ)² + ... + (yₙ - τ)²]
= k[(μ - τ)² + (n - 1)σ²/n] ≈ k[(μ - τ)² + σ²]          (11.3.1)

where μ = Σ yᵢ/n and σ² = Σ (yᵢ - μ)²/(n - 1).


As pointed out in Section 1.4, this average loss, which is caused by variability, has two components:
1. Loss k(μ - τ)². This is contributed by the average performance of production (μ) being different from the target τ.
2. Loss kσ². This results from the performance {yᵢ} of the individual items being different from their own average μ.
Thus the fundamental measure of variability is the mean squared error of Y (measured from the target τ), and not the variance σ² alone. Therefore, the reader should note that ideal performance requires perfection in both accuracy (implying that μ be equal to τ) as well as precision (implying that σ² be zero).
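The decomposition of Eq. (11.3.1) into a bias term and a spread term is illustrated by the following Python sketch; the five data values and the value of k are hypothetical.

import statistics

def average_societal_loss(values, target, k):
    # Average loss k * (mean squared error about the target), split into its
    # bias component k*(mu - tau)^2 and its spread component, with the sample
    # variance computed using the (n - 1) divisor as in the text.
    n = len(values)
    mu = statistics.mean(values)
    var = statistics.variance(values)
    bias_loss = k * (mu - target) ** 2
    spread_loss = k * (n - 1) * var / n
    return bias_loss + spread_loss, bias_loss, spread_loss

total, bias, spread = average_societal_loss([9.8, 10.3, 10.1, 9.9, 10.4], target=10.0, k=2.0)
print(round(total, 3), round(bias, 3), round(spread, 3))   # 0.124 0.02 0.104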
11.4 SUMMARY

Analysis of society's overall losses requires consideration of the cost of inconvenience, adjustments, or scrapping by the customer. It also requires consideration of the extra cost the manufacturer incurs due to returns, repair, and scrapping. According to Taguchi, this overall loss increases at a quadratic (parabolic) rate (= k(y - m)²) as quality y deviates from the target performance m. The constant k in the loss function formula is found from the cost of the countermeasure taken by the manufacturer (or the customer) to get the product within tolerance. The loss function translates the deviation from target into a cost estimate useful in optimizing manufacturing, marketing, and purchasing decisions in an enterprise.
Losses to society caused by off-target mass production are a sum of (a) loss
due to the average quality being different from target quality; and (b) the loss due
to the item-to-item variability.
In mass production, therefore, the focus should be on two objectives. First, the average quality performance of mass produced items should be on target. Second, the item-to-item variability in performance should be minimum.
Several experts have proposed that loss functions should form the basis for
robust design development for problems requiring the simultaneous optimization
of more than one performance characteristic [5, 16].

EXERCISES
A manufacturer of quartz watches uses inspection to screen defective watches
before they are shipped. Watches rejected by the inspector are reset by the factory.
If a sold watch turns out to perform outside the warranted ±5 s/month tolerance,
the customer is entitled to a replacement. However, a replacement costs the customer
$ 25.00/watch net in postage and inconvenience.
1. Determine the loss function.
2. If the cost of setting a watch is $ 2.00 at the factory, verify that the inspector should use the tolerance limit ±1.41 s/month. Discuss why he should not use the ±5 s/month limit.
3. If the mass produced watches submitted for inspection show an average deviation of +10 s/month from perfect performance, with a standard deviation of 5 s/month, and if the factory produces 10,000 watches per month, estimate the total monthly loss caused to society if the factory ships the watches uninspected.
4. Suppose that the manufacturer uses a different production method that reduces the average deviation to 0 s/month while retaining the 5 s/month standard deviation. If the manufacturer now ships the watches uninspected, verify that the reduction in society's total loss will be $ 1,000,000/month.
5. If the performance of the watches produced has the normal distribution, estimate the number of watches produced/month under the new method (yielding 0 s/month average and 5 s/month standard deviation) that will fail to meet the ±1.41 s/month tolerance limit. How many of these watches will exceed the ±5 s/month customer tolerance?
Total Quality Management and
Taguchi Methods
This concluding chapter sums up the motivations, opportunities, and methods in today's renewed push toward quality, termed Total Quality Management (TQM) by industry and the Quality Loop in the ISO 9000 Quality Standards system [10].
The chapter then locates Taguchi methods within the overall framework of TQM.
One finds certain distinct attitudes, principles, and methods in TQM that are
fundamentally different from how one has managed quality traditionally. The
distinctions are four-fold. First, TQM demands top management commitment to the
quality of the products and services the organization offers. Secondly, it requires
a high sensitivity to what the customer demands. Thirdly, TQM needs use of
systematic and superior methods which usually include statistical methods to
solve quality problems. Finally, it requires a company-wide participation
integration of activities in design, development, production, procurement, and
customer service groups, as these relate to the quality. Here one would resolve
problems not only with methods and technologies, but also by giving people a
chance to contribute. The operators on the line probably know more about what
can go wrong with production than anybody else.

12.1 WHY TOTAL QUALITY MANAGEMENT?


Improved communication, rising living standards, and competition that aggressively
crosses national boundaries have made the marketplace of the 90s distinctly different.
One finds today an unprecedented growth in the volume and variety of products
and services available. This growth has also brought variability, not only in price
but also in the quality of what the market now offers. Buyers' expectations have risen. Both consumers and industrial buyers have become sophisticated and vocal about what they want. As a rule, customers now compare before they buy. It is not surprising then that to win customer confidence firms now add "a quality product" to their label.
Today, traditional management and engineering methods are fast becoming
inadequate. This is highlighted in what is regarded today as the standards or the
new benchmarks in quality systems [25, 30]. Businesses seeking growth must
now define/document their commitment to quality. This happens when total
company productivity, rather than that of the workers in the factory, becomes the
goal. In today's leading enterprises, customer focus and satisfaction form the core
of corporate values. The quality management system that is most effective converts
performance data and customer expectations into organizational objectives and
technical specifications.
Reaching out to international markets is now natural because for many
producers the domestic market is too limited to support growth and profits.
TQM provides some methods for translating these new realities into
opportunities and business results. TQM is the companywide integration of quality
development efforts, quality maintenance efforts, and continuous quality
improvement efforts (see Table 12.1, summarized from [30]). The essence of TQM
is reflected in the contemporary quality management standards and norms of
excellence [10, 27]. TQM draws together all activities pertinent to quality. It directly
engages marketing, R&D, engineering, production, and after sales service for the
common goal of satisfying the customer [9].

TABLE 12.1
HOW TQM CONTRIBUTES TO QUALITY, EFFICIENCY AND RESPONSIVENESS

QUALITY
Management Commitment: policies, organization; common business plan; goals/tactics
Leadership by Strategic Planning: visions, missions; objectives/strategies
Customer Focus: define product attributes; QFD; cost of ownership
Total Participation: participative management at all levels; all processes
Systematic Analysis: emphasize prevention; analyze good/bad; minimize variation

EFFICIENCY
Management Support: people; material; equipment; employee training
Motivation: communication; need recognition; rewards
Internal Customers: everything is a process; next process is customer; customer defines quality
Resource Utilization: supplier participation; quality teams; suggestion system
Quantitative Analysis: common method; SQC; facts, not emotions

RESPONSIVENESS
Quality Deployment: quality focus; individual responsibility; institutionalization
Management Reviews: regular review; analysis; decision implementation
Customer Feedback: changing environment; customer needs change; continuous increment
Teamwork: quality team reviews; consensus; individual reviews
Iterative Method: understand; select/analyze; plan, do, check, adopt
12.2 WHAT REALLY IS QUALITY?


As the drive for quality intensified, a quality product got defined as one that is fit for its consumer's use. Juran [9] defined this fitness in five distinct dimensions.
First, the product must perform as expected by its user. Second, it must be reliable:
it should sustain its quality in the long run. Third, it should be easily serviceable.
Fourth, it should be easy to maintain. Lastly, it should possess the preferred aesthetics.
Thus one may say that quality is the total composite of product and service
characteristics of marketing, engineering, manufacturing, and maintenance services
an enterprise offers to the customer. Through these characteristics the product sold
or service rendered meets the expectations of the customer. Notice that this new
view requires that the enterprise must re-orient itself. It should move away from merely filling a market need to satisfying the customer's expectations.
Taguchi was among the first to formally say that off-target performance
inflicts a loss to society. He went on to re-define quality as the loss a product
inflicts on society from the time the producer ships it out.

12.3 WHAT IS CONTROL?


Surprisingly, over the years the notion of control has not changed. Rather, it has
become more clearly defined. Control is a systematic and intelligent procedure
with which we react to deviations and re-orient our efforts more sharply toward the
goal.
Control of quality presupposes standards set on performance, safety, cost, and
reliability. Control begins with the appraisal of current performance relative to
these standards and recording any measurable deviation that may be thus observed.
It demands that one take corrective actions when one finds that deviations justify
them. The very attempt to achieve control helps also in planning ahead: planning for improvements in a continuing effort to reduce deviations in future. These
improvements influence product/service performance, safety, serviceability, and
reliability.
What product/process characteristics should one measure, how often should
one look for deviations, and when exactly one should take a corrective action have
been now thoroughly studied. Experts have provided us with some important answers.
In the context of industrial processes, these answers come from sound statistical
theories.
Control, however, is reactive: it is after the fact. This is a fact whose true appreciation has only recently been driven home; it suggests that the effort to deliver quality should begin with design, since design fixes 80% of a product's lifetime cost [1, 13].

12.4 QUALITY MANAGEMENT METHODS


Beginning about 1930, interest grew in understanding the nature of variations that
occur in quality, especially of factory production. This led to the development of
control charts (Fig. 12.1), initiating comparison of quality deviations to some standard
and taking control action when the deviations became too large. Control charts have since been much studied and used and their utility documented and illustrated [9, 22].

Fig. 12.1 X-bar and range control charts to achieve statistical control of manufacturing defects. (Horizontal axis: subgroup number.)
Since materials and parts form a key input toward the effective performance
of many products, in the late 30s experts also devised methods to guide acceptance/
rejection decisions when one received a large number of goods truckloads of
raw materials, steel plates, electronic components, supplies, artillery shells, etc.
Generally one could not inspect each item in these supplies economically. Sometimes
the required test would even destroy the item. One termed the methods created here
which are statistical in nature sampling plans.
Thousands of factories use control charts and sampling plans as their mainstay
in QA even today [9, 28]. One should note, however, (hat these two methods aim
primarily at appraisal (of what one produces or buys) and on-line adjustment (of
the process parameters) to assure quality. Embodied in SPC, these two QA methods
provide some but only limited prevention of quality problems. Prevention, we have
since learned, is done best through robust design.
Designing products and processes with the explicit objective of preventing
quality problems is relatively new as a QA procedure. This approach takes aim at
the roots of variability, the primary cause of poor product or process performance. As explained in this book, this approach requires systematic experimentation with design and process parameters to uncover how sensitive the quality characteristics are to the parameters the plant operator controls, and to those uncontrolled, regarded
as noise. Such experiments may involve the designers, R&D, manufacturing
engineering, production, and even customer service groups. Developed mostly in


Japan by Taguchi and dubbed concurrent engineering in the West, these methods
are now being rapidly adopted by companies seeking a competitive edge through
quality design. The first notable application of Taguchi methods outside Japan
occurred in the Bell Laboratories, U.S.A., whose engineers used this approach to
optimize their initial designs of complex computer chips [5].
Besides putting specific techniques and methods to use, an organization must
be obsessed with quality in order to deliver quality. Total Quality is hard, practical,
business reality. Experts describe it as a distinct company culture. It requires everyone
to evaluate constantly the quality of their work and how one reflects this quality in
the products, services, or information the company offers to customers or uses
internally. But one must first appreciate here why a concerted and companywide,
rather than segmented, effort is essential for TQM to be effective. This appreciation
can begin with a recount of who does what in an enterprise.
Marketing, the primary contact with customers, projects and evaluates customers' expectations. Engineering reduces these evaluations to performance
targets and tolerances, and later to designs and drawings. Purchasing selects and
develops vendors for parts and materials needed by the design. Manufacturing/
process engineering then selects the jigs, tools, and processes. Production
manufactures the parts and assembles the product. The Quality Loop of ISO 9004
[ 1 0 ] shows these interdependencies clearly.
If process capability (Cpk) is still poor, inspection must evaluate what is produced for functional and conformance-to-specifications checks. Shipping then
completes packaging and transportation. Lastly, installation and product service
groups ensure proper performance of the product at the customers site.
It should be clear then that in order for the product to perform on-target with
minimum variability, all the above groups must closely interact and cooperate to
optimize the efforts and the improvements. Everyone, not just the QC department,
must therefore shoulder its share of the responsibility for quality.
TQM also requires certain non-compromising attitudes of the top management.
As already mentioned, total quality management requires integration of an
organization's/company's efforts to deliver quality. Top management alone can bring about this integration, not through trite slogans, but through a change of attitudes, skills and other related factors. Top management must itself find the link of product/process quality to the company's business objectives and then explain it first hand to others. Ideally, people at every level should know what will satisfy
the customer and how their jobs are linked to it. Further, the organizational structure
and climate should allow continuous improvement rather than impede it. People
actually enjoy working for a company that is constantly trying to improve. Such a
company responds to their ideas and ensures that these ideas are tried out, put into
place, and not dismissed.

12.5 THE BUSINESS IMPACT OF TQM


Putting TQM in place is clearly hard work but experience shows that an enterprise
gains much from it.
As shown by the Quality Loop [10], TQM helps enhance the saleability of
products and services. It balances the quality levels and the costs of maintaining
them. The product meets customer wants in (a) satisfactory performance, and in
(b) price. The second gain (b) results from the optimization of efforts to deliver
quality at the lowest justifiable cost. This would contribute to the company's growth and long-term survival.
TQM increases producibility. Quality experience (acquired from product
performance feedback from the field) guides design and manufacturing engineers.
This feedback enables them to systematically achieve what the customer needs
and produce it repeatedly, at an acceptable cost. Also, the relationship between
product design standards and the quality capabilities of the plant becomes consciously
designed and established.
TQM increases productivity. A positive and commanding control of quality
results rather than after-the-fact reaction to deviations and re-work of failures.

12.6 CONTROL OF VARIABILITY: KEY TO QA


Though a minority, some still feel that assuring quality is the business of the
inspectors. If there are enough of them, they can prevent defective items from
getting to customers. The electronics industry still does QA on advanced function
VLSI chips in this fashion because of the multitude of factors involved in VLSI
manufacture whose influence is not yet completely understood.
But inspection does not reduce scrap or re-work, a part of total unit cost.
It also does not improve functional performance, manufacturability, serviceability,
or reliability. Field data reported from a wide spectrum of industries suggests that
80% of the performance problems of a product are caused by its design, and 20% by its method of manufacture [29, pp. 22-37].
The control of variations in product quality characteristics should be the design/process engineer's focus. Fortunately, as Taguchi showed, one can reduce
performance variations substantially by making the product/process design robust,
rather than by using more expensive technologies, parts, and components.
Product design must go beyond manufacturability, etc. It should formally
aim to make the field performance of a product robust (Section 1.9).
Process design should minimize the effect of all uncontrolled factors that might impact the production, packaging, shipping, and installation processes.
People, rather than technology, should be viewed as the major resource when we attempt to resolve quality problems.
Participative management practices (through quality circles, small group activities, suggestion boxes, etc.) can involve people and draw out the best in ideas, skills and commitment. One may view participative management as a bold step that "lets the controls go". However, its success in creating an environment in which
quality problems can be effectively tackled is now well established. Many
organizations with quality circles in place affirm that workers can effectively
troubleshoot and solve problems that often baffle plant engineers and foremen.
Participation is also the key to effectively implementing solutions and systems.
12.7 HOW IS STATISTICS HELPFUL?


Statistical methods are effective in TQM endeavours in many different ways.
The first effective statistical device created to help control quality deviations was the Shewhart control chart [22], a plot of the quality characteristic of interest against time, showing the fluctuations in quality as production progresses
(Fig. 12.1). Control charts provide signals to help decide when a production
process should be adjusted and when it should be left alone. Shewhart showed
that control charts could effectively help separate the two types of factors that
generally cause quality deviations. Shewhart's random factors are factors such
as vibration, ambient temperature variations, small differences in the manner in
which the process operator holds his tools, etc. Assignable factors, on the other
hand, are those factors that become active occasionally but have a large effect on
the process, such as a shift change, a bag of new materials dumped in the reactor,
a tool break, etc.
Shewhart reasoned that simpler devices such as histograms also could describe
the nature and extent of variability of observed data in a graphic illustration,
but this was not enough. If we record the diameter of certain bars rolled, then a
histogram may show roughly what fraction of production is out of specification.
Histograms, however, provide no temporal information: we cannot tell by looking
at a histogram when someone produced the off-specification bars. Without the
knowledge of the time when/how off-quality products were made, it would be
difficult to start any meaningful investigation of the causes of quality variability
to rectify those causes.
Shewhart also placed control limits on his control charts to help detect
the emergence of assignable factors and thereby facilitate investigation of quality
deviations [7].
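As an illustration of how such limits are computed, the short Python sketch below derives X-bar and R chart limits for subgroups of size 5 using the standard tabulated constants A2 = 0.577, D3 = 0 and D4 = 2.114; the measurement data are invented for the example.

subgroups = [
    [10.2, 9.9, 10.1, 10.0, 10.3],
    [9.8, 10.0, 10.2, 9.9, 10.1],
    [10.1, 10.4, 10.0, 9.9, 10.2],
]
A2, D3, D4 = 0.577, 0.0, 2.114                        # constants for subgroup size n = 5

xbars = [sum(s) / len(s) for s in subgroups]          # subgroup averages
ranges = [max(s) - min(s) for s in subgroups]         # subgroup ranges
xbarbar = sum(xbars) / len(xbars)                     # centre line of the X-bar chart
rbar = sum(ranges) / len(ranges)                      # centre line of the R chart

print("X-bar chart limits:", xbarbar - A2 * rbar, xbarbar + A2 * rbar)
print("R chart limits:", D3 * rbar, D4 * rbar)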
Once a control chart shows that a major deviation in quality has occurred,
one may apply several problem solving methods to help pin down the real cause
for trouble and then to remove it [9].
A second significant use of statistics in the assurance of quality is in the construction of acceptance sampling plans. The testing of all the goods (every transistor, every bag of cement, every vial of vaccine) that a customer receives in a large shipment is often not feasible or even desirable. As stated earlier, the test
itself may destroy the product. On the other hand, if we do not inspect the items
in the lot individually, we must assume the risk of accepting some defective items
that may be present in the lot. Sampling plans, such as MIL-STD 105D, provide
standard inspection schemes. With these schemes the inspector can adjust the
extent of inspection (the number of sample items picked at random from the lot and
individually inspected) to match the level of risk that the management is willing to
accept. Generally it is possible to devise a statistical inspection scheme that will
limit the risk of rejecting good items and also the risk of accepting bad items
to known levels [9, 22]. Manufacturing industries and the military routinely use
such schemes.
A third and perhaps the most significant use of statistics is in quality problem
prevention through optimized product and process design.
Taguchi methods have a great impact on the quality of product/process
design. Based on the feedback from the pioneering applications and on field data, these methods appear to have the potential to provide industry the largest pay-offs
among all known QA methods. Though new in the domain of QA, these methods
use sound and well-established theories of statistics to design and develop high
performance products and reliable processes that would cost less to use and
operate over their lifetime. Few understand Taguchi methods yet. However, what
the pioneers have achieved has already spurred others working in chemical, electrical
and electronic, mechanical, and metallurgical engineering industries [7, 19].
Starting about 1984, AT&T has used Taguchi methods in product/process
development. The fabrication of the WE32100 microprocessor, the WE4000
microcomputer, the WE 256k chip, and other VLSI products are among the
examples here [5, 7, 14, 19, 29]. Taguchi methods helped cut the response time of
UNIX V by 3. Developers have also used Taguchi methods to create effective
personnel appraisal systems.

12.8 PRACTICAL DETAILS OF PLANNING A TAGUCHI PROJECT


It is common experience that a company mobilizes priority action on quality only when scraps and off-specification production or customer/dealer returns reach a high level. Quality may get attention when falling market share shakes up the company because competitors' products perform much better. The recent trend worldwide has been on ISO 9000 certification [10]. This is another reason why quality suddenly is getting considerable attention. By contrast to this bandwagon syndrome, Taguchi-type studies are part of kaizen, a broad strategy for seeking quality through continuous and incremental quality improvement. Taguchi studies
are much easier for companies with a high level of quality consciousness already
in place. These organizations are routine users of quality circles (QC), quality
function deployment (QFD), and other quality management innovations.
Experience shared by enterprises who have successfully used Taguchi
methods suggests that it is unrealistic to expect overnight answers from crash
projects styled after robust design experiments. References [4, 5, 7, 14] provide
real case studies of how many non-Japanese industries got themselves initiated
into Taguchi methods. We should remember that many of them were fence-sitters.
Experimental statistical methods were new to most of their engineers. Some of
these companies got themselves initiated into Taguchi methods as an experiment,
having already failed to optimize a difficult process, or to deliberately improve the
robustness of a certain product (see Table 12.2).
One major obstacle in exploiting the power of Taguchi methods appears to be the high level of ignorance of statistical methods, in particular of design-of-experiments methods, among the technical and supervisory employees (the practising engineers and R&D scientists who would be the key players in such projects). To many engineers statistics is shrouded in complex formulas and mystery.
Hundreds of these engineers privately confess their ignorance, yet they continue to
carry out process, plant, and R&D studies and sensitivity analysis by varying
factors one-at-a-time and plotting one-factor regression lines [12,19]. Such methods
are not only unreliable and inefficient but may be absolutely wrong and
TABLE 12.2
INDUSTRY EXAMPLES OF TAGUCHI APPLICATIONS OUTSIDE JAPAN

Organization | Objective | Method Used | Results
United Technologies, USA | Improving high tension ignition cable | S/N ratio | Improved di-electric strength and process capability
Eaton Yale, USA | Reducing leaf spring free height variability | S/N ratio | Variability reduced by 82%
ITT, USA | Optimizing insulated wire strip force | S/N ratio | $100,000 saved per year
Flex Prods., USA | Emission control, harness durability | S/N ratio | Inspection cost cut by $100,000/year
3M Co., USA | Injection moulding optimization | Accumulation analysis | Cycle time cut by 20%; $2 million saved per year
Baylock Mfg., USA | Assembling nylon tube | S/N ratio | Tool cost cut by 75%; productivity raised by 20%
Austin Rover, UK | Ensuring vehicle reliability | Structured experiments | Company-wide kaizen
GEC Telecom, UK | Optimization of software parameters | S/N ratio | Load monitoring optimized
ISI, India | Improvement of insulator tensile strength | S/N ratio | Loss reduction of $1 million/year
AT&T, USA | Router life improvement | S/N ratio | Two-fold increase in router life
counterproductive. One has to get over this rut because one-factor-at-a-time studies
have no scientific basis. In certain aspects of process troubleshooting and design
optimization, there is no substitute for sound quantitative model building. Statistics
provides the only valid tools to establish reliable cause-effect relationships
empirically.
A refresher course in the elements of statistics and the design of experiments
for its design and manufacturing engineers and R&D scientists is where an enterprise
should start [7]. Mere rudimentary knowledge of statistics is decidedly not enough
to effectively complete a Taguchi-type project. If it lacks in-house expertise, the
company should hire an external expert who can answer the technical questions and
guide the planning and conducting of the first project.
A programme to initiate the use of Taguchi methods in an enterprise consists
of the following preparatory steps in training and orientation [7]:
A simple explanation of quality loss.
A sound explanation of statistical experiments and OAs.
A first-hand example of an application by someone who has run a designed experiment.
Identification of the performance criteria for the product/process one plans to study or optimize.
After the refresher course is over, the engineers should be charged with
generating the motivation for Taguchi studies. They should estimate the company's
potential gains from possible improvements in product/process design. To begin with, one should apportion scraps, re-works, rejects, returns, downgraded production, and production stoppage, into off-quality materials, technological factors, process methods, or a shortcoming in the product's design. Besides, engineers should evaluate competitors' products: Do those other designs have features that
give them an edge in performance or robustness? One should look also through the
log of customer complaints about performance, reliability, serviceability, or
maintainability. Chances are that some of these complaints will be rooted in the
product's design, or in its choice of materials or parts, rather than how the factory
made the product.
After one has identified the ideal performance criteria of a product or a
process, one seeks answers to some critical questions. Are these performances
quantifiable and measurable? Is one seeking to maximize it, minimize it, or bring
it close to a target? Can one identify an appropriate S/N ratio for the planned
process/product design optimization (see Section 5.?)? From this point onwards,
the study would proceed along the parametric optimization steps outlined in
Section 1.9.
Table 12.3 gives the key aspects that make Taguchi methods distinct from
traditional quality assurance methods.
Before a project team launches the design or process optimization project,
it is imperative that the team acquires sufficient competence to understand the
purpose of the project and carry out the details of the task. These details are as
follows:
1. Selection of the desired outcome, control, adjustment, and noise factors
for the product/process under study.
2. Choice of the appropriate range for both the external and internal factors
involved.
3. Selection of an appropriate OA to study the effect of the above factors,
including their interaction.
4. Conduction of necessary experiments as defined by the OA, conducting each experiment more than once if necessary to increase accuracy.
5. Calculation of the proper S/N ratio (closeness to nominal or target,
more is better, or less is better) for each factor for each level (treatment)
selected.
6. Doing ANOVA to determine which factors contribute to the variability
of the final product.
7. For each factor, selecting the level that maximizes the S/N ratio and hence
reduces variability and optimizes the design/process.
8. Using the factors (possibly more than one) that do not increase the
variability (i.e., have a flat S/N response) to adjust the mean performance to the
desired target.
9. Performing the confirmation/verification experiment using factor levels
selected in Steps 7 and 8.
10. If the results appear satisfactory, one stops here. Otherwise, one may pick
different factors or treatment levels and redo Steps 1-9. In certain cases the study
might require use of RSM to improve robustness.
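Steps 5 to 7 of this procedure may be made concrete with a short Python sketch. It assumes a nominal-the-best characteristic, one control factor observed at two levels with four replications each, and invented data; the three S/N formulas shown are the standard ones referred to in Step 5.

import math
import statistics

def sn_nominal_the_best(y):
    # 10*log10(ybar^2 / s^2): S/N ratio for "closeness to nominal or target".
    return 10 * math.log10(statistics.mean(y) ** 2 / statistics.variance(y))

def sn_smaller_the_better(y):
    return -10 * math.log10(sum(v ** 2 for v in y) / len(y))

def sn_larger_the_better(y):
    return -10 * math.log10(sum(1 / v ** 2 for v in y) / len(y))

# Replicated observations of the quality characteristic at two levels of one factor.
level_1 = [50.1, 49.8, 50.3, 49.9]
level_2 = [50.0, 48.9, 51.2, 49.7]
sn1 = sn_nominal_the_best(level_1)
sn2 = sn_nominal_the_best(level_2)
best = 1 if sn1 > sn2 else 2
print(f"S/N at level 1 = {sn1:.1f} dB, at level 2 = {sn2:.1f} dB; choose level {best}")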

TABLE 12.3
THE WHEN, HOW, AND WHY OF TAGUCHI PROJECTS

Aspect | Description | Reference
When is a Taguchi project done? | Before one finalizes drawings and specifications. | Chapter 1
The objective of a Taguchi project | To dampen undesirable environmental effects; to reduce variability in performance. | Chapters 1, 4, and 5
What is the procedure? | 1. Parameter design. 2. Tolerance design. | Chapters 1-5 and 11
What is the cost of the project? | Starts from low-cost material and component parts; a few carefully planned experiments rather than scores of trials and tests. | Chapters 4, 5, and 6
Continuation plan | Continuing optimization of process/product design to evaluate new materials and methods to keep unit cost minimum. | Chapters 5 and 12
Attitude | Optimization avoids future problems and reduces unit cost; reduce ongoing SPC effort. | Chapters 1 and 12
Philosophy | Discover control/design factors whose effects are important. Adjust these factors to consistently deliver on-target performance with minimum variability. | Chapters 1 and 5

Taguchi methods consciously push quality back to the design stage [7]. A robust product design, be it an electric motor or a piece of software, minimizes defects caused by design, materials, production, and the uncontrolled factors present in the field. A robust process design cuts the ongoing effort in process controls (Fig. 12.2) required to minimize manufacturing imperfections [14].

Fig. 12.2 An improved (robust) process design cuts a company's on-going efforts in process control. (Horizontal axis: process controls.)
Appendix A: Standard Normal, t, Chi-square, and F-Tables

TABLE A1
CUMULATIVE DISTRIBUTION OF STANDARD NORMAL RANDOM VARIABLE z
(Second place of decimal of z)
z 0 1 2 3 4 5 6 7 8 9

.0 .5000 .5040 .5080 .5120 .5160 .5199 .5239 .5279 .5319 .5359
.1 .5398 .5438 .5478 .5517 .5557 .5596 .5636 .5675 .5714 .5753
.2 .5793 .5832 .5871 .5910 .5948 .5987 .6026 .6064 .6103 .6141
.3 .6179 .6217 .6255 .6293 .6331 .6368 .6406 .6443 .6480 .6517
.4 .6554 .6591 .6628 .6664 .6700 .6736 .6772 .6808 .6844 .6879
.5 .6915 .6950 .6985 .7019 .7054 .7088 .7123 .7157 .7190 .7224
.6 .7257 .7291 .7324 .7357 .7389 .7422 .7454 .7486 .7517 .7549
.7 .7580 .7611 .7642 .7673 .7703 .7734 .7764 .7794 .7823 .7852
.8 .7881 .7910 .7939 .7967 .7995 .8023 .8051 .8078 .8106 .8133
.9 .8159 .8186 .8212 .8238 .8264 .8289 .8315 .8340 .8365 .8389
1.0 .8413 .8438 .8461 .8485 .8508 .8531 .8554 .8577 .8599 .8621
1.1 .8643 .8665 .8686 .8708 .8729 .8749 .8770 .8790 .8810 .8830
1.2 .8849 .8869 .8888 .8907 .8925 .8944 .8962 .8980 .8997 .9015
1.3 .9032 .9049 .9066 .9082 .9099 .9115 .9131 .9147 .9162 .9177
1.4 .9192 .9207 .9222 .9236 .9251 .9265 .9278 .9292 .9306 .9319
1.5 .9332 .9345 .9357 .9370 .9382 .9394 .9406 .9418 .9430 .9441
1.6 .9452 .9463 .9474 .9484 .9495 .9505 .9515 .9525 .9535 .9545
1.7 .9554 .9564 .9573 .9582 .9591 .9599 .9608 .9616 .9625 .9633
1.8 .9641 .9648 .9656 .9664 .9671 .9678 .9686 .9693 .9700 .9706
1.9 .9713 .9719 .9726 .9732 .9738 .9744 .9750 .9756 .9762 .9767
2.0 .9772 .9778 .9783 .9788 .9793 .9798 .9803 .9808 .9812 .9817
2.1 .9821 .9826 .9830 .9834 .9838 .9842 .9846 .9850 .9854 .9857
2.2 .9861 .9864 .9868 .9871 .9874 .9878 .9881 .9884 .9887 .9890
2.3 .9893 .9896 .9898 .9901 .9904 .9906 .9909 .9911 .9913 .9916
2.4 .9918 .9920 .9922 .9925 .9927 .9929 .9931 .9932 .9934 .9936
2.5 .9938 .9940 .9941 .9943 .9945 .9946 .9948 .9949 .9951 .9952
2.6 .9953 .9955 .9956 .9957 .9959 .9960 .9961 .9962 .9963 .9964
2.7 .9965 .9966 .9967 .9968 .9969 .9970 .9971 .9972 .9973 .9974
2.8 .9974 .9975 .9976 .9977 .9977 .9978 .9979 .9979 .9980 .9981
2.9 .9981 .9982 .9982 .9983 .9984 .9984 .9985 .9985 .9986 .9986
3.0 .9987 .9990 .9993 .9995 .9997 .9998 .9998 .9999 .9999 1.0000

Probability [-∞ < Z ≤ z_observed] = tabulated cumulative probability
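The tabulated values can also be generated directly; a few lines of Python using the error function:

import math

def std_normal_cdf(z):
    # Phi(z) = P[Z <= z] = (1 + erf(z / sqrt(2))) / 2 for a standard normal Z.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(round(std_normal_cdf(1.00), 4))    # 0.8413, matching the z = 1.0 row above
print(round(std_normal_cdf(1.96), 4))    # 0.9750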


TABLE A2
CRITICAL VALUES OF THE STUDENT t-DISTRIBUTION

α = 0.1 0.05 0.025 0.01 0.005 0.001
dof
1 3.078 6.314 12.706 31.821 63.657 318.310
2 1.886 2.920 4.303 6.965 9.925 22.327
3 1.638 2.353 3.182 4.541 5.841 10.215
4 1.533 2.132 2.776 3.747 4.604 7.173
5 1.476 2.015 2.571 3.365 4.032 5.893
6 1.440 1.943 2.447 3.143 3.707 5.208
7 1.415 1.895 2.365 2.998 3.499 4.785
8 1.397 1.860 2.306 2.896 3.355 4.501
9 1.383 1.833 2.262 2.821 3.250 4.297
10 1.372 1.812 2.228 2.764 3.169 4.144
11 1.363 1.796 2.201 2.718 3.106 4.025
12 1.356 1.782 2.179 2.681 3.055 3.930
13 1.350 1.771 2.160 2.650 3.012 3.852
14 1.345 1.761 2.145 2.624 2.977 3.787
15 1.341 1.753 2.131 2.602 2.947 3.733
16 1.337 1.746 2.120 2.583 2.921 3.686
17 1.333 1.740 2.110 2.567 2.898 3.646
18 1.330 1.734 2.101 2.552 2.878 3.610
19 1.328 1.729 2.093 2.539 2.861 3.579
20 1.325 1.725 2.086 2.528 2.845 3.552
21 1.323 1.721 2.080 2.518 2.831 3.527
22 1.321 1.717 2.074 2.508 2.819 3.505
23 1.319 1.714 2.069 2.500 2.807 3.485
24 1.318 1.711 2.064 2.492 2.797 3.467
25 1.316 1.708 2.060 2.485 2.787 3.450
26 1.315 1.706 2.056 2.479 2.779 3.435
27 1.314 1.703 2.052 2.473 2.771 3.421
28 1.313 1.701 2.048 2.467 2.763 3.408
29 1.311 1.699 2.045 2.462 2.756 3.396
30 1.310 1.697 2.042 2.457 2.750 3.385
40 1.303 1.684 2.021 2.423 2.704 3.307
60 1.296 1.671 2.000 2.390 2.660 3.232
120 1.289 1.658 1.980 2.358 2.617 3.160
∞ 1.282 1.645 1.960 2.326 2.576 3.090

α = probability [t_observed ≥ tabulated critical t-value]



TABLE A3
CRITICAL VALUES OF THE χ² DISTRIBUTION

P = 0.995 0.975 0.050 0.025 0.010 0.005
dof
1 0.0000393 0.000982 3.84146 5.02389 6.63490 7.87944
2 0.010025 0.050636 5.99147 7.37776 9.21034 10.5966
3 0.071721 0.215795 7.81473 9.34840 11.3449 12.8381
4 0.206990 0.484419 9.48773 11.1433 13.2767 14.8602
5 0.411740 0.831211 11.0705 12.8325 15.0863 16.7496
6 0.675727 1.237347 12.5916 14.4494 16.8119 18.5476
7 0.989265 1.68987 14.0671 16.0128 18.4753 20.2777
8 1.344419 2.17973 15.5073 17.5346 20.0902 21.9550
9 1.734926 2.70039 16.9190 19.0228 21.6660 23.5893
10 2.15585 3.24697 18.3070 20.4831 23.2093 25.1882
11 2.60321 3.81575 19.6751 21.9200 24.7250 26.7569
12 3.07382 4.40379 21.0261 23.3367 26.2170 28.2995
13 3.56503 5.00874 22.3621 24.7356 27.6883 29.8194
14 4.07468 5.62872 23.6848 26.1190 29.1413 31.3193
15 4.60094 6.26214 24.9958 27.4884 30.5779 32.8013
16 5.14224 6.90766 26.2962 28.8454 31.9999 34.2672
17 5.69724 7.56418 27.5871 30.1910 33.4087 35.7185
18 6.26481 8.23075 28.8693 31.5264 34.8053 37.1564
19 6.84398 8.90655 30.1435 32.8523 36.1908 38.5822
20 7.43386 9.59083 31.4104 34.1696 37.5662 39.9968
21 8.03366 10.28293 32.6705 35.4789 38.9321 41.4010
22 8.64272 10.9823 33.9244 36.7807 40.2894 42.7956
23 9.26042 11.6885 35.1725 38.0757 41.6384 44.1813
24 9.88623 12.4001 36.4151 39.3641 42.9798 45.5585
25 10.5197 13.1197 37.6525 40.6465 44.3141 46.9278
26 11.1603 13.8439 38.8852 41.9232 45.6417 48.2899
27 11.8076 14.5733 40.1133 43.1944 46.9630 49.6449
28 12.4613 15.3079 41.3372 44.4607 48.2782 50.9933
29 13.1211 16.0471 42.5569 45.7222 49.5879 52.3356
30 13.7867 16.7908 43.7729 46.9792 50.8922 53.6720
40 20.7065 24.4331 55.7585 59.3417 63.6907 66.7659
50 27.9907 32.3574 67.5048 71.4202 76.1539 79.4900
60 35.5346 40.4817 79.0819 83.2976 88.3794 91.9517
70 43.2752 48.7576 90.5312 95.0231 100.425 104.215
80 51.1720 57.1532 101.879 106.629 112.329 116.321
90 59.1963 65.6466 113.145 118.136 124.116 128.299
100 67.3276 74.2219 124.342 129.561 135.807 140.169

P = probability [χ²_observed > critical χ² value tabulated]


[Critical values of the F distribution: the numerical entries of this table could not be recovered legibly from the scanned source.]
Appendix B: Selected Orthogonal Arrays
and Their Linear Graphs
TABLE B1
L4 (2³) ORTHOGONAL ARRAY

Columns

Experiment 1 2 3

1 1 1 1
2 1 2 2
3 2 1 2
4 2 2 1

Fig. B1 Linear graph of the L4 array
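Orthogonality of the columns of an array such as L4 can be confirmed by checking that, in every pair of columns, each ordered pair of levels occurs an equal number of times. The short sketch below is an illustration added here (plain Python, no special libraries) that performs this check on Table B1:

from itertools import combinations
from collections import Counter

# L4 (2^3) orthogonal array from Table B1: 4 runs, 3 two-level columns.
L4 = [
    [1, 1, 1],
    [1, 2, 2],
    [2, 1, 2],
    [2, 2, 1],
]

# In every pair of columns, each ordered level pair (1,1), (1,2), (2,1), (2,2)
# must occur the same number of times -- here, exactly once.
for c1, c2 in combinations(range(3), 2):
    counts = Counter((row[c1], row[c2]) for row in L4)
    assert all(n == 1 for n in counts.values()), (c1, c2, counts)
print("All column pairs of L4 are balanced (orthogonal).")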

TABLE B2
L8 (2⁷) ORTHOGONAL ARRAY
Columns

Experiment 1 2 3 4 5 6 7

1 1 1 1 1 1 1 1
2 1 1 1 2 2 2 2
3 1 2 2 1 1 2 2
4 1 2 2 2 2 1 1
5 2 1 2 1 2 1 2
6 2 1 2 2 1 2 1
7 2 2 1 1 2 2 1
8 2 2 1 2 1 1 2

Fig. B2 Linear graphs (a) and (b) of the L8 array

TABLE B3
L9 (3⁴) ORTHOGONAL ARRAY

Columns
Experiment 1 2 3 4
1 1 1 1 1
2 1 2 2 2
3 1 3 3 3
4 2 1 2 3
5 2 2 3 1
6 2 3 1 2
7 3 1 3 2
8 3 2 1 3
9 3 3 2 1

Fig. B3 Linear graph of the L9 array
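In use, each column of an array such as L9 is assigned to a three-level control parameter, and each row then prescribes one trial. The sketch below is an illustration added here (the factor names and level values are hypothetical, not taken from the text); it expands Table B3 into a nine-run experiment plan:

# L9 (3^4) orthogonal array from Table B3: 9 runs, 4 three-level columns.
L9 = [
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
]

# Hypothetical control parameters, one per column, each with three candidate settings.
factors = {
    "temperature (C)": [150, 175, 200],
    "time (min)":      [10, 20, 30],
    "pressure (bar)":  [1.0, 1.5, 2.0],
    "catalyst (%)":    [0.5, 1.0, 1.5],
}

# Each row of L9 picks one level (1, 2 or 3) of every factor: a 9-run test plan.
for run, row in enumerate(L9, start=1):
    settings = {name: levels[level - 1]
                for (name, levels), level in zip(factors.items(), row)}
    print(f"Run {run}: {settings}")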

TABLE B4
L12 (2¹¹) ORTHOGONAL ARRAY

Columns

Experiment 1 2 3 4 5 6 7 8 9 10 11

1 1 1 1 1 1 1 1 1 1 1 1
2 1 1 1 1 1 2 2 2 2 2 2
3 1 1 2 2 2 1 1 1 2 2 2
4 1 2 1 2 2 1 2 2 1 1 2
5 1 2 2 1 2 2 1 2 1 2 1
6 1 2 2 2 1 2 2 1 2 1 1
7 2 1 2 2 1 1 2 2 1 2 1
8 2 1 2 1 2 2 2 1 1 1 2
9 2 1 1 2 2 2 1 2 2 1 1
10 2 2 2 1 1 1 1 2 2 1 2
11 2 2 1 2 1 2 1 1 1 2 2
12 2 2 1 1 2 1 2 1 2 2 1

Interaction between any two columns is confounded partially with the remaining nine columns.
Do not use this array if you are aiming to estimate interactions.

Fig. B4 Linear graph of the L12 array

TABLE B5
L16 (2¹⁵) ORTHOGONAL ARRAY
Columns
Experiment 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
2 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2
3 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2
4 1 1 1 2 2 2 2 2 2 2 2 1 1 1 1
5 1 2 2 1 1 2 2 1 1 2 2 1 1 2 2
6 1 2 2 1 1 2 2 2 2 1 1 2 2 1 1
7 1 2 2 2 2 1 1 1 1 2 2 2 2 1 1
8 1 2 2 2 2 1 1 2 2 1 1 1 1 2 2
9 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2
10 2 1 2 1 2 1 2 2 1 2 1 2 1 2 1
11 2 1 2 2 1 2 1 1 2 1 2 2 1 2 1
12 2 1 2 2 1 2 1 2 1 2 1 1 2 1 2
13 2 2 1 1 2 2 1 1 2 2 1 1 2 2 1
14 2 2 1 1 2 2 1 2 1 1 2 2 1 1 2
15 2 2 1 2 1 1 2 1 2 2 1 2 1 1 2
16 2 2 1 2 1 1 2 2 1 1 2 1 2 2 1

Fig. B5 Linear graphs of the L16 (2¹⁵) array
TABLE B6
L16 (4⁵) ORTHOGONAL ARRAY
Columns
Experiment 1 2 3 4 5
1 1 1 1 1 1
2 1 2 2 2 2
3 1 3 3 3 3
4 1 4 4 4 4
5 2 1 2 3 4
6 2 2 1 4 3
7 2 3 4 1 2
8 2 4 3 2 1
9 3 1 3 4 2
10 3 2 4 3 1
11 3 3 1 2 4
12 3 4 2 1 3
13 4 1 4 2 3
14 4 2 3 1 4
15 4 3 2 4 1
16 4 4 1 3 2
To estimate the interaction between columns 1 and 2, keep all other columns unassigned

Fig. B6 Linear graph of the L16 (4⁵) array

TABLE B7
L18 (2¹ × 3⁷) ORTHOGONAL ARRAY

Columns
Experiment 1 2 3 4 5 6 7 8
1 1 1 1 1 1 1 1 1
2 1 1 2 2 2 2 2 2
3 1 1 3 3 3 3 3 3
4 1 2 1 1 2 2 3 3
5 1 2 2 2 3 3 1 1
6 1 2 3 3 1 1 2 2
7 1 3 1 2 1 3 2 3
8 1 3 2 3 2 1 3 1
9 1 3 3 1 3 2 1 2
10 2 1 1 3 3 2 2 1
11 2 1 2 1 1 3 3 2
12 2 1 3 2 2 1 1 3
13 2 2 1 2 3 1 3 2
14 2 2 2 3 1 2 1 3
15 2 2 3 1 2 3 2 1
16 2 3 1 3 2 3 1 2
17 2 3 2 1 3 1 2 3
18 2 3 3 2 1 2 3 1

Interaction between columns 1 and 2 can be estimated without sacrificing any column.
Columns 1 and 2 can be combined to form a 6-level column. Interactions between any other
pair of columns are confounded partially with the remaining columns.

Fig. B7 Linear graphs of the L18 array

TABLE B8
L25 (5⁶) ORTHOGONAL ARRAY

Columns

Experiment 1 2 3 4 5 6
1 1 1 1 1 1 1
2 1 2 2 2 2 2
3 1 3 3 3 3 3
4 1 4 4 4 4 4
5 1 5 5 5 5 5
6 2 1 2 3 4 5
7 2 2 3 4 5 1
8 2 3 4 5 1 2
9 2 4 5 1 2 3
10 2 5 1 2 3 4
11 3 1 3 5 2 4
12 3 2 4 1 3 5
13 3 3 5 2 4 1
14 3 4 1 3 5 2
15 3 5 2 4 1 3
16 4 1 4 2 5 3
17 4 2 5 3 1 4
18 4 3 1 4 2 5
19 4 4 2 5 3 1
20 4 5 3 1 4 2
21 5 1 5 4 3 2
22 5 2 1 5 4 3
23 5 3 2 1 5 4
24 5 4 3 2 1 5
25 5 5 4 3 2 1
To estimate the interaction between columns 1 and 2, keep all other columns unassigned.
Glossary

Additivity: An approximate representation of a cause-effect phenomenon in which one assumes the effect of independent input factors on the response variable
to be separable and independent of each other so that the effects are added together
to find the total effect of all factors present. One assumes that no interaction
effects are present when one assumes additivity.
Adjustment Parameter: A control parameter used to fine tune a performance
characteristic to bring this performance to target. The adjustment parameter has
little effect on variability in performance but it has a pronounced effect on average
performance.
ANOVA: A statistical procedure that uses mean sum of squares calculated from the
response data obtained in a statistically designed experiment to separate and then
compare the variability attributable to the different controlled and uncontrolled
factors influencing the response data. ANOVA uses the F-test.
Brainstorming: An information gathering/generation session in which design
engineers, production technicians, R&D scientists, and marketing (or customer) and
supplier representatives participate to help plan the Taguchi design optimization
experiments. Brainstorming establishes the basic objective (the performance
improvement one is seeking), collects the relevant process information, factors, and
theories that need verification, and also identifies the noise factors that may
influence performance.
Control Array: An array whose rows specify the settings of the control parameters
in a statistically designed experiment. This is the inner array in parameter
optimization experiments.
Control Parameter: A factor or a design parameter (DP) that influences process/
product performance and whose nominal setting can be selected by the designer or
process engineer.
Design of Experiments: A systematic, efficient, and statistically reliable procedure
for planning experiments to lead to the empirical discovery and estimation of
cause-effect relationships. The investigator sets several factors in these experiments
simultaneously and changes the factor settings from experiment to experiment in a
specified manner. On completion of all experiments the investigator uses ANOVA
to analyze the observed data.
Error Sum of Squares: The sum of the squares of the deviation of each individual observation in a statistical experiment from its expected average, the deviations being attributable only to noise (or uncontrolled) factors. Fisher in 1925 showed that for observations influenced by the different treatments of a single controlled factor,
Error Sum of Squares = Total Sum of Squares - Treatment Sum of Squares
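As a small numerical illustration of this decomposition (the data below are made up for this glossary and are not from the text), the error sum of squares can be found either from the within-treatment deviations or by subtraction, and the resulting mean squares give the F-ratio used in ANOVA:

# One controlled factor at three treatments, with invented response data.
data = {
    "A1": [12.1, 11.8, 12.4],
    "A2": [13.0, 13.4, 12.9],
    "A3": [11.2, 11.0, 11.5],
}
all_obs = [y for ys in data.values() for y in ys]
grand_mean = sum(all_obs) / len(all_obs)

total_ss = sum((y - grand_mean) ** 2 for y in all_obs)
treatment_ss = sum(len(ys) * (sum(ys) / len(ys) - grand_mean) ** 2
                   for ys in data.values())
error_ss = total_ss - treatment_ss          # Fisher's identity quoted above

# Mean squares and the F-ratio (dof: treatments - 1 = 2, error = 9 - 3 = 6).
mean_ss_treatment = treatment_ss / 2
mean_ss_error = error_ss / 6
F = mean_ss_treatment / mean_ss_error
print(round(total_ss, 3), round(treatment_ss, 3), round(error_ss, 3), round(F, 2))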
F-test: The F-test develops an F-statistic from the mean sums of squares in the
ANOVA table. The F-statistic is able to tell with a stated degree of confidence
which factors have a significant influence on the response data and which do
not.
Functional Characteristic: The basic characteristic of a product that influences
its functional performance, i.e., how the product functions in the hands of its user.
Hypothesis Testing: An experimental observation and data analysis procedure
used in empirically establishing the acceptability of a speculation, hypothesis, or
theory.
Inner Noise: Process factors usually not controlled during production that lead
to piece-to-piece variation in performance. (Note that this has no relationship with
the inner array.)
Kaizen: The Japanese word meaning continuous searching for incremental
improvement. Kaizen integrates R&D effort with actual production operation,
devoting resources to getting the process right and then continually making it
better. Kaizen makes pervasive use of statistical methods.
Loss Function: A quantitative statement of the adverse effect a product inflicts
on society in the form of inconvenience, monetary loss, or side effects, whenever
the product does not perform on target, as expected by the user/customer. Loss functions express the "unquality" of a product, in the form of a quadratic function
of the deviation from target performance, to provide information necessary to
justify improvement. A key application of loss functions is in the setting of
manufacturing tolerances after parametric design is complete.
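For example (a made-up illustration of the quadratic form; the target, cost and deviations are invented, not taken from the text), once the loss at one known deviation is available, the loss coefficient k and the loss at any other deviation follow directly:

# Quadratic loss: L(y) = k * (y - T)^2, with k fitted from one known cost figure.
T = 100.0                       # target value of the performance characteristic
cost_at_known_deviation = 80.0  # loss (Rs.) observed when y deviates by 2 units
k = cost_at_known_deviation / (2.0 ** 2)    # k = 20 Rs. per unit^2

def loss(y):
    """Quality loss for one item performing at y."""
    return k * (y - T) ** 2

print(loss(101.0))   # 20.0 -> one unit off target
print(loss(100.5))   # 5.0  -> half a unit off target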
Mean Adjustment Parameter: An adjustment parameter that controls the average value of the product's functional characteristic.
Mean Sum of Squares (Mean SS): The average variability among observations
obtained by dividing the corresponding sum of squares by its degrees of freedom
(dof). The Mean SS of the observed values of a random variable estimates the
random variable's variance. The F-test in ANOVA uses the Mean SS statistic.
Noise: Variations that cause a products performance to fluctuate while control
parameters remain fixed at their nominal settings. The sources are imperfections in
raw materials, variations in the manufacturing environment, and variations in the customer's environment.
Noise Array: The rows of this array specify the settings of noise factors in an
experiment. The noise array is the outer array in parameter optimization
experiments.
Noise Parameter: A source of disturbance that can be systematically varied or
observed to be at distinct levels in a parameter design experiment. (Not all noise
factors may be in the investigator's control.)

Null Hypothesis: Often it is impossible to determine if a speculation about some characteristic of a population is true or false. Statistical tests evaluate the acceptability
of such speculations, hypotheses, or theories, based on observed data. The original
speculation (such as the average monthly income of steel workers in the city
equals N) that one tests in such an analysis is called the null hypothesis.
Off-line Control: Control steps taken during product or manufacturing process
design before actual production begins. Off-line quality control methods are
technical aids for quality and cost control in manufacturing process and product
design. Quality control here includes quality planning, quality engineering, and
quality improvement.
On-line Control: Control steps taken during manufacturing to improve or maintain
product quality.
Orthogonal Array: An array (matrix) of numbers whose columns are pairwise
orthogonal. In every pair of columns all ordered pairs of numbers occur an equal
number of times. One uses orthogonal arrays (OAs) to provide the treatment
settings at which one conducts the all-factors-at-once statistical experiments.
Outer Noise: The variations imposed by the environmental conditions and
circumstances that occur after the product leaves the producer's shop. Examples
are heat, dust, humidity, wear-and-tear, storage effects, etc.
Parameter: A measure (such as the mean age μ of all humans living) that describes
a single characteristic of a population, often called a population parameter. One
calls the same measure describing a sample a statistic.
Parametric Design: A procedure involving extensive empirical investigation to
identify systematically the best settings of (a) process parameters to produce a
product meeting the required performance, (b) product design parameters such that
the product's performance will be robust while the product is in actual field use.
Parametric design uses OAs, statistical experiments, and Signal-to-Noise (S/N) ratios
to discover efficiently the optimum parameter settings. Parameter design exploits
design parameter and noise (DP-noise) interactions to maximize robustness.
Performance Statistics: These quantify quality. Design optimization experiments
aim at empirically obtaining performance statistics under different conditions of
factor and noise settings to help reach the optimum performance. On-target
performance and minimum variability about this target are the two key performance
objectives in Taguchi methods.
Population: A term used broadly to designate the members of a group to be
studied. A population consists of all data and individual characteristics that may be
observed. One calls the complete set of all observations about a single characteristic
of interest (such as age, thickness, delay, defects, etc.) a statistical population. Anything less than this complete set is a sample.
Quality Engineering: Measures taken during product/process design to achieve
(a) a manufacturing process that will deliver products on target, and (b) a product
that has robust performance and therefore continues to perform near its target
performance at the actual site throughout its nominal life of use.
Quality Function Deployment (QFD): A practical technique that translates the "voice of the customer" (customer requirements) into physical design
parameters through its key documents, namely, the product and component
deployment matrix, the product planning matrix, and the operating instruction
sheet.
Randomization: Randomization attempts to spread the effect of uncontrolled
factors and uncontrolled disturbances evenly over the different observations to
minimize any bias attributable to uncontrolled factors that might be influencing
outcomes of a statistical experiment.
Regression: A procedure in which one assumes that a mathematical relationship
of the type
yᵢ = β₀ + β₁xᵢ + εᵢ
can express the relationship between xᵢ, the explanatory variable or regressor, and yᵢ, the response variable. The procedure results in least-squares estimates of the parameters β₀, β₁, etc. It is recommended that one should establish the cause-effect
relationship between each of the regressors and the response before attempting
regression.
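A minimal least-squares fit of this straight-line model can be obtained with numpy (an illustrative sketch added here; numpy is assumed to be available and the data are invented):

import numpy as np

# Invented (x, y) observations assumed to follow y = b0 + b1*x + error.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# Least-squares estimates of b0 (intercept) and b1 (slope).
X = np.column_stack([np.ones_like(x), x])          # design matrix [1, x]
(b0, b1), *_ = np.linalg.lstsq(X, y, rcond=None)
print(round(b0, 3), round(b1, 3))                   # roughly 0.0 and 2.0 for these data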
Replication: A common technique in which one repeats experiments under
identical controlled conditions to average out any extreme effect of the uncontrolled
factors on a single experimental run. (See also randomization.) Replication in
statistical experiments aims at capturing noise variability.
Response Surface Methodology (RSM): A collection of mathematical and
statistical techniques useful for analyzing problems in which several independent
variables influence a dependent variable or response, and the goal is to optimize
this response.
Robust Design: A design approach that emphasizes reduction of performance
variation by reducing sensitivity to sources of variation.
Sample: A sample is a representative subset of the population, easier to handle,
count, and observe. It reflects all the characteristics typical of the larger set
the population. Sampling is the scientific, statistical procedure for collecting
observations from a population being investigated. Sampling enables one to obtain
a representative sample from the population to support the making of valid inferences
about the characteristics of that population.
Signal Parameter: A variable (a design parameter [DP]) that one uses to
change the value of the performance to attain a desired value of performance.
The designer tries to make the product very sensitive to changes in the
signal parameter. The designer does not however choose the setting of this
parameter.
Signal-to-Noise (S/N) Ratio: A mathematically transformed form of the quality/performance characteristic, the maximization of which minimizes quality loss and also improves (statistically) the additivity of control factor effects.
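The commonly used forms of the S/N ratio for the smaller-the-better, larger-the-better, and nominal-the-best criteria can be computed as shown below (a sketch added here for illustration; the replicate observations are invented and only the standard textbook formulas are assumed):

import math

def sn_smaller_the_better(y):
    # eta = -10 log10(mean of y^2); larger eta means smaller, more consistent y.
    return -10 * math.log10(sum(v * v for v in y) / len(y))

def sn_larger_the_better(y):
    # eta = -10 log10(mean of 1/y^2).
    return -10 * math.log10(sum(1.0 / (v * v) for v in y) / len(y))

def sn_nominal_the_best(y):
    # eta = 10 log10(ybar^2 / s^2), using the sample variance s^2.
    n = len(y)
    ybar = sum(y) / n
    s2 = sum((v - ybar) ** 2 for v in y) / (n - 1)
    return 10 * math.log10(ybar * ybar / s2)

replicates = [9.8, 10.1, 10.0, 9.9]   # invented observations from one trial
print(round(sn_nominal_the_best(replicates), 2))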

Standard Normal Variate: A random variable X that can take values ranging from -∞ to +∞ and has a probability distribution given by

f(x) = [1/(σ√2π)] exp{-[(x - μ)/σ]²/2}

is said to be normally distributed, commonly written in short as X ~ N[μ, σ], with mean μ and standard deviation σ. The special random variable Z, defined as Z = (X - μ)/σ, is also normally distributed, with mean zero (0) and standard deviation 1. Z is known as the standard normal variate or the random variable having the standard normal distribution, written as Z ~ N[0, 1].
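For instance (a made-up numerical illustration, not from the text), an observation X = 112 drawn from a population with μ = 100 and σ = 8 standardizes as follows:

# Standardizing an observation: Z = (X - mu) / sigma.
mu, sigma = 100.0, 8.0
X = 112.0
Z = (X - mu) / sigma
print(Z)   # 1.5, i.e. X lies 1.5 standard deviations above the mean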
Statistic: A statistic is a measure that describes some characteristic of interest
about a sample. One obtains a statistic by summarizing the observations taken
from each member in a sample in some suitable way. For example, the sample
average X̄ is a statistic; so are the sample standard deviation s and the range (max{xᵢ} - min{xᵢ}). Since one rarely knows the values of the real population parameters, one uses a sample statistic (e.g., X̄) as the estimator of a corresponding parameter (here μ) belonging to the population.
Statistically Designed Experiments: Several specially planned individual
experiments conducted together to observe the response of a system. Experimental
factors speculated to be influencing the system are set at certain pre-planned levels.
One analyses the observed data using ANOVA. The objective of such experiments
is to obtain evidence efficiently to guide the acceptance or rejection of certain
cause-effect relationships.
Statistically Significant: Statistical hypothesis test procedures tell us how
likely it is that a particular sample result (an observed data or statistic) has
originated from a certain population. If this likelihood is small but one has
actually observed the data or statistic in question, it is said that the results of the test are statistically significant and cannot reasonably be attributed to chance if the null hypothesis is correct. The term significant implies that based on the
observed data the investigator feels confident enough to reject the null hypothesis
that the sample (the source of the statistic) originated in the said population.
Sum of Squares: The sum of squares is the sum of the squares of deviations of
individual observations from their respective expected averages. When calculated
in a statistical analysis, it reflects the effect of a certain factor (or variability)
influencing these observations. In the ANOVA procedure one often uses sums of
squares to establish whether a particular variability is significant when viewed in
the background of uncontrolled noise.
System Design: The first step in design that utilizes technical knowledge to
reach the initial design to deliver functional performance (the desired functional
features of the product or process, not yet made robust). The aim of classical
engineering design is system design. The technology of a special field often plays
a major role in this step to reach the initial settings of the DPs.

Target Value: The desired value of a performance characteristic.


Tolerance Design: The procedure for setting the tolerances (variations from the
optimum parameter settings that one can accept) on product design parameters. It
keeps in perspective the loss to society that would be caused should the product's
performance deviate from the target.
Variability: The variability within a set of observed data or among the members
of a population indicates how far a typical single observation or member deviates
from its expected average (value or characteristic). For a group of observations,
variability is collectively determined by summing up the squares of the differences
of the individual observations from the average. One measures the variability in the
values of a randomly distributed variable by its variance.
References

[1] Taguchi, G. and Don Clausing (1990): Robust quality, Harvard Business
Review, January-February, pp. 65-75.
[2] Fisher, R.A. (1925): Statistical Methods for Research Workers, Oliver &
Boyd, Edinburgh.
[3] Rao, C.R. (1947): Factorial experiments derivable from combinatorial
arrangements of arrays, J. Roy. Stat. Soc., Suppl., vol. 9, pp. 128-39.
[4] Taguchi, G. (1986): Introduction to Quality Engineering, Asian Productivity
Organization, Tokyo.
[5] Phadke, Madhav, S. (1989): Quality Engineering and Robust Design,
Prentice Hall, Englewood Cliffs, New Jersey.
[6] Kackar, R.N. (1985): Off-line quality control, parameter design and the
Taguchi method, Journal of Quality Technology, vol. 17, pp. 176-209.
[7] Bendell, Tony (1989): Taguchi Methods: Proceedings of the 1988 European
Conference, Elsevier Applied Science, New York.
[8] Taguchi, G. and Yu-In Wu (1979): Introduction to Off-Line Quality Control,
Central Japan Quality Control Association, Nagoya.
[9] Juran, J.M. and Gryna, F.M. (1988): Juran's Quality Control Handbook,
4th ed., McGraw-Hill, New York.
[10] ISO 9000 International Standard (1987): International Standards Organization,
Geneva.
[11] John, Peter, W.M. (1990): Statistical Methods in Engineering and Quality
Assurance, Wiley Interscience, New York.
[12] Caulcutt, R. (1990): Putting process changes to factorial test, Process
Engineering, July, vol. 71, pp. 46-47.
[13] Suh, Nam, P. (1990): The Principles of Design, Oxford University Press,
New York.
[14] Dehnad, K. (1989): Quality Control, Robust Design, and the Taguchi Method,
Wadsworth, CA.
[15] Lochner, R.H. and Matar, J.E. (1990): Designing for Quality, ASQC Press,
Milwaukee, WI.
[16] Tribus, M. and Szonyi, G. (1989): An alternative view of the Taguchi
approach, Quality Progress, May, vol. 22, pp. 46-52.
[17] Box, G.E.P. (1988): Signal-to-noise ratios, performance criteria, and
transformations, Technometrics, February, vol. 30, pp. 1-31.
[18] Wadsworth, H.M. (1989): Handbook of Statistical Methods for Engineers
and Scientists, McGraw-Hill, New York.
[19] Noori, Hamid (1989): The Taguchi methods: Achieving design and output
quality, Academy of Management Executive, vol. 3, p. 322.
[20] Ishikawa, K. (1976): Guide to Quality Control, Asian Productivity
Organization, Tokyo.
[21] Leon, R.V., Shoemaker, A.C. and Kackar, R.N. (1987): Performance
measures independent of adjustment: An explanation and alternative to
Taguchi's signal-to-noise ratio, Technometrics, vol. 29, pp. 253-85.
[22] Duncan, A.J. (1974): Quality Control and Industrial Statistics, 4th ed.,
Richard D. Irwin, Homewood, Illinois.
[23] Filippone, S.F. (1989): Using Taguchi methods to apply the axioms of design,
Robotics and Computer-Integrated Manufacturing (U.K.), vol. 6, pp. 133-42.
[24] Box, G.E.P. and Behnken, D.W. (1960): Some new three level designs for the
study of quantitative variables, Technometrics, November, vol. 2, pp. 477-82.
[25] Barker, T.B. (1986): Quality engineering by design: Taguchi's philosophy,
Quality Progress, December, vol. 19, pp. 32-42.
[26] Nemhauser, G.L., Rinnooy Kan, A.H.G. and Todd, M.J. (1989): Optimization,
North-Holland, Amsterdam.
[27] ASQC (1993): Award Criteria, Malcolm Baldrige National Quality Award,
Milwaukee, WI.
[28] Philipose, S. and Venkateswarlu, P. (1980): Statistical quality control in
Indian industries, Quality Progress, April, vol. 13, pp. 34-37.
[29] Rosenblatt, A. and Watson, G.F. (1991): Concurrent engineering, IEEE
Spectrum, July, vol. 28, pp. 22-37.
[30] Shores, Dick (1989): TQC: Science, not witchcraft, Quality Progress,
April, vol. 22, pp. 42-45.
[31] Gryna, F.M. (1977): Quality costs: User vs. manufacturer, Quality Progress,
June, vol. 10, pp. 10-13.
[32] Nair, Vijayan, N. (1993): Analyzing data from robust parameter design
experiments, in Quality through Engineering Design, W. Kuo (ed.), Elsevier,
Amsterdam, pp. 191-98.
[33] Bagchi, Tapan, P. and Kumar, Madhu Ranjan (1993): Multiple-criteria robust
design of electronic devices, Journal of Electronics Manufacturing, vol. 3.
[34] Washio, Y. (1993): Steps in developing new products viewed from the
standpoint of Total Quality Control, in Quality through Engineering Design,
W. Kuo (ed.), Elsevier, Amsterdam, pp. 65-70.
[35] Kackar, R.N. (1993): Stratified replications, in Quality through Engineering
Design, W. Kuo (ed.), Elsevier, Amsterdam, pp. 191-98.
[36] Goldberg, David, E. (1989): Genetic Algorithms in Search, Optimization,
and Machine Learning, Addison Wesley, Reading (Mass.).
Index
Absolute main effect, 130 Columns (of orthogonal arrays), 81
Acceptance sampling plans, 178 Comparison of variances, 37
Additivity of factor effects, 46, 61, 62, Completely randomized experiment, 45
83, 84, 85, 93, 98, 100, 197 Computer chip design, 176
Adjusting mean to target, 112 Computer simulations, 94
Adjustment (scaling) parameter/factor, Concurrent statistic, 82
104, 141, 154, 197, 198 Conditional probability, 28
All-at-once experimentation, 42 Confidence interval, 23
American customers, 6 Confirmation (verification) experiment,
ANOVA (Analysis of Variance), 33, 34, 65, 98, 108, 111
49, 51, 72, 77, 122, 197 Constrained optimization, 142
Appraisal cost, 3 Constraints, 140
Asahi, 7 Continuous Quality Improvement
Assignable factors, 178 (CQI), 2
AT&T, U.S.A., 1, 59 Contrast, 91
Average Control
of a random variable, 25 charts (Shewhart), 174
variability, 47 definition of, 174
Axiomatic approach to design, 87 factor (DP), 100
Axioms for ideal design, 126 limits, 178
OA (control array), 81, 197
of variation in quality, 177
Bagchi, T.P., 140 parameter, 94, 197
Balancing properties of OAs, 90 Correction Factor (CF), 56
Barker, T.A., 155, 156 Correlation matrix, 153
Behnken, D.W., 153 Cost of countermeasure, 171
Bell Laboratories, U.S.A., 176 Covariance, 26
Blocks, 43 Cumulative distribution, 25
Box, G.E.P., 140, 152, 156, 159 Customers tolerance, 164
Brainstorming, 95, 197
Business impact of TQM, 176
Degree
of freedom (dof), 24, 37,47, 57, 75,
Cause-effect models, 32, 61 114
Cause-effect relationship, 33, 147 of predictability, 103
Central limit theorem, 23 Dehnad, K., 87
chi-square distribution, 32 Design
chi-square statistic, 32 of experiments, 41, 197
Classical statistical experiments, 72 parameter (DP), 79

Dispersion, 26 Independent random variables, 26


Dow Jones industrial average, 30, 31 Influence space, 44
Dynamic systems, 101 Information, 88
Inner array (control OA), 81, 197
Interaction effects, 16, 43, 61, 62, 85,
e (chance caused error), 33, 35 92, 116, 119, 142
Efficiency, 173 Ishikawa, Kaoru, 96
Empirical evaluation of performance, 42 Ishikawa (Cause-effect) diagrams, 11,
Energy transfer and S/N ratios, 86 12, 95, 96, 97
Error sum of squares, 47, 50, 51, 197 ISO 9000, 4, 172, 179
Estimator, 23 ISO 9004, 176
Expected mean SS (treatment), 53
Experimental error, 47
Experimental units, 44 Juran, J.M., 174

Factors affecting performance, 10 Kackar, R.N., 94, 153


Failure cost, 3 Kaizen, 2, 3, 179, 198
Feasible region, 140 Kirchhoff current law, 13
Feasible set of solutions, 155 Kumar, Madhu Ranjan, 140
Filippone, S.F., 123, 124
Fisher, R.A., 72
Flowcharting, 95 L4 OA, 191
Fractional factorial designs, 90, 92 L8 OA, 58, 66, 70, 88, 104, 139, 191
Full factorial designs, 9, 65, 92, 93 L9 OA, 63, 64, 104, 127, 192
Functional performance, 163, 198 L12 OA, 192
Functional Requirements (FRs), 87, L16 OA, 109, 193
125 L18 OA, 194
F-ratio, 54 L25 OA, 196
F-statistic, 30, 37, 39, 54, 58 L27 OA, 129
F-test, 41, 53, 55, 74, 198 L32 OA, 78
Lagrange's multipliers, 143
Laplace variable, 13
Genetic algorithms, 159 Larger-the-better criterion, 84
Graphic evaluation of effects, 68, 73, Latin square designs, 16
103, 112, 113, 133, 134, 137-140 Level (see treatment)
Gryna, F.M., 5 Lifetime cost, 5, 11, 177
Linear cause-effect models, 33
Linear graphs, 116
Histograms, 178 Linearity, 46
Hypothesis Loss
alternative, 28 after adjustment, 82, 83
null, 28, 34, 35 functions, 6 , 13, 82, 162, 163, 198
testing, 28, 34 functions for mass produced items,
171
Ina Tile Company, 11, 71 imparted to society, 7, 13

Main effects (main factor effects), 16, Parameter design, 12, 14, 15, 199
55, 61, 90, 92, 93, 142 Parametric
Manufacturing experiment plan, 83
cost, 5 optimization experiments, 109
size intervals, 168 Pareto optimality, 144, 155
tolerances, 164, 165 Partial factorial designs, 62
Mathematical models, 123, 147 Performance
Mazda, 81 process, 10
Mean product, 10
^treatment* 54 PerMIAs, 136, 153
Sum of Squares (Mean SStor), 39, Phadke, M.S., 140, 159
48, 54 Population, 18, 199
Sum of Squares of deviations (Error Prediction model, 121, 122
Sum of Squares), 47 Prevention
MIL-STD-105D, 178 by quality design, 11
Monte Carlo simulation, 100, 144, 155, cost, 3
156, 159 Probability, 18
Multiple objective optimization, 143 Process
Multiple regression, 150 control, 182, 183
design, 178
performance, 176
Nair, Vijayan, N., 99 Product
New York Stock Exchange, 30 design, 178
Nippon Denso Company, 94 performance, 10, 176
Noise factor array (noise OA), 81, 198 producibility of, 177
Noise factors, 79, 101, 198 saleability of, 176
Nominal-the-best criterion, 83 Productivity, 177
Non-linear effects, 98
Normal distribution, 21, 22
QFD (Quality Function Deployment),
4, 173, 199
Off-line quality control, 2, 199 Quadratic loss function, 81
One-factor designed experiment, 44 Quality
One-factor-at-a-time studies, 179 definition of, 1, 6, 173
On-line quality control, 2, 199 engineering, 2, 199
On-target performance, 4 in design, 79
Operating cost, 5 loop, 176
Optical filter manufacturing, 107 management methods, 174
Optimization, 41, 70, 77, 79, 84, 107,
123
Orthogonal Random
Arrays (OA), 42,44, 72, 86, 90, 114, factors, 178
199 sample, 18, 22
matrix experiments, 63 variable, 21, 22
Orthogonally designed experiments, 9 Randomization of experiments, 34, 45
Outer array (noise OA), 95 200

Range, 19 Statistical experiment, 43


Regression, 33, 122, 147, 200 Statistical Process Control (SPC), 1
Relative likelihood, 45, 58, 70, 94, 200 Statistically significant observations, 29,
Representative subset, 18 201
Response(s) Statistics, 18, 177
selection of, 43 Steepest ascent, 150
Surface Method (RSM), 98,150,200 Suh, Nam, P., 87, 123, 126, 133
tables, 65 Sum of squares, 55, 201
Responsiveness, 173 Sum of squares of deviations, 47
Robust System (Functional) design, 12, 13,
design, 4, 5, 10, 13, 41, 94, 99, 154, 201
200 Szonyi, G., 155, 156
performance, 12, 14
Robustness, 81
Rows (of orthogonal arrays), 81 Taguchi applications outside Japan,
Rules for selecting OAs, 91, 92 180
R&D cost, 5 Taguchi, G., 94,98,100,103,115,116,
123, 136, 157, 159, 162, 163,
165
Saleability of products and services, Taguchi methods, 1
176 Taylor series expansion, 162
Sample Temporal information, 179
mean, 19, 37 Tolerance, 81
standard deviation, 19 design, 12, 201
statistic, 23, 29 Tolerance-caused noise, 132
Sampling, 21 Total
Sampling plans, 175 company productivity, 172
Scaling (levelling) factors, 102 Total Quality Management (TQM), 172,
Sensitivity analysis, 179 173
Separability, 46, 61, 93 Total sum of squares, 39, 50, 51
Signal factors, 102, 200 Treatments) or level, 35, 43, 46, 97
Signal-to-noise (S/N) ratios, 7, 16, 68, Trial-and-error experiments, 9
74, 79, 81, 83, 84, 94, 99, 103, Tribus, M., 155, 156
110, 128, 144, 200 Two-step optimization procedure, 71,
Smaller-the-better criterion, 84 84, 103, 153, 159
Societal loss, 94, 162, 164 Two-way table, 120
Sony-Japan, 6, 81 Type I and Type II errors, 28
Sony-U.S.A., 6, 81 t-statistic, 24, 30
Sources of noise, 94
Specification limits, 7
treatment* 52 Unbiased estimator, 23
Standard normal variate, 21, 201 Uncertainty, 18
Standard OAs, 114 Unconstrained maximization, 143
Static systems, 101 Uncontrolled factors, 10, 40, 45
Statistical design of experiments, 9, 34, UNIX V, 179
41, 42, 49 Upper a percentage point, 54

Variability Variation
between-factor, 40, 47 between treatments, 40
between-treatment, 50 within treatment, 40
in quality, 8, 104, 175 Verification experiments, 65, 98
of performance, 70, 79, 202 VLSI manufacture, 4
within-factor, 40, 47
within-treatment, 48, 50
Yates procedure, 67
Variance
definition of, 19, 26
effects, 70 Z-statistic, 29
pooled, 27 Z-test, 31
Divided into 12 easy-to-read chapters, the
book distills the methods and experience of
those in industry who introduced and then
embraced Taguchi Methods as a regular
part of their own product/process innovation
effort. It ends by linking Taguchi Methods
with TQM (Total Quality Management), and
by providing an improved process design
to upgrade product/process engineering
capability within an organization.

Tapan P. Bagchi is Professor, Industrial and Management Engineering Programme, Indian
Institute of Technology, Kanpur. Earlier, he
distinguished himself as Staff Corporate
Planner and Engineer at the well-known
Exxon Corporation, USA, and the Esso
Refinery Petrochemical Plant, Canada.

Professor Bagchi has presented several papers at various conferences held in India
and abroad and has published extensively in
national and international journals, including
Opsearch, Canadian Operations Research
Society Journal, Journal of Applied Probability,
J. Engg. Math, and AICHE. He is the author
of two books, Interactive Relational Data Base
Design: A logic programming implementation
and Numerical Methods in Markov Chains
and Bulk Queues.

Prentice-Hall of India
New Delhi