
SIX SIGMA

AND
BEYOND
Design for
Six Sigma
SIX SIGMA AND
BEYOND
A series by D.H. Stamatis
Volume I
Foundations of Excellent Performance
Volume II
Problem Solving and Basic Mathematics
Volume III
Statistics and Probability
Volume IV
Statistical Process Control
Volume V
Design of Experiments
Volume VI
Design for Six Sigma
Volume VII
The Implementation Process
ST. LUCIE PRESS
A CRC Press Company
Boca Raton London New York Washington, D.C.
D. H. Stamatis
SIX SIGMA
AND
BEYOND
Design for
Six Sigma

This book contains information obtained from authentic and highly regarded sources. Reprinted material
is quoted with permission, and sources are indicated. A wide variety of references are listed. Reasonable
efforts have been made to publish reliable data and information, but the author and the publisher cannot
assume responsibility for the validity of all materials or for the consequences of their use.
Neither this book nor any part may be reproduced or transmitted in any form or by any means, electronic
or mechanical, including photocopying, microfilming, and recording, or by any information storage or
retrieval system, without prior permission in writing from the publisher.
The consent of CRC Press LLC does not extend to copying for general distribution, for promotion, for
creating new works, or for resale. Specific permission must be obtained in writing from CRC Press LLC
for such copying.
Direct all inquiries to CRC Press LLC, 2000 N.W. Corporate Blvd., Boca Raton, Florida 33431.

Trademark Notice:

Product or corporate names may be trademarks or registered trademarks, and are
used only for identification and explanation, without intent to infringe.

Visit the CRC Press Web site at www.crcpress.com

© 2003 by CRC Press LLC
St. Lucie Press is an imprint of CRC Press LLC
No claim to original U.S. Government works
International Standard Book Number 1-57444-315-1
Library of Congress Card Number 2001041635
Printed in the United States of America 1 2 3 4 5 6 7 8 9 0
Printed on acid-free paper

Library of Congress Cataloging-in-Publication Data

Stamatis, D. H., 1947-
Six sigma and beyond : design for six sigma, volume VI
p. cm. -- (Six sigma and beyond series)
Includes bibliographical references.
ISBN 1-57444-315-1 (v. 1 : alk. paper)
1. Quality control--Statistical methods. 2. Production management--Statistical
methods. 3. Industrial management. I. Title. II. Series.
TS156 .S73 2001
658.5'62--dc21    2001041635


To

Christine



Preface

A collage of historical facts brings us to the realization that concerns about quality
are present not only in the minds of top management when things go wrong but also
in the minds of customers when they buy something and it does not work.
We begin the collage 20 years ago, with Wayne's (1982) proclamation in The New York Times of management gospel gone wrong. Wayne quoted two Harvard professors, Hays and Abernathy, as saying, "You may have your eye on the wrong ball." In a discussion of the cost differential between American and Japanese companies, Wayne said that American business executives argue that the Japanese advantage is largely rooted in factors unique to Japan: lower labor costs, more automated and newer factories, strong government support, and a homogeneous culture.
The professors, though, argue differently, Wayne said. They claim that Japanese businesses are better because they pay attention to such basics as a clean workplace, preventive maintenance for machinery, a desire to make their production process error free, and an attitude that "thinks quality."
Other authors writing in the early 1980s made similar points. Blotnick (1982) wrote, "If it's American, it must be bad." The headline of an anonymous article in The Sentinel Star (1982) referred to retailers relearning the lesson of "customers always right." Ohmae (1982) wrote an article titled "Quality control circles: They work and don't work." Imai (1982) wrote that unless organizations control (eliminate) their waste, they would have problems. He identified waste as:
1. The waste of making too many units
2. The waste of waiting time at the machine
3. The waste of transporting units
4. The waste of processing itself
5. The waste of inventory
6. The waste of motion
7. The waste of making defective units
Imai pointed out some of Toyota's advantages, specifically its autonomation system. Autonomation means that the machine is equipped with human wisdom to stop automatically whenever something goes wrong with it.
Wight (1982) urged management to learn to live with the truth. When Honda rolled out its first American-built car, Lewin (1982) wrote, "Japanese bosses ponder mysterious U.S. workers." Among other things, the Japanese wondered why Americans have so many lawyers, Lewin pointed out. Lohr (1983) wrote that it is just wishful thinking to say that Japan cannot catch up in software. That is what a lot of people were saying about the semiconductor industry a few years ago and the auto industry a decade ago.


Holusha (1983) wrote of the U.S. striving for efficiency. Serrin (1983) described a study that showed that the work ethic is alive but neglected. Halloran (1983) wrote that an Army staff chief faulted industry for producing defective materials.
Almost twenty years later, Zachary (2001) reported that Toyota strives to retain its benchmark status by continuing its focus on the Kaizen approach and genchi genbutso (the "go and see" attitude). Winter (2001) wrote that GM is now trying to show it understands the importance of product. McElroy (2001) wrote, "Customers don't care how well your stock is performing. They do not care that you are the lowest producer. They do not care that you are the fastest to market. All they care about is the car they are buying. That is why it all comes down to product."
Morais (2001), quoting O'Connell (2000), claimed that over 100,000 focus groups were fielded in 1999, even though marketing and advertising professionals have mixed feelings about their value. Steel (1998, pp. 79 and 202-205) expressed industry's ambivalence about focus groups. Among other things, he claimed that they are not very representative at all. The odd thing about focus groups is that we still use them to predict the sales potential of new products, primarily because of their instant judgments, non-projectable conclusions, and comparatively low costs, even though we know better; that is, we know that we could do better by learning about consumers' product needs and attitudes and understanding their lives.
In the automotive industry, the evidence that something is wrong is abundantly clear, as Mayne et al. (2001) have reported. Here are some key points:
1. National Highway Traffic Safety Administration records showed more than 250 vehicle recalls as of mid-June 2001, well on pace to exceed the previous year's record 12-month total of 483. The 2000 total broke the previous high of 370 set in 1999 and shattered the next-highest mark of 328, set the year before.
2. Numbers of recalled vehicles have risen correspondingly: 23.4 million in 2000, 19.2 million the previous year, and 17.2 million in 1998.
3. The number of vehicles snared by non-compliance recalls, issued for failure to meet the Federal Motor Vehicle Safety Standards, increased to 4.5 million in 2000. This represents a 61% hike compared to 1999's 2.8 million, and it is nearly three times the 1.6 million recorded in 1998.
4. A total of 18.9 million vehicles were recalled in 2000 because of safety-related defects. That is 81% of the overall recall total and a sharp increase compared to 1999 and 1998, when safety-related defects prompted recalls of 16.4 million and 15.6 million vehicles, respectively. Even more telling, perhaps, it is 9% more than the 17.3 million new light vehicles sold last year in the U.S.
5. A supplier executive, who wants to remain anonymous, bristles at the suggestion that quality problems fall at the feet of suppliers. He says quality has suffered because of the Big Three's relentless pursuit of cost reduction. He also suggests that buyers at the Big Three are evaluated primarily on the basis of cost savings rather than on the quality of the parts they procure. In the final analysis, Americans build to print and specification, whereas the Japanese build to function.


Powerful statements indeed, yet I could go on with examples involving home appliances, food, electronics, health devices, and many other types of products. However, the point is that the problems we are having are not new. The actions necessary to fix these problems are not new. What we need is a new commitment to pursue customer satisfaction and mean it. We must put quality in the design of all our products and services in such a way that the customer sees value in them.
We must become like a philologist who believes that there is truth and falsehood in a comma. The pleasures of philology are such that by merely changing the placement of a comma, you can make sense out of nonsense; you can claim a small victory over ignorance and error. So, we in quality must learn to persevere and learn as much as possible about the customer. We must make strides to identify what customers need, want, and expect and then provide them with that product or service. We must do what the French philosopher Etienne Souriau observed: pour inventer il faut penser à côté. To invent, you must think aside; that is, slightly askew. Or we must follow the lead of Emily Dickinson when she wrote, "My business is circumstances," and her readers understood the serendipity of ideas and the rewards of looking aside to see those ideas' unlikely, or at least less than obvious, connections.
This is the essence of Design for Six Sigma (DFSS). The upfront analysis and
investigation of the customer is of paramount importance. So is trying to identify
what is really needed (trade-off analysis) to make the difference. The DFSS approach
is based on a systems overhaul and a new mindset to cure the ailments of organi-
zations (profitability) and provide satisfaction to the customer (functionality and
value). It is a proactive approach rather than a reactive approach, unlike the regular
six sigma methodology. DFSS is a methodology that works for the future, rather
than the present or past.
DFSS is a holistic system that is based on challenging the status quo and providing a product or a service that not only is accepted by the customer but is financially rewarding for the organization. To do this, of course, managers must take risks. They must allow their engineers to create robust designs, and that means that the traditional Y = F(x) is not good enough. Now we must look for Y = F(x, n). In these equations, x is the traditional customer characteristic (cascaded to smaller and more precise characteristics), but now we add the n, which is the noise. In other words, we must design our products and services in the presence of noise for maximum satisfaction.
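To make the shift from Y = F(x) to Y = F(x, n) concrete, the short Python sketch below (not from this volume; the transfer function F, the noise model, and the target value are hypothetical stand-ins) simulates a response in the presence of sampled noise and summarizes how each candidate design setting behaves, which is the essence of judging a design by Y = F(x, n) rather than by its nominal value alone.

# Illustrative sketch only: compare candidate design settings x by simulating
# Y = F(x, n) over random noise n, preferring the setting whose response stays
# close to target with the least variation. F, the noise model, and the target
# are assumptions for the example, not values taken from this book.
import random
import statistics

TARGET = 10.0  # hypothetical "nominal the best" target for the response Y

def F(x, n):
    """Hypothetical transfer function: the response depends on the design
    parameter x and on an uncontrollable noise value n."""
    return x + 0.5 * x * n  # assumed: sensitivity to noise grows with x

def evaluate(x, trials=1000):
    """Simulate Y = F(x, n) over sampled noise and summarize the behavior."""
    ys = [F(x, random.gauss(0.0, 0.2)) for _ in range(trials)]
    return statistics.mean(ys), statistics.stdev(ys)

if __name__ == "__main__":
    random.seed(1)
    for x in (8.0, 10.0, 12.0):
        mean_y, sd_y = evaluate(x)
        print(f"x = {x:5.1f}  mean Y = {mean_y:6.2f}  std dev = {sd_y:5.2f}  "
              f"bias from target = {abs(mean_y - TARGET):.2f}")
    # A robust choice keeps the mean close to TARGET while keeping the
    # standard deviation (the effect of the noise n) small.

Later chapters pursue the same idea more formally with Monte Carlo methods, designed experiments, and parameter and tolerance design rather than this brute-force simulation.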
The best way to predict the future is to invent it. This suggests that the best way
to know what is coming is to put yourself in charge of creating the situation you
want. Be purposeful. Look at what is needed now, and set about doing it. Action
works like a powerful drug to relieve feelings of fear, helplessness, anger, uncertainty,
or depression. Mobilize yourself as well as the organization because you will be the
primary architect of your future.
One of the keys to being successful in your efforts is to anticipate. Accept the
past, focus on the future, and anticipate. Consider what is coming, what needs to
happen, and how you can rise to the occasion. Stay loose. Remain flexible. Be light on your feet. Instead of changing with the times, make a habit of changing a little ahead of the times. This change can happen with Designing for Six Sigma and beyond. The only requirement is that we must take advantage of the future before
we are ready for it. I am reminded of Flint's (2001), Visnic's (2001), and Mayne's (2001) comments, respectively. American automotive companies, for example, have abandoned the car market because they do not make money on cars. They forget that the Japanese companies not only sell cars but make money from them. So what does Detroit do to sell? It focuses on price rebates, discounts, 0% financing, lease subsidies, and so on. What does the competition do? Not only have they developed an engine/transmission with variable valve breathing, they are already using it. We are trying to perfect the five-speed, and the competition is installing six speeds; we talk about CVTs, and Audi is putting one in its new A4. We are focusing on 10 years and 150,000 miles of reliability, and our competitors are pushing for 15 years and 200,000 miles of reliability.
In diesel technology, the Europeans and Americans are worlds apart. Even in this age of globalization, the light-duty diesel markets in Europe have become more sophisticated and demanding, to the point where policy makers have recognized the environmental advantages of diesel and have allowed new diesel vehicles to prove themselves as efficient, quiet, and powerful alternatives. What do we do? Our policy makers have created a regulatory structure that greatly impedes the widespread use of diesel vehicles. Consequently, Americans may be denied the performance, fuel economy, and environmental benefits of advanced diesel technology.
A third example comes again from the automotive world, in reference to fuel economy. One of the issues in fuel economy is the underbody design. Early on, American companies paid great attention to the design of the underbody. As time went on, the emphasis shifted to shapes that channel airflow over the bodywork, instead of what lies beneath. But while U.S. automakers were accustomed to being on top, BMW AG was redefining airflow from the ground up. Underbodies have been a priority with the Munich-based automaker since 1980. That is when BMW acquired its first wind tunnel and began development of the 1986 7-series, code-named E32. Today, underbodies rank second behind rear ends, wheel housings, and cooling airflow. As of right now, the initiative for BMW has gained them 2 miles per hour.
When we talk about customer satisfaction, we must do certain things that will help or improve the image of the organization in the perception of the customer. We are talking about prestige and reputation. Prestige and reputation differ from each other in three ways:
1. Reputation applies to individual products or services, while prestige is a characteristic of the organization as a whole.
2. Reputation can be measured on absolute scales, but prestige can only be judged in relative terms.
3. Prestige is judged relative to other organizations; reputation is not.
It is prestige that we are interested in from a Design for Six Sigma perspective. The reason for this is that prestige compels each organization to perform better than its competitors, thereby promoting excellence and continuously raising industry standards, not only for the customer but also for the competitors. To achieve prestige,
we must be cognizant of some basic yet inherent items, including the following:


• Be ready to engage our customers in conversation every second of the day. In the digital age, this means having an interactive medium where people can tell you what they think about your brand and your product or service whenever they have an idea, a complaint, or a compliment, or when they just want to air some ideas with somebody who knows where you are going. Easy places to start are always-open discussion boards and focus groups. When you get more sophisticated, you can try regularly scheduled special events or special meetings. The best solution? Set up an Internet communication structure that lets you have a 24 × 7 open line of communication.


• Make customer relations a two-way street. Today's customers not only want to be heard, they want to respond. They want to engage you in conversation, brainstorming, and relationship building. To facilitate this, you may want to consider building two-way communication into your Web site that provides means for real-time sharing of ideas, debate, and interaction. Another way to facilitate this is through moderated chat rooms or other more organized techniques. Online events and presentations allow you to show off new ideas or developments to customers, then take questions in a moderated and controlled manner, across time zones and around the world. Online meetings allow you to have customers attend by invitation only. Keys to success: make sure your communication is honest and credible and that the idea flow is going both ways. In today's world, an organization can design digital communication systems that can provide instant information. This system can be used to brainstorm, to test concepts and features, and, more importantly, to consider trade-offs.


• Get your customers to help design your products and services. Most organizations ignore the best product and service designers and consultants: people who know your product or service inside out and know intimately what the market needs, more often than not. They are your customers. They can tell you a lot more than just what is right and wrong with your current products. They can tell you what they really need in future products in functional terms.


• Let your customers get to know each other. Word of mouth is a concept that no one should ever underestimate in the Internet age. The power of conversation has the lightning-quick ability to create trends, fads, and brands. People talking to each other in a moderated environment and sharing unprompted, honest opinions about your brand of product or service remains the number one way for you to get new satisfied customers.


• Make your customers feel special. When you get down to it, we have been talking about delighting the customer for at least 20 years, but that is where we have stopped. We have forgotten that relationships with customers should not be any different from relationships you have with close friends. You need to keep in touch. You need to be honest. You need to tell people they matter to you. To facilitate this special attitude, an organization may have special days for the customer, special product anniversaries, and so on. However, in every special situation, representatives of the organization should be identifying new functionalities for new or pending products and shortcomings of the current products.


• Never try to understand your customer. (This is not a contradiction of the above points. Rather, it emphasizes the notion of change in expectations.) Customers are fickle. They change. As a consequence, the organization must be vigilant in tracking the changes, the wants, and the expectations of customers. To make sure that customers are being satisfied and that they will continue to be loyal to your products and services, make sure you have a system that allows you to listen, listen, and then listen some more to what they have to say.


• Shrink the globe. The world is shrinking. It has become commonplace to discuss the information revolution in terms of the creation of global markets. To "think global" is in vogue with the majority of large corporations. But global thinking presupposes that we also understand the global customer. Do we really understand? Or do we merely think we do? How do we treat all our customers as though they live right next door? One way, of course, is through a combination of modern communication technology and old-fashioned neighborliness. You need good, solid, two-way conversation with someone half a globe away that is as immediate, as powerful, and as intimate as a conversation with someone right in front of you. This obviously is difficult and demanding, and in the following chapters we are going to establish a flow of disciplines that perhaps can help us in understanding those global customers with their specific needs, wants, and expectations.


• Design for customer satisfaction and loyalty. Some time ago I heard a saying that is quite appropriate here. The saying goes something like "everything is in a state of flux, including the status quo." I happen to agree. Never in human history has so much change affected so many people in so many ways.
The winds of change keep building, blowing harder than ever, hitting more people, reshaping all kinds of organizations. Incredible as it may sound, all these changes are happening even in organizations that think that they have understood the customer and the market. To their surprise, they have not. How else can we explain some of the latest statistics, which tell us the following:
1. Business failures topped 400,000 in the first half of the 1990s and exceeded 500,000 by the end of the decade. That is double the number of the previous decade. The same trend is projected for the first decade of the new century.


2. Eighty-five percent of all U.S. organizations now outsource services once performed in house.
3. More than three million layoffs have occurred in the last five years.
What can be done to reverse this trend? Well, some will ride the wind based only on their size, some will not make it, and some will make it. The ones that will make it must learn to operate under different conditions, conditions that delight the customer with the service or the product that an organization is offering. The organization must learn to design services or products so that the customer will see value in them and cannot stand it until they have possession of either one. As the customer's desire for the service or product increases, the demand for quality will increase.
Designing for six sigma is not a small thing, nor should it be a lighthearted undertaking. It is a very difficult road to follow, but the results are worthwhile.
The structure of this volume is straightforward and follows the pattern of the DFSS model, which is Recognize, Define, Characterize, Optimize, and Verify (RDCOV). Specifically, within each stage of the model, we will explain some of the most important tools and methodologies.
Our introduction is the stage where we address the basic and fundamental characteristics of any DFSS program. It is our version of the Recognize step. Specifically, we address:
1. Partnering
2. Robust teams
3. Systems engineering
4. Advanced quality planning
We follow with the Define stage, where we discuss customer concerns by first explaining the notion of function and then continuing with three very important methodologies in the pursuit of satisfying the customer. Those methodologies are:
1. Kano model
2. Quality function deployment (QFD)
3. Conjoint analysis
We move into a discussion of best in class by discussing benchmarking. We continue the discussion with advanced topics relating to design, specifically:
1. Monte Carlo
2. Finite element analysis
3. Excel's Solver
4. Failure mode and effect analysis (FMEA)
5. Reliability and R&M
6. DOE
7. Parameter design
8. Tolerance design


We continue with relatively short discussions of manufacturing topics, specifically:
1. Design for manufacturing/assembly (DFM/DFA)
2. Mistake proofing
Our discussion on miscellaneous topics is geared to enhance the overall design function and to sensitize readers to the fact that the pursuit of DFSS is a team orientation with many disciplines interwoven to produce the optimum design. Of course, we do not pretend to have exhaustively identified all methodologies and all tools, but we believe that we have identified the most critical ones. Specifically, we discuss:
1. Theory of constraints
2. Design review
3. Trade-off analysis
4. Cost of quality
5. Reengineering
6. GD&T
7. Metrology
We follow with a chapter on innovative methodologies in pursuing DFSS, such as signal process flow, axiomatic designs, and TRIZ, and then we return to classic discussions on value analysis, project management, an overview of mathematical concepts for reliability, and Taylor's theorem and financial concepts.
We conclude our discussion of Design for Six Sigma and Beyond with a formal summary, in matrix format, of all the tools used, following the DCOV model:
1. Define
2. Characterize
3. Optimize
4. Verify

REFERENCES

Anon., Retailers Relearning Lesson of Customers Always Right, The Sentinel Star, Jan. 17, 1982, p. 4.
Blotnick, S., If It's American, It Must Be Bad, Forbes, Feb. 1, 1982, p. 146.
Flint, J., Where's the Cars? You Can Make Money on Cars If You Really Want To, Ward's AUTOWORLD, Sept. 2001, p. 21.
Halloran, R., Chief of Army Assails Industry on Arms Flaw, The New York Times, Aug. 9, 1983, p. 1.
Holusha, J., Why G.M. Needs Toyota: U.S. Striving for Efficiency, The New York Times, Feb. 16, 1983, p. 1 (of business section).
Imai, M., From Taylor to Ford to Toyota: Kanban System Another Challenge from Japan, The Japan Economic Journal, Mar. 30, 1982, p. 12.
Lewin, T., Japanese Bosses Ponder Mysterious U.S. Workers, The New York Times, Nov. 7, 1982, p. 2 (of business section).
Lohr, S., Japan's Hard Look at Software, The New York Times, Jan. 9, 1983, p. 3 (of business section).
Mayne, E., Bottoms Up! Fuel Economy Pressure Underscores Underbody Debate, Ward's AUTOWORLD, Sept. 2001, p. 58.
Mayne, E. et al., Quality Crunch, Ward's AUTOWORLD, July 2001, p. 14.
McElroy, J., Rendezvous Captures Consumer Interest, Ward's AUTOWORLD, Jan. 2001, p. 12.
Morais, R., The End of Focus Groups, Quirk's Marketing Research Review, May 2001, p. 154.
O'Connell, V., advertising column, Wall Street Journal, Nov. 27, 2000, p. B21.
Ohmae, K., Quality Control Circles: They Work and Don't Work, The Wall Street Journal, Mar. 29, 1982, p. 2.
Serrin, W., Study Says Work Ethic Is Alive But Neglected, The New York Times, Sept. 5, 1983, p. 4.
Steel, J., Truth, Lies and Advertising, Wiley, New York, 1998.
Visnic, B., Super Diesel! Anyone in the Industry Will Tell You: Forget Hybrids; Diesels Are Our One-Stop Cure-All, Ward's AUTOWORLD, Sept. 2001, p. 34.
Wayne, L., Management Gospel Gone Wrong, The New York Times, May 30, 1982, p. 1 (of business section).
Wight, O.W., Learning To Tell the Truth, Purchasing, May 13, 1982, p. 5.
Winter, D., One Last Speed, Ward's AUTOWORLD, July 2001, p. 9.
Zachary, K., Toyota Strives To Retain Its Benchmark Status, Supplement to Ward's AUTOWORLD, Aug. 6-10, 2001, p. 11.



Acknowledgments

I want to thank Dr. A. Stuart for granting me permission to use some of the material
in Chapter 14. The summaries of the different distributions and reliability have added
much to the volume. I am really indebted to him for his contribution.
As with the other volumes in this series, many people have helped in many ways
to make this book a reality. I am afraid that I will miss some, even though their help
was invaluable.
Dr. H. Hatzis, Dr. E. Panos, and Dr. E. Kelly have been indispensable in review-
ing and commenting freely on previous drafts and throughout this project.
I would like to thank Dr. L. Lamberson for his thoughtful comments and sug-
gestions on reliability, G. Burke for his suggestions on R&M, and R. Kapur for his
valuable comments about the flow and content of the material.
I want to thank Ford Motor Company and especially Richard Rossier and David
Kelley for their efforts to obtain permission for using the introductory material on
robust teams.
I want to thank Prentice Hall for granting me permission to use the material on conjoint and MANOVA analysis in Chapter 2. That material was taken from the 1998 book Multivariate Data Analysis, 5th ed., by J.F. Hair, R.E. Anderson, R.L. Tatham, and W.C. Black.
I want to thank McGraw-Hill and D.R. Bothe for granting me permission to use some material on six sigma taken from the 1997 book Measuring Process Capability, by D.R. Bothe.
I want to thank J. Wiley and the Buffa Foundation for granting me permission to use material on the Monte Carlo method from the 1973 book Modern Production Management, 4th ed., by E.S. Buffa.
I want to thank the American Supplier Institute for granting me permission to
use the L8 interaction table as well as some of their OA and linear graphs.
I want to thank M.A. Anleitner, from Livonia Technical Services, for his con-
tribution to the topic of function in Chapter 2, for helping me articulate some of
the key points on APQP, and for serving as a sounding board on issues of value
analysis. Thanks, Mike.
I also want to thank J. Ondrus, from General Dynamics Land System Divi-
sion, for introducing me to Value Analysis and serving as a reviewer for earlier drafts
on this topic.
I want to thank T. Panson, P. Rageas, and J. Golematis, all of them certified
public accountants, for their guidance and help in articulating the basics of account-
ing and financial concerns presented in Chapter 15. Of course, the ultimate respon-
sibility for interpreting their guidance is solely mine.
Special thanks go to the editors at CRC for putting up with me, as well as for
transforming my notes and the manuscript into a user-friendly product.


I want to thank the participants in my seminars for their comments and recom-
mendations. They actually piloted the material in their own organizations and saw
firsthand the results of some of the techniques and methodologies discussed in this
particular volume. Their comments were incorporated with much appreciation.
Finally, as always, this volume would not have been completed without the
support of my family and especially my navigator, chief editor, and supporter,
my wife, Carla.


About the Author

D. H. Stamatis, Ph.D., ASQC-Fellow, CQE, CMfgE, is president of Contemporary Consultants, in Southgate, Michigan. He received his B.S. and B.A. degrees in marketing from Wayne State University, his master's degree from Central Michigan University, and his Ph.D. degree in instructional technology and business/statistics from Wayne State University.
Dr. Stamatis is a certified quality engineer for the American Society of Quality
Control, a certified manufacturing engineer for the Society of Manufacturing Engi-
neers, and a graduate of BSI's ISO 9000 lead assessor training program. He is a
specialist in management consulting, organizational development, and quality sci-
ence and has taught these subjects at Central Michigan University, the University
of Michigan, and Florida Institute of Technology.
With more than 30 years of experience in management, quality training, and
consulting, Dr. Stamatis has served and consulted for numerous industries in the
private and public sectors. His consulting extends across the United States, Southeast
Asia, Japan, China, India, and Europe. Dr. Stamatis has written more than 60 articles
and presented many speeches at national and international conferences on quality.
He is a contributing author in several books and the sole author of 20 books. In
addition, he has performed more than 100 automotive-related audits and 25 preas-
sessment ISO 9000 audits, and has helped several companies attain certication. He
is an active member of the Detroit Engineering Society, the American Society for
Training and Development, the American Marketing Association, and the American
Research Association, and a fellow of the American Society for Quality Control.



List of Figures

Figure 2.1 Paper pencil assembly.
Figure 2.2 Function diagram for a mechanical pencil.
Figure 2.3 Ten symbols for process flow charting.
Figure 2.4 Process flow for complaint handling.
Figure 2.5 Kano model framework.
Figure 2.6 Basic quality depicted in the Kano model.
Figure 2.7 Performance quality depicted in the Kano model.
Figure 2.8 Excitement quality depicted in the Kano model.
Figure 2.9 Excitement quality depicted over time in the Kano model.
Figure 2.10 A typical House of Quality matrix.
Figure 2.11 The initial "what" of the customer.
Figure 2.12 The iterative process of "what" to "how."
Figure 2.13 The relationship matrix.
Figure 2.14 The conversion of "how" to "how much."
Figure 2.15 The flow of information in the process of developing the final House
of Quality.
Figure 2.16 Alternative method of calculating importance.
Figure 2.17 The development of QFD.
Figure 3.1 The benchmarking continuum process.
Figure 5.1 Trade-off relationships between program objectives (balance design).
Figure 5.2 Sequential approach.
Figure 5.3 Simultaneous approach.
Figure 5.4 Tomorrow's approach if not today's.
Figure 5.5 The product development map/guide.
Figure 5.6 Manufacturing system schematic.
Figure 5.7 Approaches to mistake proofing.
Figure 5.8 Major inspection techniques.
Figure 5.9 Function of mistake-proofing devices.
Figure 6.1 Types of FMEA.
Figure 6.2 Payback effort.
Figure 6.3 Kano model.
Figure 6.4 A Pugh matrix: shaving with a razor.
Figure 6.5 Scope for DFMEA: braking system.
Figure 6.6 Scope for PFMEA: printed circuit board screen printing process.
Figure 6.7 Typical FMEA header.
Figure 6.8 Typical FMEA body.
Figure 6.9 Function tree process.
Figure 6.10 Example of ballpoint pen.
Figure 6.11 FMEA body.


Figure 6.12 Transferring the failure modes to the FMEA form.
Figure 6.13 Transferring severity and classication to the FMEA form.
Figure 6.14 Transferring causes and occurrences to the FMEA form.
Figure 6.15 Transferring current controls and detection to the FMEA form.
Figure 6.16 Area chart.
Figure 6.17 Transferring the RPN to the FMEA form.
Figure 6.18 Action plans and results analysis.
Figure 6.19 Transferring action plans and action results on the FMEA form.
Figure 6.20 FMEA linkages.
Figure 6.21 The learning stages.
Figure 6.22 Pen assembly process.
Figure 7.1 Bathtub curve.
Figure 7.2 A series block diagram.
Figure 7.3 A parallel reliability block diagram.
Figure 7.4 A complex reliability block diagram.
Figure 7.5 The Weibull distribution for the example.
Figure 7.6 Control factors and noise interactions.
Figure 7.7 An example of a parameter design in reliability usage.
Figure 9.1 An example of a partially completed fishbone diagram.
Figure 9.2 An example of interaction.
Figure 9.3 Example of cause-and-effect diagram.
Figure 9.4 Plots of averages (higher responses are better).
Figure 9.5 A linear example of a process with several factors.
Figure 9.6 Contrasts shown in a graphical presentation.
Figure 9.7 First round testing.
Figure 9.8 Second round testing.
Figure 9.9 Linear graph for L4.
Figure 9.10 The orthogonal array (OA), linear graph (LG), and column interaction
for L9.
Figure 9.11 Three-level factors in a L8 array.
Figure 9.12 Traditional approach.
Figure 9.13 Nominal the best.
Figure 9.14 Smaller the better.
Figure 9.15 Larger the better.
Figure 9.16 A comparison of Cpk and loss function.
Figure 9.17 Plots of averages (higher responses are better).
Figure 9.18 ANOVA decomposition of multi-level factors.
Figure 9.19 Factors not linear.
Figure 9.20 Plots of the average standard deviation by factor level.
Figure 9.21 Factor effects.
Figure 9.22 Factor effects.
Figure 10.1 Quality cost: The quality control system.
Figure 10.2 Costs.
Figure 11.1 A typical branching using signal ow graph.
Figure 11.2 A simple example with signal ow graph.
Figure 11.3 A hypothetical design process.


Figure 11.4 The graph transmission.
Figure 11.5 First few terms of the probability.
Figure 11.6 The effect of a self loop.
Figure 11.7 Node absorption.
Figure 11.8 Order of design matrix showing functional coupling between FRs and
DPs.
Figure 11.9 Relationship of axiomatic design framework and other tools.
Figure 12.1 Relationship of savings potential to time.
Figure 12.2 Project identication sheet.
Figure 12.3 Cost visibility sheet.
Figure 12.4 Cost function worksheet.
Figure 12.5 A form that may be used to direct effort.
Figure 12.6 Second step in the FAST diagram block process.
Figure 12.7 A partial cost function FAST diagram.
Figure 15.1 Life cycle of a typical company or product.
Figure 15.2 A pictorial approach of duPont's formula.
Figure 15.3 Breakeven analysis.
Figure 16.1 The DFSS model.



List of Tables

Table I.1 Probability of a completely conforming product.
Table 1.1 Customer/supplier expanded partnering interface meetings.
Table 1.2 A typical questionnaire.
Table 1.3 A general questionnaire.
Table 2.1 Characteristic matrix for a machining process.
Table 2.2 Benets of improved total development process.
Table 2.3 Stimuli descriptions and respondent rankings for conjoint analysis of
industrial cleanser.
Table 2.4 Average ranks and deviations for respondents 1 and 2.
Table 2.5 Estimated part-worths and factor importance for respondents 1 and 2.
Table 2.6 Predicted part-worth totals and comparison of actual and estimated
preference rankings.
Table 4.1 Simulated samples of 20 performance time values for operations A and B.
Table 4.2 Simulated operation of the two-station assembly line when operation
A precedes operation B.
Table 4.3 Simulated operation of the two-station assembly line when operation
B precedes operation A.
Table 5.1 Customer attributes for a car door.
Table 5.2 Relative importance of weights.
Table 5.3 Customers' evaluation of competitive products.
Table 5.4 Examples of mistakes and defects.
Table 6.1 DFMEA severity rating.
Table 6.2 PFMEA severity rating.
Table 6.3 DFMEA occurrence rating.
Table 6.4 PFMEA occurrence rating.
Table 6.5 DFMEA detection table.
Table 6.6 PFMEA detection table.
Table 6.7 Special characteristics for both design and process.
Table 6.8 Manufacturing process control matrix.
Table 6.9 Machinery guidelines for severity, occurrence, and detection.
Table 7.1 Failure rates with median ranks.
Table 7.2 Median ranks.
Table 7.3 Five percent rank table.
Table 7.4 Ninety-five percent rank table.
Table 7.5 Department of Defense reliability and maintainability standards and
data items.
Table 8.1 Activities in the first three phases of the R&M process.
Table 8.2 Cost comparison of two machines.
Table 8.3 Thermal calculation values.


Table 8.4 Guidelines for the Duane model.
Table 9.1 One factor at a time.
Table 9.2 Test numbers for comparison.
Table 9.3 The group runs using DOE configurations.
Table 9.4 Comparisons using DOE.
Table 9.5 Comparisons of the two means.
Table 9.6 The test matrix for the seven factors.
Table 9.7 Test results.
Table 9.8 An example of contrasts.
Table 9.9 L4 setup.
Table 9.10 The L8 interaction table.
Table 9.11 An L9 with a two-level column.
Table 9.12 Combination method.
Table 9.13 Modied L8 array.
Table 9.14 An L8 with an L4 outer array.
Table 9.15 Recommended factor assignment by column.
Table 9.16 Formulas for calculating S/N.
Table 9.17 Concerns with NTB S/N ratio.
Table 9.18 L8 with test results.
Table 9.19 ANOVA table.
Table 9.20 Higher order relationships.
Table 9.21 Inner OA (L8) with outer OA (L4) and test results.
Table 9.22 The STB ANOVA table.
Table 9.23 The LTB ANOVA table.
Table 9.24 The NTB ANOVA table.
Table 9.25 Raw data ANOVA table.
Table 9.26 Combination design.
Table 9.27 L9 OA with test results.
Table 9.28 ANOVA table.
Table 9.29 Second run of ANOVA.
Table 9.30 L8 with test results and S/N values.
Table 9.31 ANOVA table for data from Table 9.30.
Table 9.32 Significant figures from Table 9.31.
Table 9.33 Observed versus cumulative frequency.
Table 9.34 Attribute test setup and results.
Table 9.35 ANOVA table (for cumulative frequency).
Table 9.36 The effect of the significant factors.
Table 9.37 Rate of occurrence at the optimum settings.
Table 9.38 Door closing effort: test setup and results.
Table 9.39 ANOVA table for door closing effort.
Table 9.40 The effects of the door closing effort.
Table 9.41 Rate of occurrence at the optimum settings.
Table 9.42 OA and test setup and results.
Table 9.43 ANOVA for the raw data.
Table 9.44 ANOVA table for the NTB S/N ratios.
Table 9.45 Typical ANOVA table setup.


Table 9.46 L4 OA with test results.
Table 9.47 ANOVA table raw data.
Table 9.48 ANOVA table (S/N ratio used as raw data).
Table 9.49 Level averages raw data.
Table 9.50 OA setup and test results for Example 2.
Table 9.51 ANOVA table (S/N ratio used as raw data).
Table 9.52 Transformed data.
Table 9.53 ANOVA table for the transformed data.
Table 9.54 Components and their levels.
Table 9.55 L8 inner OA with L8 outer OA and test results.
Table 9.56 ANOVA table (NTB) and level averages for the most significant factors.
Table 9.57 Variation runs using recommended factor target values.
Table 9.58 Calculated response variance.
Table 9.59 Cost of reducing tolerances.
Table 9.60 The impact of tightening the tolerance.
Table 9.61 Reduction of 20% in the tolerance limits of component A.
Table 9.62 Reduction of tolerance limits for component D.
Table 9.63 Reduction of tolerance limits for component C.
Table 9.64 L8 OA used for the confirmation runs with the levels set, test setup,
ANOVA table, and level averages.
Table 9.65 Response variance.
Table 10.1 Design review objectives.
Table 10.2 Design review checklist.
Table 10.3 Comparison between traditional and concurrent engineering.
Table 10.4 Typical monthly quality cost report (values in thousands of dollars).
Table 10.5 Prevention costs.
Table 10.6 Appraisal costs.
Table 10.7 Internal failure costs.
Table 10.8 External failure costs.
Table 10.9 Seven-step process redesign model.
Table 10.10 GD&T characteristics and symbols.
Table 12.1 Project identication checklist.
Table 12.2 Idea needlers or thought stimulators.
Table 12.3 The worksheet for setting the list.
Table 12.4 Evolution summary.
Table 12.5 Ranking and weighting.
Table 12.6 Criteria affecting car purchase XXXX pair comparison.
Table 12.7 Criteria weighing.
Table 12.8 Criteria comparison.
Table 12.9 Criteria weight comparison: completed matrix.
Table 13.1 Key integrative processes.
Table 13.2 The characteristics of the DFSS implementation model using project
management.
Table 13.3 The process of six sigma and DFSS implementation using project
management.
Table 14.1 Possibilities of selecting a DFSS problem.


Table 15.1 A summary of debits and credits.
Table 15.2 Summary of normal debit/credit balances.
Table 15.3 The Z score.


Contents

Introduction

Understanding the Six Sigma Philosophy....................................... 1
A Static versus a Dynamic Process .......................................................................... 1
Products with Multiple Characteristics ..................................................................... 2
Short- and Long-Term Six Sigma Capability ........................................................... 4
Design for Six Sigma and the Six Sigma Philosophy.............................................. 5
Design Phase......................................................................................................... 5
Internal Manufacturing ......................................................................................... 5
External Manufacturing ........................................................................................ 6
References.................................................................................................................. 7

Chapter 1

Prerequisites to Design for Six Sigma (DFSS) ................................... 9
Partnering................................................................................................................... 9
The Principles of Partnering............................................................................... 11
View of Buyer/Supplier Relationship: A Paradigm Shift .................................. 11
Characteristics of Expanded Partnering ............................................................. 12
Evaluating Suppliers and Selecting Supplier Partners....................................... 14
Implementing Partnering .................................................................................... 14
1. Establish Top Management Enrollment
(Role of Top Management Leadership).................................................... 14
2. Establish Internal Organization................................................................. 14
Option 1: Supplier Partnering Manager.................................................... 14
Option 2: Supplier Council/Team............................................................. 15
Option 3: Commodity Management Organization................................... 15
3. Establish Supplier Involvement ................................................................ 15
4. Establish Responsibility for Implementation ........................................... 15
5. Reevaluate the Partnering Process............................................................ 17
Ratings....................................................................................................... 17
Terms Used in Specific Questions............................ 19
Major Issues with Supplier Partnering Relationships........................................ 19
How Can We Improve?....................................................................................... 20
Basic Partnering Checklist.................................................................................. 21
1. Leadership ................................................................................................. 21
2. Information and Analysis.......................................................................... 22
3. Strategic Quality Planning........................................................................ 22
4. Human Resource Development and Management ................................... 22
5. Management of Process Quality............................................................... 23
6. Quality and Operational Results............................................................... 23
7. Customer Focus and Satisfaction ............................................................. 23
Expanded Partnering Checklist .......................................................................... 23
1. Leadership ................................................................................................. 23
2. Information and Analysis.......................................................................... 24


3. Strategic Quality Planning........................................................................ 24
4. Human Resource Development and Management ................................... 24
5. Management of Process Quality............................................................... 25
6. Quality and Operational Results............................................................... 25
7. Customer Focus and Satisfaction ............................................................. 25
The Robust Team: A Quality Engineering Approach............................................. 25
Team Systems ..................................................................................................... 26
Input ............................................................................................................... 27
Signal.............................................................................................................. 27
The System..................................................................................................... 27
Output/Response ............................................................................................ 28
The Environment............................................................................................ 28
External Variation........................................................................................... 28
Internal Variation............................................................................................ 29
The Boundary................................................................................................. 29
Controlling a Team Process: Conformance in Teams........................................ 29
Strategies for Dealing with Variation................................................................. 30
Controlling or Eliminating Variation............................................................. 30
Compensating for Variation ........................................................................... 30
System Feedback............................................................................................ 31
Minimizing the Effect of Variation................................................................ 31
Monitoring Team Performance........................................................................... 33
System Interrelationships............................................................................... 33
Systems Engineering ............................................................................................... 34
Systems Defined.............................................. 34
Implications of the Systems Concept for the Manager ..................................... 35
Defining Systems Engineering ........................................................... 37
Pre-Feasibility Analysis ...................................................................................... 38
Requirement Analysis ......................................................................................... 38
Design Synthesis................................................................................................. 38
Verification.......................................................................................... 39
Advanced Quality Planning..................................................................................... 40
When Do We Use AQP?..................................................................................... 42
What Is the Difference between AQP and APQP? ............................................ 43
How Do We Make AQP Work?.......................................................................... 43
Are There Pitfalls in Planning? .......................................................................... 43
Do We Really Need Another Qualitative Tool to Gauge Quality?.................... 44
How Do We Use the Qualitative Methodology in an AQP Setting?................. 44
APQP Initiative and Relationship to DFSS ....................................................... 45
References................................................................................................................ 47
Selected Bibliography.............................................................................................. 47

Chapter 2

Customer Understanding.................................................................... 49
The Concept of Function......................................................................................... 52
Understanding Customer Wants and Needs ....................................................... 54


Creating a Function Diagram............................................................................. 55
The Product Flow Diagram and the Concept of Functives ............................... 56
The Process Flow Diagram................................................................................ 61
Using Function Concepts with Productivity and Quality Methodologies......... 64
Kano Model ............................................................................................................. 68
Basic Quality....................................................................................................... 69
Performance Quality........................................................................................... 69
Excitement Quality ............................................................................................. 69
Quality Function Deployment (QFD) ..................................................................... 71
Terms Associated with QFD............................................................................... 73
Benets of QFD.................................................................................................. 73
Issues with Traditional QFD............................................................................... 75
Process Overview................................................................................................ 76
Developing a QFD Project Plan ..................................................................... 76
The Customer Axis ........................................................................................ 77
Technical Axis................................................................................................ 79
Internal Standards and Tests .......................................................................... 79
The QFD Approach ............................................................................................ 79
QFD Methodology.............................................................................................. 80
QFD and Planning .............................................................................................. 84
Product Development Process ............................................................................ 86
Conjoint Analysis .................................................................................................... 88
What Is Conjoint Analysis?................................................................................ 88
A Hypothetical Example of Conjoint Analysis.................................................. 89
An Empirical Example ....................................................................................... 90
The Managerial Uses of Conjoint Analysis ....................................................... 95
References................................................................................................................ 95
Selected Bibliography.............................................................................................. 95

Chapter 3

Benchmarking .................................................................................... 97
General Introduction to Benchmarking................................................................... 97
A Brief History of Benchmarking...................................................................... 97
Potential Areas of Application of Benchmarking .............................................. 97
Benchmarking and Business Strategy Development .............................................. 99
Least Cost and Differentiation............................................................................ 99
Characteristics of a Least Cost Strategy .......................................................... 100
Characteristics of a Differentiated Strategy ..................................................... 101
Benchmarking and Strategic Quality Management .............................................. 102
Benchmarking and Six Sigma .......................................................................... 105
National Quality Award Winners and Benchmarking...................................... 107
Example: Cadillac .................................................... 107
A Second Example: Xerox ...................................................... 108
Third Example: IBM Rochester............................................... 109
Fourth Example: Motorola....................................................... 110
Benchmarking and the Deming Management Method.................................... 110


Benchmarking and the Shewhart Cycle or Deming Wheel ............................. 111
Plan............................................................................................................... 111
Do................................................................................................................. 111
Study: Observe the Effects ...................................................................... 111
Act ................................................................................................................ 111
Why Do People Buy?....................................................................................... 111
Alternative Definitions of Quality.................................................................... 112
Determining the Customer's Perception of Quality......................................... 117
Quality, Pricing and Return on Investment (ROI): The PIMS Results ....... 119
Benchmarking as a Management Tool.................................................................. 119
What Benchmarking Is and Is Not................................................................... 120
The Benchmarking Process .............................................................................. 121
Types of Benchmarking.................................................................................... 122
Organization for Benchmarking ....................................................................... 123
Requirements for Success................................................................................. 124
Benchmarking and Change Management ............................................................. 126
Structural Pressure ............................................................................................ 128
Aspiration for Excellence ................................................................................. 128
Force Field Analysis ......................................................................................... 128
Identification of Benchmarking Alternatives ........................................................ 129
Externally Identified Benchmarking Candidates.............................................. 129
Industry Analysis and Critical Success Factors .......................................... 129
PIMS Par Report .......................................................................................... 130
Financial Comparison .................................................................................. 130
Competitive Evaluations .............................................................................. 131
Focus Groups ............................................................................................... 131
Importance/Performance Analysis ............................................................... 131
Internally Identified Benchmarking Candidates: Internal Assessment Surveys.......................................................................................... 132
Nominal Group Process: General Areas in Greatest Need
of Improvement ............................................................................................ 132
Pareto Analysis............................................................................................. 132
Statistical Process Control ........................................................................... 133
Trend Charting ............................................................................................. 133
Product and Company Life Cycle Position................................................. 133
Failure Mode and Effect Analysis ............................................................... 134
Cost/Time Analysis ...................................................................................... 134
Need to Identify Underlying Causes................................................................ 134
Problem, Causes, Solutions ......................................................................... 134
The Five Whys ............................................................................................. 134
Cause and Effect Diagram........................................................................... 134
Business Assessment: Strengths and Weaknesses............................................. 135
Prioritization of Benchmarking Alternatives: Prioritization Process................ 139
Prioritization Matrix ......................................................................................... 139
Quality Function Deployment (House of Quality) .......................................... 140
Importance/Feasibility Matrix .......................................................................... 141

Paired Comparisons ..................................................................................... 141
Improvement Potential ................................................................................. 141
Prioritization Factors.................................................................................... 141
Are There Any Other Problems? What Is the Relative Importance
of Each of These? ............................................................................................. 142
Identification of Benchmarking Sources............................................................... 142
Types of Benchmark Sources ........................................................................... 142
Internal Best Performers .............................................................................. 143
Competitive Best Performers....................................................................... 143
Best of Class ................................................................................................ 143
Selection Criteria .............................................................................................. 144
Sources of Competitive Information ................................................................ 144
Gaining the Cooperation of the Benchmark Partner........................................ 148
Making the Contact .......................................................................................... 149
Benchmarking Performance and Process Analysis.......................................... 149
Preparation of the Benchmarking Proposal...................................................... 149
Activity before the Visit.................................................................................... 149
Understanding Your Own Operations.......................................................... 149
Activity Analysis.......................................................................................... 150
1. Define the Activity ............................................................. 150
2. Determine the Triggering Event ........................................ 150
3. Define the Activity ............................................................. 150
4. Determine the Resource Requirements ............................................. 151
5. Determine the Activity Drivers.......................................................... 151
6. Determine the Output of the Activity................................................ 151
7. Determine the Activity Performance Measure .................................. 151
Model the Activity ....................................................................................... 152
Examples of Modeling................................................................................. 152
Flow Chart the Process ................................................................................ 153
Activities during the Visit ................................................................................. 155
Understand the Benchmark Partner's Activities.......................................... 155
Identification of All of the Factors Required for Success .......................... 155
Activities after the Visit .................................................................................... 156
1. Functional Analysis................................................................................. 156
2. Cost Analysis........................................................................................... 156
3. Technology Forecasting .......................................................................... 156
4. Financial Benchmarking ......................................................................... 157
5. Sales Promotion and Advertising ........................................................... 157
6. Warehouse Operations ............................................................................ 157
7. PIMS Analysis......................................................................................... 157
8. Purchasing Performance Benchmarks .................................................... 157
Motorola Example........................................................................................ 158
Gap Analysis.......................................................................................................... 158
Definition of Gap Analysis............................................................................... 158
Current versus Future Gap ............................................................................... 158

Goal Setting ........................................................................................................... 159
Goal Definition ................................................................................................. 159
Goal Characteristics.......................................................................................... 159
Result versus Effort Goals................................................................................ 159
Goal Setting Philosophy ................................................................................... 159
Best of the Best versus Optimization.......................................................... 159
Kaizen versus Breakthrough Strategies....................................................... 160
Guiding Principle Implications......................................................................... 160
Goal Structure................................................................................................... 160
Cascading Goal Structure ............................................................................ 160
Interdepartmental Goals............................................................................... 161
Action Plan Identification and Implementation.................................................... 161
A Creative Planning Process ............................................................................ 162
Action Plan Prioritization................................................................................. 162
Action Plan Documentation.............................................................................. 162
Monitoring and Control .................................................................................... 162
Financial Analysis of Benchmarking Alternatives................................................ 163
Managing Benchmarking for Performance........................................................... 164
References.............................................................................................................. 166
Selected Bibliography............................................................................................ 167

Chapter 4

Simulation ........................................................................................ 169
What Is Simulation? .............................................................................................. 169
Simulated Sampling............................................................................................... 170
Finite Element Analysis (FEA) ............................................................................. 175
Types of Finite Elements.................................................................................. 175
Types of Analyses ............................................................................................. 176
Procedures Involved in FEA............................................................................. 178
Steps in Analysis Procedure ............................................................................. 178
Overview of Finite Element Analysis Solution Procedure ......................... 179
Input to the Finite Element Model ................................................................... 180
Outputs from the Finite Element Analysis....................................................... 180
Analysis of Redesigns of Refined Model ........................................................ 181
Summary: Finite Element Technique: A Design Tool ................................. 182
Excel's Solver ........................................................................................................ 182
Design Optimization.............................................................................................. 182
How To Do Design Optimization..................................................................... 184
Understanding the Optimization Algorithm..................................................... 184
Conversion to an Unconstrained Problem........................................................ 185
Simulation and DFSS............................................................................................ 185
References.............................................................................................................. 186
Selected Bibliography............................................................................................ 186

Chapter 5

Design for Manufacturability/Assembly
(DFM/DFA or DFMA) .......................................................................................... 187

Business Expectations and the Impact from a Successful DFM/DFA................. 189
The Essential Elements for Successful DFM/DFA .............................................. 192
The Product Plan .............................................................................................. 194
Product Design............................................................................................. 194
Criteria for Decision between Crash Program and Perfect Product ........... 195
Case #1: Crash Program ..................................................... 195
Case #2: Perfect Product Design ........................................ 196
The Product Plan: Product Design Itself................................................. 196
Define Product Performance Requirement .................................................. 198
Available Tools and Methods for DFMA ............................................................. 198
Cookbooks for DFM/DFA................................................................................ 199
Use of the Human Body.............................................................................. 199
Arrangement of the Work Place .................................................................. 200
Design of Tools and Equipment .................................................................. 200
Mitsubishi Method............................................................................................ 200
U-MASS Method.............................................................................................. 202
MIL-HDBK-727 ............................................................................................... 203
Fundamental Design Guidance ............................................................................. 204
The Manufacturing Process................................................................................... 206
Mistake Proofing.................................................................................................... 208
Definition .......................................................................................................... 208
The Strategy...................................................................................................... 208
Defects .............................................................................................................. 209
Mistake Proof System Is a Technique for Avoiding Errors
in the Workplace ............................................................................................... 210
Types of Human Mistakes ................................................................................ 210
Forgetfulness ................................................................................................ 210
Mistakes of Misunderstanding..................................................................... 210
Identification Mistakes................................................................................. 210
Amateur Errors............................................................................................. 211
Willful Mistakes........................................................................................... 211
Inadvertent Mistakes .................................................................................... 211
Slowness Mistakes ....................................................................................... 211
Lack of Standards Mistakes......................................................................... 211
Surprise Mistakes......................................................................................... 211
Intentional Mistakes..................................................................................... 212
Defects and Errors ............................................................................................ 212
Mistake Types and Accompanying Causes ...................................................... 213
Signals that Alert .............................................................................................. 215
Approaches to Mistake Proofing ...................................................................... 215
Major Inspection Techniques....................................................................... 216
Mistake Proof System Devices.................................................................... 216
Devices Used as Detectors of Mistakes.............................................. 217
Devices Used as Preventers of Mistakes............................................. 217
Equation for Success ........................................................................................ 218
Typical Error Proofing Devices ................................................................... 219

References.............................................................................................................. 219
Selected Bibliography............................................................................................ 219

Chapter 6

Failure Mode and Effect Analysis (FMEA) .................................... 223
Definition of FMEA.............................................................................................. 224
Types of FMEAs.................................................................................................... 224
Is FMEA Needed?................................................................................................. 225
Benefits of FMEA ................................................................................................. 226
FMEA History ....................................................................................................... 226
Initiation of the FMEA.......................................................................................... 227
Getting Started....................................................................................................... 228
1. Understand Your Customers and Their Needs ............................................ 228
2. Know the Function ...................................................................................... 230
3. Understand the Concept of Priority ............................................................ 230
4. Develop and Evaluate Conceptual Designs/Processes Based
on Customer Needs and Business Strategy...................................................... 230
5. Be Committed to Continual Improvement .................................................. 231
6. Create an Effective FMEA Team................................................................ 231
7. Define the FMEA Project and Scope.......................................................... 234
The FMEA Form................................................................................................... 235
Developing the Function................................................................................... 238
Organizing Product Functions .......................................................................... 239
Failure Mode Analysis...................................................................................... 240
Understanding Failure Mode ....................................................................... 240
Failure Mode Questions............................................................................... 240
Determining Potential Failure Modes.......................................................... 242
Failure Mode Effects ........................................................................................ 243
Effects and Severity Rating ......................................................................... 244
Severity Rating (Seriousness of the Effect) ................................................ 245
Failure Cause and Occurrence.......................................................................... 246
Popular Ways (Techniques) to Determine Causes ...................................... 247
Occurrence Rating........................................................................................ 249
Current Controls and Detection Ratings ..................................................... 249
Detection Rating .......................................................................................... 250
Understanding and Calculating Risk................................................................ 251
Action Plans and Results....................................................................................... 253
Classification and Characteristics..................................................................... 254
Product Characteristics/Root Causes ....................................................... 255
Process Parameters/Root Causes.............................................................. 255
Driving the Action Plan.................................................................................... 255
Linkages among Design and Process FMEAs and Control Plan......................... 258
Getting the Most from FMEA............................................................................... 260
System or Concept FMEA.................................................................................... 262
Design Failure Mode and Effects Analysis (DFMEA)......................................... 262
Objective ........................................................................................................... 263
Timing............................................................................................................... 263

Requirements .................................................................................................... 263
Discussion ......................................................................................................... 263
Forming the Appropriate Team.................................................................... 263
Describing the Function of the Design/Product .......................................... 264
Describing the Failure Mode Anticipated ................................................... 264
Describing the Effect of the Failure ............................................................ 264
Describing the Cause of the Failure............................................................ 264
Estimating the Frequency of Occurrence of Failure................................... 265
Estimating the Severity of the Failure......................................................... 265
Identifying System and Design Controls .................................................... 265
Estimating the Detection of the Failure ...................................................... 266
Calculating the Risk Priority Number ......................................................... 267
Recommending Corrective Action............................................................... 267
Strategies for Lowering Risk: (System/Design) High Severity
or Occurrence .......................................................................................... 267
Strategies for Lowering Risk: (System/Design) High Detection
Rating ...................................................................................................... 267
Process Failure Mode and Effects Analysis (FMEA)........................................... 268
Objective ........................................................................................................... 268
Timing............................................................................................................... 268
Requirements .................................................................................................... 268
Discussion ......................................................................................................... 269
Forming the Team........................................................................................ 269
Describing the Process Function ................................................................. 269
Manufacturing Process Functions........................................................... 269
The PFMEA Function Questions............................................................ 270
Describing the Failure Mode Anticipated ................................................... 270
Describing the Effect(s) of the Failure........................................................ 271
Describing the Cause(s) of the Failure........................................................ 272
Estimating the Frequency of Occurrence of Failure................................... 273
Estimating the Severity of the Failure......................................................... 273
Identifying Manufacturing Process Controls............................................... 273
Estimating the Detection of the Failure ...................................................... 274
Calculating the Risk Priority Number ......................................................... 275
Recommending Corrective Action............................................................... 275
Strategies for Lowering Risk: (Manufacturing) High Severity
or Occurrence .......................................................................................... 275
Strategies for Lowering Risk: (Manufacturing) High Detection
Rating ...................................................................................................... 276
Machinery FMEA (MFMEA) ............................................................................... 277
Identify the Scope of the MFMEA.................................................................. 277
Identify the Function ........................................................................................ 277
Failure Mode..................................................................................................... 277
Potential Effects ................................................................................................ 278
Severity Rating.................................................................................................. 279
Classification..................................................................................................... 279

Potential Causes................................................................................................ 279
Occurrence Ratings........................................................................................... 282
Surrogate MFMEAs.......................................................................................... 282
Current Controls........................................................................................... 282
Detection Rating .......................................................................................... 282
Risk Priority Number (RPN)............................................................................ 282
Recommended Actions ..................................................................................... 283
Date, Responsible Party.................................................................................... 283
Actions Taken/Revised RPN............................................................................. 283
Revised RPN..................................................................................................... 284
Summary................................................................................................................ 284
Selected Bibliography............................................................................................ 284

Chapter 7

Reliability ......................................................................................... 287
Probabilistic Nature of Reliability ........................................................................ 287
Performing the Intended Function Satisfactorily.................................................. 288
Specified Time Period....................................................................................... 288
Specified Conditions ......................................................................................... 289
Environmental Conditions Profile .................................................................... 289
Reliability Numbers.......................................................................................... 290
Indicators Used to Quantify Product Reliability.............................................. 290
Reliability and Quality .......................................................................................... 291
Product Defects................................................................................................. 291
Customer Satisfaction....................................................................................... 292
Product Life and Failure Rate .......................................................................... 293
Product Design and Development Cycle .............................................................. 295
Reliability in Design......................................................................................... 296
Cost of Engineering Changes and Product Life Cycle.................................... 297
Reliability in the Technology Deployment Process......................................... 298
1. Pre-Deployment Process ......................................................................... 298
2. Core Engineering Process....................................................................... 299
3. Quality Support ....................................................................................... 300
Reliability Measures Testing ............................................................................ 300
What Is a Reliability Test? ............................................................................... 300
When Does Reliability Testing Occur?............................................................ 301
Reliability Testing Objectives........................................................................... 301
Sudden-Death Testing .................................................................................. 302
Accelerated Testing...................................................................................... 305
Accelerated Test Methods ..................................................................................... 305
Constant-Stress Testing..................................................................................... 305
Step-Stress Testing............................................................................................ 306
Progressive-Stress Testing ................................................................................ 306
Accelerated-Test Models .................................................................................. 306
Inverse Power Law Model ........................................................................... 307
Arrhenius Model .......................................................................................... 308

AST/PASS.............................................................................................................. 310
Purpose of AST................................................................................................. 310
AST Pre-Test Requirements ............................................................................. 311
Objective and Benefits of AST ......................................................................... 311
Purpose of PASS............................................................................................... 311
Objective and Benefits of PASS....................................................................... 312
Characteristics of a Reliability Demonstration Test ............................................. 312
The Operating Characteristic Curve................................................................. 313
Attributes Tests ................................................................................................. 313
Variables Tests .................................................................................................. 314
Fixed-Sample Tests ........................................................................................... 314
Sequential Tests ................................................................................................ 314
Reliability Demonstration Test Methods............................................................... 314
Small Populations: Fixed-Sample Test Using the Hypergeometric Distribution ........................................................... 315
Large Population: Fixed-Sample Test Using the Binomial Distribution ...................................................................... 315
Large Population: Fixed-Sample Test Using the Poisson Distribution......................................................................... 316
Success Testing...................................................................................................... 316
Sequential Test Plan for the Binomial Distribution......................................... 317
Graphical Solution ............................................................................................ 318
Variables Demonstration Tests .............................................................................. 318
Failure-Truncated Test Plans: Fixed-Sample Test Using the Exponential Distribution.................................................. 318
Time-Truncated Test Plans: Fixed-Sample Test Using the Exponential Distribution.................................................. 319
Weibull and Normal Distributions.................................................................... 320
Sequential Test Plans............................................................................................. 321
Exponential Distribution Sequential Test Plan................................................. 321
Weibull and Normal Distributions.................................................................... 323
Interference (Tail) Testing ................................................................................ 323
Reliability Vision .............................................................................................. 323
Reliability Block Diagrams .............................................................................. 323
Weibull Distribution: Instructions for Plotting and Analyzing Failure Data on a Weibull Probability Chart ................................................................ 325
Instructions for Plotting Failure and Suspended Items Data
on a Weibull Probability Chart ......................................................................... 331
Additional Notes on the Use of the Weibull.................................................... 334
Design of Experiments in Reliability Applications .............................................. 335
Reliability Improvement through Parameter Design ............................................ 336
Department of Defense Reliability and Maintainability Standards
and Data Items....................................................................................................... 337
References.............................................................................................................. 342
Selected Bibliography............................................................................................ 343

Chapter 8

Reliability and Maintainability ........................................................ 345
Why Do Reliability and Maintainability?............................................................. 345
Objectives............................................................................................................... 346
Making Reliability and Maintainability Work...................................................... 346
Who's Responsible? .............................................................................................. 347
Tools....................................................................................................................... 347
Sequence and Timing ............................................................................................ 348
Concept .................................................................................................................. 349
Bookshelf Data ................................................................................................. 349
Manufacturing Process Selection ..................................................................... 350
R&M and Preventive Maintenance (PM) Needs Analysis .............................. 350
Development and Design....................................................................................... 350
R&M Planning.................................................................................................. 350
Process Design for R&M................................................................................. 351
Machinery FMEA Development ...................................................................... 351
Design Review.................................................................................................. 352
Build and Install .................................................................................................... 352
Equipment Run-Off .......................................................................................... 352
Operation of Machinery.................................................................................... 352
Operations and Support ......................................................................................... 353
Conversion/Decommission .................................................................................... 353
Typical R&M Measures ........................................................................................ 353
R&M Matrix ..................................................................................................... 353
Reliability Point Measurement ......................................................................... 354
MTBE................................................................................................................ 354
MTBF................................................................................................................ 355
Failure Rate....................................................................................................... 355
MTTR................................................................................................................ 355
Availability........................................................................................................ 356
Overall Equipment Effectiveness (OEE).......................................................... 356
Life Cycle Costing (LCC) ................................................................................ 356
Top 10 Problems and Resolutions.................................................................... 357
Thermal Analysis .............................................................................................. 357
Electrical Design Margins ................................................................................ 359
Safety Margins (SM) ........................................................................................ 359
Interference ....................................................................................................... 360
Conversion of MTBF to Failure Rate and Vice Versa..................................... 361
Reliability Growth Plots ................................................................................... 361
Machinery FMEA............................................................................................. 361
Key Definitions in R&M....................................................................................... 362
DFSS and R&M.................................................................................................... 364
References.............................................................................................................. 365
Selected Bibliography............................................................................................ 365

Chapter 9

Design of Experiments..................................................................... 367
Setting the Stage for DOE..................................................................................... 367
Why DOE (Design of Experiments) Is a Valuable Tool.................................. 367
Taguchi's Approach .......................................................................................... 370
Miscellaneous Thoughts ................................................................................... 371
Planning the Experiment ....................................................................................... 372
Brainstorming.................................................................................................... 372
Choice of Response .......................................................................................... 373
Miscellaneous Thoughts ................................................................................... 379
Setting Up the Experiment .................................................................................... 380
Choice of the Number of Factor Levels........................................................... 380
Linear Graphs ................................................................................................... 382
Degrees of Freedom.......................................................................................... 383
Using Orthogonal Arrays and Linear Graphs .................................................. 383
Column Interaction (Triangular) Table............................................................. 384
Factors with Three Levels ................................................................................ 385
Interactions and Hardware Test Setup.............................................................. 385
Choice of the Test Array................................................................................... 387
Factors with Four Levels .................................................................................. 389
Factors with Eight Levels................................................................................. 389
Factors with Nine Levels.................................................................................. 390
Using Factors with Two Levels in a Three-Level Array ................................. 390
Dummy Treatment ....................................................................................... 390
Combination Method ................................................................................... 390
Using Factors with Three Levels in a Two-Level Array ................................. 391
Other Techniques .............................................................................................. 391
Nesting of Factors........................................................................................ 392
Setting Up Experiments with Factors with Large Numbers of Levels....... 392
Inner Arrays and Outer Arrays ......................................................................... 393
Randomization of the Experimental Tests ....................................................... 394
Miscellaneous Thoughts ................................................................................... 394
Loss Function and Signal-to-Noise....................................................................... 397
Loss Function and the Traditional Approach................................................... 397
Calculation of the Loss Function ..................................................................... 398
Comparison of the Loss Function and Cpk ..................................................... 402
Signal-to-Noise (S/N) ....................................................................................... 403
Miscellaneous Thoughts ................................................................................... 404
Analysis.................................................................................................................. 405
Graphical Analysis............................................................................................ 405
Analysis of Variance (ANOVA) ....................................................................... 407
Estimation at the Optimum Level .................................................................... 408
Confidence Interval around the Estimation...................................................... 409
Interpretation and Use ...................................................................................... 410
ANOVA Decomposition of Multi-Level Factors ............................................. 410

S/N Calculations and Interpretations................................................................ 411
Smaller-the-Better (STB) ............................................................................. 412
Larger-the-Better (LTB) ............................................................................... 413
Nominal-the-Best (NTB) ............................................................................. 413
Combination Design ......................................................................................... 415
Miscellaneous Thoughts ................................................................................... 418
Analysis of Classified Data ................................................................................... 421
Classified Responses......................................................................................... 422
Classified Attribute Analysis............................................................................. 422
Class 1.......................................................................................................... 425
Class 2.......................................................................................................... 426
Classified Variable Analysis.............................................................................. 426
Discussion of the Degrees of Freedom............................................................ 428
Miscellaneous Thoughts ................................................................................... 429
Dynamic Situations................................................................................................ 430
Definition .......................................................................................................... 430
Discussion ......................................................................................................... 431
Conditions .................................................................................................... 431
Analysis........................................................................................................ 432
Miscellaneous Thoughts ................................................................................... 439
For Example 1.............................................................................................. 440
For Example 2.............................................................................................. 440
Parameter Design................................................................................................... 441
Discussion ......................................................................................................... 441
Example........................................................................................................ 441
Tolerance Design ................................................................................................... 447
Discussion ......................................................................................................... 447
Example........................................................................................................ 448
Humidity.................................................................................................. 454
Testing ..................................................................................................... 454
DOE Checklist ....................................................................................................... 454
Selected Bibliography............................................................................................ 455

Chapter 10

Miscellaneous Topics: Methodologies ......................................... 457
Theory of Constraints (TOC) ................................................................................ 457
The Goal ........................................................................................................... 457
Strategic Measures ............................................................................................ 458
Net Profit, Return on Investment, and Productivity......................................... 459
Measurement Focus .......................................................................................... 460
Throughput versus Cost World......................................................................... 461
Obstacles to Moving into the Throughput World ............................................ 461
The Foundation Elements of TOC................................................................... 463
The Theory of Non-Constraints ....................................................................... 463
The Five-Step Framework of TOC................................................................... 464
Selected Bibliography............................................................................................ 465

Design Review....................................................................................................... 465
Failure Mode and Effect Analysis (FMEA)..................................................... 467
References.............................................................................................................. 470
Selected Bibliography............................................................................................ 470
Trade-Off Studies................................................................................................... 470
How to Conduct a Trade-Off Study: The Process ........................................... 471
1. Construct the Preliminary Matrix........................................................... 471
2. Select and Assemble the Cross-Functional Team.................................. 472
3. Assign Team Members' Roles and Responsibilities .............................. 472
4. Assign Ranking Teams To Evaluate the Alternatives ............................ 473
Identification of Ranking Methods ......................................................... 473
Development of Standardized Documentation....................................... 474
Timing for Report out of Selection Process........................................... 474
5. Weight the Various Categories................................................................ 474
6. Compile the Evidence Book................................................................... 475
7. Present the Results.................................................................................. 475
Glossary of Terms............................................................................................. 476
Selected Bibliography............................................................................................ 477
Cost of Quality ...................................................................................................... 477
Cost Monitoring System................................................................................... 478
Standard Cost ............................................................................................... 478
Actual Costs ................................................................................................. 478
Variance........................................................................................................ 480
Cost Reduction Efforts................................................................................. 480
Concepts of Quality Costs................................................................................ 480
J. Juran ......................................................................................................... 480
W.E. Deming................................................................................................ 480
P. Crosby ...................................................................................................... 481
G. Taguchi .................................................................................................... 481
Definition of Quality Components ................................................................... 481
Methods of Measuring Quality......................................................................... 483
Complaint Indices ............................................................................................. 484
Processing and Resolution of Customer Complaints....................................... 484
Techniques for Analyzing Data ........................................................................ 484
Format for Presentation of Costs...................................................................... 485
Laws of Cost of Quality ................................................................................... 485
Data Sources ..................................................................................................... 487
Inspection Decisions ......................................................................................... 487
Prevention Costs (See Table 10.5) ................................................................... 487
Appraisal Costs (See Table 10.6) ..................................................................... 487
Internal Failure Costs (See Table 10.7)............................................................ 487
External Failure Costs (See Table 10.8)........................................................... 487
Diagnostic Guidelines to Identify Manufacturing Process
Improvement Opportunities .............................................................................. 489
Diagnostic Guidelines to Identify Administrative Process
Improvement Opportunities .............................................................................. 490


Steps for Quality Improvement Using Cost of Quality.............................. 492
Procedure...................................................................................................... 492
Examples ...................................................................................................... 492
Guideline Cost of Quality Elements by Discipline ......................................... 502
Cost of Quality and DFSS Relationship .......................................................... 509
References.............................................................................................................. 511
Selected Bibliography............................................................................................ 511
Reengineering ........................................................................................................ 511
Process Redesign .............................................................................................. 511
The Restructuring Approach............................................................................. 512
The Conference Method ................................................................................... 513
The OOAD Method .......................................................................................... 515
Reengineering and DFSS.................................................................................. 516
References.............................................................................................................. 517
Selected Bibliography............................................................................................ 518
Geometric Dimensioning and Tolerancing (GD&T) ............................................ 518
References.............................................................................................................. 523
Selected Bibliography............................................................................................ 523
Metrology............................................................................................................... 524
Understanding the Problem.............................................................................. 524
Metrology's Role in Industry and Quality ....................................... 525
Measurement Techniques and Equipment........................................................ 527
Purpose of Inspection ....................................................................................... 528
How Do We Use Inspection and Why? ........................................................... 529
Methods of Testing ........................................................................................... 529
Interpreting Results of Inspection and Testing ................................................ 530
Technique for Wringing Gage Blocks.............................................................. 531
Length Combinations........................................................................................ 532
References.............................................................................................................. 533

Chapter 11

Innovation Techniques Used in Design for Six Sigma (DFSS).... 535
Modeling Design Iteration Using Signal Flow Graphs as Introduced
by Eppinger, Nukala and Whitney (1997) ............................................................ 535
Rules and Definitions of Signal Flow Graphs as Introduced
by Howard (1971) and Truxal (1955) .............................................................. 538
Basic Operations on Signal Flow Graphs ........................................................ 538
The Effect of a Self Loop................................................................................. 538
Solution by Node Absorption........................................................................... 539
References.............................................................................................................. 539
Selected Bibliography............................................................................................ 540
Axiomatic Designs ................................................................................................ 541
So, What Is an Axiomatic Design? .................................................................. 542
Axiomatic and Other Design Methodologies................................................... 542
Applying Axiomatic Design to Cars ................................................................ 543
New Designs ................................................................................................ 544


Diagnosis of Existing Design ...................................................................... 544
Extensions and Engineering Changes to Existing Designs ........................ 544
Efficient Project Work-Flow........................................ 545
Effective Change Management .................................................... 545
Efficient Design Function ............................................ 545
References.............................................................................................................. 547
Selected Bibliography............................................................................................ 547
TRIZ: The Theory of Inventive Problem Solving ............................................ 548
References.............................................................................................................. 551
Selected Bibliography............................................................................................ 551

Chapter 12

Value Analysis/Engineering ........................................................... 553
Introduction to Value Control: The Environment ............................................. 553
History of Value Control ....................................................................................... 555
Value Concept........................................................................................................ 556
Definition .......................................................................................... 556
Planned Approach............................................................................................. 556
Function ............................................................................................................ 557
Value.................................................................................................................. 557
Develop Alternatives......................................................................................... 558
Evaluation, Planning, Reporting, and Implementation .................................... 559
The Job Plan ..................................................................................................... 559
Application............................................................................................................. 560
Value Control: The Job Plan ............................................................. 561
Value Control Techniques versus Job Plan...................................................... 562
Techniques......................................................................................................... 562
Information Phase.................................................................................................. 563
Define the Problem........................................................................... 563
Information Development ............................................................................ 564
Information Collection............................................................................ 564
Cost Visibility.......................................................................................... 564
Project Scope........................................................................................... 565
Function Determination ............................................................................... 567
Function Analysis and Evaluation ............................................................... 567
Cost Visibility ................................................................................................... 568
Definitions .................................................................................... 568
Sources of Cost Information........................................................................ 570
Cost Visibility Techniques ........................................................................... 570
Technique 1: Determine Manufacturing Cost..................................... 571
Technique 2: Determine Cost Element ............................................... 571
Technique 3: Determine Component or Process Costs ...................... 571
Technique 4: Determine Quantitative Costs ....................................... 572
Technique 5: Determine Functional Area Costs................................. 573
Function Determination .................................................................................... 573
What Is Function?........................................................................................ 574


Basic and Secondary Functions................................................................... 574
Basic Functions ....................................................................................... 574
Secondary Functions ............................................................................... 575
Function Analysis and Evaluation.................................................................... 575
Technique 1: Identify and Evaluate Function.......................................... 575
Technique 2: Evaluate Principle of Operation ........................................ 576
Technique 3: Evaluate Basic Function .................................................... 576
Technique 4: Theoretical Evaluation of Function ................................... 576
Technique 5: Input Output Method ......................................................... 577
Technique 6: Function Analysis System Technique................................ 577
Cost Function Relationship.......................................................................... 580
Evaluate the Function .................................................................................. 580
Creative Phase........................................................................................................ 582
Phase 1. Blast.................................................................................................... 584
Phase 2. Create ................................................................................................. 584
Phase 3. Refine ................................................................................. 584
Evaluation Phase.................................................................................................... 585
Selection and Screening Techniques ................................................................ 585
Pareto Voting................................................................................................ 585
Paired Comparisons ..................................................................................... 586
Evaluation Summary.................................................................................... 587
Matrix Analysis ............................................................................................ 587
Example................................................................................................... 589
Rank and Weigh Criteria.................................................................... 589
Evaluate Each Alternative .................................................................. 590
Analyze Results.................................................................................. 591
Implementation Phase............................................................................................ 591
Goal for Achievement....................................................................................... 592
Developing a Plan............................................................................................. 592
Evaluation of the System.................................................................................. 593
Understanding the Principles............................................................................ 593
Organization...................................................................................................... 594
Attitude.............................................................................................................. 596
Value Council.................................................................................................... 596
Audit Results..................................................................................................... 597
Project Selection.................................................................................................... 597
Concluding Comments .......................................................................................... 598
References.............................................................................................................. 598
Selected Bibliography............................................................................................ 598

Chapter 13

Project Management (PM)............................................................. 599
What Is a Project? ................................................................................................. 599
The Process of Project Management..................................................................... 601
Key Integrative Processes...................................................................................... 602
Project Management and Quality.......................................................................... 603


A Generic Seven-Step Approach to Project Management.................................... 603
Phase 1. Define the Project .............................................................................. 603
Step 1. Describe the Project ........................................................................ 603
Step 2. Appoint the Planning Team............................................................. 604
Step 3. Define the Work............................................................................... 604
Phase 2. Plan the Project .................................................................................. 604
Step 4. Estimate Tasks ................................................................................. 604
Step 5. Calculate the Schedule and Budgets............................................... 604
Phase 3. Implement the Plan............................................................................ 605
Step 6. Start the Project ............................................................................... 605
Phase 4. Complete the Project.......................................................................... 605
Step 7. Track Progress and Finish the Project ............................................ 605
A Generic Application of Project Management in Implementing Six Sigma
and DFSS............................................................................................................... 605
The Value of Project Management in the Implementation Process ................ 607
Planning the Process .................................................................................... 607
Goal Setting.................................................................................................. 608
PM and Six Sigma/DFSS ................................................................................. 608
Project Justification and Prioritization Techniques ..................................... 610
Benefit-Cost Analysis.............................................................................. 610
Return on Assets (ROA)..................................................................... 610
Return on Investment (ROI)............................................................... 610
Net Present Value (NPV) Method...................................................... 611
Internal Rate of Return (IRR) Method .............................................. 611
Payback Period Method ..................................................................... 612
Project Decision Analysis ....................................................................... 612
Why Project Management Succeeds..................................................................... 613
References.............................................................................................................. 615
Selected Bibliography............................................................................................ 615

Chapter 14

Limited Mathematical Background for Design
for Six Sigma (DFSS) ........................................................................................... 617
Exponential Distribution and Reliability............................................................... 617
Exponential Distribution................................................................................... 617
Probability Density Function and Cumulative Distribution Function ........ 618
Probability Density Function (Decay Time)........................................... 618
Cumulative Distribution Function (Rise Time) ...................................... 618
Reliability Problems..................................................................................... 618
Constant Rate Failure ....................................................................................... 619
Example........................................................................................................ 619
Probability of Reliability .................................................................................. 621
Control Charts................................................................................................... 621
Continuous Time Waveform........................................................................ 621
Discrete Time Samples ................................................................................ 621
Digital Signal Processing........................................................................ 622


Sample Space.................................................................................................... 622
Assigning Probability to Sets ........................................................................... 624
Gamma Distribution .............................................................................................. 625
Gamma Distribution (pdf) ................................................................................ 625
Gamma Function............................................................................................... 626
Properties of Gamma Functions .................................................................. 626
Gamma Distribution and Reliability............................................................ 627
Example 1: Time to Total System Failure....................................................... 627
Gamma Distribution and Reliability............................................................ 628
Reliability Relationships .............................................................................. 632
Reliability Function...................................................................................... 632
Data Failure Distribution .................................................................................. 633
Failure Rate or Density Function ..................................................................... 633
Hazard Rate Function ....................................................................................... 634
Relations between Reliability and Hazard Functions ...................................... 634
Poisson Process................................................................................................. 635
Characteristics of Poisson Process .............................................................. 636
Poisson Distribution..................................................................................... 636
Example........................................................................................................ 639
Weibull Distribution............................................................................................... 640
Three-Parameter Weibull Distribution.............................................................. 643
Taylor Series Expansion........................................................................................ 644
Taylor Series Expansion ................................................................................... 645
Partial Derivatives ........................................................................................ 649
Taylor Series in Two-Dimensions................................................................ 649
Taylor Series of Random Variable (RV) Functions..................................... 650
Variance and Covariance.............................................................................. 650
Functions of Random Variables................................................................... 651
Division of Random Variables..................................................................... 651
Powers of a Random Variable ..................................................................... 652
Exponential of a Random Variable.............................................................. 652
Constant Raised to RV Power ..................................................................... 653
Logarithm of Random Variable ................................................................... 653
Example: Horizontal Beam Deection........................................................ 654
Example: Difference between Two Means.................................................. 655
Miscellaneous ........................................................................................................ 656
Closing Remarks.................................................................................................... 658
Selected Bibliography............................................................................................ 658

Chapter 15

Fundamentals of Finance and Accounting for Champions,
Master Blacks, and Black Belts ............................................................................ 661
The Theory of the Firm......................................................................................... 661
Budgets .................................................................................................................. 662
Our Romance with Growth ................................................................................... 663
The New Industrial State....................................................................................... 663


Behavioral Theory ................................................................................................. 663
Accounting Fundamentals ..................................................................................... 664
Accounting's Role in Business......................................................................... 664
Financial Reports .............................................................................................. 664
The Balance Sheet ....................................................................................... 664
Current Assets and Liabilities................................................................. 665
Fixed Assets............................................................................................. 665
Other Slow Assets ................................................................................... 666
Current Liabilities ................................................................................... 666
Working Capital Format.......................................................................... 666
Noncurrent Assets ................................................................................... 667
Noncurrent Liabilities ............................................................................. 667
Shareholders' Equity ............................................................................... 667
The Income Statement ............................................................................ 667
Gross Profit.............................................................................................. 668
A Gaggle of Profits ................................................................................. 668
Earnings per Share .................................................................................. 669
The Statement of Changes ...................................................................... 669
Sources of Funds or Cash....................................................................... 669
Use of Funds ........................................................................................... 670
Changes in Working Capital Items ......................................................... 670
The Footnotes.......................................................................................... 670
Accountant's Report..................................................................................... 671
How to Look at an Annual Report .............................................................. 671
Recording Business Transactions ..................................................................... 672
Debits and Credits........................................................................................ 673
Sources and Uses of Cash....................................................................... 673
How Debits and Credits Are Used ......................................................... 673
The Balance Sheet Equations ...................................................................... 673
Classification of Accounts ................................................................................ 674
Recording Transactions................................................................................ 675
The Two Books of Account ......................................................................... 675
The Trial Balance.................................................................................... 676
The Mirror Image.................................................................................... 676
Accrual Basis of Accounting............................................................................ 676
Accrual Basis versus Cash Basis................................................................. 677
Details, Details ............................................................................................. 677
Birth of the Balance Sheet........................................................................... 678
Profits versus Cash....................................................................................... 678
Things Are Measured in Money.................................................................. 678
Values Are Based on Historical Costs......................................................... 678
Understanding Financial Statements ..................................................................... 679
Assets ................................................................................................................ 679
The Inflation Effect........................................................................................... 679
Summary of Valuation Methods....................................................................... 679
Historical Cost.............................................................................................. 679


Liquidation Value ......................................................................................... 679
Investment or Intrinsic Value ....................................................................... 680
Psychic Value ............................................................................................... 680
Current Value or Replacement Cost ............................................................ 680
Assets versus Expenses................................................................................ 680
Types of Assets ................................................................................................. 681
Financial Assets............................................................................................ 681
Physical Assets............................................................................................. 681
Operating Leverage ................................................................................. 682
Determining the Value of Inventory ....................................................... 682
FIFO.................................................................................................... 682
LIFO ................................................................................................... 682
Weighted Average............................................................................... 683
Depreciation ............................................................................................ 683
Useful Life Concept ................................................................................ 683
Depreciation as an Expense ............................................................... 684
Depreciation as a Valuation Reserve.................................................. 684
Depreciation as a Tax Strategy .......................................................... 684
Depreciation as Part of Cash Flow.................................................... 685
Straight Line ....................................................................................... 685
Sum-of-the-Years' Digits (SYD)........................................................ 686
Double Declining Balance (DDB) ..................................................... 686
Unit of Production.............................................................................. 687
Replacement Cost ............................................................................... 687
Advantages of Accelerated Depreciation........................................... 687
Financial Statement Analysis ................................................................................ 688
Ratio Analysis ................................................................................................... 688
Liquidity Ratios............................................................................................ 691
Financial Leverage ....................................................................................... 692
Coverage Ratios ........................................................................................... 692
Earnings........................................................................................................ 692
Earnings Ratios ............................................................................................ 693
Le ROI ..................................................................................................... 693
ROE: Return on Equity...................................................................... 694
ROA: Return on Assets ...................................................................... 694
ROS: Return on Sales ........................................................................ 694
Other Return Ratios............................................................................ 694
Financial Rating Systems ...................................................................................... 695
Bond Rating Companies................................................................................... 695
Moody's et al. .............................................................................. 695
Moody's................................................................................... 696
Standard and Poor's ................................................................ 696
Ratings on Common Stocks ............................................................................. 696
The S&P Rating Method ............................................................................. 697
The Value Line Method ............................................................................... 697
Good Ole Ben Graham................................................................................ 697


Commercial Credit Ratings .............................................................................. 698
Dun & Bradstreet ......................................................................................... 698
Other Systems .............................................................................................. 699
Company and Product Life Cycle......................................................................... 699
Cash Flow......................................................................................................... 700
A Final Thought about Cash Flow................................................................... 701
A Handy Guide to Cost Terms......................................................................... 703
Useful Concepts for Financial Decisions.............................................................. 704
The Modified duPont Formula ......................................................................... 704
Breakeven Analysis........................................................................................... 705
Contribution Margin Analysis .......................................................................... 706
Price-Volume Variance Analysis ...................................................................... 707
Inventory's EOQ Model.................................................................................... 707
Return on Investment Analysis......................................................................... 708
Net Present Value (NPV) ............................................................................. 709
Internal Rate of Return (IRR)...................................................................... 709
Profit Planning ....................................................................................................... 710
The Nature of Sales Forecasting ...................................................................... 710
The Plan's Up Form...................................................................................... 710
Statistical Analysis ....................................................................................... 711
Compound Growth Rates........................................................................ 711
Regression Analysis ................................................................................ 711
Revenues and Costs................................................................................. 711
Departmental Budgets ............................................................................. 711
How to Budget ........................................................................................ 712
Zero-Growth Budgeting .......................................................................... 712
Selected Bibliography............................................................................................ 712

Chapter 16

Closing Thoughts about Design for Six Sigma (DFSS) ............... 715

Appendix

The Four Stages of Quality Function Deployment .......................... 725
Stage 1: Establish Targets...................................................................................... 725
Stage 2: Finalize Design Timetables and Prototype Plans ................................... 725
Stage 3: Establish Conditions of Production ........................................................ 725
Stage 4: Begin Mass Production Startup .............................................................. 726
Tangible Benets ................................................................................................... 726
Intangible Benets................................................................................................. 727
Summary Value...................................................................................................... 727
The QFD Process................................................................................................... 727
Managing the Process............................................................................................ 728

Selected Bibliography .......................................................................................... 731

Index ...................................................................................................................... 737


Introduction
Understanding the Six Sigma Philosophy

Much discussion in recent years has been devoted to the concept of six sigma
quality. The company most often associated with this philosophy is Motorola, Inc.,
whose definition of this principle is stated by Harry (1997, p. 3) as follows:

A product is said to have six sigma quality when it exhibits no more than 3.4 npmo
at the part and process step levels.

Confusion often exists about the relationship between six sigma and this definition
of producing no more than 3.4 nonconformities per million opportunities.
From a typical normal distribution table, one may find that the area underneath the
normal curve beyond six sigma from the average is 1.248 × 10⁻⁹, or .001248 ppm,
which is about 1 part per billion. Considering both tails of the process distribution,
this would be a total of .002 ppm. This process has the potential capability of fitting
two six sigma spreads within the tolerance, or equivalently, having 12σ equal the
tolerance.
However, the 3.4 ppm value corresponds to the area under the curve at a distance
of only 4.5 sigma from the process average. Why this apparent discrepancy? It is
due to the difference between a static and a dynamic process. (The reader is encour-
aged to review Volume I of this series.)
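
For readers who want to verify these tail areas, the one-sided upper-tail probability of the
standard normal distribution can be computed directly. The short Python sketch below is an
illustration added here, not part of the original text; it uses only the standard library. Its
value for Z = 6 (about 0.001 ppm, roughly one part per billion) differs slightly from the
0.001248 ppm quoted above, which appears to come from a particular published table, while its
value for Z = 4.5 matches the familiar 3.4 ppm.

import math

def upper_tail_ppm(z):
    """One-sided area under the standard normal curve beyond z, expressed in ppm."""
    return 0.5 * math.erfc(z / math.sqrt(2)) * 1e6

# Static, centered process: the tail beyond 6 sigma on one side is about 0.001 ppm,
# roughly one part per billion; both tails together give about 0.002 ppm.
print(round(upper_tail_ppm(6.0), 4))        # ~0.001 ppm
print(round(2 * upper_tail_ppm(6.0), 4))    # ~0.002 ppm

# Dynamic process shifted 1.5 sigma: only 4.5 sigma remains to the nearest limit,
# which corresponds to the often-quoted 3.4 ppm.
print(round(upper_tail_ppm(4.5), 1))        # ~3.4 ppm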

A STATIC VERSUS A DYNAMIC PROCESS

If a process is static, meaning the process average remains centered at the middle
of the tolerance, then approximately .002 ppm will be produced. But under the six
sigma concept, the process is considered to be dynamic, implying that over time,
the process average will move both higher and lower because of many small changes
in material, operators, environmental factors, tools, etc. Most small shifts in the
process average will go undetected by the control chart. For an n of 4, there is only
a 50 percent chance that a 1.5-sigma shift in the process average is detected by the
next subgroup after this change. By the time this next subgroup is collected, the
average may have returned to its original position. Thus, this process change will
never be noticed on the chart, which means that no corrective action will be
implemented. However, this movement has caused the actual long-term process
variation to increase somewhat because between-subgroup variation is greater than
within-subgroup variation. Note that estimates of short-term process variation are
not impacted because they are determined only from within-subgroup variation.


Based on studies analyzing the effect of these changes on process variation
(Bender, 1962, 1968; Evans, 1970, 1974, 1975a and b; Gilson, 1951), the six sigma
principle acknowledges the likelihood of undetected shifts in the process average of
up to 1.5 sigma. Because shifts in the average greater than 1.5 sigma are expected
to be caught, and the standard deviation is assumed not to change, the worst case
for the production of nonconforming parts happens when the process average has
shifted either the full 1.5 sigma above the middle of the tolerance or the full 1.5
sigma below it. For this worst case, there would be only 4.5 sigma (6 sigma minus
1.5 sigma) remaining between the process average and the nearest specification limit.
This reduced Z value of 4.5 for the dynamic model corresponds to 3.4 ppm.
When this size of shift occurs, the Z value for the other specification limit becomes
7.5, which means essentially 0 ppm are outside this limit. Because the process
average can shift in only one direction at a time, the maximum number of noncon-
forming parts produced is 3.4 ppm. Notice that most of the time the average should
be closer to the middle of the tolerance, resulting in far fewer than 3.4 ppm actually
being manufactured.
To achieve a goal of 3.4 ppm, the process average must be no closer than 4.5
sigma to a specification limit. Assuming the average could drift by as much as 1.5
sigma, potential capability must be at least 6σ (4.5σ plus 1.5σ) to compensate
for shifts in the process average of up to 1.5σ, yet still be able to produce the desired
quality level. The required 4.5σ plus this added buffer of 1.5σ create the 6σ
requirement, and thereby generate the label "six sigma." (Here it must be noted that
the 1.5σ shift is allegedly an empirical value for the electronics industry. In the
automotive industry, for years the shift has been identified as only 1 sigma, a shift
from a Ppk of 1.33 to a Cpk of 1.67, i.e., from 4 sigma to 5 sigma. The point is that
every industry should identify its own shift and use it accordingly. It is unfortunate
that the 1.5σ shift has become the default value for everything. For a detailed expla-
nation of the difference between Ppk and Cpk, the reader is encouraged to review
Volumes I and IV of this series.)
To counter the effect of shifts in the process average, a buffer of 1.5 standard
deviations can be added to other capability goals as well. If no more than 32 ppm
are desired outside either specification, the goal would be to have 4.0σ fit between
the process average and each specification limit, assuming no change in the process
average. This target equates to a Cp of 1.33 (4.0/3). Under the static model, this
potential capability goal translates into 32 ppm outside each specification when the
average is centered at M. But with the inevitable 1.5σ drifts in the average occurring
with the dynamic process model, the average could move as close as 2.5σ (4.0σ
minus 1.5σ) to a specification limit before triggering any type of corrective action.
This change in centering would cause as many as 6210 ppm to be produced, quite
a bit more than the desired maximum of 32 ppm.
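
The centered and shifted ppm figures quoted above can be reproduced with the same tail-area
calculation. The Python sketch below is an added illustration (the helper names are mine, not
from the text); it treats both the half tolerance and the shift in units of the short-term
standard deviation and sums the nonconforming fractions beyond each specification limit.

import math

def tail_ppm(z):
    # Area beyond z under the standard normal curve, expressed in ppm.
    return 0.5 * math.erfc(z / math.sqrt(2)) * 1e6

def total_ppm(half_tolerance_sigmas, shift_sigmas=0.0):
    # Nonconforming ppm outside both limits when the average sits
    # shift_sigmas away from the middle of the tolerance.
    near = half_tolerance_sigmas - shift_sigmas   # Z to the nearer limit
    far = half_tolerance_sigmas + shift_sigmas    # Z to the farther limit
    return tail_ppm(near) + tail_ppm(far)

print(round(total_ppm(4.0), 1))       # Cp = 1.33, centered: ~63 ppm total (~32 ppm per side)
print(round(total_ppm(4.0, 1.5), 0))  # Cp = 1.33, shifted 1.5 sigma: ~6210 ppm
print(round(total_ppm(6.0), 4))       # Cp = 2.00, centered: ~0.002 ppm
print(round(total_ppm(6.0, 1.5), 1))  # Cp = 2.00, shifted 1.5 sigma: ~3.4 ppm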

PRODUCTS WITH MULTIPLE CHARACTERISTICS

Extremely low ppm levels are imperative for producing high quality products pos-
sessing many characteristics (or components). Table I.1 compares the probability of
manufacturing a product with all characteristics inside their respective specifications
when each is produced with 4 sigma (Cp = 1.33) versus 6 sigma (Cp = 2.00)


capability. The processes producing the features are assumed to be dynamic, with
up to a 1.5-sigma shift in average possible.
Suppose a product has only one feature, which is produced on a process having
4 sigma potential capability. We can then calculate that a maximum of .6210 percent
of these parts will be non-conforming under the dynamic model. Conversely, at least
99.3790 percent will be conforming, as is listed in the first line of Table I.1. If this
single characteristic is instead produced on a process with 6 sigma potential capability,
at most .00034 percent of the finished product will be out of specification,
with at least 99.99966 percent within specification.
If a product has two characteristics, the probability that both are within specification
(assuming independence) is .993790 times .993790, or 98.7618 percent when
each is produced on a 4 sigma process. If they are produced on a 6 sigma process,
this probability increases to 99.99932 percent (.9999966 times .9999966). The
remainder of the table is computed in a similar manner.
When each characteristic is produced with 4 sigma capability (and assuming
a maximum drift of 1.5 sigma), a product with 10 characteristics will average about
939 conforming parts out of every 1000 made, with the 61 nonconforming ones
having at least one characteristic out of specification. If all characteristics are man-
ufactured with 6 sigma capability, it would be very unlikely to see even one
nonconforming part out of these 1000.
For a product having 50 characteristics, 268 out of 1000 parts will have at least
one nonconforming characteristic when each is produced with 4 sigma capability.
If these 50 characteristics were manufactured with 6 sigma capability, it would still
be improbable to see one nonconforming part. In fact, with 6 sigma capability, a
product must have 150 characteristics before you would expect to find even one
nonconforming part out of 1000. Contrast this to the 4 sigma capability level, where
60.7 percent of these parts would be rejected, and the rationale for adopting the six
sigma philosophy becomes quite evident.

TABLE I.1
Probability of a Completely Conforming Product (Percent)
(With 1.5σ Shift)

Number of          Cp = 1.33     Cp = 2.00
Characteristics    (4σ)          (6σ)

   1               99.3790       99.99966
   2               98.7618       99.99932
   5               96.9333       99.9983
  10               93.9607       99.9966
  25               85.5787       99.9915
  50               73.2371       99.9830
 100               53.6367       99.9660
 150               39.2820       99.9490
 250               21.0696       99.9150
 500                4.4393       99.8301
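
Each entry in Table I.1 is simply the per-characteristic conforming probability raised to the
power of the number of characteristics. The short Python sketch below, added here as an
illustration (it is not part of the original text), regenerates the table from the worst-case
yields of 99.3790 percent (4 sigma with a 1.5σ shift) and 99.99966 percent (6 sigma with a
1.5σ shift); small differences in the last digit are rounding effects.

# Worst-case per-characteristic yields under a 1.5-sigma shift in the process average
yield_4_sigma = 1 - 6210 / 1e6   # Cp = 1.33 -> 0.993790 conforming
yield_6_sigma = 1 - 3.4 / 1e6    # Cp = 2.00 -> 0.9999966 conforming

print(f"{'n':>4}  {'Cp = 1.33 (4 sigma)':>20}  {'Cp = 2.00 (6 sigma)':>20}")
for n in (1, 2, 5, 10, 25, 50, 100, 150, 250, 500):
    p4 = 100 * yield_4_sigma ** n   # percent of products with all n characteristics conforming
    p6 = 100 * yield_6_sigma ** n
    print(f"{n:>4}  {p4:>20.4f}  {p6:>20.5f}")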


SHORT- AND LONG-TERM SIX SIGMA CAPABILITY

The six sigma approach also differentiates between short- and long-term process
variation. Just as in the past, the short-term standard deviation has been estimated
from within-subgroup variation, usually from R̄ (the average subgroup range), and
the long-term standard deviation incorporates both the short-term variation and any
additional variation in the process introduced by the small, undetected shifts in the
process average that occur over time. Although no exact relationship between these
two types of variation applies to every kind of process, the six sigma philosophy
ties them together with this general equation (Harry and Lawson, 1992, pp. 6-8):

σLT = c σST

As c is affected by shifts in the process average, it is related to the k factor,
which quantifies how far the process average is from the middle of the tolerance, M:

c = 1/(1 - k)     where k = |X̄ - M| / [(USL - LSL)/2]

If a process has a Cp of 2.00 and is centered at the middle of the tolerance, then
there is a distance of 6σST from the average to the USL. When the process average
shifts up by 1.5σST, it has moved off target by 25 percent of one-half the tolerance
(1.5/6.0 = .25). For this k factor of .25, c is calculated as 1.33.

c = 1/(1 - .25) = 1/.75 = 1.33

The long-term standard deviation for this process would then be estimated from
σST, as:

σLT = c σST = 1.33 σST

The value 1.33 is quite commonly adopted as the relationship between short-
and long-term process variation (Koons, 1992). This factor implies that long-term
variation is approximately 33 percent greater than short-term variation. Other authors
are more conservative and assume a c factor between 1.40 and 1.60, which translates
to a k factor ranging from .286 to .375 (Harry and Lawson, 1992, pp. 6-12, 76).
For a c factor of 1.50, k is .333.

1.50 = 1/(1 - k)
1 - k = 1/1.50
k = 1 - (1/1.50) = .333


This assumption expects up to a 33.3 percent shift in the process average. With
six sigma capability, there is 6σST from M to the specification limit, a distance that
equals one-half the tolerance. A k factor of .333 represents a maximum shift in the
process average of 2.0σST, a number derived by multiplying one-half the tolerance,
or 6σST, by .333.
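
The k and c arithmetic above fits in a few lines of code. The Python sketch below is an added
illustration (the function names are hypothetical, not from the text); it derives k from an
assumed shift, converts it to c = 1/(1 - k), and scales the short-term standard deviation to
its long-term estimate.

def k_factor(shift_sigmas, half_tolerance_sigmas):
    # k = (shift of the average from M) / (half the tolerance), both in sigma_ST units
    return shift_sigmas / half_tolerance_sigmas

def c_factor(k):
    # c = 1 / (1 - k)
    return 1.0 / (1.0 - k)

# Six sigma process (6 sigma_ST from M to each limit) with a 1.5 sigma_ST shift:
k = k_factor(1.5, 6.0)          # 0.25
c = c_factor(k)                 # about 1.33
sigma_st = 1.0                  # short-term standard deviation (any units)
sigma_lt = c * sigma_st         # long-term estimate: about 1.33 sigma_ST
print(k, round(c, 2), round(sigma_lt, 2))

# Conversely, a more conservative c factor of 1.50 implies k = 1 - 1/1.50 = 0.333,
# i.e., a shift of 0.333 * 6 sigma_ST = 2.0 sigma_ST.
c_conservative = 1.50
k_conservative = 1 - 1 / c_conservative
print(round(k_conservative, 3), round(k_conservative * 6.0, 1))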

DESIGN FOR SIX SIGMA AND
THE SIX SIGMA PHILOSOPHY

The six sigma philosophy is becoming more and more popular in the quality field,
especially with companies in the electronics industry (de Treville et al., 1995).
Organizations striving to attain the quality levels required with the six sigma system
usually adopt the following three recommended strategies for accomplishing this
goal (Tomas (1991) offers a six-step approach). Improving an existing process to
the six sigma level of quality would be very difficult, if not impossible. That is why
Fan (1990) insists this type of thinking must already be incorporated into the original
design of new products and the processes that will manufacture them if there is to
be any chance of achieving six sigma quality.
The three recommended strategies are as follows:

DESIGN PHASE

1. Design in 6σ tolerances for all critical product and process parameters.
   For additional information on this topic, read Six Sigma Mechanical
   Design Tolerancing by Harry and Stewart (1988).
2. Develop designs robust to unexpected changes in both manufacturing and
   customer environments (see Harry and Lawson, 1992).
3. Minimize part count and number of processing steps.
4. Standardize parts and processes.
Knowing the process capability of current manufacturing operations will greatly
aid designers in accomplishing this first step. And of course, good designs will
positively influence the capability of future processes.
Once a new product is released for production, the designed-in quality levels
must be maintained, and even improved upon, by working to reduce (or eliminate)
both assignable and common causes of process variation. McFadden (1993) lists
several additional key components of a six sigma quality program specifically tar-
geted at manufacturing.

INTERNAL MANUFACTURING

1. Standardize manufacturing practices.
2. Audit the manufacturing system. Pena (1990) provides a detailed audit
checklist for this purpose.


3. Use SPC to control, identify, and eliminate causes of variation in the
   manufacturing process. Mader et al. (1993) have written a book entitled
   Process Control Methods to help with this step. The reader may also
   review Volume IV of this series.
4. Measure process capability and compare to goals. Koons' (1992) and
   Bothe's (1997) books on capability indices are useful here.
5. Consider the effects of random sampling variation on all six sigma esti-
   mates and apply the proper confidence bounds. The reference by Tavorm-
   ina and Buckley (1992) would be helpful here.
6. Kelly and Seymour (1993), Bothe (1993), and Delott and Gupta (1990)
reveal how the application of statistical techniques helped achieve six
sigma quality levels for copper plating ceramic substrates. Harry (1994)
provides several examples of applying design of experiments to improve
quality in the electronics industry.
A special warning here is appropriate. Even if the first two strategies are adopted,
a company will never achieve six sigma quality unless it has the full cooperation
and participation of all its suppliers.

EXTERNAL MANUFACTURING

1. Qualify suppliers.
2. Minimize the number of suppliers.
3. Develop long-term partnerships with remaining suppliers.
4. Require documented process control plans.
5. Insist on continuous process improvement.
Craig (1993) shows how Dupont Connector Systems utilized this set of strategies
to introduce new products into the data processing and telecommunications indus-
tries. Noguera (1992) discusses how the six sigma doctrine applies to chip connection
technology in electronics manufacturing, while Fontenot et al. (1994) explain how
these six sigma principles pertain to improving customer service. Daskalantonakis
et al. (1990-1991) describe how software measurement technology can identify areas
of improvement and help track progress toward attaining six sigma quality in soft-
ware development.
As all these authors conclude, the rewards for achieving the six sigma quality
goals are shorter cycle times, shorter lead times, reduced costs, higher yields,
improved product reliability, increased profitability, and most important of all, highly
satised customers.
We have reviewed the principles of six sigma here to make sure the reader
understands the ramifications of poor quality and the significance of implementing
the six sigma philosophy. In Volume I of this series, we discussed this philosophy
in much more detail. However, it is imperative to summarize some of the inherent
advantages, as follows:


1. As quality improves to the six sigma level, profits will follow, with a
   margin of about 8% higher prices.
2. The difference between a six sigma company and a non-six sigma com-
   pany is that the six sigma company is three times more profitable. Most
   of that profitability is through elimination of variability waste.
3. Companies with improved quality gain market share continuously at the
expense of companies that do not improve.
The focus of all these great results is on manufacturing. However, most of
the cost reduction is not in manufacturing. We know from many studies and the
experience of management consultants that about 80% of quality problems are
actually designed into the product without any conscious attempt to do so. We also
know that about 70% of a product's cost is determined by its design.
Yet, most of the hoopla about six sigma in the last several years has been
about the DMAIC model. To be sure, in the absence of anything else, the DMAIC
model is great. But it still focuses on after-the-fact problems, issues, and concerns.
As we keep on fixing problems, we continually generate problems to be fixed. That
is why Stamatis (2000) and Tavormina and Buckley (1994) and the first volume of
this series proclaimed that six sigma is not any different from any other tool already
in the tool box of the practitioner. We still believe that, but with a major caveat.
The benefit of the six sigma philosophy and its application is in the design phase
of the product or service. It is unconscionable to think that in this day and age there
are organizations that allow their people to chase their tails and give accolades to
so many for fixing problems. Never mind that the problems they are fixing are
repeatable problems. It is an abomination to think that the more we talk about quality,
the more it seems that we regress. We believe that a certification program will do
its magic when, in fact, nothing will lead to real improvement unless we focus on
the design.
This volume is dedicated to Design for Six Sigma, and we are going to talk
about some of the most essential tools for improvement in real terms. Specifically,
we are going to focus on resource efficiency, robust designs, and production of
products and services that are directly correlated with customer needs, wants, and
expectations.

REFERENCES

Bender, A., Bendarizing Tolerances: A Simple Practical Probability Method of Handling
Tolerances for Limit-Stack-Ups, Graphic Science, Dec. 1962, pp. 17-21.
Bender, A., Statistical Tolerancing as It Relates to Quality Control and the Designer, SAE
Paper No. 680490, Society of Automotive Engineers, Southfield, MI, 1968.
Bothe, D.R., Reducing Process Variation, International Quality Institute, Inc., Sacramento,
CA, 1993.
Bothe, D.R., Measuring Process Capability, McGraw-Hill, New York, 1997.
Craig, R.J., Six-Sigma Quality, the Key to Customer Satisfaction, 47th ASQC Annual Quality
Congress Transactions, Boston, 1993, pp. 206-212.
Daskalantonakis, M.K., Yacobellis, R.H., and Basili, V.R., A method for assessing software
measurement technology, Quality Engineering, 3, 27-40, 1990-1991.
Delott, C. and Gupta, P., Characterization of copper plating process for ceramic substrates,
Quality Engineering, 2, 269-284, 1990.
de Treville, S., Edelson, N.M., and Watson, R., Getting six sigma back to basics, Quality
Digest, 15, 42-47, 1995.
Evans, D.H., Statistical tolerancing formulation, Journal of Quality Technology, 2, 188-195,
1970.
Evans, D.H., Statistical tolerancing: the state of the art, Part I: Background, Journal of Quality
Technology, 6, 188-195, 1974.
Evans, D.H., Statistical tolerancing: the state of the art, Part II: Methods for estimating
moments, Journal of Quality Technology, 7, 1-12, 1975 (a).
Evans, D.H., Statistical tolerancing: the state of the art, Part III: Shifts and drifts, Journal of
Quality Technology, 7, 72-76, 1975 (b).
Fan, J.Y., Achieving Six Sigma in Design, 44th ASQC Annual Quality Congress Transactions,
San Francisco, May 1990, pp. 851-856.
Fontenot, G., Behara, R., and Gresham, A., Six sigma in customer satisfaction, Quality
Progress, 27, 73-76, 1994.
Gilson, J., A New Approach to Engineering Tolerances, Machinery Publishing Co., London,
1951.
Harry, M., The Nature of Six Sigma Quality, Motorola University Press, Schaumburg, IL, 1997.
Harry, M. and Stewart, R., Six Sigma Mechanical Design Tolerancing, Motorola University
Press, Schaumburg, IL, 1988.
Harry, M., The Vision of Six Sigma: Case Studies and Applications, 2nd ed., Sigma Publishing
Co., Phoenix, 1994.
Harry, M. and Lawson, J.R., Six Sigma Producibility Analysis and Process Characterization,
Addison-Wesley Publishing Co., Reading, MA, 1992.
Kelly, H.W. and Seymour, L.A., Data Display, Addison-Wesley Publishing Co., Reading,
MA, 1993.
Koons, J., Indices of Capability: Classical and Six Sigma Tools, Addison-Wesley Publishing
Co., Reading, MA, 1992.
Mader, D.P., Seymour, L.A., Brauer, D.C., and Gallemore, M.L., Process Control Methods,
Addison-Wesley Publishing Co., Reading, MA, 1993.
McFadden, F.R., Six-sigma quality programs, Quality Progress, 26, 37-42, 1993.
Noguera, J., Implementing Six Sigma for Interconnect Technology, 46th ASQC Annual
Quality Congress Transactions, Nashville, TN, May 1992, pp. 538-544.
Pena, E., Motorola's secret to total quality control, Quality Progress, 23, 43-45, 1990.
Stamatis, D.H., Six sigma: point/counterpoint: who needs six sigma anyway, Quality Digest,
33-38, May 2000.
Tadikamalla, P.R., The confusion over six-sigma quality, Quality Progress, 21, 83-85, 1994.
Tavormina, J.J. and Buckley, S., SPC and six-sigma, Quality, 31, 47, 1992.
Tomas, S., Motorola's Six Steps to Six Sigma, 34th International Conference Proceedings,
APICS, Seattle, WA, 1991, pp. 166-169.

1  Prerequisites to Design for Six Sigma (DFSS)

So far in this series we have presented an overview of the six sigma methodology (DMAIC) and some of the tools and specific methodologies for addressing problems in manufacturing. Although this is a commendable endeavor for anyone to pursue, as mentioned in Volume I of this series it is not an efficient way to use resources to pursue improvement. The reason for this is the same as the reason you do not apply an atomic bomb to demolish a two-story building. It can be done, but it is a very expensive way to go.

As we proposed in Volume I, if an organization really means business and wants quality improvement to go beyond six sigma constraints, it must focus on the design phase of its products or services. It is the design that produces results. It is the design that allows the organization to have flexibility. It is the design that convinces the customer of the existence of quality in a product. Of course, in order for this design to be appropriate and applicable for customer use, it must be perceived by the customer as functional, not by the organization's definition but by the customer's personal perceived understanding and application of that product or service.

Design for Six Sigma (DFSS) is an approach in which engineers interpret and design the functionality of the customer need, want, and expectation into requirements that are based on a win-win proposition between customer and organization. Why is this important? It is important because only through improved quality and perceived value will the customer be satisfied. In turn, only if the customer is satisfied will the competitive advantage of a given organization increase.

There are four prerequisites to DFSS and beyond. The first is the recognition that improvement must be a collaboration between organization and supplier (partnering). The second is the recognition that true DFSS and beyond will only be achieved if in a given organization there are real teams and those teams are really robust. The third prerequisite is that improvement on such a large scale can only be achieved by recognizing that systems engineering must be in place. Its function has to be to make sure that the customer's needs, wants, and expectations are cascaded all the way to the component level. The fourth prerequisite is the implementation of at least a rudimentary system of Advanced Quality Planning (AQP).

In this chapter we will address each of these prerequisites in a cursory format. (Here we must note that these prerequisites have also been called the "recognize" phase of the six sigma methodology.) In the follow-up chapters, we will discuss specific tools that we need in the pursuit of DFSS and beyond.

PARTNERING

Partnering and cooperation must be our watchwords. In any industry, better communication up and down the supply chain is mandatory. In the past, and in a few instances even today, U.S. companies have bought almost solely on the basis of price through competitive bidding. We need to change our attitude. Price is important, but it is not the only consideration. Partnering with both customers and suppliers is just as important.

The Japanese have created a competitive edge through vertical integration. We can learn from them by establishing virtual vertical integration through partnering with customers and suppliers. Just as in a marriage, we need to give more than we get and believe that it will all work out better in the end. We need to give preferential treatment to local suppliers. We should take a long-term view, understanding their need for profitability and looking beyond this year's buy.
To begin our thinking in that direction we must change our current paradigm. The first paradigm shift must be in the following definitions:

1. Vendors must be viewed as suppliers.
2. Procurement must be viewed as business strategy.

These are small changes indeed, but they mean totally different things. For example: supplier implies working together in a win-win situation, while vendor implies a one-time benefit, usually price. Procurement implies price orientation based on bidding of some sort, while business strategy takes into account the concern(s) of the entire organization. We all know that price alone is not the sole reason we buy. If we do buy on the basis of price alone, we pay the consequences later on.
So, what is partnering? Partnering is a business culture that fosters open communication and mutually beneficial relationships in a supportive environment built on trust. Partnering relationships stimulate continuous quality improvement and a reduction in the total cost of ownership.

Partnering starts with:

1. An attitude and behavioral change at the top of the organization
2. Recognition of long-term mutual dependencies, internal and external to the organization
3. A commitment to this change being understood and valued at all levels within the organization
At the core or basic level, partnering:

1. Fosters excellence throughout the organization
2. Encourages open communication in a beneficial, supportive, and non-adversarial environment of mutual trust and respect
3. Carries this positive environment outward from the organization to its customers and suppliers
At an expanded level, partnering involves:

1. Teaming
2. Sharing resources
3. Melding of customer and supplier
4. Eliminating the we/they approach to conducting business
By the same token, partnering is not:

1. A negotiation or purchasing tool to be used as leverage against the supplier
2. A business guarantee

However, in all cases, partnering promotes:

1. Customer satisfaction
2. Mutual profitability
3. Improved product, service, and operational quality
4. A desire for and a commitment to excellence through continuous improvements in communication skills, quality, delivery, administration, and service performance
5. The factors that contribute to customer satisfaction and the lowest total cost of ownership
6. A situation in which each partner enhances its own competitive position through the knowledge and resources shared by the other

THE PRINCIPLES OF PARTNERING

Effective partnering has its foundation in the basic principles of economics, mar-
keting, business, humanities, and sociology. The customer develops a set of business
and technical desires, needs, requirements, and expectations in a competitive global
market. The supplier most closely meeting those business and technical needs will
be successful.
The supplier asks the customer what is wanted rather than telling the customer
what is available. The customer recognizes and understands the supplier's business
and technical requirements, allowing the supplier to be a viable and successful source
to the industry.

All transactions are honorable and fair. The parties are not trying
to take advantage of each other.

Functioning interchangeably each day as customer and supplier, internally within
the organization and externally with customers and suppliers, every person in a
strong supply chain recognizes mutual dependencies. All transactions must be mutually beneficial, with each person encouraging open communication and operating
with integrity, mutual trust, cooperation, and respect.

VIEW OF BUYER/SUPPLIER RELATIONSHIP: A PARADIGM SHIFT

Partnering involves an expanded view of the buyer/supplier relationship, as shown
here:


Traditional                                    Expanded
Lowest price                                   Total cost of ownership
Specification-driven                           End customer-driven
Short-term, reacts to market                   Long-term
Trouble avoidance                              Opportunity maximization
Purchasing's responsibility                    Cross-functional teams and top management involvement
Tactical                                       Strategic
Little sharing of information on both sides    Both supplier and buyer share short- and long-term plans
                                               Share risk and opportunity
                                               Standardization
                                               Joint venture
                                               Share data

How can this partnership develop? There are prerequisites. Some are listed here.
The prerequisites for basic partnering include:
1. Mutual respect
2. Honesty
3. Trust
4. Open and frequent communication
5. Understanding of each others needs
Additional prerequisites for expanded partnering include:
6. Long-term commitment
7. Recognition of continuing improvement, both objective and factual
8. Passion to help each other succeed
9. High priority on relationship
10. Shared risk and opportunity
11. Shared strategies/technology road maps
12. Management commitment

CHARACTERISTICS OF EXPANDED PARTNERING

Expanded partnering promotes dedication, desire, and commitment to product and
service excellence through improvements in technology, skills, quality, delivery,
administration, responsiveness, and total cost of ownership. All these are imperative
requirements for DFSS. In other words, expanded partnering:
1. Builds on basic partnering
2. Is a long-term relationship process
3. Provides focus on mutual strategic and tactical goals
4. Includes customer/supplier team support to promote mutual success and profitability.
Of course, there are different levels of partnering just as there are different levels
of results. For example:


Results                 Partnering Focus         Stage
Sale only               Short term               1
Loyalty/trust           Product                  2
Secured volumes         Product and service      3
Mutual improvements     Process or system        4
Mutual breakthrough     Continual improvement    5


Why is partnering so important in DFSS even though it may mean different things to different people? It is because the purpose or goal of most customers who advocate partnerships is to reduce the time to get a new product to market by eliminating the bid cycle and to extend the customer's capability without adding personnel.
Partnering is joining together to accomplish an objective that can best be met
by two individuals or corporations rather than one. For a partnership to work well,
it requires that both partners understand the objective, each partner complements
the other in skills necessary to meet the objective, and each recognizes the value of
the other in the relationship. A true partnership occurs when both partners make a
conscious decision to enter into a unique relationship. As the partnership develops,
trust and respect build to a degree that both share the joy and rewards of success
and, when things do not go so well, both work hard together to resolve the issues
to mutual satisfaction.
In a customer/supplier partnership, the customer must define the objective (or the scope of the project) and identify the needs. The supplier must have the capability to meet the customer's needs and become an extension of the customer's resources. To be more specific, the customer must be able to quantify and share the desired needs in terms of the quantity of services required, the timeline or critical path desired, and targeted costs, including up-front engineering as well as unit cost and capital investment. The supplier must determine whether it can commit the resources required to meet those needs and whether it is capable of reaching the targets. A mutual commitment must be made early in the program, and it must be for the life of the program.
In a more practical sense, the customer in a customer/supplier partnership must be the leader and be in a position to guide the partners to the objective, no different than a project leader or a team leader of a program that is 100 percent internal to the customer. The leader also must monitor the progress in terms of cost and time with input from the supplier. Our experience would indicate that longer projects should be broken into phases so that there are milestones that are mutually agreed to in advance by the partners and that mark the points at which the supplier is paid for its services.
For a partnership to work well, customer/supplier communications must be open and frequent. With the availability of CAD, e-mail, Internet, Web sites, fax, and voice mail, there should be no reason not to communicate within minutes of recognition of an issue critical to the program, but there is also a need for regular meetings at predetermined intervals at either the customer's or supplier's location (probably with some meetings at each location to expose both partners to as many of the team players as possible).

I am sure there is more to be said as to why partnership and DFSS work in tandem and why both strive for mutual benefits, but I hope these thoughts gave some idea of the significance that both have for each other.

EVALUATING SUPPLIERS AND SELECTING SUPPLIER PARTNERS

There are many schemes to evaluate suppliers, and each of them has advantages and disadvantages. We believe, however, that each organization should take the time to generate its own criteria in at least two dimensions. The first should be the supplier's situation and the second the purchaser's situation. Within each category, levels of satisfaction may be assessed as total dissatisfaction, partial satisfaction, or total satisfaction, or numerical values may be used. The higher the number, the more qualified the supplier is. This may be done with either a questionnaire or a matrix. In either case, this task should be performed by a team of people from various functional areas, such as purchasing, engineering, finance, quality, and legal. The important point is to evaluate key suppliers for a fit with your company's needs.
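
By way of illustration only, the brief Python sketch below shows how such a numerical evaluation matrix might be tallied. The criteria names, the weights, and the 1-3 rating scale are assumptions added here for the example; they are not prescribed by this text, and each organization should substitute its own two dimensions and criteria.

# Hypothetical sketch of a numerical supplier evaluation matrix.
# Criteria and weights are examples only; substitute your own.

CRITERIA = {
    "suppliers_situation": {           # dimension 1: the supplier's situation
        "quality_system": 0.30,
        "delivery_performance": 0.20,
        "technical_capability": 0.25,
        "financial_stability": 0.25,
    },
    "purchasers_situation": {          # dimension 2: the purchaser's situation
        "strategic_fit": 0.40,
        "total_cost_of_ownership": 0.35,
        "risk_exposure": 0.25,
    },
}

def score_supplier(ratings: dict) -> dict:
    """Weighted score per dimension from 1-3 ratings
    (1 = total dissatisfaction, 2 = partial, 3 = total satisfaction)."""
    return {
        dimension: sum(weights[c] * ratings[dimension][c] for c in weights)
        for dimension, weights in CRITERIA.items()
    }

if __name__ == "__main__":
    example = {
        "suppliers_situation": {
            "quality_system": 3, "delivery_performance": 2,
            "technical_capability": 3, "financial_stability": 2,
        },
        "purchasers_situation": {
            "strategic_fit": 3, "total_cost_of_ownership": 2, "risk_exposure": 2,
        },
    }
    print(score_supplier(example))

A higher total in each dimension simply indicates a better fit, consistent with the scoring idea described above; the cross-functional team, not the arithmetic, makes the selection decision.
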

IMPLEMENTING PARTNERING

There are five steps to partnering. They are:

1. Establish Top Management Enrollment
(Role of Top Management Leadership)

The senior management, in the role of an executive customer partner or executive
supplier partner (champion):
1. Serves in a long-term assignment for each expanded partnering relationship
2. Is available to support prompt issue resolution
3. Establishes strong counterpart relationships with key customers and suppliers
4. Provides for and supports decision-making authority at the lowest prac-
tical levels
5. Provides partnering progress updates for executive management review
6. Encourages and supports prompt responsiveness to communications
affecting customer/supplier relationships
7. Maintains a rapid management approval cycle, providing an ombudsman
when required
8. Commits adequate time to the partnering process
9. Ensures that cohesive internal, cross-functional teams are in place to
support the partnering process

2. Establish Internal Organization

There are several options in this phase. However, the most common are:

Option 1: Supplier Partnering Manager

A staff supplier partnering manager is appointed to a full-time position (for a minimum of two years). This manager will be responsible for:

1. Working with purchasing/commodity team management
2. Instilling the partnering principles into the company culture
3. Implementing the partnering process with company management and
suppliers
4. Reviewing progress during customer/supplier review sessions
5. Working the issues specic to the partnering process

Option 2: Supplier Council/Team

A supplier partnering council or team is established within the organizational and
operational structure that owns the resources required to support the partnering
process. The functions are the same as for the supplier partnering manager but are
assigned to several individuals.
Typically, the council or team is made up of purchasing, quality, product engineering, and manufacturing management with additional resources available from finance, law, training, and other departments as required.

Option 3: Commodity Management Organization

A line organization consisting of a commodity manager and staff is created to
manage the commodity and the partnering activities described in Option 1. Support
is received from the operational groups as required.

3. Establish Supplier Involvement

Effective partnering involvement is of paramount importance. This involvement may be encouraged and helped to grow by having open communication. Communication may be conducted in a variety of forums or as scheduled periodic meetings (see Table 1.1).

4. Establish Responsibility for Implementation

Identify roles and responsibilities of the partnering process manager:
1. Serve as customer representative.
2. Serve as supplier advocate. (Avoid conflict of interest.)
3. Focus participants on long-term success.
4. Accelerate and route communications (good news, bad news).
5. Perform meeting planning (with supplier) and facilitation function.
Perhaps one of the most important functions in this step is to establish credibility with each other as well as confidentiality requirements. The process of this exchange must be truthful and full of integrity. Some characteristics of this exchange are:

1. Each party provides the other with the information needed to be successful.
2. The supplier needs to know the customer's requirements and expectations in order to meet them on a long-term basis.

TABLE 1.1
Customer/Supplier Expanded Partnering Interface Meetings

Meeting: Internal Preparation Meeting
  Participants: Customer team(a); Supplier team(a)
  Meeting Topics: Partner meeting; Meeting purpose; Objectives; Issues; Participant responsibilities

Meeting: Kick-off Meeting
  Participants: Customer team; Supplier team; Executive partners (if appointed)
  Meeting Topics: Introduce program; Obtain mutual agreement and commitment; Identify teams; Introduce/suggest executive partners; Present/discuss customer objectives; Supplier objectives; Proposed objectives; Business objectives; Definition of responsibilities; Expectations

Meeting: Monthly Team Meeting
  Participants: Purchasing; Technical; Quality/reliability; (Other team members)
  Meeting Topics: Establish/update mutual key results, goals, objectives, action plans; Discuss issues; Review performance; Review/discuss on-time deliveries; Required actions of both parties; Quality indicators; Quality action plan; Business issues

Meeting: Quarterly/Semiannual Management Meeting
  Participants: Purchasing; Technical; Quality/reliability; (Other team members); Executive partners(b)
  Meeting Topics: Major issues; Performance review; "Health check": objectives, expectations, actual performance; Technology trends; Business trends; Program direction

Meeting: Annual Management Review
  Participants: Purchasing; Technical; Quality/reliability; (Other team members); Executive partners
  Meeting Topics: At supplier location and tour; Maintain key contacts; Major performance review

(a) Team includes personnel from Purchasing, Quality, Material Control, Engineering. When needed, also can include personnel from Sales, Safety, Manufacturing, Process Area Management, Planning, Training, Legal, Risk Management, Finance, Project Management.
(b) Optional as part of quarterly and semiannual meetings.

To be successful in this exchange requires time. The reason for this is that
building trust is a function of time. The longer you work with someone the more
you get to know that person. To expedite the process of gaining trust, suppliers and
customers may want to share in:
1. Non-disclosure agreements
2. Quality improvement process
3. Technology development roadmaps
4. Specification development
5. Should-cost/Total-cost model
6. Forecasts/Frozen schedules
7. Executive partners
8. Job rotation with suppliers
Be aware of, adhere to, and respect the sensitive/confidential nature of proprietary information, both yours and your partner's. Always remember: recognize the differences in company cultures. Find ways to do things without imposing your value system.
Compromise...
Find the common ground...
Work out the differences...
Move forward
Negotiate...
COOPERATE!

5. Reevaluate the Partnering Process

People cannot improve unless they know where they are. Evaluation of the partnering process is a way to benchmark the progress of the relationship and to set priorities for future improvement. Questionnaires with five-point rating criteria provide a means for this evaluation in which both customers and suppliers take an active role. A typical questionnaire may look like Table 1.2.

Sometimes the questionnaires provide detailed definitions of certain words or criteria that are being used in the instrument. The following is a brief supplement to explain/define the rating categories and some of the terms used in Table 1.2:

Ratings

1. Does not meet: Failing to satisfy requirements, unacceptable performance
2. Marginally meets: Performance is not fully acceptable, needs improvement
3. Meets: Fulfills basic requirements, satisfactory
4. Exceeds: Surpasses normal requirements
5. Superior: Consistently excels above and beyond expectations, world-class performance

TABLE 1.2
A Typical Questionnaire

Please select one of the following ratings for each question:
Ratings:
(1) Does not meet (2) Marginally meets (3) Meets (4) Exceeds (5) Superior
1. Rate the relationships impact in focusing both parties on strategic and tactical goals to foster mutual
success.
Strategic 1 2 3 4 5
Tactical 1 2 3 4 5
Comments:
2. Have all established communication channels within Intel, from executive sponsor down, enabled the
partners to improve their effectiveness/competitiveness as a company?
Technical Issues 1 2 3 4 5
Business Issues 1 2 3 4 5
Comments:
3. Rate the effectiveness of the team structure.
Management Team 1 2 3 4 5
Working Team 1 2 3 4 5
Performance Reviews 1 2 3 4 5
(Both Parties)
Follow-Up on Action Items 1 2 3 4 5
Comments:
4. Rate the effectiveness of the Key Supplier Program team in generating high quality solutions.
Time of Solutions 1 2 3 4 5
Quality of Solutions 1 2 3 4 5
Cost-Effective Solutions 1 2 3 4 5
Comments:
5. Does the Executive Partner provide meaningful support?
Customer 1 2 3 4 5
Supplier 1 2 3 4 5
Comments:
6. Is the Key Supplier Program process formally managed in an effective manner?
Customer Resource Commitment 1 2 3 4 5
Supplier Resource Commitment 1 2 3 4 5
Formal Communication Tools 1 2 3 4 5
Information Sharing 1 2 3 4 5
Total Cost Focus 1 2 3 4 5
Dealing with The Best 1 2 3 4 5
Comments:

Terms Used in Specific Questions

Question 1
Strategic Goals: Long-range objectives (i.e., next-generation technology)
Tactical Goals: Operational, day-to-day problem solving, etc.
Question 3
Management Team: Executive sponsors plus upper/middle managers
Working Team: Commodity/product teams, task forces, user groups
Performance Reviews: Grading joint MBOs, other indicators (e.g., quality, customer satisfaction survey)
Question 4
Time of Solution: Meets or exceeds time requirements/expectations
Quality of Solution: Meets or exceeds quality requirements/expectations
Cost-Effective Solution: Improves total cost effectiveness/fosters mutual profitability
Question 5
Meaningful Support: Active participation and involvement during and between business meetings
Question 6
Resource Commitment: Adequate support (people, tools, space...) to allow successful results
Formal Communication Tools: Meetings, reports, MBOs, technology exchange; correct topics, timely, worthwhile
Information Sharing: Plans, technology, data; useful, timely, fosters profitability
Total Cost Focus: Model in place and used to support decisions to apply resources
Dealing with The Best: Process contributes to world-class performance
Another general questionnaire evaluating the partnering process is shown in
Table 1.3.

MAJOR ISSUES WITH SUPPLIER PARTNERING RELATIONSHIPS

In any relationship that one may think of, issues and concerns exist. Partnering is
no different. Some of the areas that might be of general concern include the follow-
ing:
1. Issues or concerns within the customer's company
2. Issues or concerns within the supplier's company
3. Issues or concerns of a competitive nature
4. Issues or concerns of a political or legal nature
5. Issues or concerns of a technological nature
6. Other

Issues or concerns of a specific nature may develop when any of the following situations exist:
1. Support on either side is insufficient.
2. Something has caused one party to consider abandoning the partnering
relationship.
3. A better deal or innovation threatens the partnering relationship.
4. Unequal benefits or conflicting incentives exist.
5. There are forced requirements under the guise of a partnering relationship
and fear on the part of the supplier to decline or dissent, particularly if
the supplier is small.
6. Key players change or there is a change of ownership.

HOW CAN WE IMPROVE?

A fundamental question that needs to be answered from a customer's perspective is "How can we improve?" The answer is by establishing a process that recognizes the strategic importance of key relationships. Once this process is identified, it needs recognition, and the more the better. How do we do that? We can do it by:
1. Establishing upper management involvement
2. Sharing information: technology exchanges
3. Showing suppliers how to use the data
4. Educating suppliers in tools and methodologies
We can benefit from creating a mentoring attitude toward our suppliers. Traditionally we say, "Do this because we need it." Start saying (and thinking), "Do this because it will make you a stronger company, and that will in turn make us a stronger company." Become a mentor in the "Partnering for Total Quality" assessment process with your suppliers.

TABLE 1.3
A General Questionnaire

Evaluate the following categories based on a rating of 1 to 5, with 1 being low and 5 being excellent. (Yet another variation of the criteria may be 1 = Much improvement needed, 5 = Little or no improvement needed.)

Executive commitment to the process
Recognition of mutual dependencies
Mutually defined and shared expectations/objectives
Executive partners/sponsors
Quick issue resolution (break down roadblocks)
Understanding and sharing of risks
Sharing of technical roadmaps/competitive analysis/business plans
Openness, honesty, respect
Formal and frequent communication/feedback process
Access to data
Establish clear definition of responsibility (project leadership)

Clearly define expectations by:
1. Mutually developing short- and long-term objectives for each relationship
2. Increasing the concentration on areas for mutual success; reducing the
concentration on terms and conditions
3. Making decisions based on total cost; increasing the involvement and
awareness of suppliers in this process
In the final analysis, in order for a successful partnership to flourish, both partners, customer and supplier, must recognize that change is imminent, at least in the following areas:

1. Organization itself
2. Internal, interfunctional communication
3. Customer orientation
4. World-class definition
5. Skills development
Are there indicators of a successful partnering process? We believe that there
are. Typical indicators are the existence of:
1. Formal communication processes
2. Commitment to the supplier's success
3. Stable relationships, not dependent on a few personalities
4. Consistent and specific feedback on supplier performance
5. Realistic expectations
6. Employee accountability for ethical business conduct
7. Meaningful information sharing
8. Guidance to supplier in defining improvement efforts
9. Non-adversarial negotiations and decisions based on total cost of owner-
ship
10. Employees empowered to do the right thing

BASIC PARTNERING CHECKLIST

The basic partnering principles below may be applied to any customer/supplier
relationship, regardless of size of company and number of employees. The principles
also apply to relationships within the organization. The investment is primarily an
attitude and behavioral change to bring about six sigma quality and beyond.

1. Leadership

Our management:

1. Is personally committed to the principles of the partnering process
2. Has directed organization-wide commitment, adoption, and execution of
the partnering principles and philosophy
3. Is committed to generating accurate forecasts to improve delivery schedule
stability with our suppliers
4. Ensures that the partnering principles flourish even in stressful times
5. Seeks mutually profitable arrangements with our suppliers
6. Is involved in high-level review of the partnering process.

2. Information and Analysis

Our organization:
1. Has standardized measurements and performance for products, processes,
service, and administration
2. Respects the protection of intellectual property
3. Treats information gained in open exchanges with respect and confidentiality
4. Provides consistent and specific feedback on supplier performance

3. Strategic Quality Planning

Our organization:
1. Avoids short-term solutions at the expense of long-term viability
2. Places more emphasis on overall needs and mutual expectations, less on
legal or formal aspects of the relationship
3. Uses reasonable and realistic expectations and milestones with our cus-
tomers and suppliers
4. Demonstrates a commitment to continuous improvement in all facets of
our business

4. Human Resource Development and Management

Our organization:
1. Promotes employee accountability for ethical business conduct through
performance reviews, holding supervisors accountable for promoting such
practices
2. Helps employees understand their roles as customer and supplier internal
and external to the organization
3. Trains employees on business practices that are ethical, open, professional,
and of high integrity
4. Provides position descriptions with a clear definition of responsibility
5. Supports decision-making authority at the lowest practical level


5. Management of Process Quality

Our organization:
1. Shares basic evaluation criteria with our customers and suppliers
2. Has methods for ensuring quality of components, processes, administration, service, and final product
3. Checks periodically with our customers to verify that our quality meets
their expectations

6. Quality and Operational Results

Our organization:
1. Shares meaningful information and data with our customers and suppliers,
with frequent and timely feedback on problems as well as successes
2. Provides guidance to suppliers in defining improvement efforts that address all problems

7. Customer Focus and Satisfaction

Our organization:
1. Recognizes mutual dependencies with our customers and the need to work
together; understands that partnering does not end with the signing of the
purchase order.
2. Engages in win/win, non-adversarial negotiations and purchasing deci-
sions based on total cost of ownership
3. Provides prompt disclosure to customers of any inability of the organiza-
tion to meet current or future requirements; makes realistic commitments
to customers

EXPANDED PARTNERING CHECKLIST

In addition to the basic partnering principles, expanded partnering recognizes the
need for mutual support based on such factors as cost, risk, criticalness, and actual
performance. The investment involves an application of resources from both the
customer and the supplier. Customer resource availability limits the number of
expanded partnering relationships in which any organization can be simultaneously
engaged.

1. Leadership

Our senior management, in the role of an executive customer partner or executive
supplier partner (champion):

1. Serves in a long-term assignment for each expanded partner relationship
2. Is available to support prompt issue resolution
3. Establishes strong counterpart relationships with our key customers and suppliers
4. Provides for and supports decision-making authority at the lowest prac-
tical levels
5. Encourages and supports prompt responsiveness to communications
affecting customer/supplier relationships
6. Maintains a rapid management approval cycle, providing an ombudsman
when required
7. Commits adequate time to the partnering process
8. Ensures that cohesive, internal, cross-functional teams are in place to
support the partnering process

2. Information and Analysis

Our organization, with our suppliers:
1. Uses positive encouragement and support to improve performance and
total cost of ownership
2. Participates in joint information-sharing activities to develop value anal-
ysis models
3. Shares technical roadmaps, competitive analyses, and plans
4. Focuses on clearly defined, complete, achievable requirements, with less emphasis on contractual terms and conditions
5. Ensures that suppliers understand our long-term procurement strategy

3. Strategic Quality Planning

Our organization:
1. Shares short- and long-term improvement plans and priorities with sup-
pliers and customers
2. Works with customers and suppliers to understand their quality needs and
plans for continuous improvements

4. Human Resource Development and Management

Our company management:
1. Has established technical advisory boards to support supplier activities
2. Communicates regularly with customer and supplier management to
understand mutual needs and possible areas for cooperation
3. Encourages employees to submit suggestions for continuous quality
improvements
4. Offers the same quality training to supplier personnel as we provide to
our own employees


5. Management of Process Quality

Our organization works with customers and suppliers to:
1. Share mutual joint performance measures that are written, measured, and
tracked
2. Work toward standardization of quality and certification programs
3. Develop and implement valid quality assurance systems for products,
processes, service, and administration

6. Quality and Operational Results

Our organization works with customers and suppliers to:
1. Develop joint quality and yield improvement processes
2. Provide access to process data for tool and material development and refinement

7. Customer Focus and Satisfaction

Our organization works with customers to:
1. Mutually define expectations, understand mutual requirements, and share risks
2. Ensure that partnering survives lapses in missed generation orders
3. Establish formal, frequent communications as part of the management
process

THE ROBUST TEAM:
A QUALITY ENGINEERING APPROACH

In general, the traditional approach to evaluating the performance of groups in process has been twofold. The first has been to use a developmental model that provides a summary of the different phases or stages in the life cycle of a group. A popular example of this approach is the "forming, storming, norming, performing" model of group development. Each phase corresponds to a stage in the group life cycle; review Volume I, Part II of this series.

The second model has emphasized structural patterns of a group or team. These may be construed in terms of gender, experience, length of service, or positional roles (leader, secretary, or assistant, for example). Using the structural approach, the team can also be analyzed in terms of process; the "peacemaker," the "aggressor," the "blocker," or the "help-seeker," for example, or "Resource Investigator," "Coordinator," and so on.
Both these models have proven to be useful when trying to describe some
aspects of group dynamics, and it may be possible to identify colleagues who
fulfill some of these roles or identify teams that have passed through these different
stages of development. Unfortunately, such a restricted approach to monitoring team
process does not provide any feedback as to whether the team is producing predictable
results, nor does it identify problems or opportunities for improvement, especially breakthrough opportunities. Specifically, no opportunity exists to determine whether team process is "in control" (capable) or whether the group is "out of control" (chaotic and falling far short of what it could achieve). Some of these issues were addressed in
Volumes I and II of this series, and perhaps the reader may want to review them at this
time.
A further shortcoming in non-systems approaches to team building concerns
team process improvement. As long as the team is operating within acceptable
parameters, no opportunity or drive to improve or maximize the performance of the
team exists. Furthermore, the team usually does not have the ability or training to
self-regulate and, through self-regulation, to begin to change and adapt to the con-
tinual change taking place in the workplace. These and other considerations suggest
that a systems approach to team building may have considerable advantages.
The robust team involves an examination of teams as systems in conjunction
with more detailed parallels between a team systems approach and the model put
forward by Taguchi as part of his quality engineering methodology (see Volume V
of this series). Using this viewpoint, a system is considered as a means by which a user's intention is transformed into a perceived result. Therefore, if teams are considered in terms of how successfully they transfer energy when they function, it should be evident that there will be parallels between their functioning and the functioning of an engineered system, as in the P diagram, for example. After all, in many ways, a team shares similar features to the manufacturing process of a particular product. Specifications are drawn up (objectives, time scales, etc. are established); the production machinery is put in place (team members are selected); the production process is designed and implemented (teams meet, establish norms, set agendas, and engage in problem solving, decision making, and planning activities, etc.); and the system is regulated by performance criteria (by the individual members' expectations, assessments, performance appraisals, etc.).
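
As a rough, hypothetical illustration of this parallel (not part of the original discussion), the elements of a P diagram can be written down as a simple data structure. The Python sketch below maps signal, control factors, noise (variation) factors, and response onto a team; the example entries are assumed values for a typical DFSS team, nothing more.

from dataclasses import dataclass, field
from typing import List

@dataclass
class PDiagram:
    """Minimal sketch of a Taguchi-style P diagram applied to a team system."""
    signal: str                                               # the user's (champion's) intent
    control_factors: List[str] = field(default_factory=list)  # what the team can set
    noise_factors: List[str] = field(default_factory=list)    # internal/external variation
    response: str = ""                                         # the desired output

# Example values are illustrative assumptions only.
team_system = PDiagram(
    signal="Directive from the champion: deliver a robust design proposal",
    control_factors=["meeting agenda", "membership", "meeting place", "ground rules"],
    noise_factors=["changing management demands", "personal biases", "interruptions"],
    response="Design proposal correlated with customer needs, wants, and expectations",
)
print(team_system)
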
In manufacturing, it is important not to separate the performance of the compo-
nent from its interaction with other components and its integration into large sub-
systems of the whole process or product. In teams, it is important not to separate
the performance of the individual from his/her relationships to other team members,
their interactions, and their membership in sub-teams and the team as a whole; rather, it is of paramount importance to view them as a team system.
TEAM SYSTEMS
Many social psychologists only consider a collection of people to be a group if their activities relate to one another in a systematic fashion. However, it is easier to define a group as a collection of individuals. The word "team," however, as mentioned in Volume I, Part II, is reserved for those groups that constitute a system whose parts interrelate and whose members share a common goal. Some groups can easily be viewed according to this criterion. A soccer club, its manager, and its players constitute a set of parts necessary to the functioning of the whole, the common aim being to win soccer games. However, when does a newly established team become a good or effective team? To see the answer to this question, let us examine the team from a systems approach.
Input
A team has an input or signal. The input is the information, energy, resources, etc.,
that enter into the system and are transformed through its structures and processes.
A broad spectrum of inputs into the system can exist and, depending on the per-
spective one chooses to take, the boundaries that are drawn around the system can
be more or less inclusive of these elements.
A system in which the boundary is closely defined will have only the fixed structures and extant processes within it and will have a wide range of inputs, many of which may enter the system simultaneously. A system that has a very broad boundary might include people, materials, resources, and most information as a part of the system, with the input defined very narrowly as a discrete piece of information or energy.
Signal
The signal as developed in the Taguchi model has a more specific and limited definition. It is an input into the system, but it is limited to the means by which the user conveys to the system a deliberate intention to change (or adjust) the system output. In more general terms, it is the variable to which the system must respond in order to fulfill the user's intent. From this perspective, most of what are traditionally considered inputs into the system, i.e., people, materials, information, and so on, are already part of the system itself, and the signal is the discrete piece of information that determines the amount of energy transformed by the system.
The System
The structure of a system comprises aspects of the system that are relatively static
or enduring. Process, on the other hand, refers to the behavior of the system.
Consequently, process refers to those relatively dynamic or transient aspects of a
system that are observable by virtue of change or instability. Traditional models of
a system are based upon an input-process-output model. The system acts to transform
the energy from the input into the output. This process, once established, is subject
to variation due to internal and external factors that produce "error states" or outputs
other than the desired output. These outputs can simply be wasted energy or may
actually reduce the functional ability of the system itself.
If a particular team has a task to perform, e.g., solving a problem, you can
consider the team to be a system that has inputs, output, and a process that allows
the team members to transform their energy into the desired outputs. Team process
can be defined as any activity (for example, meetings) that utilizes resources (the
team) to transform inputs (ideas, skills, and qualities of team members) into outputs
(discoveries, solutions to problems, proposals, actions, design ideas, products, etc.).
Often the energy that the team brings to the process is not used to best effect.
For example, in a meeting, time may be wasted reiterating points because individuals
have not paid attention to what is being discussed or because there is cross talk.
This in turn leaves people annoyed and frustrated. These are examples of "error states" or undesirable outputs from the team process.
Output/Response
In traditional systems models, the output is whatever the system transforms, produces, or expresses into the environment as a consequence of the impact its structures and processes have on the input. An output can be anything from a newborn baby to well-done barbecued ribs to a presentation to a tax return. This is very important to understand because teams, by their nature, are complex and multifunctional. They cannot and should not be configured to produce one kind of response. Most teams will have a whole range of outputs with accompanying measures that will be used to identify how successful they are and how effective they are in transferring energy. The key is to identify appropriate measures that can be used to monitor the team's progress.
The Environment
It is important in attempting to maximize the performance of a team to identify factors that may have an impact on the performance of the team and its ability to maximize the transfer of its input into desired output and over which the team has little or no control. (Remember, the output of the team will be a new design, however defined, and it is up to the team to make that design wanted in the present environment. This is not a small feat.) These factors are designated as internal or external to the system. It is these factors that cause energy to be wasted and undesirable output ("error states") to occur.
External Variation
In teams, external variation factors may include such things as change in team membership, the environment in which the team is working, changing demands from management, corporate cultural, racial, and gender factors, and so on. In developing a group process, it is important to develop group systems and processes that are robust to these factors. In addition, team goals exert a considerable influence on the behavior of individual members, and goals can vary enormously. They could be output targets that will vary in accordance with the team's task: problem-solving teams puzzling over the root cause of a problem; design teams considering the optimization of a particular system design to achieve robustness; a marketing team attempting to understand the exact details of customer requirements; or sports teams, each of which will have an entirely different set of performance goals depending upon the sport: soccer, football, tennis, golf, and so on.

Any analysis of working teams should take into account the objectives of the team and the situation in which the team performs because both will have a profound effect on the team functioning.
Internal Variation
Internal variation, on the other hand, relates to factors that are in the team system and its members. People may bring predetermined ideas about the "correct" design solution. They may have biases about other team members depending on their race, gender, function, grade, and so on. Certain team members may not get along with other team members and will regularly question, challenge, or contradict the others for no apparent reason. The team may not manage its time well and consequently may find itself chronically short of time at the end of meetings.

Team members may not know how to ask "open" questions that will open up fresh avenues of information. "Closed" questions will result in familiar dead ends or non-productive and previously rejected ideas. Team members may not know how to build on the ideas of other team members and, consequently, good ideas may be regularly lost. If the reader needs help in this area, we recommend a review of Volume I, Part II.
The Boundary
At the simplest level, boundaries can be put around almost anything, thereby defining it as a system. In practice, the identification of the boundary is the key to successful system analysis. The classification of factors (signal, control, and variation) that impact on the system is dependent on the way in which the boundary is defined. For example, by setting the boundary of the system fairly wide, to include the team members, environment, resources, information, and so on, leaving only the directive from the champion or the monthly output target outside, more factors would be considered as control factors and fewer as variation. In this case, the directive from the champion would be the signal factor. The team members, environment (or aspect of it), and so on would be control factors.

External variations would then include disruptions to the team process from sources outside the team boundary. Internal variations would include attitudinal, cultural, and intellectual variations among and between team members and variations in environmental conditions (e.g., temperature). By setting a narrower boundary, many of the factors such as environment and resources would be considered external to the system and therefore would become noise factors rather than control factors. These issues are important because they determine the team's strategy for dealing with variations and establishing a means of becoming robust to them.
CONTROLLING A TEAM PROCESS: CONFORMANCE IN TEAMS
A tale in Hellenic mythology describes the behavior of Procrustes, an innkeeper by the Corinthian peninsula. Procrustes took his clientele, people of definite natural shape and size, and either stretched or truncated their limbs so that they might fit the mattresses he provided. There are many echoes here of the original approach to quality: "We know what you want, we will design it, you will buy it and you will like it." Or the now famous quality euphemism, "We are not sure of what is really quality, but we sure know it, if we ever see it."
Fortunately, this philosophy is being transformed into a customer-driven
approach and the pursuit of Total Quality Excellence through DFSS. It is not entirely
unreasonable, therefore, when it comes to monitoring groups or teams, to identify
an alternative to the current emphasis on fitting the behavior of team members into behavioral roles through a "Procrustean method," that is, by squeezing identity and function into personality models like those of Belbin, Myers-Briggs, Bion, and so
on, through normalization and pressure to conform. Remember, one of the diversity
issues is the fact that everyone is different and we are all much better because of
that difference.
This is particularly the case when old norms are not questioned and challenged
regularly or when personality models are used to avoid genuine personal contact or
in place of a genuine understanding of the uniqueness of others.
STRATEGIES FOR DEALING WITH VARIATION
There are four basic strategies for dealing with variation and its effect on the
performance of a system: ignore the variation, attempt to control or eliminate the
variation, compensate for the variation, or minimize the effect of the variation by
making the system robust to it. Adopting the first of these strategies would mean accepting that teams will never function efficiently, but hoping that they will "do the best they can under the circumstances." As with an engineering system, this strategy would result in a lot of unhappy customers.
Generally, with engineering systems, you are encouraged to adopt strategy four first, reverting only to strategies two and three as a last resort because they are difficult and expensive to implement. While strategy four should also be chosen in the case of the team system wherever possible, you have greater flexibility in many cases to consider the other two options.
Controlling or Eliminating Variation
Procrustes' behavior is an example of controlling inner variation. While this approach to variation might be considered extreme, you may have some scope for selecting team members with the "right" characteristics for effective teamwork as well as the necessary technical expertise.

External variations are perhaps a little easier to deal with. For example, you could ensure that meetings are held away from the shop floor to reduce distractions due to noise (in the audible sense!) or hold them at an off-site location to minimize interruption.
Compensating for Variation
The principal means of compensating for variation is by providing some feedback on its effect on system output. The link between structure and process (the way in which structure determines process and, for your purposes perhaps more importantly, the way that process determines structure) is found in the concept of feedback loops. Feedback loops are so named because they are circular interrelationships that feed information from output back to input.
Information is transmitted within the system and is used to maintain stability,
to bring about structural changes, and to facilitate interaction with other systems.
Even the simplest model of the effective team includes this concept of feedback
loops. By employing information feedback loops, systems may behave in ways that
can be described as goal seeking or purposive.
Negative feedback allows a system to maintain stability as in the case of the
most commonly quoted example, a thermostat. A thermostat is controlled by negative
feedback so that when the temperature increases above a certain level the heating
is switched off, but when the temperature decreases sufficiently the heating is
switched on. The process of maintaining stability is called homeostasis. The
capacity for such control is engineered into some mechanical systems and occurs
naturally in all biological and social systems. Threats to the stability of the system
will be countered in a powerful attempt to maintain homeostasis.
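To make the negative feedback idea concrete, the short Python sketch below simulates a thermostat as an on/off control loop. It is illustrative only; the set point, the switching threshold, and the heating and cooling rates are assumed values, not figures taken from any particular system.

# Illustrative sketch (not from the text): a thermostat as a negative feedback loop.
def simulate_thermostat(set_point=20.0, hysteresis=0.5, steps=60):
    """Simulate on/off (negative feedback) control of room temperature."""
    temperature = 15.0          # starting room temperature, degrees C (assumed)
    heater_on = True
    history = []
    for _ in range(steps):
        # Negative feedback: compare the output (temperature) to the goal (set point)
        if temperature > set_point + hysteresis:
            heater_on = False   # too warm: switch the heating off
        elif temperature < set_point - hysteresis:
            heater_on = True    # too cool: switch the heating on
        # Simple dynamics: heating adds energy, the room always loses some heat
        temperature += (0.8 if heater_on else 0.0) - 0.3
        history.append(round(temperature, 2))
    return history

if __name__ == "__main__":
    print(simulate_thermostat()[-10:])  # the temperature settles near the set point

The last few values hover around the set point, which is the homeostasis the text describes.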
System Feedback
One alternative approach is to monitor those aspects of team behavior that are
observable (i.e., gather the voice of the process). Descriptive Feedback offers a
non-judgmental method of monitoring what happens in working groups. It allows
team members to notice when team process is in control and meeting or exceeding
predetermined expectations or drifting out of control and reducing potential. Descrip-
tive Feedback provides three basic functions:
1. It makes explicit what is happening during team process.
2. It describes those characteristics of team process behavior, relationships,
and feelings that may degrade or go out of control and inhibit the potential
of the team.
3. It determines what, if anything, needs to be changed in order to facilitate
continuous improvement in team process.
Feedback over time enables a team to establish performance-based control limits.
By using these data, specific characteristics or variables relating to team process can
be plotted over time. This will identify patterns that emerge and that can be used to
identify and capture the degree of variability of the team. Some patterns are related
to in control conditions, others to out of control conditions, just as the patterns
of points on a control chart can be used to establish whether a manufacturing process
is in control or out of control.
Based on feedback that describes what people notice and how they feel, the
team is able to regulate its process and identify opportunities for improvement.
Minimizing the Effect of Variation
The Parameter Design approach used in quality engineering (see Volume V of this
series) is concerned with minimizing the effect of variation factors by making the
system robust. This involves identifying control factors (in this case, aspects of
the team process that are within the control of the team) that can be used to
reduce the impact of variation factors without eliminating or controlling the variation
factors themselves. An example of a control factor functioning in this way is the
use of Warm-Up and its consideration of place (layout, heating, lighting, ventila-
tion) so that best use is made of the facility provided and distractions are minimized,
even though the place itself and many of its features cannot be changed.
The key to a successful team lies not only in identifying those parameters that
are critical for the efficient transformation of inputs to the team process into outputs
but also in doing this with minimal loss of energy in error states and maximum
robustness to variation factors in the environment. Different types of teams with
different outputs required of them would have different parameters established for
their most efficient performance. Many of the structures, processes and skills that
could be used as control factors in a team process have been identified in Volume I,
Part II of this series.
Through this process of observation, it is possible to establish control limits in
a wider area of team performance. A number of the factors that have an impact on
team performance can be observed and regulated through feedback, and tolerance
for them can be established depending upon the makeup and objective of the team.
These factors include warming up and down, place, task, maintenance, process
management, team roles, agenda management, communication skills, speaking
guidelines, meeting management, exploratory thinking guidelines, experimental
thinking guidelines, change management, action planning, and team parameters.
The traditional approach to engineering waits until the end of the design process
to address the optimization of a system's performance; in other words, after
parameter values are selected and tolerances determined, often at the extremes of
conditions and often without considering interactions among different components
or subsystems. When the components and subsystems are integrated, and if perfor-
mance does not meet the target value or the customer's requirements, parameter
values are altered. Consequently, though the system may be adjusted to operate
within tolerance, this process does not guarantee that the system is producing its
ideal performance.
Similarly, traditional approaches to building teams have selected team members
according to a number of factors: predetermined skills and knowledge, established
roles for team members, and implemented structured norms. They also have waited
until the end of the process of team design in order to optimize performance. If the
team does not perform within the accepted values of these parameters, then it is
adjusted: team members are changed, roles are redened, norms are more strictly
enforced. This adjustment, however, is made against performance criteria that do not necessarily
optimize the team's performance or add to the motivation or job satisfaction of the
team members.
The shift suggested in Parameter Design in engineering (and that may be applied
to teams as well) is to move from establishing parameter values to identifying those
parameters that are most important for the function of the process and then determining
through experimental design the correct values for those parameters. The key is to
establish the values that use the energy of the system most efciently and that are
resistant to uncontrollable impact from other factors internal or external to the system
itself.
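The following Python sketch illustrates, in miniature, what Parameter Design is after: comparing two candidate settings of a control factor by how much the response spreads when an uncontrollable noise factor varies. The response function, the noise distribution, and the candidate settings are all invented for illustration; they do not come from this text or from any real team study.

# Hypothetical sketch of the Parameter Design idea.
import random
import statistics

def team_output(control_setting, noise):
    # Invented response: output depends on a control factor and on uncontrollable noise
    return 10.0 + 2.0 * control_setting - control_setting * noise + noise

def evaluate_setting(control_setting, trials=500):
    """Mean and spread of the response under random noise (same noise stream for each setting)."""
    random.seed(1)
    responses = [team_output(control_setting, random.gauss(0, 1)) for _ in range(trials)]
    return statistics.mean(responses), statistics.stdev(responses)

for setting in (0.5, 1.0):
    mean, spread = evaluate_setting(setting)
    print(f"setting={setting}: mean={mean:.2f}, std dev={spread:.2f}")
# The setting with the smaller spread is the more robust choice, even if its mean is similar.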
MONITORING TEAM PERFORMANCE
One way of monitoring team performance has already been suggested, namely the
use of Descriptive Feedback. Gathering the voice of the process enables the team
to evaluate its performance and to continuously improve its efficiency and hence its
effectiveness, before completing the task. Preliminary work in using process-control
charting from Statistical Process Control suggests that there is opportunity for
application to group process. This provides a second means of monitoring and
continuously improving the team's performance. Critical control factors, identified
using the Parameter Design approach, could be measured and monitored in this way.
Based upon further refinement, it may be possible to establish control limits, targets,
and tolerances for these factors.
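As a hypothetical illustration of what such charting might look like, the sketch below computes individuals (X-mR) control limits for one observable team-process variable. The data values are invented; only the standard control chart constant (MR-bar divided by 1.128) is conventional SPC practice.

# Hypothetical data: minutes a weekly team meeting runs past its agenda.
meeting_overruns = [4, 6, 3, 7, 5, 4, 8, 2, 5, 6, 4, 3, 7, 5, 6]

mean = sum(meeting_overruns) / len(meeting_overruns)
moving_ranges = [abs(b - a) for a, b in zip(meeting_overruns, meeting_overruns[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Individuals (X-mR) chart: sigma is estimated as MR-bar / 1.128
sigma_hat = mr_bar / 1.128
ucl = mean + 3 * sigma_hat
lcl = max(mean - 3 * sigma_hat, 0)

print(f"center line = {mean:.1f} min, UCL = {ucl:.1f} min, LCL = {lcl:.1f} min")
out_of_control = [x for x in meeting_overruns if x > ucl or x < lcl]
print("points signalling special-cause variation:", out_of_control or "none")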
System Interrelationships
A systems model of processes differs from traditional models in many ways, one
of which is the notion of circular causality. In the non-systems view, every event
has its cause or causes in preceding events and its effects on subsequent events: the
scientist seeks the cause or effect. Using the linear method of causality, ultimate
causes are sought by tracing back through proximate causes. However, many phe-
nomena do not fit the linear model: the relationships between them and the
relationships between the attributes or characteristics of the elements do not
conform to this linear approach to causality.
In engineering systems, a direct cause and effect relationship often exists
between the component of the system and the transformation of the input into an
output. A steering wheel channels the input of the vehicle operator directly into the
output of the system. That is, turning the steering wheel to the right or left actually
turns the wheels of the vehicle to the right or the left. However, it is equally clear
that error states or phenomena are nowhere near as simple or linear in the causal
relationship. Feedback loops and circular causality create very complex interactions.
Similarly, the choice of lubricants may not affect the performance of the system
until months or years later, when early deterioration of a transmission would result
in difficulty shifting gears.
Similarly, in teams, some cause and effect relationships are clearly related in
time and others are not. Interventions by a timekeeper will affect the ability of the
team to stick to its agenda. But other factors have more circular relationships. In a
global problem-solving team, changing seating arrangements from the long-tabled
boardroom style to a circular arrangement will result in more universal eye contact
among team members, which may increase the team's communication. This leads
to enhanced exchange of information, which may lead to a clearer identification of
the problem, which will, in turn, lead to a more targeted search for relevant data,
which will finally lead to a root-cause identification for the problem. Changing the
seating arrangement may enhance finding a root cause more quickly than might have
been the case in boardroom seating, and the cause and effect chain may be quite
intricate.
SYSTEMS ENGINEERING
An emerging basis for unifying and relating the complexities of managerial problems
is the system concept and its methodology. This concept has been applied more to
the analysis of productive systems than to other fields, but it is clear that the value
of the concept in management is pervasive.
The word "system" has become so commonplace in the general literature as
well as in the field that one often wants to scream, for its common use almost
depreciates its value. Yet the word itself is so descriptive of the general interacting
nature of the myriad of elements that enter managerial problems that we can no
longer talk of complex problems without using the term "systems." Indeed, we must
learn to distinguish the general use of the term from its specific use as a mode of
structuring and analyzing problems.
One of the great values of the system concept is that it helps us to take a very
complex situation and lend order and structure to it by using statistics, probability,
and mathematical modeling. A major contribution of the concept is the reduction of
complexity in managerial problems to a block diagram showing the relationship and
interacting effects of the various elements that affect the problem at hand. At its
present state of development and application, the systems concept is most useful in
helping us gain insight into problems. At a second and very powerful level of
contribution, however, systems analysis is gaining prominence as a basis for gener-
ating solutions to problems and evaluating their effects, and for designing alternate
systems.
SYSTEMS DEFINED
We have been using the term "systems" without defining it. Though nearly everyone
may have a general understanding of the term, it may be useful to be more precise.
Webster defines a system as "a regularly interacting or interdependent group of items
forming a unified whole." Thus a system may have many components and objects,
but they are united in the pursuit of some common goal. They are in some sense
unified, organized, or coordinated. The components of a system contribute to the
production of a set of outputs from given inputs that may or may not be optimal or
best with respect to some appropriate measure of effectiveness. Systems are often
complex, although the definition does not specify that they need to be.
It is probably correct to say that some of the most interesting systems for study
are complex and that a change in one variable within the system will affect many
other variables of the system. Thus in productive systems, a change in production
rate may affect inventories, hours worked per week, overtime hours, facility layout,
and so on. Understanding and predicting these complex interactions among variables
is one of our main objectives in this section.
One of the elusive aspects of the systems concept is in the definition of a specific
system. The fact that we can define the system that we wish to consider and draw
boundaries around it is important. We can then look inside the defined system to
see what happens, but it is just as important to see how the system is affected by
its environment.
Thus, invariably, every system can be thought of as a part of an even larger
system. One of the dangers of defining systems that are too narrow in scope is that
we may fail to see broader implications. On the other hand, a broad definition runs
the risk of leaving out important details involved in the functioning of the system.
Obviously, there is a large element of art in the application of systems concepts.
Systems can be open or closed. An open system is one characterized by outputs
that respond to inputs but where the outputs are isolated from and have no influence
on the inputs. An open system is not aware of its own performance. In an open
system, past performance does not control future performance.
A closed system (sometimes called a feedback system), on the other hand, is
influenced by its own behavior. A feedback system has a closed loop structure that
brings results from past action of the system back to control future action. There
are two types of feedback systems: the negative feedback, which seeks a goal and
responds as a consequence of failure to achieve the goal, and the positive feedback,
which generates growth processes wherein action builds a result that generates still
greater action. Unfortunately most of the feedback systems in managerial problems
are of the negative feedback type where the objective is to control a process.
IMPLICATIONS OF THE SYSTEMS CONCEPT FOR THE MANAGER
Managers who put the systems concept to work are rewarded initially by the devel-
opment of a deeper understanding of the systems that they manage. By developing
the structure of the interacting effects of system components and the various feedback
control loops in the system, managers can see better which handles to twist in
order to keep themselves in control. Indeed, with a knowledge of the system struc-
ture, a manager can see how it might be possible to restructure the system in order
to create the most effective feedback control mechanisms.
With the availability of large-scale system models (simulation, statistical, reli-
ability, and mathematical models) a manager is better able to assess the effects of
changes in one division component on another and on the organization as a whole.
Furthermore, the managers of any of the productive operations are better able to see
how their units fit into the whole and to understand the kinds of trade-offs that are
often made by higher level management and that sometimes seemingly affect one
unit adversely.
Perhaps one of the most important contributions of systems thinking is in the
concept of suboptimization. Suboptimization often occurs when one views a problem
narrowly. For example, one can construct mathematical formulas to determine the
minimum cost (optimum) quantity of products or parts to manufacture at one time,
which results in a supposedly optimum inventory level. If one broadens the definition
of the system under study, however, and includes not just the inventory and reorder
subsystem but the production and warehousing subsystems as well, it may turn out
that the inventory-connected costs are a measure of only part of the problem. If the
product exhibits seasonal sales, the costs of changing production levels may be
significant enough to warrant carrying extra inventories to smooth production and
employment. In such a situation, the minimum cost inventory model would be a
suboptimal policy.
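A small numerical sketch may help. The Python example below contrasts the classic economic order quantity, which minimizes inventory-related cost alone, with the lot size that minimizes a broader cost that also charges for production-level changes. All of the cost figures are invented for illustration; only the EOQ formula itself is standard.

# Hypothetical sketch of suboptimization: the narrowly optimal lot size is not
# the optimum once production-change costs are included.
import math

annual_demand = 12_000        # units per year (assumed)
ordering_cost = 50.0          # cost per order/setup (assumed)
holding_cost = 2.0            # cost to hold one unit for a year (assumed)
level_change_cost = 400.0     # extra cost of each production-level change (assumed)

def inventory_cost(lot_size):
    orders_per_year = annual_demand / lot_size
    return orders_per_year * ordering_cost + (lot_size / 2) * holding_cost

def total_system_cost(lot_size):
    # Broader system view: smaller lots force more frequent production-level changes
    orders_per_year = annual_demand / lot_size
    return inventory_cost(lot_size) + orders_per_year * level_change_cost

eoq = math.sqrt(2 * annual_demand * ordering_cost / holding_cost)   # classic EOQ formula
broader_optimum = min(range(100, 5001, 50), key=total_system_cost)

print(f"EOQ (inventory subsystem only): {eoq:.0f} units")
print(f"Lot size minimizing the combined cost: {broader_optimum} units")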
Organizational suboptimization often occurs when the production and distribu-
tion functions of an enterprise are operated as essentially two different businesses.
The factory manager will be faced with minimizing production cost while the
sales/distribution manager will be faced mainly with an inventory management,
shipping, and customer service problem. Each suborganization attempting to opti-
mize separately will likely result in a combined cost somewhat larger than if the
attempt were made to optimize the combined system. The reasons are fairly obvious,
since in minimizing the costs of inventories, the sales function transmits directly to the
factory most of the effects of sales fluctuations instead of absorbing these fluctuations
through buffer inventories. Suboptimization is the result. By coordinating the efforts of
the production and distribution managers, however, it may be possible to achieve some
balance between inventory costs and the costs of production fluctuation.
Another way to view suboptimization is through the "hidden factory," in the
terminology of six sigma. If we take, for example, the issue of safety, let us examine
what is really at stake. No one will deny that the bottom line of all safety programs
is injury prevention, more often called loss control. To appreciate the concept of
loss control, however, we must look at the direct and indirect costs (often called
the hidden costs) associated with an on-the-job injury.
The direct costs are:
1. Medical
2. Compensation
The indirect costs are:
1. Time lost from work by injured
2. Loss in earning power
3. Economic loss to injured's family
4. Lost time by fellow workers
5. Loss of efficiency due to breakup of crew
6. Lost time by supervision
7. Cost of breaking in new worker
8. Damage to tools and equipment
9. Time damaged equipment is out of service
10. Spoiled work
11. Loss of production
12. Spoilage (fire, water, chemical, explosives, and so on)
13. Failure to ll orders
14. Overhead cost (while work was disrupted)
15. Miscellaneous (There are at least 100 other items of cost that appear one
or more times with every incident in which a worker is injured.)
The point here is that with most injuries the focus becomes the direct cost,
thereby dismissing the indirect costs. It has been estimated time and again that the
cost relationship of direct to indirect cost is one to three, yet we continue to ignore
the real problems of injury. An appropriate system design for injury prevention would
minimize if not eliminate the hidden costs. Generally speaking, the system should
include (a) engineering, (b) education, and (c) enforcement considerations. Some
specific considerations should be:
1. Workers will not be injured or killed
2. Property and materials will not be destroyed
3. Production will flow more smoothly
4. You will have more time for the other management duties of your job
DEFINING SYSTEMS ENGINEERING
A simple definition of systems engineering is: a customer/requirements-driven
engineering and management process that transforms the voice of the customer(s)
into a feasible and verified product/process of appropriate configuration, capability,
and cost/price. A system is always greater than the sum of its parts and is no better
than the weakest link. The derivative of that, of course, is that optimizing a part
does not optimize the whole. This was brought out by Mayne et al. (2001), when
they reported that 37% of all the automotive warranty for model year 2000 was in
the interfacing of parts rather than in individual components. The message of Mayne and
coworkers and most of us in the quality field has been and continues to be: inter-
actions determine the performance of the system. We cannot, no matter how hard
we try, fully understand the whole by breaking down and analyzing parts, yet
design is historically done precisely that way.
Systems engineering builds on the fact that the whole is the most important
entity and that integration to meet cost, schedule, and technical performance is
dependent on both technical and management intervention. Ultimately, systems
engineering is a team-based activity. This is very important because as we move
into the future we see that:
1. Quality is becoming more customer dependent rather than definitional
from the provider's point of view. In other words, we must specify what
the product or service must do and how well it must do it, then verify the
design to those requirements.
2. Products/services are becoming more sophisticated (complex).
3. Traditionally, product development has been very serial, with designs thrown
over the imaginary wall to manufacturing, something that today is not
working very well. This has resulted in late changes and ultimately higher
costs. Systems engineering is based on the notion that design may be on a
parallel development process and with a strong consideration for its total life
cycle: manufacture, delivery, maintenance, decommission, and recycling.
For systems engineering to be effective in any organization, that organization
must be committed to integration of several items, including timing of development
and specific delivery(ies) at specific milestones. A generic approach to facilitate this
is the following model, involving the steps of pre-feasibility analysis, requirement
analysis, design synthesis, and verication.
PRE-FEASIBILITY ANALYSIS
Before the actual analysis takes place there is a preliminary trade-off analysis as to
what the customer needs and wants and what the organization is willing to provide.
This is done under the rubric of preliminary feasibility. When the feasibility is
complete, then the actual requirement analysis takes place.
REQUIREMENT ANALYSIS
The requirement analysis involves the following steps:
1. Collect the requirements: the customer's needs, wants, and expectations
are collected at every level.
2. Organize requirements: group the information in such a way that require-
ments are easy to address. Determine if the requirements are complete.
3. Translate into more precise terms: cascade the definitions to precise
terms, honing their definition to the best possible correlation with real world
usage.
4. Develop verification requirements: preliminary verification tests are
discussed and proposed here to make sure that they are in fact doable.
At the end of the requirement analysis the results are moved to the second stage
of the system engineering model, which is design synthesis. However, before the
synthesis actually takes place, another feasibility analysis is completed to find out
whether the organization is capable of designing to the requirements of the customer.
This feasibility analysis takes into consideration the organization's knowledge from
previous or similar designs and incorporates it into the new. The idea of this feasi-
bility analysis is to make sure the designers optimize reusability and carry over parts
and/or complete designs.
DESIGN SYNTHESIS
Design synthesis involves the following steps:
1. Generate alternatives: the more alternatives, the better. The alternatives
are generated with functionality in mind from the customer's perspective
as reflected in the system specifications. Remember that the ultimate
design is indeed a trade-off design.
2. Evaluate alternatives: the generated alternatives are evaluated with
appropriate benchmarking data and integrated into the design based on
the customer's requirements.
3. Generate sub-element requirements: big chunks or sub-elements are
chosen and requirements cascaded to each sub-element. As the cascading
process continues, verification requirements are also developed to test the
overall system integrity as more and more sub-elements are integrated
into the total system.
At the end of the design synthesis a very important analysis takes place. This
analysis tests the integrity of the design against the customer's requirements. If it is
found that the requirements are not addressed (design gap), a redesign or a review
takes place and a fix is issued. If, on the other hand, everything is as planned, the
process moves to the third link of the model: verification.
VERIFICATION
The final stage is verification. It involves the following:
1. Verify that requirements are complete: a review of all requirements
from both design and the customer takes place, with appropriate tests
correlated to real world usage.
2. Verify that the design meets the customer's requirements: CAE tools, labs, rigs,
simulations, and key life testing are some of the verification methodolo-
gies used at this stage. The intent here is to verify that the selected system
and cascaded requirements will meet the customer's requirements and
provide a balanced optimum design from the customer's perspective.
At the end of this stage, if problems are found they (the problems) revert back
to the design; if there are no problems, the design goes to manufacturing, with a
design ready to fulfill the customer's expectations. This final stage in essence tests
the integrity of the design against the actual hardware. In other words, the questions
often heard in verification are: Does the design work? Can you prove it?
The beauty of this model is that it is an iterative model, meaning that the
process, no matter where you are in the model, iterates until a balanced optimum
design is achieved. This is because the goal is to design a customer-friendly design
with compatibility, carryover, reusability, and low complexity requirements com-
pared to other, similar designs. Iterations happen because of human oversights,
poorly defined requirements, or an increase in knowledge.
Another special characteristic of systems engineering is the notion of traceability.
Traceability is reverse cascading and is used throughout the design process to make
sure that the voices of the customer, regulator, and corporate or lower-level design
are heard and accounted for in the overall design. With traceability, extra caution is
given to the trade-off analysis. This is because, by definition, trade-off analysis
accounts for designs with certain priority levels among the needs and wants of the
customer. In a trade-off analysis, we choose among stated design alternatives.
However, a trade-off analysis is also an iterative process, and usually none of
the alternatives is perfect [R(t) = 1 - F(t)]. This is important to remember because
all trade-off analyses assess risk, both external and internal, of the given alternatives
so as to make robust designs.
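As a minimal sketch of the R(t) = 1 - F(t) relationship, the example below assumes (purely for illustration) an exponential time-to-failure distribution with an invented failure rate and evaluates the reliability of one alternative at a few points in time.

# Minimal sketch of R(t) = 1 - F(t) under an assumed exponential failure distribution.
import math

failure_rate = 0.002      # failures per hour (assumed value)

def cdf_failure(t):
    """F(t): probability the design alternative has failed by time t."""
    return 1.0 - math.exp(-failure_rate * t)

def reliability(t):
    """R(t) = 1 - F(t): probability it is still functioning at time t."""
    return 1.0 - cdf_failure(t)

for hours in (100, 500, 1000):
    print(f"R({hours} h) = {reliability(hours):.3f}")
# No alternative reaches R(t) = 1 for all t, which is why trade-off analyses must weigh risk.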
A final word about verification and systems engineering: As we already men-
tioned, the intent of verification is to make sure that the hardware meets the require-
ments of the design. The process for conducting this verification is done
generally in five steps, which are:
1. Plan: Review all requirements and make a preliminary assessment as
to their impact. At the end of this evaluation, take ownership of important
requirements and begin the assessment of specific tests and methods. It
is not unusual at this stage to review the plan again and perhaps combine,
consolidate, or even adjust the plan completely. In this stage we also select
attribute data, monitor the unselected requirements, schedule
preliminary tests, and approve the testing schedule. As you begin to zero
in on specific targets, you may want to take into consideration features
of the proposed design and benchmarking data so that your targets become
of value to the customer. If this plan is rich in information, it is possible
to begin predicting and formulating prototype(s).
2. Execute: In this stage, the engineer in charge will determine which
test(s) to run, when to run them, what the data should look like, and what
to expect. Proper test execution is of importance here.
3. Analyze/revise: Analyze the test results, and see if the design has
changed in any way. Determine whether to redo the test if the design
changed during the test for any reason. At this stage you expect no design
changes, only testing revisions and modifications.
4. Sign-off: This is the most common ending for a verification process. In
this stage final approvals are given, usually several months before pro-
duction begins.
5. Archive: This is a step that most engineers do not do, yet it is a very
important step in the process. The idea of archiving or documenting is to
make sure that key events are appropriately documented for future use. You
may want to document unusual tests, time frames of specific tests, or any
specific requirements that you had the intention of verifying but could not
verify using the planned method. In essence, this phase of verification con-
sists of lessons learned that need to be carried forward to the next design.
The focus of this process is to make sure that the requirements are driving the
process and not the tests, regardless of how sophisticated they are. To be sure, tests
are an integral part of verification, but they are the means, not the end. The intent
of the tests is to verify each requirement, and there is no wrong way as long as the
testing method is linked to real world usage. The reason for doing all this is to:
1. Reduce workload in design verification
2. Improve prototype and testing efficiency by avoiding duplication
3. Improve testing quality, resulting in higher sign-off confidence
4. Improve communication and build stronger relationships between customer
and suppliers
ADVANCED QUALITY PLANNING
Before we address the why of planning, we assume that things do go wrong. But
why do they go wrong? Obviously, many specific answers address this question.
Often the answer falls into one of these four categories:
1. We never have enough time, so things are omitted.
2. We have done this, this way, in order to minimize the effort.
3. We assume that we know what has been requested, so we do not listen
carefully.
4. We assume that because we finish a project, improvement will indeed
follow, so we bypass the improvement steps.
In essence then, the customer appears satisfied, but a product, service, or process
is not improved at all. This is precisely why it is imperative for organizations to
look at quality planning as a totally integrated activity that involves the entire
organization. The organization must expect changes in its operations by employing
cross-functional and multidisciplinary teams to exceed customer desires, not just
meet requirements. A quality plan includes, but is not limited to:
A team to manage the plan
Timing to monitor progress
Procedures to define operating policies
Standards to clarify requirements
Controls to stay on course
Data and feedback to verify and to provide direction
An action plan to initiate change
Advanced quality planning (AQP), then, is a methodology that yields a quality
plan for the creation of a process, product, or service consistent with customer
requirements. It allows for maximum quality in the workplace by planning and
documenting the process of improvement. AQP is the essential discipline that offers
both the customer and the supplier a systematic approach to quality planning, to
defect prevention, and to continual improvement. Some specific uses are:
1. In the auto industry, demand is so high that Chrysler, Ford, and General
Motors have developed a standardized approach to AQP. That standardized
approach is a requirement for the QS-9000 and/or ISO/TS 16949 certifi-
cation. In addition, each company has its own way of measuring success
in the implementation and reporting phase of AQP tasks.
2. Auto suppliers are expected to demonstrate the ability to participate in
early design activities from concept through prototype and on to produc-
tion.
3. Quality planning is initiated as early as possible, well before print release.
4. Planning for quality is needed particularly when a company's management
establishes a policy of prevention as opposed to detection.
5. When you use AQP, you provide for the organization and resources needed
to accomplish the quality improvement task.
6. Early planning prevents waste (scrap, rework, and repair), identifies
required engineering changes, improves timing for new product introduc-
tion, and lowers costs.
7. AQP is used to facilitate communication with all individuals involved in
a program and to ensure that all required steps are completed on time at
acceptable cost and quality levels.
8. AQP is used to provide a structured tool for management that enforces
the inclusion of quality principles in program planning.
WHEN DO WE USE AQP?
We use AQP when we need to meet or exceed expectations in the following situa-
tions:
1. During the development of new processes and products
2. Prior to changes in processes and products
3. When reacting to processes or products with reported quality concerns
4. Before tooling is transferred to new producers or new plants
5. Prior to process or product changes affecting product safety or compliance
to regulations
The supplier, as in the case of certification programs such as ISO 9000, QS-
9000, ISO/TS 16949, and so on, is to maintain evidence of the use of defect prevention
techniques prior to production launch. The defect prevention methods used are to
be implemented as soon as possible in the new product development cycle. It follows
then, that the basic requirements for appropriate and complete AQP are:
1. Team approach
2. Systematic development of products/services and processes
3. Reduction in variation (this must be done, even before the customer
requests improvement of any kind)
4. Development of a control plan
As AQP is continuously used in a given organization, the obvious need for its
implementation becomes stronger and stronger. That need may be demonstrated
through:
1. Minimizing the present level of problems and errors
2. Yielding a methodology that integrates customer and supplier develop-
ment activities as well as concerns
3. Exceeding present reliability/durability levels to surpass the competition's
and customers' expectations
4. Reinforcing the integration of quality tools with the latest management
techniques for total improvement
5. Exceeding the limits set for cycle time and delivery time
6. Developing new and improving existing methods of communicating the
results of quality processes for a positive impact throughout the organi-
zation
WHAT IS THE DIFFERENCE BETWEEN AQP AND APQP?
AQP is the generic methodology for all quality planning activities in all industries.
APQP is AQP; however, it emphasizes the product orientation of quality. APQP is
used specically in the automotive industry. In this book, both terms are used
interchangeably.
HOW DO WE MAKE AQP WORK?
There are no guarantees for making AQP work. However, three basic characteristics
are essential and must be adhered to for AQP to work. They are:
1. Activities must be measured based on who, what, where, and when.
2. Activities must be tracked based on shared information (how and why),
as well as work schedules and objectives.
3. Activities must be focused on the goal of quality-cost-delivery, using
information and consensus to improve quality.
As long as our focus is on the triad of quality-cost-delivery, AQP can produce
positive results. After all, we all need to reduce cost while we increase quality and
reduce lead time. That is the focus of an AQP program, and the more we understand
it, the more likely we are to have a workable plan.
ARE THERE PITFALLS IN PLANNING?
Just like everything else, planning has pitfalls. However, if one considers the alter-
natives, there is no doubt that planning will win out by far. To be sure, perhaps one
of the greatest pitfalls in planning is the lack of support by management and a hostile
climate for its practice. So, the question is not really whether any pitfalls exist, but
why such support is quite often withheld and why such climates arise in organizations
that claim to be quality oriented.
Some specific pitfalls in any planning environment may have to do with com-
mitment, time allocation, objective interpretations, tendency toward conservatism,
and an obsession with control. All these elements breed a climate of conformity and
inflexibility that favors incremental changes for the short term but ignores the
inexibility that favors incremental changes for the short term but ignores the
potential of large changes in the long run. Of these, the most misunderstood element
is commitment.
The assumption is that with the support of management, all will be well. This
assumption is based in the axiom of F. Taylor at the turn of the 20th century, which
is that "there is one best way." Planning is assumed to generate the one best way, not
only to formulate, but to implement, a particular idea, product, and so on. Sometimes,
this notion is not correct. In today's agile world, we must be prepared to evaluate
several alternatives of equal value. (See the section on systems engineering.)
As a consequence, the issue is not simply whether management is committed to
planning. It is also, as Mintzberg (1994) has observed, (1) whether planning is com-
mitted to management, (2) whether commitment to planning engenders commitment
to the process of strategy making, to the strategies that result from that process, and
ultimately to the taking of effective actions by the organization, and (3) whether the
very nature of planning actually fosters managerial commitment to itself.
Another pitfall, of equal importance, is the cultural attitude of fighting fires.
In most organizations, we reward problem solvers rather than planners. As a conse-
quence, in most organizations the emphasis is on low-risk fire fighting, when in
fact it should be on planning a course of action that will be realistic, productive,
and effective. Planning may be tedious in the early stages of conceptual design, but
it is certainly less expensive and much more effective than corrective action in the
implementation stage of any product or service development.
DO WE REALLY NEED ANOTHER QUALITATIVE TOOL TO GAUGE QUALITY?
While quantitative methods are excellent ways to address the who, what, when,
and where, qualitative study focuses on the why. It is in this why that the
focus of advanced quality planning contributes the most results, especially in the
exploratory feasibility phase of our projects.
So, the answer to the question is a categorical yes because the aim of qualitative
study is to understand rather than to measure. It is used to increase knowledge,
clarify issues, define problems, formulate hypotheses, and generate ideas. Using
qualitative methodology in advanced quality planning endeavors will indeed lead to
a more holistic, empathetic customer portrait than can be achieved through quanti-
tative study, which in turn can lead to enlightened engineering and production
decisions as well as advertising campaigns.
HOW DO WE USE THE QUALITATIVE METHODOLOGY IN AN AQP SETTING?
Since this volume focuses on the applicability of tools rather than on the details of
the tools, the methodology is summarized in seven steps:
1. Begin with the end in mind. This may be obvious; however, it is how most
goals are achieved. This is the stage where the experimenter determines
how the study results will be implemented. What courses of action can
the customer take and how will they be influenced by the study results?
Clearly understanding the goal defines the study problem and report
structure. To ensure implementation, determine what the report should
look like and what it should contain.
2. Determine what is important. All resources are limited and therefore we
cannot do everything. However, we can do the most important things. We
must learn to use the Pareto principle (the vital few as opposed to the
trivial many). To identify what is important, we have many methods,
including asking about advantages and disadvantages, benets desired,
likes and dislikes, importance ratings, preference regression, key driver
analysis, conjoint and discrete choice analysis, force field analysis, value
analysis, and many others. The focus of these approaches is to improve
performance in areas in which a competitor is ahead or in areas where
your organization is determined to hold the lead in a particular product
or service.
3. Use segmentation strategies. Not everyone wants the same thing. Learn
to segment markets for specific products or services that deliver value to
your customer. By segmenting based on wants, engineering and product
development can develop action-oriented recommendations for specific
markets and therefore contribute to customer satisfaction.
4. Use action standards. To be successful, standards must be used, but with
diagnostics. Standards must be defined at the outset. They are always
considered as the minimum requirements. Then when the results come
in, there will be an identified action to be taken, even if it is to do nothing.
List the possible results and the corresponding actions that could be taken
for each. Diagnostics, on the other hand, provide the "what if" questions
that one considers in pursuing the standards. Usually, they provide alter-
natives through a set of questions specific to the standard. If you cannot
list actions, you have not designed an actionable study. Better to design it
again.
5. Develop optimals. Everyone wants to be the best. The problem with this
statement is that there is only room for one best. All other choices are
second best. When an organization focuses on being the best in everything,
that organization is asking for failure. No one can be the best in everything
and sustain it. What we can do is focus on the optimal combination of
choices. By doing so, we usually have a usable recommendation based
on a course of action that is reasonable and within the constraints of the
organization.
6. Give grasp-at-a-glance results. The focus of any study is to turn people
into numbers (wants into requirements), numbers into a story (require-
ments into specications), and that story into action (specications into
products or services). But the story must be easy to understand. The results
must be clear and well-organized so that they and their implications can
be grasped at a glance.
7. Recommend clearly. Once you have a basis for an action, recommend that
action clearly. You do not want a doctor to order tests and then hand you
the laboratory report. You want to be told what is wrong and how to fix
it. From an advanced quality planning perspective, we want the same.
That is, we want to know where the bottlenecks are, what kind of problems
we will encounter, and how we will overcome them for a successful
delivery.
APQP INITIATIVE AND RELATIONSHIP TO DFSS
The APQP initiative in any organization is important in that it demonstrates our
continuing effort to achieve the goal of becoming a quality leader in the given
industry. Inherent in the structure of APQP are the following underlying value-added
goals:
1. Reinforces the company's focus on continuous improvement in quality,
cost, and delivery
2. Provides the ability to look at an entirely new program as a single unit
Preparing for every step in the creation
Identifying where the greatest amount of effort must be centered
Creating a new product with efficiency and quality
3. Provides a better method for balancing the targets for quality, cost, and
timing
4. Allows for deployment of the targets using detailed practical deliverables
with specic timing schedule requirements
5. Provides a tool for program management to follow up all program plan-
ning processes.
The APQP initiative explicitly focuses on basic engineering activities to avoid
concerns rather than focusing on the results in the product throughout all phases.
Based on the fact that the deliverables are clearly defined between departments
(supplier/customer relationships), program concerns and issues can be solved effi-
ciently.
The APQP initiative also is forceful in viewing the review process at the end of
the cycle as unacceptable. Rather, the review must be done at the end of each planning
step. This provides a critical step-by-step review of how the organizations are
following best possible practices. Also, the APQP initiative has a serious impact on
stabilizing the program timing and content. Stabilization results in cost improvement
opportunities including reduction of special sample test trials. Understanding the
program requirements for each APQP element from the beginning provides the
following advantages:
Clarifies the program content
Controls the sourcing decision dates
Identifies customer-related significant/critical characteristics
Evaluates and avoids risks to quality, cost, and timing
Clarifies product specifications for all organizations using a common
control plan concept
Application of APQP in the DFSS process provides a company with the oppor-
tunity to achieve the following benefits:
1. It provides a value-added tool allowing program management to track and
follow up on all the program planning processes, focusing on engineer-
ing method and quality results.
2. It provides a critical review of how each organization is following best
possible practices by focusing on each planning step.
3. It identifies the complete program content upon program initiation, view-
ing all elements of the process as a whole (AIAG 1995; Stamatis 1998).
Once program content has been clarified, the following information can be
discerned:
1. Sourcing decision dates are identified.
2. Customer-related significant/critical characteristics are specified.
3. Quality, cost, and timing risks are evaluated and avoided.
4. Product specifications are established for all organizations using a com-
mon control plan concept.
Using the APQP process to stabilize program timing and content, the opportu-
nities for cost improvement are dramatically increased. When we are aware of the
timing and concerns that may occur during the course of a program, it provides us
the opportunity to reduce costs in the following areas:
1. Product changes during the program development phase
2. Engineering tests
3. Special samples
4. Number of verification units to be built (prototypes, first preproduction
units, and so on)
5. Number of concerns identified and reduced
6. Fixture and tooling modification costs
7. Fixture and tooling trials
8. Number of meetings for concern resolution
9. Overtime
10. Program development time and deliverables (an essential aspect of both
APQP and DFSS)
For a very detailed discussion of APQP see Stamatis (1998).
REFERENCES
Automotive Industry Action Group (AIAG), Advanced Product Quality Planning and Control
Plan. Chrysler Co., Ford Motor Co., and General Motors. Distributed by AIAG,
Southfield, MI, 1995.
Mayne, E. et al., Quality Crunch, Ward's AutoWorld, July 2001, pp. 14–18.
Mintzberg, H., The Rise and Fall of Strategic Planning, Free Press, New York, 1994.
Stamatis, D.H., Advanced Quality Planning: A Commonsense Guide to AQP and APQP,
Quality Resources, New York, 1998.
2 Customer Understanding
In Volume I of this series, we made a point to discuss the difference between
customer satisfaction and loyalty. We said that they are not the same and that
most organizations are interested in loyalty. We are going to pursue this discussion
in this chapter because, as we have been saying all along, understanding the differ-
ence between customer service and customer satisfaction can provide marketers with
the competitive advantage necessary to retain existing customers and attract new
ones. Understanding what satisfaction is and what the customer is looking for can
provide the engineer with a competitive advantage to design a product and/or service
second to none. At first glance, service and satisfaction may appear to mean the
same thing, but they do not; service is what the marketer provides and what the
customer gets, and satisfaction is the customer's evaluation of the level of service
received, based on preconceived assumptions and the customer's own definition of
functionalities. The satisfaction level is determined by comparing expected service
to delivered service. Four outcomes are possible:
1. Delight: positive disconfirmation (a pleasant surprise)
2. Dissatisfaction: negative disconfirmation (an unpleasant surprise)
3. Satisfaction: positive confirmation (expected level of service)
4. Negative confirmation, which suggests that you are neither managing
expectations properly nor delivering good service
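A hypothetical sketch of this comparison, expressed in Python, is shown below. The scoring scale and the threshold separating good from poor service are assumptions made only to illustrate the four outcomes.

# Hypothetical classifier of the four outcomes, comparing expected and delivered service
# on an invented 1-10 scale.
ACCEPTABLE = 6  # assumed score above which service is considered "good"

def satisfaction_outcome(expected, delivered):
    if delivered > expected:
        return "delight (positive disconfirmation)"
    if delivered < expected:
        return "dissatisfaction (negative disconfirmation)"
    # expectations were met exactly
    if delivered >= ACCEPTABLE:
        return "satisfaction (positive confirmation)"
    return "negative confirmation: low expectations met by poor service"

print(satisfaction_outcome(expected=5, delivered=8))   # delight
print(satisfaction_outcome(expected=8, delivered=5))   # dissatisfaction
print(satisfaction_outcome(expected=7, delivered=7))   # satisfaction
print(satisfaction_outcome(expected=3, delivered=3))   # negative confirmation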
In managing service delivery, relying solely on the objective aspects of service
is a mistake. Customers base future behavior on their evaluation of the experience
they actually had, which is in effect their degree of satisfaction or dissatisfaction.
In addition to determining that satisfaction degree, marketers should seek to learn
the reasons underlying customers' feelings (the insight) in order to tell the engineers
what, how, and when to make changes and maintain high satisfaction levels when
they are achieved. In researching these areas, marketers should note that the why is
not the what; nor is it the how. That is, what happened and how it made customers
feel does not tell us why they felt as they did. And not knowing that, managing not
only the service that customers experience but also their expectations becomes
difficult, if not impossible.
At times, service providers and customers tend to think differently. Consider
this dealership example: After conducting 10 focus groups for an automotive com-
pany in a medium-size Midwestern city, the researchers discussed the findings in a
review meeting with the head of marketing for the company. The researchers noted
that, after having spoken with more than 100 recent customers, they had learned
that the vast majority were frustrated and unhappy about having to wait more than
15 minutes before getting attended to. The marketing executive interrupted, saying,
"Those customers should consider themselves lucky; if they were in the dealerships
of one of our competitors, they would have to wait 20 to 30 minutes before they
were seen by the service manager."
This example includes all the information needed to explain the difference between
customer service and customer satisfaction. The customers in this example defined their
personal expectations about the service (their waiting time experience), and clearly
a conflict existed between their service expectation (a short wait before being seen
by a service manager) and their service experience (waits of more than 15 minutes).
Customers then were dissatisfied with the waiting rooms and the dealerships in
general. The marketing manager's response to customer dissatisfaction was to note
that the waits could have been worse: he knew that competitors' dealerships were
worse. He also knew customer waits of more than 30 minutes were not uncommon.
In light of these data, he judged the 15- to 20-minute waits in the waiting room
acceptable.
Herein lies the conflict between service delivery and customer satisfaction. The
important concept for this marketing executive and for all marketers is that
customers define their own satisfaction standards. The customers in this example
did not go to the competitors' dealerships; instead, they came to the marketer's
dealership with a set of their own expectations in a preconceived environment. When
the marketer used his service delivery criteria to defend the waiting time, he simply
missed the point.
Unfortunately, this illustration is typical of how many marketers think about
customer satisfaction. They tend to relate customer satisfaction directly to their own
service standards and goals rather than to their customers' expectations, whether or
not those expectations are realistic. To assess satisfaction, marketers must look
beyond their own assessments, tapping into the customers' evaluations of their
service experience.
Consider, for example, a bank that thought it was doing a good job of measuring
service satisfaction but really was too focused on service delivery. This bank had
developed a policy that time spent in the lobby room should be less than 15 minutes
for all customers. A customer came into the office and waited 12 minutes in the
reception area for a mortgage application. Then she waited another five minutes for
the loan officer to clear all the papers from his desk from the previous customer and
an additional three minutes for him to get the file and all the pertinent information
for the current application. As this customer was leaving, she was asked to fill out
a customer satisfaction questionnaire. Under the category for reception area waiting
time, she checked off that she had waited less than 15 minutes.
Based on this response, the bank's marketing director assumed the customer
was satisfied, but she was not; the customer had been told that if she came in for
the mortgage loan during her lunch hour, she would be taken care of right away.
Instead, she waited a total of 20 minutes for her application process to begin. She
did not have time to shop for the gift her son needed that night for a birthday party,
and her entire schedule was in disarray. She left dissatisfied.
Understanding the difference between service and satisfaction is the first step
in developing a successful customer satisfaction program, and all marketers must
share the same understanding. Only customers can define what satisfaction means
to them. Here are some practical ways to understand customers' expectations:
Ask customers to reflect on their experiences with your services and their
needs, wants, and expectations.
Talk to customers face to face through focus groups, as well as through
questionnaires. A wealth of information can be collected this way.
Talk with your staff about what they hear from customers about their
expectations and experience with service delivery.
Review warranty data.
Remember the three words that can help you learn from your customers:
what, how, and why. That is, what service did you experience, how did it
make you feel, and why did you feel that way? Continual probing with
these three perspectives will deliver the answers you need to better manage
service to generate customer satisfaction.
As Harry (1997, p. 2.20) has pointed out:
We do not know what we do not know
We cannot act on what we do not know
We do not know until we search
We will not search for what we do not question
We do not question what we do not measure
Hence, we just do not know
Therefore, part of this understanding is to identify a transfer function. That is,
you need a bridge (quantitatively or qualitatively) that will define and explain the
dependent variable (the customer's needs, wishes, and excitements) with the inde-
pendent variable(s) (the actual requirements that are needed from an engineering
perspective to satisfy the dependent variable). The transfer function may be a linear
one (the simplest form) or a polynomial one (a very complex form).
Typical equations expressing transfer functions may look something like:
Y = a + bx
Y = f(x1, x2, ..., xn)
They can be derived from:
Known equations that describe the function
Finite element analysis and other analytical models
Simulation and modeling
Drawing of parts and systems
Design of experiments
Regression and correlation analysis
In DFSS, the transfer function is used to estimate both the mean (sensitivity) and
the variance (variability) of the Y's and y's. When we do not know what the Y is, it is
acceptable to use surrogate metrics. However, it must be recognized from the beginning
that not all variables should be included in a transfer function. Priorities should
be set based on appropriate trade-off analysis. This is because DFSS is meant to
emphasize only what is critical, and that means we must understand the mathematical
concept of measurement.
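As a minimal sketch of how such a transfer function can be exercised (the function, the variable names, and the nominal values below are invented for illustration and are not taken from this text), a short Monte Carlo simulation in Python can estimate the mean (sensitivity) and variance (variability) of Y from assumed distributions of the x's:

import numpy as np

# Hypothetical linear transfer function Y = f(x1, x2); illustration only.
def transfer_function(x1, x2):
    return 2.0 + 1.5 * x1 - 0.8 * x2

rng = np.random.default_rng(seed=1)

# Assumed input distributions (nominal values and standard deviations are made up).
x1 = rng.normal(loc=10.0, scale=0.2, size=100_000)
x2 = rng.normal(loc=5.0, scale=0.1, size=100_000)

y = transfer_function(x1, x2)
print(f"Estimated mean of Y:     {y.mean():.3f}")
print(f"Estimated variance of Y: {y.var(ddof=1):.5f}")

The same approach applies whether the transfer function comes from a known equation, a fitted regression model, or a designed experiment.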
The focus of understanding customer satisfaction has been captured by Rechtin
and Hair (1998), when they wrote that "an insight is worth a thousand market
surveys." It is that insight that DFSS is looking for before the requirements are
discussed and ultimately set. This will help us in identifying what is really going
on with the customer. Let us look at the function first.

THE CONCEPT OF FUNCTION

In any business environment, there may be no more powerful concept than that of
function. To understand why this is a potent notion, we need to consider what we
mean by function.
What is function? Let us start with a common definition: "The natural, proper,
or characteristic action of any thing..." This is the Webster's New Collegiate
Dictionary definition, and it is quite representative of what you will find in most
dictionaries. This is actually a powerful and insightful definition.
Think about any product or service that you purchase. What is it about the
product or service (I will use the term product from here on, although every issue
that will be discussed will be equally valid for services) that causes you to exchange
money, goods, or some other scarce resource for it? Ultimately, it is because you
want the characteristic actions that the product provides. These actions may be
simple or complex, utilitarian or capricious, Spartan or gilded, but in each transaction,
you enter with a set of unfulfilled wants and needs that you attempt to satisfy.
Moreover, if the product you purchase actually manages to fulfill the wants and
needs that you perceive, you are more likely to be satisfied with your purchase than
when the product fails to satisfy your desires. Within these few short sentences, we
have the fundamental principles that underpin three of the most powerful tools in
the modern pantheon of quality, productivity, and profitability: Quality Function
Deployment, Value Analysis, and Failure Modes & Effects Analysis.
To put the concept of function into action, we need to refine our definition. The
expanded definition we would like you to consider is: "The characteristic actions
that a system, part, service, or manufacturing process generates to satisfy customers."

In this expanded definition, you can not only see the concept of function at work,
but you may also be able to recognize the essential abstraction of a process.

In a process, some type of input is transformed into an output. As a simple
equation, we might say that
Input(s) + Transformation = Output(s)
In the case of function, the inputs are the unfulfilled wants and needs that a
customer or a prospective customer has. These can be, and often are, intricate; this
is why the discipline of marketing is still more art than science. (We will have more
to say about this issue in just a moment.) Nevertheless, there exist multiple sets of
unfulfilled wants and needs that are open to the lures and attractions provided by
the marketplace.
In this very broad model, the transformation is provided by the producer. With
one, ten, or hundreds of internal processes (within any discussion of process, there
is always the Russian doll image: processes within processes within processes),
business organizations attempt to determine the unmet wants and needs that customers
have. The producer then must design and develop products and delivery
processes that will provide tangible and/or intangible media of exchange that will
assuage the unmet needs or need sets.
Finally, the external processes that involve exchange of the producer's goods for
money or other barter provide the customer with varying degrees of satisfaction.
The gratification (or lack of satisfaction) that results can then be viewed as the output
of the general process.
In business, the inputs are not within the control of producers. As a result,
producers need powerful tools to understand, delineate, and plan for ways to meet
these needs. This can be thought of as the domain of the Kano model or Quality
Function Deployment.
The transformational activities, however, are within the control of the producer.
These controlled activities include planning efforts to deliver function at a
satisfactory price; the nuances and subtleties of this activity can be strongly influenced
or even controlled by the discipline of Value Analysis.
In addition, fulfillment of marketplace need sets also implies that this fulfillment
will occur without unpleasant surprises. Unwanted, incomplete, or otherwise
unacceptable attempts to produce function often result in failure. This implies that
producers have a need to systematically analyze and plan for a reduction in the
propensity to deliver unpleasant surprises. This planning activity can be greatly aided
by the application of Failure Modes and Effects Analysis techniques.
To see how these ideas mesh, we need to consider how function can be
comprehensively mapped. This will require several steps. To apply what will be
discussed in the rest of this section, we need to emphasize the importance of choosing
the proper scale for any analysis. It is likely that you will choose either too broad
a view or too much detail; we will try to provide guidance on this issue during our
discussion of methods.

UNDERSTANDING CUSTOMER WANTS AND NEEDS

The nature of customer wants and needs is complex, deceptive, and difficult to
discern. Nevertheless, the prediction of future wants and needs in the marketplace
is perhaps the most important precursor to financial success that exists. Knowing
and doing something that is profitable are two very separate (but not completely
independent) aspects of this challenge.
The first task that must be undertaken is to list the customers that we are
interested in. Virtually no business is universal in terms of target market. Moreover,
in today's highly differentiated world, it is likely an act of folly to suggest that any
product would have universal appeal. (Even an idealized product such as a capsule
that, when ingested, yields immortality would have its detractors and would be
rejected by some elements of humanity.) So, we need to start by cataloging the
customers that we might wish to serve.
In this effort, however, we need to recognize that there is a chain of customers.
This is often seen in discussions of the value chain, a concept explored in detail
by Porter (1985). For example, Porter discusses the concept of channel value,
wherein channels of distribution perform additional activities that affect the buyer,
as well as influence the firm's own activities.
This means that there are several dimensions on which we will discover important
customers. First, there are market segments and niches. These can be geographic,
demographic, or even psychographic in nature. Second, there are many intermediary
customers, who have an important influence on ultimate purchases in the marketplace.
Finally, there are what might be called overarching customers: persons
or entities that must be satisfied even in the absence of any purchasing power to
enable or permit the sale of goods and services.
This is readily visible in the auto industry. From the standpoint of a major parts
manufacturer, say United Technologies, Johnson Controls, Dana Corporation, or
Federal-Mogul, there are legions of important customers. In the market segment
category, there are the vehicle manufacturers, including GM, Ford, Toyota, and all
of the others. Contained within this category of customers are many sub-customers,
including purchasing agents, engineers, and quality system specialists.
As far as intermediary customers are concerned, we can consider perhaps a dozen
or more important players. We need to consider the transportation firms that carry the
parts from the parts plant to the assembly plant. We also need to think about the people
and the equipment within the assembly plant that facilitate the assembly of the part into
a vehicle. (If anyone doubts this is important, they have never tried to sell a part to an
assembly plant where the assembly workers truly dislike some aspect of the part.) The
auto dealer is yet another step in this array of hurdles, and mechanics and service
technicians constitute still one more customer who must, in some way, be reasonably
satised if commercial success is to spring forth.
In addition, the auto industry has a web of regulatory and statutory requirements
that govern its operation. These include safety regulations, emission standards, fleet
mileage laws, and the general requirements of contract law. Behind these government
requirements are still more governmental prerequisites, including occupational safety
law, environmental law as applied to manufacturing, and labor law. This means that

the governmental agencies and political constituencies that administer these laws
can be seen as the overarching customers described previously.
Ultimately, vehicle purchasers themselves are the critical endpoint in this chain of
evaluation. And within this category of customers are the many segments and niches
that car makers discuss, such as entry level, luxury, sport utility, and the many other
differentiation patterns that auto marketers employ. Only when a product passes through
the entire sequence will it have a reasonable chance of successfully and repeatedly
generating revenue for the producer.
This provides a critical insight about function. Function is only meaningful through
some transactional event involving one or more customers. Only customers can judge
whether a product delivers desired or unanticipated-yet-delightful function.
In many cases (in fact, most), firms simply do not consider all of these customers.
As a result, they are often surprised when problems arise. Moreover, they suffer
financial impediments as a result, even though they may simply budget some
degree of failure expectation into overhead calculations.
A rational assessment of this situation means that the first requirement for understanding
function is a comprehensive listing of customers. Frankly, this is very hard
work, and it requires time, dedication, and effort. Regardless, understanding the customers
that you wish to serve is an essential prerequisite to comprehension of function.

CREATING A FUNCTION DIAGRAM

If you want to understand function, the first requirement is the use of a special language.
Function must be described using an active verb and a measurable noun. Fowler
(1990) calls this linguistic construction a "functive": a function described in direct
terms that are, to the greatest degree possible, unambiguous.
In a functive, the verb should be active and direct. How can you tell if the verb
meets this test? Can you subject the action described by the verb to reasonable
verification? One of the difficulties with this approach is the widespread affinity for
ambiguity, the evil spawn of corporate life. To reduce ambiguity, you must avoid
"nerd verbs" such as provide, be, supply, facilitate, and allow.
Since most people pepper their business speech with these verbs, how can you
avoid using them? If you cannot avoid nerd verbs, then you might try to convert
the noun to a verb. Instead of "allow adjustment," think about what it is that you are
adjusting. For example, you could easily restate this nerd verb functive with "adjust
clearance." Whenever a nerd verb comes up, try converting the noun that goes
with the nerd verb to a verb, and then select the appropriate measurable noun. Most
of the time, this will reduce the ambiguity.
The measurable noun also must be reasonably precise. In particular, it should
be relatively unchanging in usage and should rarely be the name of a part, operation,
or activity used to generate the product or service under consideration. The test for
a measurable noun is very simple: can you measure the noun? Bear in mind, however,
that the measurement may be as simple as counting, or it can be a detailed
statement of a technical or engineering expectation of the degree to which a function
can be fulfilled. Ultimately, the combination of an active verb and a measurable noun
will give rise to an extent: the degree to which the functive is executed.

For example, let us consider a simple mechanical pencil. The mechanism of the
pencil must feed lead at a controlled rate. This also means that there must be a
specic position for the lead. If the lead is fed too far, the lead will break. If the
feed is not far enough, the pencil may not be able to make marks. As a result, one
function that we can consider is "position lead." The measurement is the length of
exposed lead, and the desired extent of the positioning function may be 5 mm from
the barrel end of the pencil. If there are limits on the extent in the form of tolerance,
this is a good time to think about these limits as well.*
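As a rough illustration of the functive idea (the class, the field names, the nerd-verb list, and the tolerance value are assumptions made for this sketch rather than part of any established tool), a functive can be recorded as an active verb, a measurable noun, and an extent with limits:

from dataclasses import dataclass

# Verbs that usually signal ambiguity ("nerd verbs"); the list is illustrative.
NERD_VERBS = {"provide", "be", "supply", "facilitate", "allow"}

@dataclass
class Functive:
    verb: str          # active verb, e.g., "position"
    noun: str          # measurable noun, e.g., "lead"
    target: float      # desired extent
    tolerance: float   # allowed deviation from the target (assumed value below)
    unit: str

    def is_ambiguous(self) -> bool:
        return self.verb.lower() in NERD_VERBS

# The "position lead" function of the mechanical pencil example.
position_lead = Functive(verb="position", noun="lead",
                         target=5.0, tolerance=0.5, unit="mm")
print(position_lead, "ambiguous:", position_lead.is_ambiguous())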
While you are describing function in terms of an active verb and measurable
noun, it is very important to maintain a customer frame of reference. Do not forget
that function is only meaningful in terms of customer perception. No matter how
much you may be enamored of a product feature or service issue, you must decide
if the target customer will perceive your product in the same way.**

THE PRODUCT FLOW DIAGRAM AND THE CONCEPT OF FUNCTIVES

Now that we understand the essential issues involved in describing function, we can
learn more about techniques for understanding the many complex functions that
exist in a product. If products had just one or two functions, it would be easy to
understand the issues that motivate purchase behavior. In today's complex world,
though, products seem to have more features (and hence more functions) nearly
every day. How can we understand this complexity? Fortunately, there are common
patterns that exist in the functionality of any product. We can see this through the
creation of a product flow diagram.
A product flow diagram uses simple, direct language to delineate function. This
is a valuable aid to help you understand what your product provides to customers.
We can start our efforts to develop this diagram by identifying functions.
In practice, this is best done by a group or team, and it should be done after all
participants have become familiar with the list of customers at whom the product
is aimed. A general list of functions can then be developed using brainstorming
techniques or other group-based creativity tools.
There are a few issues that you should keep in mind while simply listing
functions. Functions must describe customer wants and needs from the viewpoint
of the customer. A common problem is to confuse product functions with functions
being performed by the customer, the designer, the engineer who created the product,
or the manufacturer who produces the product. Again, think about a mechanical
pencil. Many people will start by describing the function of the pencil as "write
notes." However, the pencil, by itself, cannot write anything. (If you can invent a

* When you do this, you have created a specification for this function.
** One of the most common and debilitating errors in market analysis is to assume that others will
respond the same way that you do. This is a simple but profound delusion. Most of us think that we are
normal, typical people. When we awaken in the morning, we look in the mirror and see a normal (although
perhaps disheveled if we look before the second cup of coffee) person. Thus, we think, "I like this widget.
Since I am normal, most other people will like this widget, too. Therefore, my tastes are likely to be a
good guideline to what my customers will want." In most cases, even if you really are normal, and
even typical, this easy generalization is dangerously false.

pencil that will write notes without a writer attached, you will probably become rich.)
The function that is more appropriate for the pencil is "make marks."
The best way to start is to simply brainstorm as many functions as you can using
active verbs and measurable nouns. There are many ways to brainstorm; in this case,
it is usually easiest to have everyone involved use index cards or sticky notes to record
their ideas. Remember that brainstorming should not be interrupted by criticism; just
let the ideas flow. You will get things that do not apply, and, until you gain experience,
you will not always use the functive structure that is ultimately important. Do not
worry about these issues during the idea-generation phase of this process.
Once you have a nice pile of cards or notes, start by sifting and sorting the ideas
into categories. In any pile of ideas, there will be natural groupings of the cards.
Determine these categories and then sort the cards. This can be thought of as affinity
diagramming of the ideas. You will find some duplicates and some weird things
that probably do not belong in the pile.* Discard the excess baggage and look at
the categories. Are there any important functions you have missed? Do not hesitate
to add new ideas to the categories, either.
Finally, you are ready to bear down on the linguistic issues. Make sure that all
of the ideas are expressed in terms of active verbs and measurable nouns. Change
the idea to a functive construction, and then look for the nerd verb cards. Convert
all of the nerd verb functions into true functives, with fully active verbs and
measurable nouns. When you are done, you will have an interesting and important
preliminary output.
Now, count the cards again. If you have more than 20 to 30 cards, you have
probably tackled too complex a subject or viewpoint. For example, a commercial
airline has thousands, even hundreds of thousands, of functions. If you wanted
to analyze function on the widest scale, you would probably be guilty of too much
detail if you listed more than 30 functives. On the broadest scales of view, you may
only list a handful of functions. Nothing is wrong with a short list, especially for
the broadest view.
If you have trouble, we can suggest some function questions that can assist
you in your brainstorming. Try these questions:
What does it do?
If a product feature is deleted, what functions disappear?
If you were this element, what are you supposed to accomplish? Why do
you exist?
Ask the function questions in this order:
The entire scope of the project
A system view
Each element of the project

* Do not automatically toss out strange ideas; see if the team can reword or express more clearly the
idea that underlies the oddball cards or notes. Some percentage of these cards will have important
information. Many will be eventual discards, but do not jump to conclusions.

A part view
Each sub-element of the project
A component view
Finally, we can start our next task, which consists of arranging functions into
logical groups that show interrelationships. In addition, this next arranging step
will allow us to test for completeness of function identification and improve team
communication.
We start by asking, "What is the reason for the existence of the product or
service?" This function represents the fundamental need of the customer. Example:
a vacuum cleaner sucks air, but the customer really needs "remove debris." Whatever
this reason for being is, we need to identify this particular function, which we
call the task function. You must identify the task function from all of the functions
you have listed.
If you happen to find more than one task function, it is quite likely that you
have actually taken on two products. For example, a clock-radio has two task
functions: "tell time" and "play music." However, you would be far better served by
breaking your analysis into two components, one for telling time, the other for
playing music. Alternatively, this product could be considered on a broader basis,
as a system, in which case the task function might be "inform user," with subordinate
functions of "tell time" and "play music."
In any event, once you have identified the task function, you will realize that
there are many functions other than the task function. Divide the remaining functions
by asking: Is the function required for the performance of the task function?
If the answer to this question is yes, then the function can be termed essential.
If the answer is no, then the function can be considered enhancing. All functions
other than the task function must be either essential (necessary to the task function)
or enhancing. So, your next task is to divide all of the remaining functions into these
two general categories.
You can further divide the enhancing functions, the functions that are not
essential to the task function. Enhancing functions influence customer satisfaction
and purchase decisions. Enhancing functions always divide into four categories:
1. Ensure dependability
2. Ensure convenience
3. Please senses
4. Delight the customer*

* "Delight the customer" is actually quite rare; most enhancing functions fit one of the other three
categories. If you do find a "delight the customer" function, try comparing this with an "excitement"
feature in a Kano analysis; you should find that the function fits both descriptions.

None of these categories is needed to accomplish the task function. In fact, if
you do not have a task function (and the associated essential functions), you probably
do not have a product. The enhancing functions are those issues that purchasers
weigh once they have determined that the task function will likely be fulfilled by
your product. So, divide all of the enhancing functions into these four categories.
Your next challenge is to create a function hierarchy that will, in finished form,
be a function diagram. Start by asking this question: how does the product perform
the task function? Primary essential functions provide a direct answer to this question
without conditions or ambiguity. Secondary functions explain how primary functions
are performed. Continue until the answer to "how" requires using a part name, labor
operation, or activity, or you deplete your reserve of essential function cards.
Now, you must reverse this process. Ask "why" in the reverse direction. For
example, for a mechanical pencil, the task function is "make marks." One of the
functions you must perform to make marks is "support lead." How do you support
the lead? You do it by supporting the internal barrel tube ("support tube") that carries
the lead and by positioning this tube ("position tube"). Why do you support the tube
and position the tube? You do this to support the lead. Why do you support the lead?
You support the lead in order to make marks. The chain of function is driven by
the "how" questions from the task function to primary and then secondary functions,
while this same chain is driven in reverse by "why" questions from secondary to
primary to task function.
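A minimal sketch of this how/why chain, using only the pencil functions named above (the dictionary is just one possible way to hold the diagram; it is not a prescribed format):

# Keys answer "how is this function performed?"; walking upward answers "why?".
FUNCTION_TREE = {
    "make marks": ["maintain point", "support lead"],
    "support lead": ["support tube", "position tube"],
}

def how(function):
    """Functions that explain how the given function is performed."""
    return FUNCTION_TREE.get(function, [])

def why(function):
    """Functions that the given function exists to serve (the reverse direction)."""
    return [parent for parent, children in FUNCTION_TREE.items()
            if function in children]

print(how("support lead"))    # ['support tube', 'position tube']
print(why("support tube"))    # ['support lead']
print(why("support lead"))    # ['make marks']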
As you progress, you will notice that you may be missing functions. If you find
that you are, add additional functions as needed. After you have completed building
trees of functions with the essential functions, repeat this process with the enhancing
functions. The only difference is that the primary enhancing functions (ensure
convenience, ensure dependability, please the senses, and delight the customer)
have already been chosen.
When you have finished, you will have a completed product flow diagram. At
this point, try to delineate the extent of each function (range, target, specification,
etc.) for each of the functions. Do not forget: extent also tests the measurability of
each active verb-measurable noun combination.
For example, for the mechanical pencil, the assembly may look like Figure 2.1.
The sorted brainstorm list of functives may look like this:
Entire project scope:
Make marks
Erase marks
Fit hand
Fit pocket
Show name
Display advertising
Convey message
Maintain point
Tube assembly:
Store lead
Position lead

SL3151Ch02Frame Page 59 Thursday, September 12, 2002 6:13 PM

60

Six Sigma and Beyond

Feed lead
Reposition lead
Support lead
Locate eraser
Position tube
Generate force
Hold eraser
Lead:
Make marks
Maintain point
Eraser:
Erase marks
Locate eraser
Barrel:
Support tube
Support lead
Position lead
Protect lead
Position tube
Position eraser
Show name
Display advertising
Convey message
Fit hand
Enhance feel
Provide instructions
Clip:
Generate force
Position clip
Retain clip
And, finally, the function diagram (only one possibility among many, many
different results) may look like Figure 2.2.

FIGURE 2.1 Paper pencil assembly. (Labeled parts: eraser, lead, barrel, clip, tube assembly.)

THE PROCESS FLOW DIAGRAM

If you are working with a process rather than a product, you need to create a broad
viewpoint map that shows how the activities in the process are accomplished. This
can be done quickly and easily with a process flow diagram. The difficulty with
most process flow diagrams is that they quickly bog down in too much detail.
Whenever the detail gets too extensive, people lose interest (except for those who
created the chart, but they are only part of the audience). Even though we need
detail, we must avoid placing all of the details into one flow chart, at least if we
want people to use the resulting charts. So, we will employ a 10 × 10 method that
will aid in both communicating and managing the level of detail in a flow chart.
If you keep the number of boxes in a flow chart to ten or fewer, most people
will find your chart easy to read and understand. You can also use a standard
symbol set for flow charting. After a great deal of trial and error from our experience,
we have found that a simple set of ten symbols will explain almost any business
process and provide enough options so that any team can easily illustrate what is
going on; see Figure 2.3. By using some of the American National Standards
Institute (ANSI) symbols and judiciously mixing in some easy-to-remember shapes,
anyone can learn to flow chart a process in just a few minutes. The first step is to
select a simple basis or point of view for your flow charts. This could be the view
of the process operator, the work piece, or the process owner. (Be careful if you

FIGURE 2.2 Function diagram for a mechanical pencil.

confuse your viewpoint while developing a flow chart, you will quickly become
confused about the process functions.)
Inputs and outputs are the easiest steps to understand. You start with an input
and you end with an output. A document may be a special kind of input or output;
it can appear at the beginning, at the end, or during the overall process. The process
box is the most common box; it describes transformations that occur within the
process. Decisions are represented by a diamond shape, and an inspection step (in
the shape of a stop sign) is just a special kind of decision. If you delay a process,
you use a yield sign. If you store information, you use an inverted yield sign, a "pile."
Movement is also important. If a move is incidental, you tie the associated boxes
together with a simple arrow. However, if a movement is complex (say, sending a
courier package to Hong Kong as opposed to handing it to your next-cubicle neighbor),
then you may have a special transformation or process step that we call a
significant move, i.e., a large horizontal arrow.
Let us look at a simple process for handling complaints. Your office deals with
customer complaints, but you have a local factory (where your office is) and a factory
in Japan. How you handle a complaint might look like Figure 2.4.
This flow chart shows many of the symbols noted above, but it is not the only
way that the process could be flow charted. However, if the team that developed the
chart (once again, a team approach is likely to be the most effective technique) can
reach a high level of consensus, then the communication of these ideas to others
will be powerful and comprehensive.
Now that the basics of 10 × 10 (ten steps or fewer using ten or fewer symbols)
are apparent, it becomes possible to construct a hierarchy of flow charts that will
fill in missing details that may have been skirted with the 10-step limit.
The next step is to create a new 10 × 10 flow chart for each box in the top-level
flow chart that requires additional explanation to reach the desired level of detail.
These next flow charts (typically three to five of the boxes require additional detail)
make up the second-level flow charts. Wherever necessary, go to another level of
flow charts; continue creating 10 × 10 flow charts until you have a hierarchy of flow
charts that directly addresses all of the details that you feel are important.
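A minimal sketch of the 10 × 10 idea in code (the class and its wording are assumptions of this sketch; the example steps echo the complaint-handling process of Figure 2.4): each chart keeps to ten or fewer steps, and any step that needs more detail points to a child chart one level down.

class FlowChart:
    MAX_STEPS = 10

    def __init__(self, name, steps):
        if len(steps) > self.MAX_STEPS:
            raise ValueError(f"{name}: more than {self.MAX_STEPS} steps; "
                             "move the extra detail into a lower-level chart")
        self.name = name
        self.steps = steps      # list of step descriptions
        self.children = {}      # step description -> child FlowChart

    def expand(self, step, child_chart):
        # Attach a lower-level chart that details one step of this chart.
        self.children[step] = child_chart

top = FlowChart("Handle complaint", [
    "Phone notice of customer complaint",
    "Compile information for notice",
    "Log complaint into database",
    "Notify factory of complaint",
])
top.expand("Notify factory of complaint",
           FlowChart("Notify factory", ["Local or overseas factory?",
                                        "Send by courier to Japan",
                                        "Notify local factory"]))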
Finally, for each process box on each flow chart, you will have a process purpose.
Why did you do this step? Simple: you had one (or possibly two) purposes in

FIGURE 2.3 Ten symbols for process flow charting. (Symbols: input, output, document, process, decision, inspect, delay, store, incidental move, significant move.)

mind when you designed this step into your process. Process purpose can be easily
described using the language of function. Once again, you must use an active verb
and a measurable noun.
Often, a team can move directly to listing process functions from the flow charts.
However, especially in manufacturing, it is common for the level of detail hidden
in flow charts to be large, especially with intricate or subtle fabrication procedures.
You may need to use an additional tool for teasing the function information from
a flow chart, called a characteristic matrix.
A characteristic matrix is a reasonably simple analysis tool. The purpose of the
matrix is to show the relationships between product characteristics and manufactur-
ing steps. The importance of product characteristics in this matrix is significant; by
considering the impact of a manufacturing step on product characteristics, we again
focus our attention on customer requirements. Too often, manufacturing emphasis
turns inward; it is critical that the focus be constantly directed at customers. Of
course, there are internal customers as well. It is certainly important that interme-
diate characteristics, necessary for facilitating additional fabrication or assembly
activities, be included in the analysis of function.
For example, a simple machining process could have the characteristic matrix
shown in Table 2.1.
In this example, a simple machining step could be shown on a process flow
chart with a process box that describes the machining operation as "CNC Lathe" or
something similar. However, the lathe operation creates several important dimensions,
or product characteristics, that are needed to meet customer expectations.
These characteristics are sufficiently varied and complex that an additional level of
detail is necessary. Some of these characteristics are important to the end customer;
some are important to internal or next-step process stations.
For this example, the three left-hand columns establish important functional
information. The product characteristic is essentially the measurable noun (an
FIGURE 2.4 Process flow for complaint handling.
occasional adjective is acceptable in a functive if there are several identical nouns,
such as "diameter" in this case). The extent is shown in the target dimension and
tolerance columns, and the active verbs can be constructed or deduced from the
code letters inserted in the matrix cells in the Process Operation columns.
In any event, whether you are able to determine functions directly from a process
flow chart or whether you find the use of characteristic matrices important, you need
to end with a comprehensive listing of function. The important aspect of process
function is to use a flow charting technique of some type to assist in reaching the
comprehensive assessment of function that is similar to the point-by-point listing
that can be achieved by the product flow diagram technique.
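As a rough sketch of how a characteristic matrix such as Table 2.1 might be held and printed in code (only the characteristics, targets, and tolerances follow the table; the assignment of the X, C, and L codes to particular operations is invented for illustration):

# X = created by the operation, C = used for clamp down, L = used as locating datum.
operations = ["Lathe Turn 10", "Lathe Turn 20", "Face Cut 30", "Deburr 40", "Cut Radius 50"]

matrix = {
    # characteristic: (target, tolerance, {operation: code})
    "Diameter A": ("6.22 mm", "0.25 mm", {"Lathe Turn 10": "X",
                                          "Face Cut 30": "C",
                                          "Cut Radius 50": "L"}),
    "Radius D":   ("0.5 mm",  "0.05 mm", {"Cut Radius 50": "X"}),
}

for characteristic, (target, tol, codes) in matrix.items():
    row = [codes.get(op, "-") for op in operations]
    print(f"{characteristic:12s} {target:>8s} {tol:>8s}  " + "  ".join(row))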
USING FUNCTION CONCEPTS WITH PRODUCTIVITY AND
QUALITY METHODOLOGIES
Earlier, we suggested that function concepts form a powerful fundamental basis for
three major productivity and quality methodologies:
Quality Function Deployment (QFD)
Failure Modes and Effects Analysis (FMEA)
Value Analysis (VA)
While we do not intend to explain these techniques fully in this context (however,
they will be explained later), we would like to address the usefulness of function
concepts in these methodologies. In these discussions, we are assuming that you
have a passing or even detailed familiarity with these tools. If not, you may wish
to turn to the discussion of QFD later in this chapter or to Chapters 6 and 12
for lengthy discussions of FMEA and VA.
TABLE 2.1
Characteristic Matrix for a Machining Process
Process operations: Lathe Turn 10, Lathe Turn 20, Face Cut 30, Deburr 40, Cut Radius 50

Product Characteristic   Target Value   Tolerance   Operation Codes
Diameter A               6.22 mm        0.25 mm     X, C, L
Diameter B               3.25 mm        0.1 mm      X, C, L
Shoulder C               12.2 mm        0.5 mm      X, C, L
Radius D                 0.5 mm         0.05 mm     X

X = characteristic created by this operation
C = characteristic used for clamp down in this operation
L = characteristic used as locating datum in this operation
For Quality Function Deployment, the most challenging issue is the one that we
have just explored: how can one determine the functions that must be analyzed for
deployment? In other forms, this is the same question facing practitioners in FMEA
and VA. Clearly, the product flow diagram provides several instrumental techniques
for improving these activities.
A major difficulty in QFD is the often overwhelming complexity of the House
of Quality approach. Constructing the first house, using conventional QFD techniques,
is often the start of the complexity. Many different customer wants are
listed. This is occasionally done as a pre-planning matrix. Moreover, the linguistic
construction for these wants is undisciplined and subjective.
Similarly, in FMEA, the initial list of failure modes is difficult to obtain. In VA,
determining the baseline value assessment can also be difficult.*
The techniques for developing a function diagram, especially the informal sug-
gestions about sizing a project, can be very helpful in this regard. QFD, like FMEA
and VA, typically fails to deliver the results expected because the project selected
is too complex. A QFD study on a car or truck, for example, could easily contain
hundreds of thousands of pages of information. That is not to say that the information
in this study would not be valuable or that it should not be done; the issue is how
complexity of this type should be dealt with.
If you start with a systemwide view and construct a function diagram of the
limited size previously discussed (20 to 30 functions maximum; even fewer are better),
then this will provide a first level in a hierarchy of function diagrams. Subsequent
analysis of various subsystems, then components and parts, and finally processes
will complete the analysis. While the end result (for a car) would conceivably be of
the same magnitude, the belief that all of the work must be done within the same
team or by the same organization would be quickly abandoned. Moreover, the
knowledge and understanding that is developed is generated at the hierarchical level
(in the supply chain) of greatest importance, utility, and impact.
Moreover, using the functive combination of active verbs and measurable
nouns will assist in making QFD a useful tool. The vague, imprecise, or even
confusing descriptions of function that are often used in QFD contribute to the
difficulty in usage.
A vehicle planning team may carry out a QFD study on the overall vehicle,
assessing the major issues regarding the vehicle; these could include size, styling
motifs, performance themes, and target markets. Subsequently, a study of the pow-
ertrain (engine, transmission, and axles) could be completed by another team. The
engine itself could then be divided into major components: block, pistons, electronic
controls, and so forth. Ultimately, suppliers of major and minor components alike
would be asked to carry out QFD studies on each element. The multiplicity of
information is still present, but it is no longer generated in some centralized form.
This means that accessibility, usefulness, and the likelihood of beneficial deployment
of the findings are much greater.**
* In Value Analysis, the Function Analysis System Technique or FAST, a close cousin of the function
diagram, is typically used to establish the initial functional baseline for value calculations.
** If the reader sees an echo of the hierarchy of flow charts, this is not coincidental.
As an added benefit, starting QFD using this approach provides benefits in the
completion of FMEA and VA studies, since a consistent set of functions will be used
as a basis for each technique. We will next consider each of these in turn. We will
start with FMEA, because the importance of function in this methodology is not
widely understood or appreciated.
In FMEA, determining all of the appropriate failure modes is usually a great
challenge. This obstacle is reflected in the widespread difficulty in understanding
what is a failure mode and what is an effect. For example, the effect "customer is
dissatisfied" is often found in FMEA studies. While this is likely to be true, it is an
effect of little or no worth in developing and improving products and processes.
Similarly, failure modes are often confused with effects. This can be illustrated
with another common product, a disposable razor. How can we determine a com-
prehensive list of failure modes? Simply start with an appropriate function diagram.
For each function, we need to consider how these functions can go astray. There are
a limited number of ways that this can occur, all related to function. If you consider
the completion of a function (at the desired extent) to be the absence of failure, then
pose these questions about each function in the function diagram:
What would constitute an absence of function?
What would occur if the function were incomplete?
What would demonstrate a partial function?
What would be observed if there was excess function?
What would a decayed function consist of?
What would happen if a function occurs too soon or too late (out of desired
sequence)?
Could there be an additional unwanted function?
Each of these conditions establishes a possible failure mode. For the disposable
razor, the task function is generally understood to be "cut hair" (not, of course, to
"shave"). The failure mode that is most obvious is an additional unwanted function,
namely "cut skin." Notice that the mode of failure is not "feel pain" or "bleed";
these are failure effects.
To make use of these ideas in the context of the function diagram, we must next
define terminus functions. Terminus functions are simply those functions at the
right-hand (or "how") end of any function chain in the function diagram. In the
mechanical pencil example, two terminus functions would be "position eraser" and
"locate eraser." Why do you position and locate the eraser? To hold the eraser. Why
do you hold the eraser? To erase marks. Why do you erase marks? To ensure
correctness. Since this chain is one of enhancing functions, we do not directly modify
the task function.
Start your analysis of failure modes by testing each of the possible conditions
listed above against the terminus functions. After you have completed the terminus
functions, move one step in the "why" direction. However, as you move to the left,
you will find that you frequently discover the same modes for the other functions.
Since the function chain shows the interrelated nature of the functions, this should
not be surprising. As a rule, you will get most (if not all) of the relevant failure
modes from the terminus functions.* So, starting with the terminus functions will
speed your work and reduce redundancy.
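A minimal sketch of pairing each function with the failure-mode conditions listed above to build an FMEA checklist (the helper function is an assumption of this sketch; the two functions are the pencil terminus functions just discussed):

FAILURE_CONDITIONS = [
    "absence of function",
    "incomplete function",
    "partial function",
    "excess function",
    "decayed function",
    "function out of desired sequence",
    "additional unwanted function",
]

def failure_mode_checklist(functions):
    # Every function is tested against every condition.
    return [(f, c) for f in functions for c in FAILURE_CONDITIONS]

for function, condition in failure_mode_checklist(["position eraser", "locate eraser"]):
    print(f"{function}: consider {condition}")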
By working through each function chain in the function diagram, a comprehen-
sive list of failure modes can be developed. This listing of failure modes then alters
the approach to FMEA substantially; modes are clear, and cause-effect relationships
are easier to understand. Moreover, developing FMEA studies using function dia-
grams that were originally constructed as part of the QFD discipline assures that
product development activities continue to reflect the initial assumptions incorpo-
rated in the conceptual planning phase of the development process.**
Once you have identified failure modes in association with functions, the remainder
of the FMEA study, though still involved, is rather mechanical. For each
failure mode, you must examine the likely effects that will result from this mode.
With a clear mode statement, this is much simpler, and you are much less likely to
confuse mode and effect issues. The effects can then be rated for severity using an
appropriate table. With the effects in hand, causes can next be established and the
occurrence rating estimated. Notice that this sequence of events makes the confusion
of cause and effect much more difficult; in many cases, the logical improbability of
reversal of cause and effect statements is so obvious that you simply cannot reverse
these two issues.
Finally, you can conclude the fundamental analysis with an evaluation of controls
and detection. Once again, starting with a statement of function makes this clearer
and less subject to ambiguity. Understanding the progression from function to mode
to cause to effect sets the stage. What is it that you expect to detect? Is it a mode?
In practice, detecting modes is extremely unlikely. You are more likely to detect
effects. However, are effects what you want to detect? Once an effect is seen, the
failure has already occurred, and costs associated with the failure must already be
absorbed.
Let us return to the disposable razor to understand this. If the failure mode is
"cut skin," we must recognize that detecting cut skin is extremely difficult. You
are much more likely to detect an effect, namely pain or bleeding. Now, we
recognize that we really do not want to detect failures at this point. Instead, we need
to ask what are the possible causes of this failure mode. In this simple example, two
different causes are readily apparent. From a design standpoint, the blades of the
razor could be designed at the wrong angle to the shaver head. Even if the manu-
facturing were 100% accurate, a design that sets the blade angle incorrectly would
have terrible consequences. On the other hand, the design could be correct; the blade
angle could be specified at the optimum angle, but it could be assembled at an
incorrect angle. Detection would best be aimed at testing the design angle*** and
* This is even more true for a system FMEA than for a design FMEA study.
** Of course, any change that is made in concept during development activities requires a continuous
updating of the function diagrams under consideration.
*** In the ISO and QS-9000 systems, we can think of this in terms of design verification.
at controlling the manufacturing process so that the optimum design angle would
be repeatable (within limits) in production.*
Finally, the Value Analysis process can also make use of the function diagrams
that serve in the QFD and FMEA processes. In VA, the essence of the technique is
the association of cost with function. Once this is accomplished, the method of
functional realization can be considered in a variety of "what if" conditions. If there
is a comprehensive statement of function, VA teams can be reasonably sure that
ongoing value assessments, based on the ratio of function to cost, have a consistent
and rational foundation. Moreover, the teams have a much higher confidence that
these "what if" questions take customer issues into proper account.
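As a toy illustration of the value ratio (every worth and cost figure below is invented; these are not measurements from any example in this book):

# Value is treated here as the estimated worth of the functions delivered divided by cost.
components = {
    # component: (worth of its functions, cost), both in dollars
    "barrel":        (0.40, 0.25),
    "tube assembly": (0.55, 0.70),
    "clip":          (0.10, 0.05),
}

for name, (worth, cost) in components.items():
    ratio = worth / cost
    flag = "review" if ratio < 1.0 else "ok"
    print(f"{name:13s} value ratio = {ratio:.2f} ({flag})")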
Too often, VA activities are carried out as if function is well understood and
only cost matters. In too many cases, no function analysis is even performed. Despite
the long-standing cautions against this, this alluring shortcut is often taken to save
time, money, or both. The shortcomings of skipping function analysis in VA are not
trivial. Most of the disappointing results obtained in using the VA methodology have
probably occurred because function was not fully and comprehensively understood.
At a very fundamental level, how can a value ratio analysis be performed without
a full statement of function? This is like calculating a return on investment without
knowing the investment. Moreover, the analysis of value ratio can be misleading if the
function issue is not well defined. It is easy to reduce cost. You simply eliminate features
and functions from a product. Soon, you will not even be able to accomplish the task
function. (In practice, functionless VA studies typically eliminate important enhancing
functions that make a critical difference in the marketplace, and customers consequently
pronounce unfavorable judgments on decontented products. VA then gets the blame.)
Since value studies typically occur subsequent to QFD and FMEA in product
development activities, the difficulty of understanding function is eliminated if
function is fully defined and even specified during these earlier activities. By using
function as the basis for product and manufacturing activities, a degree of focus and
understanding of customer wants and needs is preserved not only during VA activities
but throughout the product life cycle.
KANO MODEL
The tool of choice for understanding function is the Kano model. A typical
framework of the model is shown in Figure 2.5. The Kano model identifies three
aspects of quality, each having a different effect on customer satisfaction. They are:
1. Basic quality: taken for granted to exist
2. Performance quality: the "more" principle
3. Excitement quality: the "wow"
* This is the issue of process control in the ISO and QS-9000 systems; in QS-9000, it goes to the
heart of the control plan itself. Also, this is a simplified example. In more detail, the failure mode of
"cut skin" can even occur when the blade angle is correct both in design and execution. A deeper
examination of these issues quickly leads to the consideration of robustness in the design itself.
The more we find out about these three aspects from the customer, the more
successful we are going to be in our DFSS venture. (Caution: It is imperative to
understand that the customer talks in everyday language, and that this language may
or may not be acceptable from a design perspective. It is the engineer's responsibility
to translate the language data into a form that may prove worthwhile in requirements
as well as verification. A good source for more detailed information is the 1993
book by Shoji.)
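A minimal sketch of recording Kano classifications for customer attributes (the function and its wording are assumptions of this sketch; the attribute-to-category pairs echo the automotive examples shown later in this chapter):

KANO = {
    "brakes": "basic",
    "horn": "basic",
    "windshield wipers": "basic",
    "quiet gear shift": "performance",
    "fuel economy": "performance",
    "style": "excitement",
    "ride": "excitement",
}

def satisfaction_effect(attribute):
    category = KANO.get(attribute, "unclassified")
    effects = {
        "basic": "dissatisfies when poor; little extra satisfaction when good",
        "performance": "satisfaction rises and falls with performance",
        "excitement": "delights when present; no penalty when absent",
    }
    return effects.get(category, "needs customer research before classification")

print("fuel economy ->", satisfaction_effect("fuel economy"))
print("style        ->", satisfaction_effect("style"))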
BASIC QUALITY
Basic quality refers to items that the customer is dissatisfied with when the product
performs poorly but is not more satisfied with when the product performs well.
Fixing these items will not raise satisfaction beyond a minimum point. These items
may be identified in the Kano model as in Figure 2.6.
Some sources for the basic quality characteristics are: things gone right, things
gone wrong, surrogate data, surveys, warranty, and market research.
PERFORMANCE QUALITY
Performance quality refers to items that the customer is more satisfied with more
of. In other words, the better the product performs, the more satisfied the customer.
The worse the product performs, the less satisfied the customer. Attributes that can
be classified as linear satisfiers fall into this category. A typical depiction is shown
in Figure 2.7.
Some sources for performance quality characteristics are: internal satisfaction anal-
ysis, customer interviews, corporate targets/goals, competition, and benchmarking.
EXCITEMENT QUALITY
Excitement quality refers to items that the customer is more satisfied with when
the product is more functional but is not less satisfied with when it is not. This is
the area where the customer can be really surprised and delighted. A typical depiction
of these attributes is shown in Figure 2.8.
Some sources for excitement quality characteristics are: customer insight, tech-
nology, and interviews with comments such as "high %" or "better than expected."
FIGURE 2.5 Kano model framework. (Y axis: customer satisfaction; X axis: product functionality.)
Items that are identified as surprise/delight candidates are very fickle in the sense
that they may change without warning. Indeed, they become expectations. The
engineer must be very cautious here because items that are identified as excitement
items now may not predict excitement at some future date. In fact, we already know
that over time the surprised/delighted items become performance items, the perfor-
mance items become basic, and the basic items become inherent attributes of the
product. A classic example is the brakes of an automobile. The traditional brakes
were the default item. However, when disc brakes came in as a new technology,
they were indeed the excitement item of the hour. They were replaced, however,
FIGURE 2.6 Basic quality depicted in the Kano model. (Example items: brakes, horn, windshield wipers.)
FIGURE 2.7 Performance quality depicted in the Kano model. (Example items: quiet gear shift, wind noise, power, fuel economy.)
FIGURE 2.8 Excitement quality depicted in the Kano model. (Example items: style, ride, features.)
with the ABS brake system, and now even this is about to be displaced by the
electronic brake system. This evolution may be seen in the Kano model in Figure 2.9.
Developing these surprise and delight items requires activities that gain
insight into the customer's emotions and motivations. It requires an investment of
time to talk with and observe the customer in the customer's own setting, and the
use of the potential product. Above all, it requires the ability to read the customer's
latent needs and unspoken requirements.
Is there a way to sustain the delight of the customer? We believe that there is.
Once the attributes have been identified, a robust design must be initiated with two
objectives in mind.
1. Minimize the degradation of these items.
2. Preserve the basic quality beyond expectations.
These two steps will create an outstanding reliability and durability reputation.
QUALITY FUNCTION DEPLOYMENT (QFD)
Now that we have finished the Kano analysis, and we know pretty much what the
customer sees as functional and value-added items, we are ready to organize all
these attributes and then prioritize them. The methodology used is that of QFD.
QFD is a planning tool that incorporates the voice of the customer into features that
satisfy the customer. It does this by portraying the relationships between product or
process "whats" and "hows" in matrix form. The matrix form in its entirety is called
the House of Quality; see Figure 2.10.
One of the reasons QFD is used is that it allows us to organize the Y's,
y's, and x's into a workable framework of understanding. QFD does not generate
the Y's, y's, or x's. Ultimately, however, QFD will help in identifying the transfer
function in the form Y = f(x, n).
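As a rough sketch of the arithmetic behind a House of Quality (the 9/3/1 relationship weights are a common convention, and every "what," "how," and rating below is invented for illustration), customer importance ratings can be rolled up into priorities for the technical characteristics:

# Customer "whats" with importance ratings (1-5), and the strength of each
# what-to-how relationship (9 = strong, 3 = moderate, 1 = weak).
wants = {"make marks reliably": 5, "fit hand": 3, "erase marks": 4}

relationships = {
    ("make marks reliably", "lead feed force"): 9,
    ("make marks reliably", "lead diameter"): 3,
    ("fit hand", "barrel diameter"): 9,
    ("erase marks", "eraser hardness"): 9,
    ("erase marks", "lead diameter"): 1,
}

scores = {}
for (what, how), strength in relationships.items():
    scores[how] = scores.get(how, 0) + wants[what] * strength

for how, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{how:16s} priority = {score}")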
QFD was developed in Japan, with the intent to achieve competitive advantage
in quality, cost, and timing. To understand this need, one must comprehend what
quality control is all about from Japan's point of view. Japan's industrial standards
define Quality Control (QC) as "a system of means to economically produce goods
and/or services that satisfy customer requirements." It is this definition of QC that
FIGURE 2.9 Excitement quality depicted over time in the Kano model.
propelled the Japanese to find not only a tool but a planning tool that implements
the business objectives, of which the right application is product development. The
definition of QFD is "a systematic approach for translating customer wants/requirements
into company-wide requirements." This translation takes place at each stage
from research and development to engineering and manufacturing to marketing and
sales and distribution. The QFD system concept is based on four key documents:
1. Overall customer requirement planning matrix. This document provides
a way of turning general customer requirements into specified final product control characteristics.
2. Final product characteristic deployment matrix. This document translates
the output of the planning matrix into critical component characteristics.
3. Process plan and quality control charts. These documents identify critical
product and process parameters as well as benchmarks for each of those
parameters.
FIGURE 2.10 A typical House of Quality matrix. (Elements shown: the "what" list with importance ratings, the "how" list, the relationship matrix, the correlation matrix, competitive assessments, technical difficulty, "how much" targets, and important control items.)
4. Operating instructions. These documents identify operations to be per-
formed by plant personnel to assure that the important parameters are
achieved.
TERMS ASSOCIATED WITH QFD
There are six key terms associated with QFD:
Quality function deployment: An overall concept that provides a means
of translating customer requirements into the appropriate technical requirements
for each stage of product development and production (i.e., marketing
strategies, planning, product design and engineering, prototype evaluation,
production process development, production, sales). This concept is further
broken down into product quality deployment and deployment of the
quality function (described below).
Voice of the customer: The customer's requirements expressed in their own
terms.
Counterpart characteristics: An expression of the voice of the customer
in technical language that specifies customer-required quality; counterpart
characteristics are critical final product control characteristics.
Product quality deployment: Activities needed to translate the voice of
the customer into counterpart characteristics.
Deployment of the quality function: Activities needed to ensure that customer-required
quality is achieved; the assignment of specific quality
responsibilities to specific departments. (The phrase "quality function" does
not refer to the quality department, but rather to any activity needed to ensure
that quality is achieved, no matter which department performs the activity.)
Quality tables: A series of matrices used to translate the voice of the
customer into final product control characteristics.
BENEFITS OF QFD
QFD certainly appears to be a sensible approach to defining and executing the myriad
of details embodied in the product development process, but it also appears to be a
great deal of extra work. What is it really worth? Setting the logical arguments aside,
there are a number of demonstrated benefits resulting from the use of QFD:
Demonstrated results
Preservation of knowledge
Fewer startup problems
Lower startup cost
Shorter lead time
Warranty reduction
Customer satisfaction
Marketing advantage
Preservation of knowledge: The QFD charts form a repository of knowledge,
which may (and should) be used in future design efforts. For example, Toyota is
convinced that the QFD process will make good engineers into excellent engineers.
An American engineering expert once commented, "There isn't anything in the QFD
chart I don't already know." Upon reflection, he realized that few other engineers
knew everything on that chart. The QFD charts can be a knowledge base from which
to train engineers.
Fewer startup problems/lower startup cost: Toyota and other Japanese
automobile manufacturers have found that the use of QFD more effectively front
loads the engineering effort. This has substantially reduced the number of costly
engineering changes at startup through a marked reduction of problems at startup.
QFD has helped to identify potential problems early in design or avoid oversights
through its disciplined approach.
Shorter lead time: Toyota has reduced its product development cycle to less
than 24 months.
Warranty reduction: The corrosion problems with Japanese cars of the 1960s
and 1970s led to enormous warranty expenses, significantly impacting profitability.
The Toyota rust QFD study resulted in virtually eliminating corrosion and the
resulting warranty expense.
Customer satisfaction: The Japanese automobile manufacturers tend to focus
on products that satisfy customers (as opposed to eliminating problems). The QFD
approach has greatly facilitated the satisfying of customer wants. Domestic customer
satisfaction surveys show that Japanese products have consistently scored higher
than many American products.
Marketing advantage: A Japanese manufacturer of earth moving equipment
introduced a series of five new models that offered substantial advantages over their
Caterpillar Corporation counterparts, resulting in redistribution of market share.
QFD brings several benefits to companies willing to undertake the study and
training required to put the system in place. Some of these benefits as they relate to
marketing advantage are:
Product objectives based on customers' requirements are not misinterpreted
at subsequent stops.
Particular marketing strategies or sales points do not become lost or
blurred during the translation process from marketing through planning
and on to execution.
Important production control points are not overlooked. Everything necessary
to achieve the desired outcome is understood and in place.
Tremendous efficiency is achieved because misinterpretation of program
objectives, marketing strategy, and critical control points is minimized.
See Figure 2.2.
All of the above translate into significant marketing advantages, that is, speedy
introduction of products that satisfy customers without problems.
In addition to all the benefits already mentioned, Table 2.2 shows some of the
benefits from the total development process perspective, which is a synergistic result
starting with QFD.
ISSUES WITH TRADITIONAL QFD
The use of traditional QFD raises several issues for business people, including the
following:
1. Change is uncomfortable.
Counterpoint: There is an old saying, "If we do what we have done, we
will get what we have." To truly improve, we must explore new patterns
of logical thinking and let go of outdated ways. We must be willing to change.
2. Success is not realized until the product is released.
Counterpoint: The truest measure of customer satisfaction comes after the
product or service is introduced. It is easy to lose sight of improvements
that do not materialize until years after the improvement effort. We
would be remiss not to seek ways to achieve the end goal of customer
satisfaction in our design and development process.

TABLE 2.2
Benefits of Improved Total Development Process

Cash Drain | Old Process | Improved Process
Technology push, but where's the pull? | Concepts with no needs, needs with no concept | Technology strategy and technology transfer bring right technology to the product
Disregard for voice of the customer | The voice of the engineer and other corporate specialists is emphasized | House of Quality and all steps of QFD deploy the voice of the customer throughout the process
Eureka concept | Mad dash with singular concept, usually vulnerable | Pugh process converges on consensus and commitment to invulnerable concept
Pretend designs | Initial design is not production intent and emphasizes newness rather than superior design | Two-step design and design competitive benchmarking lead to superior design
Pampered product | Make it look good for demonstration | Taguchi optimization positions product as far as possible away from potential problems
Hardware swamps | Large number of highly overlapped prototype iterations leaves little time for improvement | Only four iterations, each planned to make maximum contribution to optimization
Here is the product; where is the factory? | Product is developed, then factory reacts to it | One total development process, product, and production capability
We have always made it this way | Old process parameters used repetitiously without design improvement | Taguchi process parameter design improves quality, reduces cycle times
Inspection | Inspection creates scrap, rework, adjustments, and field quality loss | Taguchi's optimal checking and adjusting minimizes costs of inspection
Give me my targets, let me do my thing | Lack of teamwork | Teamwork and competitive benchmarking beat contracts and targets; lead the process, do not manage problems
3. QFD is a long process.
Counterpoint: QFD saves the team's time and resources with new approaches
and tools. Avoiding multiple redesigns and multiple prototype
levels in response to customer input recovers the time spent on QFD.
The upstream time saves multiples of downstream time.
4. It is not as much fun as fire fighting.
Counterpoint: Finding and fixing problems may be personally gratifying.
It is the stuff from which heroes/heroines are made. But emergencies
are not in the company's best interest and certainly not in the customers'
interest. Management must provide a system that rewards problem
prevention as well as problem solving.
5. The relation to the traditional product development process is not understood.
Counterpoint: QFD replaces some traditional product design and development
events, i.e., target setting and functional assumptions, and thereby
does not add time.
6. It is difficult to accept customer input when the voice of the engineer
contradicts.
Counterpoint: Engineering has delivered about 80% customer satisfaction;
getting to 90 to 95% is a tough challenge requiring enhancements to
current methods for achieving quality.
PROCESS OVERVIEW
The easiest way to think of QFD is to think of it as a process consisting of linked
spreadsheets arranged along a horizontal (Customer) axis and intersecting vertical
(Technical) axis. Important details include the following:
From a macro perspective, the horizontal arrangement is referred to as
the Customer Axis because it organizes the Customer Wants.
Customers are the people external to the organization who purchase,
operate, and service your products. Customers can also be internal, i.e.,
the end users of your work within the organization.
The vertical arrangement is referred to as the Technical Axis because it
translates the Customer Wants into technical metrics.
The intersection of the axes (referred to as the Relationship Matrix)
identifies how well engineering metrics correlate to customer satisfaction.
A closer look reveals that the interrelated matrices build upon one another
beginning with a validated list of Customer Wants.
DEVELOPING A QFD PROJECT PLAN
Perhaps one of the most important issues in QFD is the selection of appropriate
teams. Teams must share a common vision and mission to accomplish their objec-
tives. Some of the reasons are:
Building a project plan is the first critical team-building exercise
The project plan has been standardized in QFD, so all teams follow a
basic strategy that includes the following steps:
Develop Project Plan to include safety standards and any governmental
regulations, as well as timing.
Review Project Plan with program management for buy-in.
Complete the Customer Axis.
Review Customer Axis interim report with program management.
Complete Technical Axis.
Develop corporate strategy.
Develop final report.
Develop Deployment Plan for integrating into business cycle.
Communicate results to all programs and affected activities.
The Customer Axis
The steps necessary for completion of the customer axis include the following:
Determining Customer Wants
a. Obtain Customer Wants.
b. Select relevant Customer Wants (about 30% of total Wants).
c. Add applicable Wants.
d. Set up focus groups, interviews, surveys, etc.
e. Refine Customer Wants list.
f. Enter Customer Wants into QFD net.
g. Give Customer Wants to strategic standardization organization (SSO).
Obtaining customer competitive evaluations
a. Submit Customer Wants to market research (team).
b. Develop mail-out questionnaire and/or clinic (market research).
c. Send mail-out questionnaire and/or conduct clinic (market research).
d. Report results to project team (market research).
e. Enter customer competitive evaluation data into the internal team base.
Setting customer targets
a. Identify Customer Want (team).
b. Review its Customer Desirability Index (CDI) rating and rank (team).
c. Identify baseline product (team).
d. Review customer competitive evaluations (team).
e. Identify corporate strategy (team).
Calculate image ratio for each Customer Want: image ratio = customer target / baseline product.
Calculate strategic CDI for each Customer Want: strategic CDI = CDI × image ratio × sales point (see the sketch following this list).
f. Enter corporate strategy into customer targets matrix (team).
g. Set customer targets (either opportunity to copy or sales point).
h. If opportunity to copy, enter symbol into customer targets matrix.
i. If sales point, enter values into customer targets matrix (team).
j. End.
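As a small illustration of the target-setting arithmetic, the Python sketch below uses hypothetical Customer Wants and numbers; only the two formulas (image ratio = customer target / baseline product, strategic CDI = CDI × image ratio × sales point) come from the steps above.

# Hypothetical Customer Wants with CDI ratings, baseline and target scores, and sales points.
wants = [
    {"want": "Quiet interior",  "cdi": 8.5, "baseline": 6.0, "target": 7.5, "sales_point": 1.5},
    {"want": "Easy entry/exit", "cdi": 6.0, "baseline": 7.0, "target": 7.0, "sales_point": 1.0},
    {"want": "No wind noise",   "cdi": 9.0, "baseline": 5.5, "target": 7.0, "sales_point": 1.2},
]

for w in wants:
    # Image ratio: how far the customer target sits above (or below) the baseline product.
    w["image_ratio"] = w["target"] / w["baseline"]
    # Strategic CDI: the CDI weighted by the image ratio and the sales point.
    w["strategic_cdi"] = w["cdi"] * w["image_ratio"] * w["sales_point"]

# Rank Customer Wants by strategic CDI to focus engineering attention.
for w in sorted(wants, key=lambda w: w["strategic_cdi"], reverse=True):
    print(f'{w["want"]:<15} image ratio={w["image_ratio"]:.2f} strategic CDI={w["strategic_cdi"]:.2f}')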
Determining Technical System Expectations (TSE)
a. Review and adapt TSE template (team).
b. Review past and current projects for additional TSEs (team).
c. Identify and define new TSEs (team).
d. Organize adapted list of TSEs (team).
e. Enter TSEs into internal base (team).
Determining relationships
a. Review the relationship (team).
b. Confirm/establish relationships (team and subject matter experts
[SMEs]).
c. Seek team consensus (team).
d. Collect data and/or conduct experiments (team and SMEs) to find out
whether disagreements exist.
e. Check that each Want is satisfied by at least one TSE (team and SMEs).
f. Enter into internal base.
Technical competitive benchmarking
Buy, rent, lease or borrow competitive products (team).
Select TSEs to be benchmarked (team).
Establish inventory of benchmarking tests and data (team and SMEs).
Identify additional benchmarking tests required (team and SMEs).
Develop new tests (team and SMEs).
Conduct benchmark tests (team and SMEs).
Enter data into QFDNET (team).
Establish customer/engineer correlations (team and SMEs).
Setting technical targets
a. Develop technical targets (team and SMEs).
b. Review existing program targets for existing TSEs (team).
c. Recommend technical targets to program office (team and SMEs).
d. Reconcile program targets and technical targets for existing TSEs (program office).
e. Enter technical targets into QFDNET (team).
The steps listed above will result in the following QFD deliverables for the Customer
Axis:
Validated list of Customer Wants for the product, system, subsystem, or
component
Customer Wants prioritized to focus engineering attention
a. Customer Desirability Index of the most to least desirable Customer
Wants
b. Customer satisfaction targets for all Customer Wants, expressed as a
percent over/under satisfaction of base product, system, subsystem, or
component
c. A final rank ordered strategic index of Customer Wants based on
corporate strategies and competitive opportunities
Technical Axis
On the Technical Axis, the following items will need to be produced:
Rank ordered list of key Technical System Expectations that when cor-
rectly targeted will satisfy Customer Wants at a strategically competitive
level
Target values for key TSEs derived from technical competitive bench-
marking that correlate with customers' competitive evaluations. These
target values aid program management in two ways:
a. By driving the product and engineering program toward integrated
business and technical propositions that program management can
prove
b. With managing the program team's performance at program completion
Internal Standards and Tests
New or modified tests or other verification methods that make certain
basic and product performance wants are achieved
Institutionalizing revised tests and standards into real-world usage (customer
dependent, of course), customer requirements, corporate engineering test procedures,
and other documents, both generic and program specific, that support the
organization's design verification system.
THE QFD APPROACH
The first concern of QFD is the customer. Therefore, in planning a new product we
start with customer requirements, defined through market research. Generally, we
call this the product development process, and it includes the program planning,
conceptualization, optimization, development, prototyping, testing, and manufactur-
ing functions.
One can see that this development process is indeed very complex. Quite often,
it cannot be performed by one individual. This is because it consists of several trade-
offs, such as:
Shared responsibilities
Interpretations
Priorities
Technical knowledge
Long time experience
Resource changes
Communication
Lots of work
It is precisely this complexity that all too often causes the product development
process to create a product that fails to meet the customer requirements. For example:
Customer requirement
Design requirements
Part characteristics
Manufacturing operations
Production requirements
Note: It is of paramount importance that the communication process within an
organization does not fall victim to the use of jargon.
QFD METHODOLOGY
QFD is accomplished through a series of charts that appear to be very complex.
They do contain a great deal of information, however. That information is both an
asset and a liability.
All the charts are interconnected to what is called the House of Quality because
of the roof-like structure at its top. Since this house is made up of distinct parts or
rooms, let us find the function of each part, so that we can comprehend what QFD
is all about (see Figure 2.10).
QFD begins with a list of objectives, or the what that we want to accomplish
(see Figure 2.11). This is usually the voice of the customer and as such is very general,
vague, and difficult to implement directly. It is given to us in raw form, that is, in
the everyday language of the customer. (Example: "I don't want a leaky window
when it rains.")
For each what, we refine the list into the next level of detail by listing one or more
hows for each what. The hows are an engineering task. Figure 2.11 shows the relationship
between the what and the how. Figure 2.12 shows that it is possible to have
an iterative process between the what and the how, with a possible refinement of the
old how into the new what and ultimately to generate a very good new how.
Even though this step shows greater detail than the original what list, it is by
itself often not directly actionable and requires further definition. This is accomplished
by further refinement until every item on the list is actionable. This level is
important because there is no way of ensuring successful realization of a requirement
that no one knows how to accomplish. (Note: Remember that our level of refinement
within the how list may affect more than one how or what and can in fact adversely
affect one another. That is why the arrows in Figure 2.11 are going in multiple
directions.)
To reduce possible confusion we represent the what and how in the following
manner. The enclosed matrix becomes the relationships. The relationships are shown
at the intersections of the what and how. Some common symbols are:
Medium relationship
Weak relationship
Very strong relationship
The method of using symbols allows very complex relationships to be shown,
and the interpretation is easy and is not dependent on experience. There are many
variations of this, and readers are encouraged to use what is comfortable for them.
Figure 2.13 presents a typical matrix.
Once the what, how, and relationships have been identified, the next step is to
establish a how much for each how (see Figure 2.14). The intent here is to provide
specific objectives that guide the subsequent design and provide a means of objectivity
to the process. The result is minimum interference from opinion. (Note: This
how much is another cross check on our thinking process. It forces us to think in a
very detailed, measurable fashion.)
To summarize:
The what identifies the customer's requirements in everyday language.
The how refines the customer's requirements (from an engineering perspective).
The relationship defines the relationship between what and how via a symbolic
language.
The how much provides an objective means of assessing that requirements
have been met and provides targets for further detail development. Pictorially,
the flow is shown in Figure 2.14.
FIGURE 2.11 The initial what of the customer (what → how).
FIGURE 2.12 The iterative process of what to how (what → how/what → how).
At this point, even though a lot of information is at hand, it is not unusual to
refine the hows even further until an actionable level of detail is achieved. This is
done by creating a new chart in which the hows of the previous chart become the
whats of the new chart. The how much information as a general rule is carried
along to the next chart to facilitate communication. This is done to ensure that the
objectives are not lost.
The process is repeated as necessary. In the product development process, this
means taking the customer requirements and defining design requirements that are
carried on to the next chart to establish the basis for the part characteristics. This is
continued to define the manufacturing operations and the production requirements
(see Figure 2.15). (Note: The greatest gains using QFD can be realized only when it is
taken down to the work detail level of production requirements. The QFD process
is well suited to simultaneous engineering in which product and process engineers
participate in a team effort.) For more information on the cascading process of the
QFD methodology, see the Appendix.

FIGURE 2.13 The relationship matrix. (In the example, the relationship symbols are weighted 9 for very strong, 3 for medium, and 1 for weak. With what importance ratings of 4, 5, 1, 3, and 2, the resulting how importance ratings are 42, 21, 33, 28, and 24; the first how, for instance, scores (4 × 9) + (2 × 3) = 42. Make sure that the ratings differentiate to the point of discrimination between each other; you are interested in great differentiation rather than a simple priority.)
FIGURE 2.14 The conversion of how to how much.
So far, we have talked about the basic charts in the House of Quality, and as a
result we have gained much information about the problem at hand. However, there
are several useful extensions to the basic QFD charts that enhance their usefulness.
These are used as required based on the content and purpose of each particular chart.
One such extension is the correlation matrix.
The correlation matrix (see Figure 2.10) is a triangular table often attached
to the hows. The purpose of such placement is to establish the correlation between
each how item, i.e., to indicate the strength of the relationship and to describe the
direction of the relationship. To do that, symbols are used, most commonly:
Positive X Negative
Strong positive # Strong negative
A second extension is the competitive assessment (see Figure 2.10). This is a
pair of graphs that shows, item for item, how competitive products compare with
current company products. Its strength is the fact that it can be done for the whats,
hows, and how muchs.
The competitive assessment may also be used to uncover gaps in engineering
judgment. What and how items that are strongly related should also exhibit a
relationship in the competitive assessment. For example, if we believe superior
dampening will result in an improved ride, the competitive assessment would be
expected to show that products with superior dampening also have a superior ride.
If this does not occur, it calls attention to the possibility that something significant
may have been overlooked. If not acted upon, we may achieve superior performance
to our in-house tests and standards but fail to achieve expected results in the hands
of our customers.

FIGURE 2.15 The flow of information in the process of developing the final House of Quality (VOC = voice of the customer; the flow runs from requirements analysis and the functional spec through design, system design, methods/tools/procedures, technical assessment, and the resource and implementation plans).
Why are we doing this? Basically, for two reasons:
1. To establish the values of the objectives to be achieved
2. To uncover engineering judgment errors
Remember that the correlation must be related to real world usage from the
customer's perspective. What and how items that are strongly related should also be
shown to relate to one another in the competitive assessment. If the correlation does
not agree, it may mean that we overlooked something very significant.
A third extension is the importance rating (see Figure 2.10). This is a mech-
anism for prioritizing efforts and making trade-off decisions for each of the whats
and hows. It is important to keep in mind that the values by themselves have no
direct meaning; rather, their meaning surfaces only when they are interpreted by
comparing their magnitudes. The importance rating is useful for prioritizing efforts
and making trade-off decisions. (Some of the trade-offs may require high level
decisions because they cross engineering group, department, divisional, or company
lines. Early resolution of trade-offs is essential to shorten program timing and avoid
non-productive internal iterations while seeking a nonexistent solution.) The rating
itself may take the form of numerical tables or graphs that depict the relative
importance of each what or how to the desired end result. Any rating scale will
work, provided that the scale is a weighted one. A common method is to assign
weights to each relationship matrix symbol and sum the weights, just as we did in
Figure 2.13. Another more technical way is the following:
$$\tilde{w}_{\text{function }i} = \sum_j w_{y_j}\, r_{ij}, \qquad
w_{\text{function }i} = \frac{5\,\tilde{w}_{\text{function }i}}{\max_i\left(\tilde{w}_{\text{function }i}\right)}$$
where $\tilde{w}_{\text{function }i}$ = unnormalized function importance; $w_{y_j}$ = importance rating; $r_{ij}$ = individual rating of functions; and $w_{\text{function }i}$ = weighted function importance.
Applying this methodology to Figure 2.13 yields Figure 2.16.
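To see these two formulas in action, here is a small Python check. The relationship matrix entries below are illustrative stand-ins (only the what importances 4, 5, 1, 3, and 2 and the resulting how ratings come from Figures 2.13 and 2.16); with them, the arithmetic reproduces the quoted values.

# Importance ratings of the whats (from Figure 2.13) and a hypothetical
# relationship matrix (rows = whats, columns = hows) using weights 9/3/1.
what_importance = [4, 5, 1, 3, 2]
relationships = [
    [9, 0, 0, 3, 0],
    [0, 3, 0, 3, 3],
    [0, 0, 0, 1, 0],
    [0, 0, 9, 0, 3],
    [3, 3, 3, 0, 0],
]

# Unnormalized how importance: sum over whats of (what importance x relationship weight).
unnormalized = [
    sum(w * row[j] for w, row in zip(what_importance, relationships))
    for j in range(len(relationships[0]))
]
print(unnormalized)  # [42, 21, 33, 28, 24], as in Figure 2.16

# Normalized how importance: scale so the largest how scores 5.
normalized = [round(5 * u / max(unnormalized), 1) for u in unnormalized]
print(normalized)    # [5.0, 2.5, 3.9, 3.3, 2.9]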
QFD AND PLANNING
Contrary to what the name implies, quality function deployment (QFD) is not just a
quality tool. QFD was developed in Japan, growing out of the need to simultaneously
achieve a competitive advantage in quality, cost, and timing. To better comprehend
QFD, it is important to understand what the Japanese mean by the word quality.
The word quality, which we generally define as conformance to requirements,
fitness for use, or some other measure of goodness, takes on a much broader meaning
in Japan (there is probably no exact English translation of the Japanese version).
However, according to Japanese industrial standard Z8101-1981, quality control
is a system of means to economically produce goods or services which satisfy
customer requirements. (Italics added.)
Thus to the Japanese, quality means conducting the business effectively, not
just producing a good product. In this context, QFD really becomes a planning tool
for implementing business objectives, of which the most widely known application
is to product development.
In planning a new product, we start with customer needs, wants, and expectations,
often defined through market research. We wish to design and manufacture a product
that satisfies the customer's perception of intended function, as well as or better than
our competitors' (subject to certain internal company constraints). In other words:
CUSTOMER REQUIREMENTS → PRODUCT

FIGURE 2.16 Alternative method of calculating importance. (For the first how, the unnormalized importance is W_function 1 = (4 × 9) + (2 × 3) = 42 and the normalized importance is 5 × 42/42 = 5, and so on; the unnormalized ratings 42, 21, 33, 28, and 24 thus become how importances of 5, 2.5 or 3, 3.9 or 4, 3.3 or 3, and 2.9 or 3.) Keep in mind that when you are addressing the hows, in essence you are dealing with customer functionalities. Therefore, it is recommended to design for the average, based on each function's importance according to its capability to supply each original Y.
Let us call the process of translating these requirements into a viable product
the product development process. This process includes program planning, con-
cepting, optimization, development, prototyping, and testing, as well as the corre-
sponding manufacturing functions. Thus:
CUSTOMER REQUIREMENTS → PRODUCT DEVELOPMENT PROCESS → PRODUCT
In a large organization, the product development process is so detailed that often
no one individual can comprehend it all. For some, the process looks like a maze
or a mysterious black box. For others the process is an intricate network of
activities. Regardless of how it is represented, the product development process is
exceedingly complex, consisting of numerous trade-offs.
Shared responsibilities and interpretation differences often result in conflicting
priorities. That is the reason the team must have ownership of the projects and must
have a substantial body of technical knowledge over a relatively long time frame
while enduring resource changes. This, of course, requires a great deal of commu-
nication and a substantial work effort.
PRODUCT DEVELOPMENT PROCESS
The complexity of the product development process makes it a natural haven for
Murphy's law, with nearly an infinite number of opportunities for problems to occur.
Despite the best of intentions and efforts, all too often the product development
process creates a product that fails to meet the customer requirements. Such failures
may occur due to:
Trade-offs
Shared responsibilities
Interpretations
Priorities
Technical knowledge
Long time frame
Resource changes
Communication
Lots of work
The QFD approach focuses on customer requirements in a manner that directs
efforts toward achieving those requirements (see Figure 2.17). In Figure 2.17, for
each of the customer requirements, a set of design requirements is determined, which,
if satisfied, will result in achieving the customer requirements. In like manner, each
design requirement is evolved into part characteristics, which in turn are used to
determine manufacturing operations and specific production requirements. The flow
is as follows:

CUSTOMER REQUIREMENTS → DESIGN REQUIREMENTS → PART CHARACTERISTICS → MANUFACTURING OPERATIONS → PRODUCTION REQUIREMENTS

FIGURE 2.17 The development of QFD (the linked phases of product planning, part deployment, process planning, and production planning cascade customer requirements into design requirements, part characteristics, manufacturing operations, and production requirements).
So, for example: The customer requirement of years of durability may be
achieved in part by the design requirement of no visible rust in three years. This in
turn may be achieved in part by ensuring part characteristics that include a minimum
paint film build and maximum surface treatment crystal size. The manufacturing
process that provides these part characteristics consists of a three-coat process that
includes a dip tank. The production requirements are the process parameters within
the manufacturing process that must be controlled in order to achieve the required
part characteristics (and ultimately the customer requirements). Therefore, we can
present this in a summary form as:

CUSTOMER REQUIREMENT: Years of durability
DESIGN REQUIREMENT: No visible exterior rust in 3 years
PART CHARACTERISTICS: Paint weight 2 to 2.5 gm/m²; Crystal size 3 max
MANUFACTURING OPERATIONS: Dip tank; 3 coats
PRODUCTION REQUIREMENTS: Time = 2.0 minutes; Acidity = 1.5 to 2.0; Temperature = 45 to 55°C
CONJOINT ANALYSIS
WHAT IS CONJOINT ANALYSIS?
We introduced conjoint analysis in Volume III of this series. Recall that conjoint
analysis is a multivariate technique used specifically to understand how respondents
develop preferences for products or services. It is based on the simple premise that
consumers evaluate the value of a product/service/idea (real or hypothetical) by
combining the separate amounts of value provided by each attribute.
It is this characteristic that is of interest in the DFSS methodology. After all, we
want to know the bundle of utility from the customer's perspective. (The reader is
encouraged to review Volume III, Chapter 11.) So in this section, rather than dwelling
on theoretical statistical explanations, we will apply conjoint analysis in a couple
of hypothetical examples. The examples are based on the work of Hair et al. (1998)
and are used here with the publisher's permission.
A HYPOTHETICAL EXAMPLE OF CONJOINT ANALYSIS
As an illustration of conjoint analysis, let us assume that HATCO is trying to develop
a new industrial cleanser. After discussion with sales representatives and focus
groups, management decides that three attributes are important: cleaning ingredients,
convenience of use, and brand name. To operationalize these attributes, the research-
ers create three factors with two levels each:
A hypothetical cleaning product can be constructed by selecting one level of
each attribute. For the three attributes (factors) with two values (levels), eight
(2 × 2 × 2) combinations can be formed. Three examples of the eight possible
combinations (stimuli) are shown below, and a short sketch enumerating all eight follows them:
HATCO phosphate-free powder
Generic phosphate-based liquid
Generic phosphate-free liquid
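As a quick illustration of the full factorial, here is a short Python sketch (hypothetical, not part of the original example) that enumerates all eight stimuli:

from itertools import product

# The three factors and their two levels each, as defined in the text.
factors = {
    "brand": ["HATCO", "Generic"],
    "ingredients": ["phosphate-free", "phosphate-based"],
    "form": ["liquid", "powder"],
}

# Full factorial design: 2 x 2 x 2 = 8 hypothetical products (stimuli).
stimuli = [dict(zip(factors, combo)) for combo in product(*factors.values())]
for s in stimuli:
    print(f'{s["brand"]} {s["ingredients"]} {s["form"]}')
print(len(stimuli))  # 8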
HATCO customers are then asked either to rank-order the eight stimuli in terms
of preference or to rate each combination on a preference scale (perhaps a 1-to-10
scale). We can see why conjoint analysis is also called trade-off analysis, because
in making a judgment on a hypothetical product, respondents must consider both
the good and bad characteristics of the product in forming a preference. Thus,
respondents must weigh all attributes simultaneously in making their judgments.
By constructing specic combinations (stimuli), the researcher is attempting to
understand a respondent's preference structure. The preference structure explains
not only how important each factor is in the overall decision, but also how the
differing levels within a factor influence the formation of an overall preference
(utility). In our example, conjoint analysis would assess the relative impact of each
brand name (HATCO versus generic), each form (powder versus liquid), and the
different cleaning ingredients (phosphate-free versus phosphate-based) in deter-
mining the utility to a person. This utility, which represents the total worth or
overall preference of an object, can be thought of as based on the part-worths for
each level. The general form of a conjoint model can be shown as
(Total worth for product)_ij,n = Part-worth of level i for factor 1
+ Part-worth of level j for factor 2 + ...
+ Part-worth of level n for factor m
where the product or service has m attributes, each having n levels. The product
consists of level i of factor 1, level j of factor 2, and so forth, up to level n for factor m.
Factor | Levels
Ingredients | Phosphate-free, Phosphate-based
Form | Liquid, Powder
Brand name | HATCO, Generic brand
In our example, a simple additive model would represent the preference structure
for the industrial cleanser as based on the three factors (utility = brand effect +
ingredient effect + form effect). The preference for a specic cleanser product can
be directly calculated from the part-worth values. For example, the preference for
HATCO phosphate-free powder is:
Utility = Part-worth of HATCO brand
+ Part-worth of phosphate-free cleaning ingredient
+ Part-worth of powder
With the part-worth estimates, the preference of an individual can be estimated
for any combination of factors. Moreover, the preference structure would reveal the
factor(s) most important in determining overall utility and product choice. The
choices of multiple respondents could also be combined to represent the competitive
environment faced in the real world.
AN EMPIRICAL EXAMPLE
To illustrate a simple conjoint analysis, assume that the industrial cleanser
experiment was conducted with respondents who purchased industrial supplies. Each
respondent was presented with eight descriptions of cleanser products (stimuli) and
asked to rank them in order of preference for purchase (1 = most preferred; 8 = least
preferred). The eight stimuli are described in Table 2.3, along with the rank orders
given by two respondents.
As we examine the responses for respondent 1, we see that the ranks for the
stimuli with the phosphate-free ingredients are the highest possible (1, 2, 3, and 4),
whereas the phosphate-based product has the four lowest ranks (5, 6, 7, and 8).
Thus, the phosphate-free product is much more preferred than the phosphate-based
cleanser. This can be contrasted to the ranks for the two brands, which show a
mixture of high and low ranks for each brand.

TABLE 2.3
Stimuli Descriptions and Respondent Rankings for Conjoint Analysis of Industrial Cleanser

Stimulus | Form | Ingredients | Brand | Respondent 1 | Respondent 2
1 | Liquid | Phosphate-free | HATCO | 1 | 1
2 | Liquid | Phosphate-free | Generic | 2 | 2
3 | Liquid | Phosphate-based | HATCO | 5 | 3
4 | Liquid | Phosphate-based | Generic | 6 | 4
5 | Powder | Phosphate-free | HATCO | 3 | 7
6 | Powder | Phosphate-free | Generic | 4 | 5
7 | Powder | Phosphate-based | HATCO | 7 | 8
8 | Powder | Phosphate-based | Generic | 8 | 6

Assuming that the basic model (an additive model) applies, we can calculate the impact of each level as differences
(deviations) from the overall mean ranking. (Readers may note that this is analogous
to multiple regression with dummy variables or ANOVA.) For example, the average
ranks for the two cleanser ingredients (phosphate-free versus phosphate-based) for
respondent 1 are:
Phosphate-free: (1 + 2 + 3 + 4)/4 = 2.5
Phosphate-based: (5 + 6 + 7 + 8)/4 = 6.5
With the average rank of the eight stimuli of 4.5 [(1 + 2 + 3 + 4 + 5 + 6 + 7 +
8)/8 = 36/8 = 4.5], the phosphate-free level would then have a deviation of −2.0 (2.5
− 4.5) from the overall average, whereas the phosphate-based level would have a
deviation of +2.0 (6.5 − 4.5). The average ranks and deviations for each factor from
the overall average rank (4.5) for respondents 1 and 2 are given in Table 2.4. In our
example, we use smaller numbers to indicate higher ranks and a more preferred
stimulus (e.g., 1 = most preferred). When the preference measure is inversely related
to preference, such as here, we reverse the signs of the deviations in the part-worth
calculations so that positive deviations will be associated with part-worths indicating
greater preference. Deviation is calculated as: deviation = average rank of level − overall
average rank (4.5). Note that negative deviations imply more preferred rankings.
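The same arithmetic can be sketched in a few lines of Python, using respondent 1's ranks from Table 2.3; the labels are just convenient names, not anything prescribed by the method:

# Respondent 1's ranks for the eight stimuli, grouped by the level each stimulus carries.
ranks_by_level = {
    ("form", "liquid"):                 [1, 2, 5, 6],
    ("form", "powder"):                 [3, 4, 7, 8],
    ("ingredients", "phosphate-free"):  [1, 2, 3, 4],
    ("ingredients", "phosphate-based"): [5, 6, 7, 8],
    ("brand", "HATCO"):                 [1, 3, 5, 7],
    ("brand", "generic"):               [2, 4, 6, 8],
}

overall_average = sum(range(1, 9)) / 8  # 4.5

# Average rank per level and its deviation from the overall average (Table 2.4 values).
for (factor, level), ranks in ranks_by_level.items():
    average = sum(ranks) / len(ranks)
    deviation = average - overall_average
    print(f"{factor:11s} {level:16s} average rank {average:4.2f} deviation {deviation:+.2f}")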
The part-worths of each level are calculated in four steps:
Step 1: Square the deviations and find their sum across all levels.
Step 2: Calculate a standardizing value that is equal to the total number
of levels divided by the sum of squared deviations.
Step 3: Standardize each squared deviation by multiplying it by the stan-
dardizing value.
Step 4: Estimate the part-worth by taking the square root of the standard-
ized squared deviation.
Let us examine how we would calculate the part-worth of the first level of
ingredients (phosphate-free) for respondent 1. The deviations are squared.
The squared deviations are summed (10.5). The number of levels is six (three factors
with two levels apiece). Thus, the standardizing value is calculated as .571 (6/10.5
= .571). The squared deviation for phosphate-free (2² = 4; remember that we reverse
signs) is then multiplied by .571 to get 2.284 (2² × .571 = 2.284). Finally, to calculate
the part-worth for this level, we then take the square root of 2.284, for a value of
1.511. This process yields part-worths for each level for respondents 1 and 2, as
shown in Table 2.5.
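A minimal Python sketch of the four steps, starting from respondent 1's reversed deviations in Table 2.4 (note that the exact standardizing value 6/10.5 gives 1.512 where the text, rounding to .571, reports 1.511):

import math

# Reversed deviations for respondent 1 (sign flipped because a low rank means high preference).
reversed_deviations = {
    ("form", "liquid"): +1.0, ("form", "powder"): -1.0,
    ("ingredients", "phosphate-free"): +2.0, ("ingredients", "phosphate-based"): -2.0,
    ("brand", "HATCO"): +0.5, ("brand", "generic"): -0.5,
}

# Step 1: square the deviations and sum them across all levels.
sum_sq = sum(d ** 2 for d in reversed_deviations.values())    # 10.5
# Step 2: standardizing value = number of levels / sum of squared deviations.
standardizing = len(reversed_deviations) / sum_sq             # 6 / 10.5
# Steps 3 and 4: standardize each squared deviation, then take its square root,
# carrying the sign of the reversed deviation.
part_worths = {
    key: math.copysign(math.sqrt(standardizing * d ** 2), d)
    for key, d in reversed_deviations.items()
}
for key, pw in part_worths.items():
    print(key, round(pw, 3))  # phosphate-free ~ +1.51, liquid ~ +0.756, HATCO ~ +0.378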
Because the part-worth estimates are on a common scale, we can compute the
relative importance of each factor directly. The importance of a factor is represented
by the range of its levels (i.e., the difference between the highest and lowest values)
divided by the sum of the ranges across all factors. For example, for respondent 1,
the ranges are 1.512 [.756 − (−.756)], 3.022 [1.511 − (−1.511)], and .756 [.378
− (−.378)]. The sum total of ranges is 5.290. The relative importance for form, ingredients,
and brand is calculated as 1.512/5.290, 3.022/5.290, and .756/5.290, or 28.6, 57.1,
and 14.3 percent, respectively. We can follow the same procedure for the second
respondent and calculate the importance of each factor, with the results of form
(66.7 percent), ingredients (25 percent), and brand (8.3 percent). These calculations
for respondents 1 and 2 are also shown in Table 2.5.
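The importance calculation itself is a one-liner per factor; the Python sketch below reuses the respondent 1 part-worths reported in Table 2.5:

# Part-worths for respondent 1 from Table 2.5, keyed by factor.
part_worths = {
    "form":        {"liquid": +0.756, "powder": -0.756},
    "ingredients": {"phosphate-free": +1.511, "phosphate-based": -1.511},
    "brand":       {"HATCO": +0.378, "generic": -0.378},
}

# Importance of a factor = range of its part-worths / sum of ranges over all factors.
ranges = {f: max(levels.values()) - min(levels.values()) for f, levels in part_worths.items()}
total_range = sum(ranges.values())  # 5.290
for factor, rng in ranges.items():
    print(f"{factor:11s} range {rng:.3f} importance {100 * rng / total_range:.1f}%")
# form 28.6%, ingredients 57.1%, brand 14.3%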
To examine the ability of this model to predict the actual choices of the
respondents, we predict preference order by summing the part-worths for the different
combinations of factor levels and then rank ordering the resulting scores. The
calculations for both respondents for all eight stimuli are shown in Table 2.6. Comparing
the predicted preference order to the respondent's actual preference order
assesses predictive accuracy. Note that the total part-worth values have no real
meaning except as a means of developing the preference order and, as such, are not
compared across respondents. The predicted and actual preference orders for both
respondents are given in Table 2.6.
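Prediction, then, is just summing part-worths per stimulus and ranking the totals. The sketch below reuses the respondent 1 part-worths and reproduces the estimated ranking shown in Table 2.6:

from itertools import product

# Respondent 1 part-worths from Table 2.5.
part_worths = {
    "form":        {"Liquid": +0.756, "Powder": -0.756},
    "ingredients": {"Phosphate-free": +1.511, "Phosphate-based": -1.511},
    "brand":       {"HATCO": +0.378, "Generic": -0.378},
}

# Total worth of each stimulus is the sum of the part-worths of its levels.
totals = {
    combo: sum(part_worths[factor][level] for factor, level in zip(part_worths, combo))
    for combo in product(*part_worths.values())  # iterates the level names of each factor
}

# Rank stimuli from highest to lowest total worth (rank 1 = most preferred).
for rank, (combo, total) in enumerate(sorted(totals.items(), key=lambda kv: -kv[1]), start=1):
    print(rank, combo, round(total, 3))
# Liquid / Phosphate-free / HATCO comes out first (2.645), matching Table 2.6.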
TABLE 2.4
Average Ranks and Deviations for Respondents 1 and 2

Factor Level | Ranks Across Stimuli | Average Rank of Level | Deviation from Overall Average Rank

Respondent 1
Form: Liquid | 1, 2, 5, 6 | 3.5 | −1.0
Form: Powder | 3, 4, 7, 8 | 5.5 | +1.0
Ingredients: Phosphate-free | 1, 2, 3, 4 | 2.5 | −2.0
Ingredients: Phosphate-based | 5, 6, 7, 8 | 6.5 | +2.0
Brand: HATCO | 1, 3, 5, 7 | 4.0 | −.5
Brand: Generic | 2, 4, 6, 8 | 5.0 | +.5

Respondent 2
Form: Liquid | 1, 2, 3, 4 | 2.5 | −2.0
Form: Powder | 5, 6, 7, 8 | 6.5 | +2.0
Ingredients: Phosphate-free | 1, 2, 5, 7 | 3.75 | −.75
Ingredients: Phosphate-based | 3, 4, 6, 8 | 5.25 | +.75
Brand: HATCO | 1, 3, 7, 8 | 4.75 | +.25
Brand: Generic | 2, 4, 5, 6 | 4.25 | −.25
TABLE 2.5
Estimated Part-Worths and Factor Importance for Respondents 1 and 2

Factor Level | Reversed Deviation (a) | Squared Deviation | Standardized Deviation (b) | Estimated Part-Worth (c) | Range of Part-Worths | Factor Importance (d)

Respondent 1
Form: Liquid | +1.0 | 1.0 | +.571 | +.756 | |
Form: Powder | −1.0 | 1.0 | −.571 | −.756 | 1.512 | 28.6%
Ingredients: Phosphate-free | +2.0 | 4.0 | +2.284 | +1.511 | |
Ingredients: Phosphate-based | −2.0 | 4.0 | −2.284 | −1.511 | 3.022 | 57.1%
Brand: HATCO | +.5 | .25 | +.143 | +.378 | |
Brand: Generic | −.5 | .25 | −.143 | −.378 | .756 | 14.3%
Sum of squared deviations: 10.5
Standardizing value (e): .571
Sum of part-worth ranges: 5.290

Respondent 2
Form: Liquid | +2.0 | 4.0 | +2.60 | +1.612 | |
Form: Powder | −2.0 | 4.0 | −2.60 | −1.612 | 3.224 | 66.7%
Ingredients: Phosphate-free | +.75 | .5625 | +.365 | +.604 | |
Ingredients: Phosphate-based | −.75 | .5625 | −.365 | −.604 | 1.208 | 25.0%
Brand: HATCO | −.25 | .0625 | −.04 | −.20 | |
Brand: Generic | +.25 | .0625 | +.04 | +.20 | .400 | 8.3%
Sum of squared deviations: 9.25
Standardizing value: .649
Sum of part-worth ranges: 4.832

(a) Deviations are reversed to indicate higher preference for lower ranks. Sign of deviation used to indicate sign of estimated part-worth.
(b) Standardized deviation equal to the squared deviation times the standardizing value.
(c) Estimated part-worth equal to the square root of the standardized deviation.
(d) Factor importance equal to the range of a factor divided by the sum of the ranges across all factors, multiplied by 100 to yield a percentage.
(e) Standardizing value equal to the number of levels (2 + 2 + 2 = 6) divided by the sum of the squared deviations.
The estimated part-worths predict the preference order perfectly for respondent
1. This indicates that the preference structure was successfully represented in the
part-worth estimates and that the respondent made choices consistent with the
preference structure. The need for consistency is seen when the rankings for
respondent 2 are examined. For example, the average rank for the generic brand is
lower than that for the HATCO brand (refer to Table 2.4), meaning that, all things
being equal, the stimuli with the generic brand will be more preferred. Yet, examining
the actual rank orders, this is not always seen. Stimuli 1 and 2 are equal except for
brand name, yet HATCO is preferred. This also occurs for stimuli 3 and 4. However,
the correct ordering (generic preferred over HATCO) is seen for the stimuli pairs
of 5-6 and 7-8. Thus, the preference structure of the part-worths will have a difficult
time predicting this choice pattern. When we compare the actual and predicted rank
orders (see Table 2.6), we see that respondent 2's choices are many times mispredicted
but most often just miss by one position due to the brand effect. Thus, we
would conclude that the preference structure is an adequate representation of the
choice process for the more important factors, but that it does not predict choice
perfectly for respondent 2, as it does for respondent 1.
TABLE 2.6
Predicted Part-Worth Totals and Comparison of Actual and Estimated Preference Rankings

Form | Ingredients | Brand | PW(Form) | PW(Ingredients) | PW(Brand) | Total | Estimated | Actual

Respondent 1
Liquid | Phosphate-free | HATCO | +.756 | +1.511 | +.378 | 2.645 | 1 | 1
Liquid | Phosphate-free | Generic | +.756 | +1.511 | −.378 | 1.889 | 2 | 2
Liquid | Phosphate-based | HATCO | +.756 | −1.511 | +.378 | −.377 | 5 | 5
Liquid | Phosphate-based | Generic | +.756 | −1.511 | −.378 | −1.133 | 6 | 6
Powder | Phosphate-free | HATCO | −.756 | +1.511 | +.378 | 1.133 | 3 | 3
Powder | Phosphate-free | Generic | −.756 | +1.511 | −.378 | .377 | 4 | 4
Powder | Phosphate-based | HATCO | −.756 | −1.511 | +.378 | −1.889 | 7 | 7
Powder | Phosphate-based | Generic | −.756 | −1.511 | −.378 | −2.645 | 8 | 8

Respondent 2
Liquid | Phosphate-free | HATCO | +1.612 | +.604 | −.20 | 2.016 | 2 | 1
Liquid | Phosphate-free | Generic | +1.612 | +.604 | +.20 | 2.416 | 1 | 2
Liquid | Phosphate-based | HATCO | +1.612 | −.604 | −.20 | .808 | 4 | 3
Liquid | Phosphate-based | Generic | +1.612 | −.604 | +.20 | 1.208 | 3 | 4
Powder | Phosphate-free | HATCO | −1.612 | +.604 | −.20 | −1.208 | 6 | 7
Powder | Phosphate-free | Generic | −1.612 | +.604 | +.20 | −.808 | 5 | 5
Powder | Phosphate-based | HATCO | −1.612 | −.604 | −.20 | −2.416 | 8 | 8
Powder | Phosphate-based | Generic | −1.612 | −.604 | +.20 | −2.016 | 7 | 6
THE MANAGERIAL USES OF CONJOINT ANALYSIS
It is beyond the scope of this section to discuss the statistical basis of conjoint
analysis. However, in DFSS, we should understand the technique in terms of its role
in decision making and strategy development. The simple example we have just
discussed presents some of the basic benefits of conjoint analysis. The flexibility of
conjoint analysis gives rise to its application in almost any area in which decisions
are studied. Conjoint analysis assumes that any set of objects (e.g., brands, companies)
or concepts (e.g., positioning, benefits, images) is evaluated as a bundle of
attributes. Having determined the contribution of each factor to the consumer's
overall evaluation, the marketing researcher could then:
1. Define the object or concept with the optimum combination of features
2. Show the relative contributions of each attribute and each level to the
overall evaluation of the object
3. Use estimates of purchaser or customer judgments to predict preferences
among objects with differing sets of features (other things held constant)
4. Isolate groups of potential customers who place differing importance on
the features to define high and low potential segments
5. Identify marketing opportunities by exploring the market potential for
feature combinations not currently available
The knowledge of the preference structure for each individual allows the
researcher almost unlimited flexibility in examining both individual and aggregate
reactions to a wide range of product- or service-related issues.
REFERENCES
Fowler, T.C., Value Analysis in Design, Van Nostrand Reinhold, New York, 1990.
Hair, J.F., Multivariate Data Analysis, 5th ed., Prentice Hall, Upper Saddle River, NJ, 1998.
Harry, M., The Vision of Six Sigma: A Roadmap for Breakthrough, 5th ed., Vol. 1, TriStar
Publishing, Phoenix, 1997.
Porter, M., Competitive Advantage, Free Press, New York, 1985.
Rechtin, E. and Maier M., The Art of Systems Architecting, CRC, Boca Raton, FL, 1997.
Shoji, S., A New American TQM, Productivity Press, Portland, OR, 1993.
SELECTED BIBLIOGRAPHY
Afors, C. and Michaels, M.Z., A Quick, Accurate Way to Determine Customer Needs, Quality Progress, July 2001, pp. 82–88.
Anon., Quality Function Deployment, American Supplier Institute, Inc., Dearborn, MI, 1988.
Bialowas, P. and Tabaszewska, E., How to Evaluate the Internal Customer Supplier Relationship, Quality Progress, July 2001, pp. 63–67.
Carlzon, J., Moments of Truth, HarperCollins, New York, 1989.
Fredericks, J.O. and Salter, J.M., What Does Your Customer Really Want? Quality Progress, Jan. 1998, pp. 63–70.
Gale, B.T., Managing Customer Value: Creating Quality and Service that Customers Can See, Free Press, New York, 1994.
Gobits, R., The Measurement of Insight, unpublished paper presented at the 2nd International Symposium on Educational Testing, Montreux, 1975.
Goncalves, K.P. and Goncalves, M.P., Use of the Kano Method Keeps Honeywell Attuned to the Voice of the Customer, Quirks Marketing Research Review, Apr. 2001, pp. 18–25.
Gutman, J. and Miaoulis, G., Past Experience Drives Future CS Behavior, Marketing News, Oct. 22, 2001, pp. 45–46.
Harry, M., The Vision of Six Sigma: A Roadmap for Breakthrough, 5th ed., Vol. 2, TriStar Publishing, Phoenix, 1997.
James, H.L., Sasser, W.E., and Schlesinger, L.A., The Service Profit Chain: How Leading Companies Link Profit and Growth to Loyalty, Satisfaction and Value, Free Press, New York, 1997.
Mariampolski, H., Qualitative Market Research, Sage Publications, Newbury Park, CA, 2001.
Morais, R., The End of Focus Groups, Quirks Marketing Research Review, May 2001, pp. 153–154.
Mudge, A.E., Numerical Evaluation of Functional Relationships, Proceedings, Society of American Value Engineers, 1967.
Murphy, B., Methodological Pitfalls in Linking Customer Satisfaction with Profitability, Quirks Marketing Research Review, Oct. 2001, pp. 22–27.
Murphy, B., Qualitatively Speaking: Of Bullies, Friends and Mice, Quirks Marketing Research Review, Oct. 2001, pp. 16, 61.
Saliba, M.T. and Fisher, C.M., Managing Customer Value, Quality Progress, June 2000, pp. 63–70.
Shillito, M.L., Pareto Voting, Proceedings, Society of American Value Engineers, 1973.
Stamatis, D.H., Total Quality Management: Engineering Handbook, Marcel Dekker, New York, 1997.
Stamatis, D.H., Total Quality Service, St. Lucie Press, Delray Beach, FL, 1996.
Sullivan, L.P., The Seven Stages in Company Wide Quality Control, Quality Progress, May 1986, pp. 77–83.
Sullivan, L.P., Quality Function Deployment, Quality Progress, June 1986, pp. 39–50.
Thomas, J. and Sasser, W.E., Why Satisfied Customers Defect, Harvard Business Review, Nov.-Dec. 1995, pp. 88–89.
VanVierah, S. and Olosky, M., Achieving Customer Satisfaction: Registrar Satisfaction Survey Counterbalances the Myth About Registrars, Automotive Excellence, Winter 1999, pp. 10–15.
Veins, M., Wedel, M., and Wilms, T., Metric conjoint segmentation methods: a Monte Carlo comparison, Journal of Marketing Research, 33, 73–85, 1996.
Wittink, D.R. et al., Commercial use of conjoint analysis: an update, Journal of Marketing, 53, 91–96, 1989.
3
Benchmarking
Benchmarking is a tool, a technique or process, a philosophy, and a new name for
old practices. It involves operations research and management science for determin-
ing (a) what to do or goal setting and (b) how to do it or action plan identification.
Benchmarking can be applied (a) systematically and comprehensively or (b) ad
hoc project by project. In both cases it can require (a) sophisticated statistical
analysis, (b) utilization of a wide variety of analytical tools, and (c) a wide range
of data sources. The basic requirements for success are:
Time, effort, and resources
A willingness to learn and to change
Continuing, long-term top management support
An external focus on customers and competitors
A common-sense approach and active listening
The ability to look at the old in a new way

GENERAL INTRODUCTION TO BENCHMARKING
A BRIEF HISTORY OF BENCHMARKING
The term benchmarking was coined by Xerox in 1979. Xerox has now performed
over 400 benchmark studies, and the process is totally integrated at all levels as part
of the business planning process. The approach has actually been in use for a number
of years although it was often called by different names. (Reverse engineering
is an approach used to study the design and manufacturing characteristics of com-
petitive products. Benchmarking of computer hardware and software is a very
common practice.)
Benchmarking extends the concept to consider administrative and all manage-
ment processes. There is a conscious attempt to compare with the best of the best
even especially if that is not a direct competitor.
The fundamental process in starting benchmarking is to think about the area to
be benchmarked, which can be just about anything, and ask yourself, "Who is
especially good at that? What can I creatively imitate?" A typical process for doing
a benchmarking study is shown in Figure 3.1.
POTENTIAL AREAS OF APPLICATION OF BENCHMARKING
Benchmarking is a methodology that can be used along with other systematic,
comprehensive management approaches to improve performance. It is not an end
unto itself. Some examples of applications of benchmarking include:

SL3151Ch03Frame Page 97 Thursday, September 12, 2002 6:12 PM

98

Six Sigma and Beyond

Broad management focus
Cost reduction
Profit improvement
Business strategy development
Total quality management
Individual management processes
Improving customer service
Reducing product development time
Market planning
Product distribution
Highly specic focus
Invoice design
Sales force compensation
Fork lift truck maintenance
The critical questions to ask are:
What are the areas that potentially could be benchmarked?
How do you prioritize and focus the efforts?
FIGURE 3.1 The benchmarking continuum process. (The continuum runs from preparing and responding to surveys, agreeing to site visits, and two-way site visits through an informal search for the best, following a model, and forming a consortium group. Along the way: build customer goodwill and a network of benchmarking partners; treat the effort as an investment that must be managed to gain long-term benefit; a disciplined approach provides actionable data and true improvement opportunities and answers "How do the best do it?" but is time consuming and must be focused; define a process, treat a benchmarking study as a project, obtain clearance from the legal department, involve the process owner, and avoid the "we are unique, we know it all, it was not invented here, it is too complex, we already tried it and it does not work here" mentality.)
BENCHMARKING AND BUSINESS
STRATEGY DEVELOPMENT

Hall (1980) observed that certain industry leaders had exceptional performance even
in the bad times of 1979 to 1980. For example:

Company | Company ROE | Industry ROE
Goodyear | 9.2 | 7.4
Inland Steel | 10.9 | 7.1
Paccar | 22.8 | 15.4
Caterpillar | 23.5 | 15.4
General Motors | 19.8 | 15.4
Maytag | 27.8 | 10.1
G. Heilman Brewing | 25.8 | 14.1
Philip Morris | 22.7 | 18.2
Average | 20.2 | 12.9

How can this be so? What strategy did the more successful competitors follow?
LEAST COST AND DIFFERENTIATION
Hall's study itself is an early example of successful benchmarking. By extensive
interviewing and data analysis, Hall reached conclusions based on the performance
and the experience of a group of highly successful companies.
As determined by Hall and also described in the book Competitive Strategy by
Michael Porter, the successful competitor tends to follow one of two strategies:
Least cost
Differentiation
Those competitors who do not explicitly follow one strategy or the other tend
to get stuck in the middle and do not have the highest return on investment. Hall's
findings do, however, indicate that some firms can successfully manage both strategy
options. The generic strategies identified by Hall and Porter have been supported by
a number of research studies (see Higgins and Vincze, 1989).
For a successful business strategy to be developed, a company must decide what
course it will follow. It must also be certain that it is, in fact, realistically able to
pursue that alternative. Some questions to be asked include the following:
Does a company really have the least cost? How do they know? What is
the basis for the claim?
Is the company really differentiated in the eyes of the customer? How do
they know? What is the basis for the claim?
How might competitive conditions change in the future?
Benchmarking can provide in part the information necessary to answer
these questions by providing focus and insight on what the best companies are doing.
In addition to making a choice relative to least cost versus differentiation, an
important strategy choice is that of being a mass marketer versus supplying the
needs of a specific market segment. Therefore, when benchmarking is performed
the following must always be present:
Build a relationship with your benchmarking partner.
Establish trust and mutual interest.
Be worthy of trust.
Make it last.
Be open to reciprocity.
Follow a code of conduct.
Principle of confidentiality
Principle of first party contact
Principle of preparation
Principle of third party contact

CHARACTERISTICS OF A LEAST COST STRATEGY

A firm following the least cost strategy must be able to deliver a product or service
with acceptable quality at a lower total cost than any of its competitors. Note that
total cost is the critical concern. The company does not have to be least cost in every
aspect of the business. The fact that the total cost is the lowest does not necessarily
mean that the price that is charged is the lowest. To determine if the least cost
strategy is viable, it is necessary to perform competitive benchmarking and gain
information relative to the following:
What is the relative market share of the company? Does the experience
curve have a significant effect on cost reduction?
Is the industry one that can be affected by automation possibilities, conveyorized
assembly, or new production technology? Is the capital available
for investment in efficient scale facilities and product and process engineering
innovation?
Do competitors have a different mix of fixed and variable costs?
What is the percent capacity utilization by competitive firms?
Are the competitive firms using activity-based accounting?
How critical is raw material supply? Does the firm have preemptive
sources of supply?
Does the firm have a tight system of budgeting and cost control for all
functions?
Are products designed for low cost production? Are products simplified
and product lines reduced in number? Are bills of material standardized?
What is the level of product/service quality versus competition?
How labor intensive is the process? How effective are labor/management
relations?
Are marginal accounts minimized?


Improved quality through benchmarking can lead to lower costs. The cost of
quality (really the cost of non-quality) consists of the costs of prevention,
appraisal (inspection), internal quality failures, and external quality failures. This
cost can amount to as much as 30 to 40% of the cost of goods sold. Costs include the
following (a simple roll-up of these categories is sketched after the list):
Costs of prevention
Training
Equipment
Costs of appraisal (inspection)
Inspectors
Equipment
Cost of internal quality failures
Scrap
Rework
Machine downtime
Missed schedules
Excess inventory
Cost of external quality failures
Warranty expense
Customer dissatisfaction
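The following is a minimal, illustrative roll-up of these four cost categories in Python; every line item and dollar figure is hypothetical and is shown only to make the bookkeeping concrete:

# Hypothetical cost-of-quality roll-up; all figures are illustrative only.
cost_of_quality = {
    "prevention": {"training": 40_000, "equipment": 25_000},
    "appraisal": {"inspectors": 120_000, "equipment": 35_000},
    "internal_failures": {"scrap": 90_000, "rework": 110_000, "downtime": 60_000},
    "external_failures": {"warranty": 150_000, "lost_customers": 200_000},
}
cogs = 2_500_000  # hypothetical cost of goods sold

total_coq = sum(sum(items.values()) for items in cost_of_quality.values())
for category, items in cost_of_quality.items():
    print(f"{category:>18}: ${sum(items.values()):>9,}")
print(f"Total cost of quality: ${total_coq:,} ({total_coq / cogs:.1%} of cost of goods sold)")

With these made-up numbers the total comes to roughly a third of the cost of goods sold, which is the order of magnitude cited above.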
Studies have shown that the average quality improvement project results in
$100,000 of cost reduction. The associated cost to diagnose and remedy the problem
has averaged $15,000. Consequently, the payout from benchmarking in this area can
be significant. Velcro reported a 50% reduction in waste as a percentage of total
manufacturing cost in the first year and an additional 45% decrease in the second
year of its quality program.
Motorola achieved a quality level in 1991 that was 100 times better than it was
in 1987. By 1992, this company was striving for six sigma quality. That means three
defects per million or 99.9997 percent perfection. Motorola believes that super
quality is the lowest cost way of doing things, if you do things right the first time.
Their director of manufacturing at that time pointed out that each piece of
equipment has 17,000 parts and 144,000 opportunities for mistakes. A 99 percent
quality rate is equivalent to 1,440 mistakes per piece. The cost to hire and train
people to fix those mistakes would put the company out of business.
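The arithmetic behind these figures is easy to reproduce. A small sketch (the 144,000 opportunities per piece come from the text; the comparison levels are shown only for illustration):

# Reproduce the defects-per-piece arithmetic cited for Motorola equipment.
opportunities_per_piece = 144_000  # opportunities for mistakes, from the text

for label, yield_rate in [("99% quality", 0.99),
                          ("99.9997% quality (six sigma as stated)", 0.999997)]:
    defects = opportunities_per_piece * (1 - yield_rate)
    print(f"{label}: about {defects:,.1f} mistakes per piece")

At 99 percent this gives the 1,440 mistakes per piece quoted above; at the stated six sigma level it falls below one mistake per two pieces.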

CHARACTERISTICS OF A DIFFERENTIATED STRATEGY

A firm following the differentiation strategy must be able to provide a unique product
or service to meet the customer's expectations. The challenge of being unique is
that of providing a sustainable source of differentiation. It is very difficult to create
something that is totally sustainable. This may depend upon a corporate culture
producing a positive attitude toward quality and customer service or perhaps the
value of information or computer-to-computer linkages.


Following a differentiation strategy does not mean that a company can be
inefficient relative to costs. Although cost is not the primary driving force, costs still
must be minimized for the degree of differentiation provided. To determine if the
differentiation strategy is viable, it is necessary to perform competitive benchmarking
and gain information relative to segmentation.
When developing corporate or marketing strategy, it is important to identify the
different market segments that make up the total market. A market segment is a
group of customers with similar or related buying motives. The members of the
segment have similar needs, wants, and expectations. A focus on market segments
allows a company to tailor its products, services, pricing, distribution, and commu-
nication message to meet the specic needs of a market. The opposite of market
segmentation is mass marketing.
Segmentation allows a smaller company to successfully compete with a larger
company by concentrating resources at the specic point of competition. Any market
can be segmented. The toothpaste market, for example, can be segmented into the
sensory segment (principal benefit sought is flavor or product appearance), the
sociable segment (brightness of teeth), the worriers (decay prevention), and the least
cost buyer. To segment a market you need to know who the customers are, what
they buy, how they buy, when they buy, why they buy, and where they buy. Some
typical questions in this area are:
How do you segment your market?
What do you do differently for each of these segments?
How does the competition segment the market?
What new segments are likely to develop due to changes in sociological
factors, technology, legislation, economic conditions, or growing interna-
tionalism?

BENCHMARKING AND STRATEGIC
QUALITY MANAGEMENT

Strategic Quality Management (SQM) or Total Quality Management (TQM), as defined
by J. M. Juran, W. Edwards Deming, and others, consists of a systematic approach for
setting and meeting quality goals throughout a company. Just as companies have set
out to achieve financial goals through a process of corporate business planning, so also
can companies achieve quality goals by SQM or TQM or six sigma.
An overly simplified definition of TQM is "Doing the right thing, right the first
time, on time, all the time; always striving for improvement, and always satisfying the
customer." This requires a focus on customer needs, people, systems and process, and
a supportive cultural environment. But this really is not any different from what the six
sigma methodology proposes. The essential steps of the quality management process
consist of:
Quality planning
Identifying target market segments
Determining specific customers' needs, wants, and expectations


Translating the customer needs into product and process requirements
Designing products and processes with the required characteristics
(Competitive benchmarking can assist in this part of the process.)
Quality control
Measuring actual quality performance versus the design goals
Diagnosing the causes of poor quality and initiating the required cor-
rective steps
Establishing controls to maintain the gains
Quality improvement
Establishing a benchmarking process
Providing the necessary resources
It is important to note that the process:
Is strategic in nature, proactive
Is competitively focused on meeting customer needs as opposed to tech-
niques of analysis
Is goal oriented
Is comprehensive in terms of level and functions
Manages in quality, not simply defect reduction
The following are very closely linked:
Six sigma
Business strategic planning
Strategy development (least cost versus differentiation)
TQM
Pricing strategy
Benchmarking
The classical approach to benchmarking, viewed as a process (which has become
the de facto process), has the following characteristics:
Inspection to control defects is the primary tool.
Better quality means higher costs.
Significant scrap and rework activity takes place.
Quality control is found only in manufacturing.
SPC is used as an example; other tools are used occasionally.
Top management commitment
Level 5: Continuous improvement is a natural behavior even for routine
tasks.
Level 4: Focus is on improving the system.
Level 3: Adequate money and time are allocated to continuous
improvement and training.
Level 2: There is a balance of long-term goals with short-term objectives.


Level 1: The traditional approach is in place.
Note that the Level 1 commitment is the status quo, and not much is
happening. It is the least effective way of demonstrating to the orga-
nization at large that management commitment is a way of life. On
the other hand, Level 5 is the most effective and demands change of
some kind.
Obsession with excellence
Level 5: Constant improvement in quality, cost, and productivity
Level 4: Use of cross-functional improvement teams
Level 3: TQM and six sigma support system set up and in use
Level 2: Executive steering committee set up
Level 1: Traditional approach
Organization is customer satisfaction driven
Level 5: Customer satisfaction is the primary goal. More customers
desire a long-term relationship.
Level 4: Striving to improve value to customers is a routine behavior.
Level 3: Customer feedback is used in decision making.
Level 2: Customer rating of company is known.
Level 1: The traditional approach is in place.
Supplier involvement
Level 5: Suppliers fully qualified in all benchmark areas
Level 4: Suppliers actively implementing TQM and aware of the six
sigma demands
Level 3: Direct involvement in supplier awareness training; supplier
criteria in place
Level 2: Suppliers knowledgeable about your TQM as well as the six
sigma direction; supplier number reduction started
Level 1: Traditional approach
Continuous learning
Level 5: Training in TQM and six sigma tools is common among all
employees.
Level 4: Top management understands and applies TQM and the six
sigma philosophy.
Level 3: Ongoing training programs are in place.
Level 2: A training plan has been developed.
Level 1: The traditional approach is in place.
Employee involvement
Level 5: People involvement; self-directed work groups.
Level 4: Manager defines limits, asks group to make decisions.
Level 3: Manager presents problem, gets suggestions, makes decision.
Level 2: Manager presents ideas and invites questions, makes decision.
Level 1: The traditional approach is used.
Use of incentives
Level 5: Gainsharing
Level 4: More team than individual incentives and rewards
Level 3: Quality-related employee selection and promotion criteria


Level 2: Effective employee suggestion program used
Level 1: Traditional approach
Use of tools
Level 5: Statistics a common language among all employees
Level 4: More team than individual incentives and rewards
Level 3: SPC used for variation reduction
Level 2: SPC used in manufacturing
Level 1: Traditional approach
The Malcolm Baldrige National Quality Award encapsulates the essential ele-
ments of Strategic Quality Management. The key attributes considered when making
this award are listed below. Many agree that the criteria provide the blueprint for a
better company. The urgency to win the award can accelerate change within an
organization. Some companies have told their suppliers to compete or else. These
are the criteria:
Quality is defined by the customer.
The senior management of a business needs to have clear quality values and
build the values into the way the company operates on a day-to-day basis.
Quality excellence derives from well-designed and well-executed systems
and processes.
Continuous improvement must be part of the management of all systems
and processes.
Companies need to develop goals, as well as strategic and operational
plans, to achieve quality leadership.
Shortening the response time of all company operations and processes
needs to be part of the quality improvement effort.
Operations and decisions of the company need to be based on facts and
data.
All employees must be suitably trained and involved in quality activities.
Design quality and defect and error prevention should be major elements
of the quality system.
Companies need to communicate quality requirements to suppliers and
work with suppliers to elevate supplier quality performance.
Achievement of the award requires extensive top management effort and support.
All of the Quality Award winners have been in highly competitive industries and
either had to improve or get out of the business. On a scale of 10 (best) to 1 (poor),
how would you rate your company on each of these attributes? If you find yourself
on the low end, there may be a need for benchmarking.

BENCHMARKING AND SIX SIGMA

Within the information and analysis part of the examination or survey, the practi-
tioners of benchmarking look specifically at competitive comparisons and bench-
marks. It has been reported in the literature that many companies do not do enough


in the way of benchmarking. They compare themselves against other manufacturers
but do not make comparisons with outside businesses or even true best-in-class
companies.
A six sigma company is expected to describe the company's approach to select-
ing quality-related competitive comparisons and world-class benchmarks to support
quality planning, evaluation, and improvement. The specific areas to address are:
Criteria and rationale the company uses for making competitive compar-
isons and benchmarks. These include:
The relationship to company goals and priorities for the improvement
of product and service quality and/or company operations
The companies for comparison within or outside the industry
Current scope of competitive and benchmark involvement and data col-
lection relative to:
Product and service quality
Customer satisfaction and other customer needs
Supplier performance
Employee data
Internal operations, business processes, and support services
Other
For each, the company is directed to list sources of comparisons and
benchmarks, including companies benchmarked and independent testing
or evaluation, and:
How each type of data is used
How the company evaluates and improves the scope, sources, and uses
of competitive and benchmark data
The company must also indicate how this data is used to support:
Company planning
Setting of priorities
Quality performance review
Improvement of internal operations
Determination of product or service features that best predict customer
satisfaction
Quality improvement projects
Specic uses of benchmarking are to assist in:
Developing plans
Goal setting
Continuous process improvement
Determining trends and levels of product and service quality, the effec-
tiveness of business practices, and supplier quality
Determining customer satisfaction levels
A closer review of the criteria indicates several factors that are essential for effective
quality excellence and benchmarking activities within a company, including:


Customer-driven quality
Quality is judged by the customer. The customer's expectations of
quality dictate product design and this, in turn, drives manufacturing.
All product and service attributes that lead to customer satisfaction and
preference must be taken into consideration.
Customer-driven quality is a strategic concept. Why do people buy
your product? How do you know?
Leadership is crucial. A company's senior management must create
clear quality values, specific goals, and well-defined systems and methods
for achieving the goals.
Ongoing personal involvement is essential. The attitude must be
changed from a "management control" focus to a "management committed
to help you" focus.
Continual improvement
Constant improvement in many directions is required: improved prod-
ucts and services, reduced errors and defects, improved responsiveness,
and improved efficiency and effectiveness in the use of resources. All
of this takes time. If you do not have the time, do not start.
Fast response
An increasing need exists for shorter new product and service devel-
opment and introduction cycles and a more rapid response to custom-
ers.
Actions based on facts, data and analysis
A wide range of facts and data is required, e.g., customer satisfaction,
competitive evaluations, supplier data, and data relative to internal
operations.
Performance indicators to track operational and competitive perfor-
mance are critical. These performance indicators or goals can act as
the cohesive or unifying force within an organization. They can also
provide the basis for recognition and reward.
Participation by all employees is important. Reward and recognition sys-
tems need to reinforce total participation and the emphasis on quality.
Factors bearing on the safety, health, and well being of employees need
to be included in the improvement objectives.
Effective training is required. The emphasis must be on preventing
mistakes, not merely correcting them. Employees must be trained to
inspect their own work on a continuous basis.
Participation with suppliers is essential. It is important to get suppliers
to improve their quality standards.

NATIONAL QUALITY AWARD WINNERS AND BENCHMARKING

Example: Cadillac

To show the strong relationship between National Quality Award winners and bench-
marking, we provide a historical perspective. The first example comes to us from


Cadillac's approach to excellence. (Cadillac was the 1990 winner of the National
Quality Award.) The brief case study that follows indicates the integration of business
planning, excellent quality management, and benchmarking.
The Business Plan was the Quality Plan. The plan was designed to ensure that
Cadillac is "the Standard of the World" in all measurable areas of quality and
customer service.
The major components of the plan were:
Mission
Objectives
Quality: Emphasis on six major vehicle systems:
Exterior component and body mechanical
Chassis/powertrain
Seats and interior trim
Electrical/electronics
Body in white
Instrument panel
Competitiveness
Disciplined planning and execution
Leadership and people
Goals
For each objective, the following issues were addressed:
What are the measurable performance indicators of quality and customer
service? When answering, consider both the product itself and the man-
agement process that led to the improved product or service.
What does the customer need or want?
What levels are achieved by the best-of-class companies considering both
direct competitors and any other company?
What are the time-phased quality improvement goals?
Action plans
Took appropriate and applicable action to fulfill all the requirements so
that the customer could be satisfied.

A Second Example: Xerox

In the early 1980s, Xerox realized that Japanese competition was selling products for
less than the Xerox cost. Many of the required reforms focused on Xerox suppliers
because the cost of purchases amounted to 80% of the copier's cost of goods sold.
Xerox asked suppliers to restate their company performance data so that the supplier
could be compared with the best of class Xerox could find anywhere in the world.
Some of the benchmarks Xerox used to measure operations proficiency included:
Ratio of functional cost to revenue (percent)
Headcount per unit of output
Overhead rate (dollars/hour)


Cost per order entered
Cost per engineering drawing
Customer satisfaction rating (index value)
Internal and external defect rates (parts per million)
Service response time (hours)
Billing error rates
Days inventory on hand
Total manufacturing lead time (days)
New product development time (weeks)
Percent of parts delivered on time
New ideas per employee
Xerox reduced its number of suppliers from 5,000 in 1980 to 300 by 1986 based
on performance data and attitude. Suppliers were classified as: (a) does not think
improvement is necessary, (b) slow to accept or manage change, and (c) willing to
go for it and strong enough to be a survivor. Xerox reallocated its internal efforts
to concentrate on the companies in the third group. Xerox provided extensive training
to these companies, and defect rates in incoming materials dropped 90 percent in
three years.
In addition to performance improvement, the suppliers were asked to participate
in copier design, as early in the concept phase as possible, and to make suggestions
so that overall quality could be improved and costs reduced. When this information
was used, the cost of purchased material dropped by 50 percent.

Third Example: IBM Rochester

IBM Rochester describes its quality journey as follows:

1981  Vision: Product reliability
      Goal: Zero defects
1984  Vision: Process effectiveness and efficiency
      Goal: All processes rated
1986  Vision: Customer and supplier partnerships; competitive and functional benchmarks
      Goals: Best of competition. Over 350 benchmarking teams are in place; scores of benchmarking studies have been completed; strategic targets are derived from the comprehensive benchmarking process.
1989  Vision: Market-driven customer satisfaction; total business process focus; closed loop quality management system
      Goal: Total customer satisfaction
1990-1994  Vision: Customer the final arbiter; quality excellence in execution; products and services first with the best; people enabled, empowered, excited, rewarded
      Goal: Undisputed leadership in customer satisfaction


Results:
A 30 percent improvement in productivity occurred between 1986 and 1989.
This was a period of extensive benchmarking activity.
Product development time has been reduced by more than half, and manufac-
turing cycle time has been trimmed by 60 percent since 1983.

Fourth Example: Motorola

Each of the firm's six major groups and sectors has benchmarking programs that
analyze all aspects of a competitor's products to assess their manufacturability,
reliability, manufacturing cost, and performance. Motorola has measured the prod-
ucts of some 125 companies against its own standards, verifying that many Motorola
products rank as best in their class. (It is imperative for the reader to understand
that the result of a benchmarking study may indeed provide the researcher with data
to support the assertion that the current practices of your own organization are the
best in class.)

BENCHMARKING AND THE DEMING MANAGEMENT METHOD

There is a very close relationship between the approach of W. Edwards Deming and
that specied by the requirements of the National Quality Award. The potential role
of benchmarking to implement certain aspects of the Deming approach is apparent.
Deming's fourteen points are summarized below:
1. Create constancy of purpose for the improvement of product and services.
2. Adopt the new philosophy that quality is critical for the competitive
survival of a company.
3. Cease dependence on mass inspection, and create the processes that build
a quality product from the start.
4. End the practice of awarding business based on price alone, and take into
consideration the quality of products and services received.
5. Improve constantly and forever the system of production and service. This
begins with product design and goes through every phase of business
operations.
6. Institute training and retraining.
7. Provide leadership and the resources required to get the job done.
8. Drive out the fear of admitting problems and suggesting new and different
ways of doing things. Get around the not invented here syndrome.
9. Break down interdepartmental barriers so that all departments can work
toward the common objective of satisfying the customer.
10. Eliminate slogans, exhortations, and targets for the workforce without
providing the ways and means for accomplishment. Do not tell people
what to do without telling them how to do it and providing the systems
and support necessary.


11. Eliminate numerical quotas. These often promote poor quality. Instead
analyze the process to determine the systemic changes required to enable
superior performance.
12. Remove barriers to pride in workmanship by providing the training, com-
munication, and facilities required.
13. Institute a vigorous program of education and retraining. Help people to
improve every day.
14. Take action to accomplish the transformation required.

BENCHMARKING AND THE SHEWHART CYCLE OR DEMING WHEEL

Plan

Study a process to determine what changes might be made to improve it. What type
of performance is achieved by the best of the best? What do they do that we are not
doing? What results do they achieve? What changes would we have to make? What
does the customer expect? What is the customer level of satisfaction? Is the change
economically justified?

Do

Determine the specic plan for improvement and implement it. This involves the
development of creative alternatives by work teams and the conscious choice of a
strategy to be followed. This may require internal or external benchmarking.

Study: Observe the Effects

Was the root cause of the problem identified and corrected? Will the problem recur?
Are the expected results being achieved?

Act

Study the results and repeat the process. Was the plan a good one? What was learned?
This approach amounts to the application of the scientific method to the solution
of business problems. It is the basis of organizational learning.
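Read as an algorithm, the cycle is an explicit loop. A minimal sketch, assuming hypothetical plan, do, study, and act callables supplied by the improvement team (none of these names come from the text):

# Hypothetical skeleton of the Plan-Do-Study-Act cycle as an explicit loop.
def pdsa(plan, do, study, act, max_cycles=3):
    """Repeat the Shewhart cycle until `act` reports the gain is held."""
    for cycle in range(1, max_cycles + 1):
        change = plan()             # What change might improve the process?
        results = do(change)        # Implement the specific plan for improvement.
        findings = study(results)   # Were the expected results achieved?
        if act(findings):           # Standardize the gain, or repeat the cycle.
            return cycle
    return max_cycles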

WHY DO PEOPLE BUY?

Differentiation and quality management both focus on the need to meet customer
needs, wants, and expectations. Why does a person buy a particular product?
One view: (marketing based)
A second view: (psychologically based)
How can we define quality? This is a very critical question and may indeed
prove the most important question in pursuing benchmarking. The importance of
this question is that it will focus the research on "best" in a very customized fashion


from the organization's perspective. This is a question that must be addressed as
early as possible.

ALTERNATIVE DEFINITIONS OF QUALITY

People buy a combination of products and services for a price that depends upon
the perception of the value received. In order to conduct benchmarking studies
relative to quality, it is important to define the elusive term "quality." Garvin (1988)
and Stamatis (1996, 1997) provide various definitions of quality as follows:
Garvin's eight dimensions of product quality are:

Performance: Performance refers to the ability of the product to perform
up to expectations relative to its primary operating characteristics. For
example, a camera can be self-focusing and automatically adjust the lens
opening. Products can often be ranked in terms of levels of performance,
i.e., good, better, best. People's expectations differ depending upon the task
to be performed. Products are designed for different uses. Therefore, a
failure to perform might simply indicate another product class or market
segment focus and not inferior quality.

Features: Features are secondary attributes that affect a product's performance.
For example, the camera mentioned above can weigh less than two
pounds. A car can have power steering as a feature. Features can often be
bundled or unbundled. The distinction between performance and features
is arbitrary. One person's performance characteristics can be another
person's features.

Reliability: Reliability reflects the ability of a product to perform properly
over a period of time. A car, for example, might perform without major
repairs for 50,000 miles. Measures used to evaluate reliability are factors
such as the mean time between failures, the mean time to first failure, and
the failure rate per 1,000 items (a small worked example follows this list).
Conformance: Conformance measures whether product quality specifications
have been met. Is a shaft the required diameter? Are the parts of
impurity per million within the specified limits? Individual parts can be
within tolerance; however, there can be a problem of tolerance stackup.
Four parts, each 1.000 inch wide plus or minus .0005 inch, when stacked
up will not be 4.000 inches tall plus or minus .0005 inch.
Durability: Durability measures a product's expected operating life. Product
life can be limited due to technical failure (mechanical, electrical, hydraulic,
pneumatic), technical obsolescence, or the economics of continued repair.
For example, a light bulb has technical failure when the filament burns out.
An automobile has economic failure when the owner decides it is no longer
economically advantageous to repair it.
Serviceability: Serviceability refers to the speed, ease, cost, certainty, and
effectiveness of repair. Of critical concern are the courtesy of the repair
people, the speed of getting the product back, and whether or not it is really
fixed.
Aesthetics: Aesthetics are concerned with the look, taste, feel, sound, and
smell of an item. This can be critical for products such as food, paint, works
of art, fashion, and decorative items.
Perceived quality: Perceived quality is determined by factors such as image,
advertising, brand identity, and word of mouth reputation.
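As a small worked example of the reliability measures named under Reliability above, the following sketch computes the mean time between failures and the failure rate per 1,000 items from hypothetical fleet data (all numbers are illustrative):

# Illustrative reliability measures for a fleet of repairable items.
# All numbers are hypothetical, chosen only to show the arithmetic.
total_operating_hours = 500_000   # hours accumulated across the fleet
failures = 40                     # failures observed in that period
items_fielded = 10_000            # items in service

mtbf = total_operating_hours / failures             # mean time between failures
failure_rate_per_1000 = failures / items_fielded * 1000

print(f"MTBF: {mtbf:,.0f} hours")
print(f"Failure rate: {failure_rate_per_1000:.1f} per 1,000 items")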
Stamatis, on the other hand, has introduced a modified version of the above
points with some additional points especially for service organizations. They are:
Function: The primary required performance of the service
Features: The expected performance (bells and whistles) of the service
Conformance: The satisfaction based on requirements that have been set
Reliability: The confidence of the service in relationship to time
Serviceability: The ability to service if something goes wrong
Aesthetics: The experience itself as it relates to the senses
Perception: The reputation of the quality
To be effective and efficient, the following characteristics must be present:
Be accessible
Provide prompt personal attention
Offer expertise
Provide leading technology
Depend quite often on subjective satisfaction
Provide for cost effectiveness
What is interesting about these two lists is the fact that both Garvin and Stamatis
recognize that optimum customer satisfaction is a design issue. Design,
indeed, is the integrating factor. The designer has to make the tough trade-offs.
Concurrent engineering and Quality Function Deployment suggest that the product
designer, the manufacturing engineer, and the purchasing specialist work jointly
during the product design phase to build quality in from the start. The focus, of
course, is to design all the above characteristics as a bundle of utility for the customer.
That bundle must address, in a holistic approach, the following:
Image
Transcendent view: This view defines quality as that property that you
will innately recognize as such once you have been exposed to it.
Something about the product or service or the way it has been promot-
ed/communicated to you causes you to recognize it as a quality
offering, perhaps an excellent one.
Performance
Product-based view: This view defines quality in terms of a desirable
attribute or series of attributes that a product contains. A high-quality
fuel product could have a high BTU content and a low percentage of
sulfur.
User-based view: This view defines quality in terms of how well a
product or service meets the expectation of the customer. If the product
meets expectations, it is considered to be of high quality. Expectations
vary widely, and meeting expectations may not lead to the best product.
For example, a bestseller may not be the best literature.
Manufacturing-based view: This view defines quality in terms of con-
formance to manufacturing specifications. This view may, however, pro-
mote manufacturing efficiencies at the expense of suitability to the user.
For example, problems of tolerance stackup are particularly noteworthy.
Value
Value-based view: This view, which is gaining in popularity, looks at value
as the trade-off between quality and price. From this perspective, quality
consists of all of the non-price reasons to buy a product or service.
To come up with reasonable definitions and actions for the above characteristics,
a team must be in place and team dynamics at work. A very good approach for this
portion of benchmarking that we recommend is the nominal group process (a small
tabulation sketch follows the list of features):
The process features are as follows:
Group size: five to nine core individuals
Group composition: multidisciplinary and cross-functional
Reflection (20 minutes): all participants allowed to express their views as to
what the problem is and how the team should progress
Sharing of ideas: Discussion of the presented ideas
Voting: evaluation of ideas and selection of the best
Tabulation: Final resolution of what is at stake and how to proceed so that
success will result
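A minimal sketch of the voting and tabulation steps, assuming a simple rank-order ballot (the ballots and idea names below are hypothetical):

# Hypothetical tabulation step of the nominal group process:
# each participant ranks their top ideas; rank points are summed.
from collections import Counter

ballots = [  # each ballot lists ideas from most to least preferred
    ["invoice design", "order entry", "supplier evaluation"],
    ["order entry", "invoice design", "warranty process"],
    ["order entry", "supplier evaluation", "invoice design"],
]

scores = Counter()
for ballot in ballots:
    for points, idea in zip(range(len(ballot), 0, -1), ballot):
        scores[idea] += points

for idea, score in scores.most_common():
    print(f"{idea}: {score} points")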
The discussion and direction of the nominal process must not focus on price
alone because that is a very narrow point of view. Some examples of non-price
reasons to buy are:
Product non-price reasons to buy
Ease of product use
Performance
Features
Reliability
Conformance
Durability
Serviceability
Aesthetics/style
Perceived quality
Ability to provide a bundled package
Service and image non-price reasons to buy
Speed of delivery
Dependability of delivery
Fill rate
Fun to deal with
Number/location of stocking warehouses
Repair facilities and location
Technical assistance
Service before, during, and after sale
Willingness to hold inventory
Flexibility
Access to salespeople
Access to multiple supply sources
Reputation
Life cycle cost
Financing terms
Turnkey operations
Consulting/training
Warehousing
Guarantees/warranty
Services provided by salespeople
Ease of resale
Computer placement of orders
Professional endorsement
Packaging
Up front engineering
Vendor financial stability
Confidence in salespeople
Backup facilities
Courtesy
Credibility
Understandability
Responsiveness
Accessibility of key players
Flexibility
Confidentiality
Safety
Delivery
Ease of installation
Ability to upgrade
Customer training
Provision of ancillary services
Product briefing seminars
Repair service and parts availability
Warranty
Image
Brand recognition
Atmosphere of facilities
Sponsor of special events
The service and image features define the "augmented product." They answer
the questions:
What does your customer want in addition to the product itself? (the
unspoken requirements)
What does your customer perceive to have value?
What does your customer view as quality?
In order to focus benchmarking efforts, it is critical to define the unique selling
proposition or the product concept. A statement of product concept requires the
definition of both attribute(s) and benefit(s). Attributes consist of both form and features
(specific product or service characteristics) and technology (how they are to be
provided). For example, a new brewing technique brings a double-strength beer to add to
your enjoyment by capturing the taste of the 1800s (technology, form, benefit).
So what do you expect to get out of this team effort and integration? Simply
put, you should get the answers to some very fundamental questions about your
organization and the product/service you offer. Some typical questions are:
What are the non-price reasons to buy your product? How do they compare
with the product and service attributes listed above?
How do your customers define quality? How does your company define
quality?
What is more important? Product or service?
Can specific, measurable attributes be defined?
How does your competitor define quality?
How do you compare with your competitor?
What other companies or industries influence your customer as to what
should be expected relative to each of these characteristics?
What does this suggest in the way of benchmarking opportunities?
For example, here are some non-price reasons to buy that might apply to a
supermarket:
Large parking lot
Zoo in parking lot
Lots of giveaways
Makes shopping fun for the entire family
Clowns
Disneyland figures
Well-stocked, attractive displays
Rock hard containers of ice cream
Complaint box (policy to respond the next day to the customer)
Fast cash out
Forget your checkbook? Pay next time.
Trains all associates
Uses Dale Carnegie courses
Walt Disney people management
One aisle that rambles through the store
No question return policy
Bus for senior citizens
Customer focus groups every three weeks
Associates who take the initiative to please customers
In-store dairy and bakery
None of these, in itself, is earth-shaking. But they could make the difference in
an industry that operates with a profit margin of less than 1%.
We cannot pass up the opportunity to address non-price issues for the Wal-Mart
corporation, which allegedly pursues the goal of being 1 percent better in 100 details,
including the following items:
Aggressive hospitality
People greeters
Associates not employees
Tough people to sell to
Weekly top management meetings
Low cost, no frills environment
Good computerized database
Rapid communications by phone
Managers in the field Monday through Thursday
High-efficiency distribution centers
Emphasis on training of people
Department managers having cost and margin data
Profit sharing if store meets goals
Bonus if shrinkage goal is met
Open door policy
Grass-roots meetings
Constant improvements
Competitive ads shown in store
DETERMINING THE CUSTOMER'S PERCEPTION OF QUALITY
Differentiation is uniqueness in the eyes of the customer. Quality is meeting the
unique needs, wants, and expectations of the customer in terms of the non-price
reasons to buy. But who is the customer? Depending on (a) defining the customer
for multiple channels of distribution or (b) identifying the multiple buying influences
in a business-to-business sale, the customer may be:
User
Technical buyer
Economic buyer
Corporate general interest buyer
Who is the competitor? Assume for example a recreation environment. Here are
some questions you might ask that would help you to determine who the competitors are:
What is the desire I want to satisfy? (Desire competitor)
Recreation
Education
What kind of recreation do I enjoy? (Generic competitor)
Baseball
Boating
What kind of boating? (Form competitor)
Power boat
Sailboat
What brand boat? (Brand competitor)
Bayliner
Boston Whaler
Once these questions have been addressed, we are ready to do the competitive
evaluation in the following stages (a small weighted-scoring sketch follows the list):
Survey design
Attributes considered
Relative weight given to each
Direct competitors
Performance versus competition
Approaches to making the survey
Internal
Sales force
Sales management (Remember, the more accurate data you have,
the better the survey. For example: Colgate Palmolive audits 75,000
customers for all products. People know what they want and will
not settle for happy mediums.)
External
Market research firms/universities
Attribution/non-attribution
Use of customer service hot line: GE progressed from receiving
1,000 calls per week in 1982 to receiving 65,000 calls per week with
the installation of an 800 number answer center. The 150 phone
reps need a college degree and sales experience. They have been
effective in spotting trends in complaints as well as increasing sales.
The increase in sales has been estimated at more than twice the
operating cost of the center. (Did this trigger off a benchmarking
candidate for you?)
Groups to be surveyed
Current customers
Lost customers
Prospects
Survey frequency
Comparison of company internal view versus the customer view
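A small sketch of how such a survey can be scored, assuming weighted attributes and 1-to-10 ratings for your company and a direct competitor (the attributes, weights, and ratings below are hypothetical):

# Hypothetical weighted competitive evaluation.
attributes = {  # attribute: (relative weight, our rating, competitor rating)
    "on-time delivery": (0.40, 7, 9),
    "technical assistance": (0.25, 8, 6),
    "billing accuracy": (0.20, 6, 8),
    "ease of ordering": (0.15, 9, 7),
}

ours = sum(weight * us for weight, us, _ in attributes.values())
theirs = sum(weight * comp for weight, _, comp in attributes.values())
print(f"Weighted score - us: {ours:.2f}, competitor: {theirs:.2f}")

Scoring customer responses the same way makes it straightforward to compare the company's internal view with the customer view called for above.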
QUALITY, PRICING, AND RETURN ON INVESTMENT (ROI): THE PIMS RESULTS
Being perceived as being the best or having the product with the highest quality can
have significant bottom line results. Buzzell and Gale (1987) introduced the PIMS
(Profit Impact of Marketing Strategies) system, which is an elaborate benchmarking
database developed by the Strategic Planning Institute in Cambridge, Mass. The data-
base contains information for over 450 companies and over 3000 business experience
pools in a wide variety of industries, including manufacturers, raw material producers,
service companies, distributors, and durable and non-durable consumer products. Data
are collected for independent business units, each with a defined served market.
The objectives of the Strategic Planning Institute and benchmarking are to help
organizations in the process of becoming excellent organizations. How do they do
it? By:
1. Using the statistical analysis and modeling of business experience
2. Isolating the key factors that determine return on investment (ROI)
ROI equals net income before interest and taxes divided by the total of working
capital and fixed capital.
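As a quick illustration of this definition, the sketch below computes ROI from hypothetical figures:

# ROI as defined above: net income before interest and taxes divided by
# the total of working capital and fixed capital (all figures hypothetical).
def roi(ebit, working_capital, fixed_capital):
    return ebit / (working_capital + fixed_capital)

print(f"ROI: {roi(ebit=4_200_000, working_capital=6_000_000, fixed_capital=24_000_000):.1%}")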
As a result the Institute can help organizations with:
Understandability
Predictability
of their own organization's behavior and their own products and services.
Of course, the choice of strategy depends upon several factors, including but
not limited to:
Market growth rate and product life cycle
Current market share
Price/quality sensitivity by segment
Competitive response profiles
Current and planned capacity
Cost and feasibility of quality improvements
Market perception of quality improvements
Financial and marketing goals, long and short term (The period
described as short term and long term will differ widely among
various strategies and organizations.)
BENCHMARKING AS A MANAGEMENT TOOL
So far we have talked about benchmarking but we really have not defined it. A
formal definition, then, is that benchmarking is a systematic, continual (ongoing)
management process used to improve products, services, or management processes
by focusing on and analyzing the best of the best practices, by direct competitors
or any other companies, to determine standards of performance and how to achieve
those standards, to provide least cost, quality or differentiation, in the eye of the
customer.
Key words in this definition are "systematic" and "ongoing," which imply that in
order to have successful benchmarking one must be familiar with the Kano model,
the Shewhart-Deming cycle, and the principle of Kaizen improvement. This systematic and
ongoing pursuit of excellence is applicable to all aspects of business and in all
methodologies, including six sigma. It is an integral part of the strategic, operational,
and quality planning process. It is not an end in itself.
Benchmarking identifies the best of class and determines standards of excellence
based on the market, considering both customers and competitors. It is a challenge
with a solution. It provides the "what" and the "how." (A narrow focus on what you want
to get done, a results orientation that controls performance with a carrot and a
stick, is not effective without a broader focus on how best to do it, a process
orientation that identifies the process changes that need to be made in order for the
results to be achieved consistently.)
Benchmarking is creative imitation: part of its goal-setting process, which
encourages the development of proactive plans, is the action to bring about change.
To do that, of course, analysis is required to determine all of the factors necessary
for a solution to work, as appropriate and applicable to a given organization. In
addition, it is necessary to project the future performance of the competition to set
improvement goals. Otherwise, a company is always playing catch-up. Some of the
key factors in this analysis are:
People/culture/compensation
Process/procedure
Facilities/systems
Material
WHAT BENCHMARKING IS AND IS NOT
Benchmarking is not:
A way to cut costs or headcount, necessarily
A quick fix or a panacea
A cookbook approach
Rather, it is a methodology that is an integral part of the management process
and provides the organization with many benefits including but not limited to:
Identifying the specific action plans required to achieve success in
company growth and profitability
Assessing objectively strengths and weaknesses versus competition and
the best in class
Improving quality as perceived by the customer (The customer can be
external to the company or the next department in the company.)
Determining goals objectively and realistically based on the actual
achievements of others
Providing a vision of what can be accomplished in terms of both what
and how
Providing hard, reliable data as a basis for performance improvement
Causing people to think creatively and to look at proactive alternative
solutions to a problem
Promoting an opportunity for personal and corporate growth, learning,
and development
Raising the company level of awareness of the outside world and of
customer needs
Stimulating change: "Others are doing this; why can't/shouldn't we?"
Identifying all of the factors required to get a job done
Promoting an in-depth analysis and quantification of operations and man-
agement processes
Encouraging teamwork and communication within an organization
Creating an awareness of problems and stimulating change
Documenting the fact that a good job is being done and that you are the
best in class
Allowing a company to leapfrog competition by looking outside of an
industry
Changing the rules of the game by breaking with the traditions of an
industry
THE BENCHMARKING PROCESS
The benchmarking process can differ from company to company. However, the ten-step
process below is generally followed (a simple checklist-style sketch of the steps follows the outline).
I. Benchmark planning and prioritization
1. Identification of benchmarking alternatives
2. Prioritization of the benchmarking alternatives
II. Benchmark data collection
3. Identification of the benchmarking sources
4. Benchmarking performance and process analysis: company operations
What do we do?
What is the process?
What are the resource inputs?
What are the outputs?
What is the resource cost per unit of output?
What are the limitations?
What are possible changes?
5. Benchmarking performance and process analysis: partners' operations
III. Benchmark implementation
6. Gap analysis
7. Goal setting
8. Action plan identification and implementation
IV. Benchmark monitoring and control
9. Monitor company performance and action plan milestones
10. Identify the new best in class
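One convenient way to keep a study on track is to treat the ten steps as a literal checklist. A minimal sketch (this data structure is our illustration, not part of the process itself):

# The ten-step process encoded as plain data, usable as a checklist template.
BENCHMARKING_STEPS = {
    "I. Planning and prioritization": [
        "Identify benchmarking alternatives",
        "Prioritize the benchmarking alternatives",
    ],
    "II. Data collection": [
        "Identify the benchmarking sources",
        "Analyze performance and process: company operations",
        "Analyze performance and process: partner operations",
    ],
    "III. Implementation": [
        "Gap analysis",
        "Goal setting",
        "Identify and implement action plans",
    ],
    "IV. Monitoring and control": [
        "Monitor company performance and action plan milestones",
        "Identify the new best in class",
    ],
}

for phase, steps in BENCHMARKING_STEPS.items():
    print(phase)
    for step in steps:
        print(f"  [ ] {step}")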
TYPES OF BENCHMARKING
Benchmarking can be performed for any product, service, or process. Different
classification schemes have been suggested. For example, Xerox classifies bench-
marking in the following categories:
Internal benchmarking
Direct product competitor benchmarks
Functional benchmarking: This is a comparison with the best of the
best even if from a different industry.
Generic benchmarking: This is an extension of functional benchmarking.
It requires the ability to creatively imitate any process or activity to
meet a specific need. For example, the technique used for high speed
checking of paper currency (into the categories of good, mutilated, or
counterfeit) by a bank could be adapted for high speed identification and
sorting of packages in a warehouse.
AT&T, on the other hand, uses the classification indicated below. Specific examples
of benchmarking studies for each are shown. These are not limited to AT&T examples:
Task
Invoicing
Order entry
Invoice design
Customer satisfaction
Supplier evaluation
Flow charting
Accounts payable
Functional
Promotion by banks
Purchasing
Advertising by media type
Pricing strategy
Safety
Security
Management process
PIMS par report
Profit margin/asset turnover
Strategic planning
Operational planning
Capital project approval process
Technology assessment
Research and development (R and D) project selection
Innovation
Training
Time-based competition
Benchmarking
Self-managed teams
Operations process
Warehouse operations
Make versus buy
Another classification of benchmarking projects is by:
Function: sales and marketing
Process: missionary selling
Activity: cold calling
Task: preparation of target list
Still another classification is in terms of:
Overall financial performance
Department or functional benchmarking
Cost benchmarking
ORGANIZATION FOR BENCHMARKING
Ad hoc benchmarking studies can be helpful and productive. However, many com-
panies are attempting to institutionalize benchmarking as part of the business plan-
ning and six sigma process.
The business planning process consists of strategic planning followed by oper-
ational planning. Both phases require the development of functional area plans.
However, the time periods considered, the alternatives of interest, and the level of
detail are very different. The general flow of the planning process is:
What should we do?
Situation analysis performed to determine critical success factors,
strengths, weaknesses, opportunities, and threats
Mission development
Statement of objectives and goals
How should we do it?
Strategy determination
Tactics identified
Action plans specied
What are the expected results?
Budgets and financial projections
How did we do?
Monitoring and control
Who should get rewarded?
Performance evaluation and compensation
Benchmarking is often an integral part of the situation analysis. It can also have
a major impact on the mission statement, the goals, the strategy, the tactics, and the
identification and determination of action plans. Benchmarking can provide major
guidance when determining what to do, how to do it, and what can be expected.
Benchmarking for strategic planning might concentrate on the determination of
the critical success factors for an industry (based on customer and competitive inputs)
and identifying what has to be done to achieve the success factors. This then leads to the
development of detailed action plans with effort and result goals. Benchmarking for
operational planning might concentrate on the cost and cost structure for each
functional area relative to the outputs produced.
All quality initiatives, including six sigma, have a significant influence on
the mission statement and the objectives and goals of an organization. As such, they
can provide an added impetus to do benchmarking to satisfy the quality goals.
Benchmarking can be centralized (AT&T) or decentralized (Xerox). Xerox has several
functional area benchmarking specialists, including specialists for finance, administration,
marketing, and manufacturing. The big advantage of a decentralized
approach is a greater likelihood of organizational buy-in to the final results of the
benchmarking study. The effort required to perform a benchmarking study can vary
significantly. For example, the L.L. Bean study performed by Xerox took one
person-year of effort. Generally, three to six companies are included in the benchmark.
However, some companies use only one or two. Also, some studies are performed
in depth, while others are fairly casual. The One Idea Club was a simple approach
with a substantial reward.
REQUIREMENTS FOR SUCCESS
All initiatives have requirements for success. Benchmarking is no different. Some
of these requirements are:
Management vision and support to ensure the conditions necessary for
the success of the strategy: people, money, time.
Goal-focused management with a customer/competitive focus on contin-
uously improved quality
Performance- or results-based compensation
Action plan prioritization and focus
Defined roles and responsibilities for a multidisciplinary approach
Defined organizational approach: central versus decentralized
Integration with other management processes
Ability to maintain focus on the continuous improvement of hundreds of
small items a little bit at a time
Willingness to deal with the conflict caused by a lack of goal congruity
and the need to share scarce resources and to make tough decisions
Tolerance to deal with the ambiguity of results as research is conducted
to determine when, where, and how to improve operations
Openness to learn and to change; results can affect organization structure,
allocation of resources, corporate culture, and individual work assign-
ments
Use of the scientific method: hypothesis formation, data collection, test-
ing, and learning
Humility and the willingness to admit weakness and the possibility for
improvement
Identification of the impediments to change and the development of a plan
for change
Patience and resources to perform the analytical studies and to complete
the required documentation
Long-term commitment to achieving results
Flexibility and discipline to implement the required changes
Communication of intent and approach, findings, concerns, and appre-
hensions
Training and total employee involvement, empowerment, and teamwork
A process that starts slow, showcases, and picks up speed as experience
and confidence are gained
Market segmentation focus and a defined corporate strategy
It sounds good. But does benchmarking work? Let us see what SAS Airlines
did, as an example.
When Jan Carlzon took over as president of Scandinavian Airlines (SAS) in 1980,
the company was losing money. For several previous years, management had dealt with
this problem by cutting costs. After all, this was a commodity business. Carlzon saw
this as the wrong solution. In his view, the company needed to find new ways to compete
and build its revenue. SAS had been pursuing all travelers with no focus on superior
advantage to offer to anyone. In fact, it was seen as one of the least punctual carriers
in Europe. Competition had increased so much that Carlzon had to figure out:
Who are the customers?
What are their needs?
What must we do to win their preference?
Carlzon decided that the answer was to focus SAS's services on frequently flying
business people and their needs. He recognized that other airlines were thinking the
same way. They were offering business class and free drinks and other amenities.
SAS had to find a way to do this better if it was to be the preferred airline of the
frequent business traveler.
The starting point was market research to find out what frequent business
travelers wanted and expected in the way of airline service. Carlzon's goal was to
be one percent better in 100 details rather than 100 percent better in only one detail.
The market research showed that the number one priority was on-time arrival.
Business travelers also wanted to check in fast and be able to retrieve their luggage
fast. Carlzon appointed dozens of task forces to come up with ideas for improving
these and other services. They came back with ideas for hundreds of projects, of
which 150 were selected with an implementation cost of $40 million.
One of the key projects was to train a total customer orientation into all of SAS's
personnel. Carlzon figured that the average passenger came into contact with five
SAS employees on an average trip. Each interaction created a "moment of truth"
about SAS. At that point of contact, the person was SAS. Given the 5 million
passengers per year flying SAS, this amounted to 25 million moments of truth where
the company either satisfied or dissatisfied its customer.
To create the right attitudes toward customers within the company, SAS sent
10,000 front line staff to service seminars for two days and 25,000 managers to
three-week courses. Carlzon taught many of these courses himself. A major emphasis
was getting people to value their own self-worth so that they could, in turn, treat
the customer with respect and dignity. Every person was there to serve the customer
or to serve someone who was serving the customer.
The results: Within four months, SAS achieved the record as the most punctual
airline system in Europe, and it has maintained this record. Check-in systems are
much faster, and they include a service where travelers who are staying at SAS
hotels can have their luggage sent directly to the airport for loading on the plane.
SAS does a much faster job of unloading luggage after landings as well. Another
innovation is that SAS sells all tickets as business class unless the traveler wants
economy class.
The company's improved reputation among business flyers led to an increase in
its full fare traffic in Europe of 8 percent and its full fare intercontinental travel of
16 percent, quite an accomplishment considering the price cutting that was taking
place and zero growth in the air travel market. Within a two-year period, the company
became a profitable operation.
Carlzon's impact on SAS illustrates the customer satisfaction and profits that a
corporate leader can achieve by creating a vision and target for the company that
excites and gets all the personnel to swim in the same direction, namely, toward
satisfying the target customers. As a leader, Carlzon created the conditions necessary
to ensure the success of the strategy by implementing the projects required for the
front line people to do their jobs well.
BENCHMARKING AND CHANGE MANAGEMENT
Several behavioral models underscore the psychological requirements for change in
a person or an organization. The classic equation for change is:
D × V × F > R
where D = dissatisfaction with current situation; V = vision of a better future; F =
the first steps of a plan to convert D to V; and R = resistance to change.
Typical attitudes/comments of resistance are:
Perceived threat of loss of power or position
Everything is OK. Why fix it?
What should we change? How?
What is management trying to tell me?
Takes a long time to see results!
We do not have time to do that stuff.
If this is so good, why aren't they doing it?
Benchmarking can accelerate the change process by offering the organization's
managers facts that relate to their needs and expectations and by understanding the
psychology of change. For example, while the previous mathematical formula is a
quantifiable entity on its own, it gives us little opportunity to explore change from
the individual's perspective. Change begins with an individual. That individual must:
1. Believe that he or she has the skill necessitated by the change. Can I do it?
2. Perceive a reasonable likelihood of personal value fulfillment as a result
of making the change. What will I get out of it?
3. Perceive that the total personal cost of making the change is more than
offset by the expectation of personal gain. Is it worth making the change?
This model suggests that we manage change by education and communication
to influence what a person thinks and that this, in turn, causes a change in behavior.
Thought is affected by:
Beliefs
Facts
Values
Feelings
Benchmarking can help implement change by providing the required facts and
challenging beliefs, especially when there are supporting data from other organizations.
Other models to manage change are:
Facilitation and support
Participation and involvement
Negotiation
Manipulation
Explicit and implicit coercion
Corporate culture is important:
Reward risk taking
Encourage passionate champions
Focus on base hits versus home runs
Sources of dissatisfaction that can drive change include:
Financial pressure
Quarterly earnings
Cash flow (Need: to improve operational efficiency)
STRUCTURAL PRESSURE
Cyclical business mix
Customer mix
Cash flow conflicts
Product life cycle mix (Need: To improve business mix or effectiveness)
ASPIRATION FOR EXCELLENCE
The need to improve is an internal perception. You do not have to be bad to
improve. Organization positions can be viewed as having either innovation and/or
maintenance responsibilities relative to change. How does the mix change for work-
ers, supervisors, middle management, and top management in an organization that
strives for excellence?
Current success can mask underlying problems and can prevent or delay action
from taking place when it should, i.e., when the company has the time and resources
to do something. Consider the classic story of the boiled frog as an example. (If
you recall, the frog was boiled when the temperature was increasing at a very slow
rate. The frog was adapting. The frog could not detect the change and ultimately
was boiled. On the other hand, the frog that was thrown into hot water
jumped out right away and saved its life.)
FORCE FIELD ANALYSIS
Force field analysis is a systematic way of identifying and portraying the forces (often
people) for or against change in an organization. The specific forces will differ depend-
ing upon the area where benchmarking is applied. Here is how the process works:
1. Define the current situation.
2. Define the desired position based on the results of the benchmarking study.
3. Define the worst possible situation.
4. What are the forces for change? What is their relative strength?
5. What are the forces against change? What is their relative strength?
6. What forces can you influence?
7. Define the specific action to be taken relative to each of those forces that
you can influence.
One effective way to start the benchmarking process is to select one high
visibility area of concern to the influence leaders in a company and produce results
that can showcase the benchmarking process. This might start with a library search
to highlight the results that are possible.
IDENTIFICATION OF BENCHMARKING
ALTERNATIVES
As indicated earlier, benchmarking candidates can be identified in a wide variety of
ways. They can be detected, for example, during the business planning process, as
part of a quality initiative, during a six sigma project, or during a profit improvement
campaign. Both external and internal analysis can lead to potential candidates.
EXTERNALLY IDENTIFIED BENCHMARKING CANDIDATES
Industry Analysis and Critical Success Factors
Based on the structure of an industry and the dynamics of the customer/supplier
interface, certain factors are critical to the success of a business. An identification
of the critical success factors and an evaluation of the company's current capabilities
can lead to benchmarking opportunities.
The competitive rivalry among firms in an industry has a significant impact on
total demand and the level and stability of prices. Competitive rivalry is a function
of several interrelated factors that affect the supply and demand for products and
services. The balance of supply and demand at a particular time affects the percent
capacity utilization in an industry. The percent capacity utilization directly affects
price levels and price elasticity.
The factors affecting demand are:
The strategy of the buyer to be least cost or differentiated: How well
do you meet your customers' specific needs?
The availability of and knowledge about substitute products: What are
existing and new competitors likely to do?
The ease of switching from one product to another: How can you
increase the cost of switching to another supplier?
Governmental regulations: What can you do to influence these?
The factors affecting supply are:
The ease of market entry: What can you do that will make it hard to
enter the business?
The barriers to market exit: What can you do to make it easy for a
competitor to get out of the business?
Governmental regulations: What can you do to influence these?
Based on an analysis of the industry as it exists now and might exist in the
future, what are the factors absolutely critical for success? Five or six critical success
factors can usually be identified for a company. Examples are:
Customer service
Distribution
Technically superior product
Styling
Location
Product mix
Cost control
Dealer system
Product availability
Supply source
Production engineering
Advertising and promotion
Packaging
Staff/skill availability
Quality
Convenience
Personal attention
Innovation
Capital
Once the critical success factors have been identified, the company can assess its
current position to determine whether benchmarking is required. One technique for
performing this analysis is to make a tabulation showing how the major competitors in
an industry rank for each critical success factor. As a cross check, there should be a
correlation between the tabulated results, market share, and return on equity.
PIMS Par Report
The PIMS par report indicates the financial results that companies in similar cir-
cumstances have been able to achieve. As such, it provides a quantitative benchmark.
The PIMS report also indicates those factors that should enable you to earn greater
than par and those factors that would cause you to earn less than par.
Financial Comparison
If PIMS data are not available, a comparison of the company's financial performance
versus that of other companies in the same industry can suggest the value of
benchmarking in specific areas. Potential areas that might be identified are:
Gross margin improvement
Overhead cost reduction
Fixed asset utilization
Inventory or accounts receivable reduction
Liquidity improvement
Financial leverage
Sales growth
Competitive Evaluations
As discussed earlier, a competitive evaluation is a periodic assessment made to
determine, objectively, what factors a buyer takes into consideration when deciding
to buy from one supplier versus another, the relative weight given to each of those
factors, the competing firms, and the relative performance of each firm with respect
to each buying motive.
Focus Groups
Focus groups are used to determine what a customer segment thinks about a product
or service and why it thinks that way. Participants are invited to join the group
usually with some type of personal compensation. A focus group starts with a series
of open-ended questions relative to a specific subject. Representatives of the spon-
soring company view the entire process either through a one-way mirror or by closed
circuit TV. As a second phase, the company representatives ask specific follow-up
questions (through the facilitator), based on the open-ended probing.
Importance/Performance Analysis
The customer perception of performance versus importance can be used to identify
benchmark alternatives. A list of attributes can be prepared using either a nominal
group process or a focus group. The customer is asked to rate each attribute in terms
of both importance and company performance. A matrix is then prepared showing
high and low performance versus high and low importance. It has the following
implications:
Continue with high importance, high performance.
Reduce emphasis on low importance, low performance.
Increase emphasis on high importance, low performance.
Reduce emphasis on low importance, high performance.
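As a quick illustration, the quadrant assignment can be sketched in a few lines of Python; the attribute names, the 1-to-10 rating scale, and the midpoint cutoff are illustrative assumptions, not part of the method described above.

```python
# A minimal sketch (assumed data) of sorting rated attributes into the four
# importance/performance quadrants discussed above.

attributes = {
    # name: (importance, performance), both rated 1-10 by customers
    "On-time delivery": (9, 4),
    "Invoice accuracy": (8, 8),
    "Packaging design": (3, 9),
    "Sales brochures": (2, 3),
}

MIDPOINT = 5  # assumed cutoff between "low" and "high"

def quadrant(importance, performance):
    """Return the recommended emphasis for one attribute."""
    if importance > MIDPOINT and performance > MIDPOINT:
        return "Continue (high importance, high performance)"
    if importance > MIDPOINT:
        return "Increase emphasis (high importance, low performance)"
    if performance > MIDPOINT:
        return "Reduce emphasis (low importance, high performance)"
    return "Reduce emphasis (low importance, low performance)"

for name, (imp, perf) in attributes.items():
    print(f"{name:20s} -> {quadrant(imp, perf)}")
```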
In addition to determining the customer's perception of performance versus
importance, it is also valuable to determine the customer's versus the company's
perception of importance. This can also be used to determine areas for intensification
and reduction of effort and benchmarking possibilities, including:
1. Customer-oriented goals
2. Service/quality goals
INTERNALLY IDENTIFIED BENCHMARKING CANDIDATES
INTERNAL ASSESSMENT SURVEYS
An internal assessment of strengths and weaknesses can be used to identify bench-
marking candidates. This determination can be made using a Business Assessment
Form followed by group discussion or by using the Nominal Group Process. The
assessment can be made by the owner of a product, process, or service and/or by
the department being served.
An internal assessment can also be approached from the viewpoint of the generic
value added chain. For each block of the chain, two questions can be asked:
1. What are the alternatives for least cost operation?
2. What are the alternatives to provide differentiation?
The value added chain can also provide customer perspective by suggesting the
questions:
1. How does our product or service help customers to minimize their cost?
2. How does our product or service help customers to differentiate their
product?
Nominal Group Process: General Areas in Greatest Need
of Improvement
Improving the precision of the sales forecast
Reducing the cycle time to bring out new products
Increasing the success rate in bidding for new business
Reducing the time required to fill customer orders
Reducing the errors in invoices
Major problems or issues
Areas of competitive disadvantage
Pareto Analysis
Pareto analysis is a form of data analysis that requires that each element or possible
cause of a problem be identified along with its frequency of occurrence. Items are
then displayed in order of decreasing frequency or probability of occurrence. This
can help to identify the most significant problem to attack first. A common expression
of the Pareto Law is the 80/20 rule, which states that 20% of the causes account for
80% of the difficulties. A Pareto analysis of setup delay might include factors such
as: necessary material not available, tooling not ready, lack of gages, setup personnel
not available, another setup has priority, material handling equipment not available,
and incorrect setup (error). Develop a Pareto analysis for the production of scrap. (There
is a tremendous difference between knowing the facts and guessing.)
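A rough sketch of such a tabulation in Python is shown below; the delay categories follow the setup delay example, but the counts are invented for illustration.

```python
# A minimal sketch (assumed counts) of a Pareto tabulation for setup delays.

delay_counts = {
    "Material not available": 42,
    "Tooling not ready": 18,
    "Setup personnel not available": 15,
    "Lack of gages": 9,
    "Another setup has priority": 6,
    "Incorrect setup (error)": 6,
    "Material handling equipment not available": 4,
}

total = sum(delay_counts.values())
cumulative = 0.0
# Sort causes by decreasing frequency and show the cumulative percentage,
# which makes the "vital few" 80/20 causes stand out.
for cause, count in sorted(delay_counts.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += 100.0 * count / total
    print(f"{cause:45s} {count:4d}  cum {cumulative:5.1f}%")
```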
Statistical Process Control
Statistical process control is a technique for identifying random (or common) causes
versus identifiable (or special) causes in a process. Both of these are potential sources
for improvement. The amount of random variation affects the capability of a machine
to produce within a desired range of dimensions. Hence, benchmarking could be
performed to determine machine processing capabilities and how to achieve those
levels. The determination and correction of recurring systematic changes is also a
benchmarking possibility.
The reduction of the random variation or the uncertainty of the process and the
identification and correction of special causes are critical aspects of the total quality
management process. Correction often requires a change in the total manufacturing
process, tooling, the equipment being used, and/or training in setup and operations.
The first step in process improvement is to control the environment and the
components of the system so that variations are within natural, predictable limits.
The second step is to reduce the underlying variation of the process. Both of these
are candidates for benchmarking.
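As one hedged illustration of separating common from special causes, the following Python sketch computes individuals-chart control limits; the measurements and the choice of an individuals (X) chart are assumptions for the example, not a prescription from the text.

```python
# A minimal sketch (assumed data) of individuals control chart limits. Points
# inside the limits reflect common-cause variation; points outside suggest a
# special cause worth investigating. The last point is deliberately aberrant.

data = [10.2, 10.4, 9.9, 10.1, 10.3, 10.0, 9.8, 10.2, 10.3, 13.0]

mean = sum(data) / len(data)
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# 2.66 = 3 / d2, with d2 = 1.128 for a moving range of two observations
ucl = mean + 2.66 * mr_bar
lcl = mean - 2.66 * mr_bar

print(f"mean={mean:.2f}  UCL={ucl:.2f}  LCL={lcl:.2f}")
for i, x in enumerate(data, 1):
    flag = "  <-- possible special cause" if x > ucl or x < lcl else ""
    print(f"point {i:2d}: {x:5.2f}{flag}")
```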
Trend Charting
Historic data can be used to develop statistical forecasts and confidence intervals
that depict acceptable random variation. When data fall within the confidence inter-
vals, you have no cause to suspect unusual behavior. However, data outside of the
confidence intervals could provide an opportunity for benchmarking. It might also
be informative to pursue benchmarking as a device to reduce the range of variation
or the size of the confidence interval.
A simple trend analysis of your own past data can also provide a basis for
improvement. The following data relative to the percent scrap and rework illustrate
the improvement made and could provide the basis for benchmarking:

Year           1987   1988   1989   1990
Scrap/rework   2.1%   3.0%   1.0%   0.7%
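One possible way to operationalize such a trend check is sketched below in Python; the linear fit, the simple two-standard-error band, and the hypothetical 1991 value are illustrative simplifications rather than part of the data above.

```python
# A minimal sketch (assumed band and new point) of fitting a linear trend to the
# scrap/rework history above and flagging a new observation outside a simple
# +/- 2 standard error tolerance band.

years = [1987, 1988, 1989, 1990]
scrap_pct = [2.1, 3.0, 1.0, 0.7]

n = len(years)
x_bar = sum(years) / n
y_bar = sum(scrap_pct) / n
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(years, scrap_pct)) / \
        sum((x - x_bar) ** 2 for x in years)
intercept = y_bar - slope * x_bar

residuals = [y - (intercept + slope * x) for x, y in zip(years, scrap_pct)]
se = (sum(r * r for r in residuals) / (n - 2)) ** 0.5

year, actual = 1991, 1.6                      # hypothetical new data point
predicted = intercept + slope * year
if abs(actual - predicted) > 2 * se:
    print(f"{year}: {actual}% is outside the expected band ({predicted:.2f} +/- {2*se:.2f})")
else:
    print(f"{year}: {actual}% is within normal variation")
```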
Product and Company Life Cycle Position
Products tend to go through a defined life cycle starting with an introductory phase
and proceeding through growth, maturity, and decline. The management style and
business tactics are very different at each stage. Anticipating and managing the
transitions can be important. This could lead to opportunities for benchmarking of
product life cycle management and product portfolio management. Product portfolio
management can lead to the need for new product identication and introduction.
These areas have both been the subjects of benchmarking studies.
In addition to the changes that products go through, companies tend to go through
various stages of development and crises. Again, the management of the transitions
can be an important benchmarking candidate.
Failure Mode and Effect Analysis
Failure Mode and Effect Analysis (FMEA) is a systematic way to study the operating
environment for equipment or products and to determine and characterize the ways
in which a product can fail. Benchmarking can be used to determine component and
system design goals and alternatives (see Chapter 6).
Cost/Time Analysis
To evaluate its new product introduction process, a company may plot cost per unit
produced versus elapsed time for each element of the process, e.g., design and
engineering, production, sub-assembly, and assembly. The area under the curve
represents money tied up (inventory), and smaller is better.
NEED TO IDENTIFY UNDERLYING CAUSES
Problem, Causes, Solutions
When solving a problem, it is critical to attack the underlying cause of the problem
and not the symptoms. The underlying cause can be identied by listing all possible
causes and identifying the most probable cause based on data collection and a Pareto
analysis. This sometimes leads to multiple benchmarking opportunities. Failure to
diagnose a problem (ready, fire, aim) can lead to an inefficient use of resources and
frustration.
The Five Whys
When identifying underlying causes, it can also be useful to ask five sequential
"whys" to get to the heart of a problem. For example:
Problem: The milling machine is down.
Why? The chucking mechanism is broken.
Why? A piece got jammed when being loaded.
Why? There was excess flash from the stamping operation.
Why? The stamping die was not changed.
Why? The die usage control log was not updated daily.
Cause and Effect Diagram
The development of a Cause and Effect Diagram or Fishbone Diagram or Ishikawa
Diagram is another way to identify and display the underlying causes of a problem.
Causes are usually displayed in terms of major categories such as human or person-
nel, machines or technology, materials, and methods or procedures. Once causes are
identified, an analysis is made to determine actionable solutions.
The determination of cause and effect can require the use of designed experi-
ments to measure effects and interaction. For example, to reduce conveyor belt
spillage, it was necessary to determine the effects of belt wipers, belt surface, dryness
of the belt and material, and particle size in various combinations of each factor.
BUSINESS ASSESSMENT STRENGTHS
AND WEAKNESSES
You will be asked to evaluate the organization relative to sales and marketing,
manufacturing and operations, R & D, and general management. A typical assess-
ment is shown in Table 3.1.
TABLE 3.1
A Typical Assessment Instrument
Please indicate how you evaluate the organization using the following key:
(There are many ways to use a key. This is only one example.)
++ Extremely strong, definite leaders
+ Better than average
E Average
-   Weak, should do better
--  Extremely weak, area of major concern
Sales and marketing
Customer base
Market share
Market research
Customer knowledge
Brand loyalty
Company business image
Response to customers
Breadth of product line
Product differentiation
Product quality
Distributors
Locations
Size
Warehousing
Transportation
Communication
Influencing customers
Sales force
People and skills
Size
Type
Location
Productivity
Morale
Advertising
National regional cooperative
Promotion devices
Prices/incentives
Customer communication
Service
Before sale
After sale
Credit
Long term
Short term
Trade allies
Costs
Selling
Distribution
Manufacturing/operations
Materials management
People and skills
Sourcing
Inventory P & C
Production P & C
Capability P & C
Computer system
Physical plant
Capacity
Utilization
Flexibility
Plant
Size location
Number
Age
Equipment
Automation
Maintenance
Flexibility
Processes
Uniqueness
Flexibility
Degree of integration
Engineering
Process
Tool design
Cost improvement
Time standards
Quality control
People and skills
Workforce
Skills mix
Utilization
Availability
Turnover
Safety
Unionization
Costs
Productivity
Morale
Direct/indirect
Research and development
Basic research
Concepts and studies
Emphasis
People and skills
Conversions to applications
Patents
Applied research
Finding
Emphasis
People and skills
Conversion to prototype
Patents
Basic engineering
Prototypes
Emphasis
People and skills
Convert to product
Design engineering
Designs
Patents and copyrights
Emphasis
People and skills
Design for production
Funding
Amount
Consistency
Sources
Project selection
General management
Leadership
Vision
Risk/return profile
Clarity of purpose
Implementation skills
Turnover
Experience
Motivation skills
Leadership style
Delegation
Strategic emphasis
Organization
Type
Size
Location
Communication
Defined responsibility
Coordination
Speed of reaction
Fit with strategy
Commitment
Planning and control
Early alert system
Forecasting
Operational budget
Control
MBO program
Capital planning
Long range planning
Contingency planning
Cost analysis
Resource allocation
Accounting and finance
Financial public relations
Financial relations
Auditing
Decision making
Style
Techniques used
Responsiveness
Position in org. criteria used
Personnel
Effectiveness
Hourly labor
Clerical labor
Sales people
Scientists and engineers
Supervisors
Middle management
Top management
Comp. and reward
Management development
Management depth
Turnover
Morale
Information systems
Decision support system
Customer data
Product line data
Fixed/variable costs
Exception reporting
Culture
Shared values
Pluralism
Conflict resolution
Openness
Optional Information
Name:
Date:
Title:
Dept:
PRIORITIZATION OF BENCHMARKING ALTERNATIVES
PRIORITIZATION PROCESS
A variety of prioritization approaches are available. Use the one most appropriate
to a specific situation.
PRIORITIZATION MATRIX
The following steps are required to complete a prioritization matrix:
1. List all items to be prioritized.
2. List the goals or the prioritization criteria.
3. Specify the goal weights.
4. Indicate the impact score of each item relative to each goal.
5. Determine the value index for each item by totaling the cross product of
each goal weight times the impact score.
6. Sort the items from highest to lowest value index.
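A small Python sketch of steps 4 through 6 follows; the candidate projects, goals, weights, and impact scores are assumed values used only to show the arithmetic.

```python
# A minimal sketch (assumed data) of the value-index calculation in a
# prioritization matrix.

goals = {"Customer impact": 0.5, "Cost reduction": 0.3, "Ease of implementation": 0.2}

# impact score of each benchmarking candidate against each goal (1-10)
candidates = {
    "Order-fill cycle time": {"Customer impact": 9, "Cost reduction": 4, "Ease of implementation": 6},
    "Invoice error rate":    {"Customer impact": 6, "Cost reduction": 5, "Ease of implementation": 8},
    "Setup time reduction":  {"Customer impact": 4, "Cost reduction": 8, "Ease of implementation": 5},
}

def value_index(scores):
    """Total of goal weight times impact score for one candidate."""
    return sum(goals[g] * s for g, s in scores.items())

# Step 6: sort from highest to lowest value index.
for name, scores in sorted(candidates.items(), key=lambda kv: value_index(kv[1]), reverse=True):
    print(f"{name:25s} value index = {value_index(scores):.2f}")
```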
QUALITY FUNCTION DEPLOYMENT (HOUSE OF QUALITY)
Quality function deployment is an extension of the prioritization matrix described
above. However, the rows and the columns are interchanged. The rows become the
evaluation criteria (or goals) and the columns represent the alternative solutions to
be prioritized. The following procedure is used to complete the Quality Function
Deployment analysis:
1. List the items indicating what you want to accomplish. These are the
evaluation criteria.
2. List how you will accomplish what you want to do. These are the
alternatives to be evaluated.
3. Indicate the degree of importance for each of the whats. This is a number
ranging from 1 to 10 (10 is most important).
4. Indicate the company and the competitive rating using a scale from 1 to
10 (10 is best). Plot the competitive comparison.
5. Specify the planned or desired future rating.
6. Calculate the improvement ratio by dividing the planned rating by the
company current rating.
7. Select at most four items to indicate as sales points. Use a factor of 1.5
for major sales points and a factor of 1.2 for minor sales points.
8. Calculate the importance rate as the degree of importance times the
improvement ratio times the sales points.
9. Calculate the relative weight for each item by dividing its importance rate
by the total of the importance rates for all whats.
10. Indicate the relationship value between each what and how. Use values
of 9, 3, and 1 to indicate a strong, moderate or light interrelationship.
11. Calculate the importance weight for each how. This is the total of the
cross products of the relationship value and the relative weight of the
what.
12. Indicate the technical difficulty associated with the how. Use a scale of
5 to 1 (5 is the most difficult).
13. Indicate the company, competitive values, and benchmark values for the
how.
14. Specify the plan for each of the hows.
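The arithmetic in steps 3 through 11 can be sketched for a single "what" as follows; every rating, the assumed total of importance rates, and the "how" items below are illustrative values rather than anything prescribed by the procedure.

```python
# A minimal sketch (assumed ratings) of the QFD arithmetic for one "what".

degree_of_importance = 8          # step 3, scale 1-10
company_rating = 5                # step 4
planned_rating = 8                # step 5
improvement_ratio = planned_rating / company_rating                   # step 6
sales_point = 1.5                                                     # step 7, major sales point
importance_rate = degree_of_importance * improvement_ratio * sales_point  # step 8

total_importance_rates = 40.0     # assumed total over all "whats" for step 9
relative_weight = importance_rate / total_importance_rates

# steps 10-11: relationship values (9 strong, 3 moderate, 1 light) between this
# "what" and each "how"; each how accumulates the cross product as its importance weight
relationships = {"Design change": 9, "New supplier": 3, "Operator training": 1}
importance_weights = {how: value * relative_weight for how, value in relationships.items()}

print(f"importance rate = {importance_rate:.1f}, relative weight = {relative_weight:.2f}")
for how, w in importance_weights.items():
    print(f"  {how:18s} contribution to importance weight = {w:.2f}")
```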
Quality function deployment is usually applied at four different interrelated
levels:
1. Product planning
What: customer requirements
How: product technical requirements
2. Product design
What: product technical requirements
How: part characteristics
3. Process planning
What: part characteristics
How: process characteristics
4. Production planning
What: process characteristics
How: process control methods
IMPORTANCE/FEASIBILITY MATRIX
Importance is a function of urgency and potential impact on corporate goals. It is
expressed in terms of high, medium, and low. Feasibility takes into consideration
technical requirements, resources, and the cultural and political climate. It is also
expressed in terms of high, medium, and low.
Paired Comparisons
This approach is based on a pair-by-pair comparison of each set of alternatives to
determine the most important. Count the total number of times each alternative was
selected to determine the overall prioritization.
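A minimal Python sketch of the counting is given below; the alternatives and the pairwise judgments are invented for illustration.

```python
# A minimal sketch (assumed judgments) of paired-comparison prioritization.

from itertools import combinations

alternatives = ["Order filling", "New product cycle time", "Invoice errors"]

# For each pair, record which alternative the team judged more important.
judgments = {
    ("Order filling", "New product cycle time"): "Order filling",
    ("Order filling", "Invoice errors"): "Order filling",
    ("New product cycle time", "Invoice errors"): "New product cycle time",
}

wins = {a: 0 for a in alternatives}
for pair in combinations(alternatives, 2):
    wins[judgments[pair]] += 1

# The overall priority is the total number of times each alternative was selected.
for name, count in sorted(wins.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:25s} selected {count} time(s)")
```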
Improvement Potential
To determine how to prioritize cost improvement benchmarking alternatives, perform
the following analysis:
1. Make a Pareto analysis of cost components.
2. Assess the percent improvement possible for each of the most significant
cost components.
3. Multiply the cost times the percent improvement possible to determine
the improvement potential.
4. Prioritize the benchmark studies based on improvement potential.
This approach can be used to prioritize other areas as well.
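The calculation can be sketched as follows in Python; the cost components and the assessed percent improvements are assumed figures used only to show the steps.

```python
# A minimal sketch (assumed figures) of the improvement-potential calculation.

cost_components = {          # annual cost in dollars (Pareto, step 1)
    "Raw material": 4_000_000,
    "Direct labor": 2_000_000,
    "Overhead": 1_500_000,
    "Freight": 500_000,
}
percent_improvement = {      # assessed improvement possible (step 2)
    "Raw material": 0.05,
    "Direct labor": 0.15,
    "Overhead": 0.10,
    "Freight": 0.20,
}

# Step 3: improvement potential = cost x percent improvement possible
potential = {k: cost_components[k] * percent_improvement[k] for k in cost_components}

# Step 4: prioritize the benchmark studies by improvement potential
for item, value in sorted(potential.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{item:15s} improvement potential = ${value:,.0f}")
```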
Prioritization Factors
When prioritizing benchmarking candidates, it is important to take into consideration
many factors. Some of these factors are listed below. It is important to narrow projects
down to the significant few and to choose a good starting project to showcase the
value of the approach.
The first project should be a winner. It should address a chronic problem, there
should be a high likelihood of completion in several weeks, and the results should
be (a) correlated to customer needs and wants, (b) significant to the company, and
(c) measurable. Factors to be used subsequently are:
Importance of business need (long term)
Basis for a sustainable competitive advantage
Percent improvement possible
Customer impact
Realism of expectations
Urgency
Ease of implementation/degree of difficulty
Time to implement
Consistency with mission, values, and culture
Organizational buy in
Passionate champion identified
Resource requirements and availability
Capital expenditures
Working capital
Time by skill category
Synergy
Risk versus return
Measurability of result
Modularity of approach
Anticipated problems
Potential resistance
ARE THERE ANY OTHER PROBLEMS? WHAT IS THE RELATIVE IMPORTANCE
OF EACH OF THESE?
The Japanese approach to improvement is called Kaizen. This philosophy espouses
an incremental, small-step-at-a-time approach that is implemented by creating an
awareness of need and empowerment throughout an organization. This contrasts with
the Western approach, which tends to be higher tech, capital intense, and focused
on major innovative changes. (Several studies have demonstrated that the U.S. is
much better at discovery and invention than Japan, but that we lag in commercial
development and implementation of the ideas.) Could the low tech, people-oriented
focus work in your competitive situation? What does this suggest in terms of
benchmarking prioritization?
IDENTIFICATION OF BENCHMARKING SOURCES
TYPES OF BENCHMARK SOURCES
The benchmarking process often starts with a library search to identify alternative
views, issues, approaches, and possible benchmarking sources. Benchmarking
sources can be internal best performers, competitive best performers, or best in class
worldwide.
Internal Best Performers
Xerox used internal benchmarking when it studied Fuji-Xerox's manufacturing
methods (but not until Florida Power and Light began to emulate them). Different
divisions, plants, distribution outlets, and departments tend to do things differently.
Much can often be learned by looking at these company operations.
Competitive Best Performers
The advantage of making comparisons with direct competitors is obvious. However,
it can be difficult to get competitors to share their source of competitive advantage.
When working with direct competitors, it can also be difficult to get out of the
industry mind-set and come up with creative ideas. It could be that the competitors
in an industry are not particularly good at what they do and hence provide little
stimulus for improvement.
Xerox regularly benchmarks all direct competitors, all their suppliers, and all
major competitors to those suppliers. Updates are important. Knowing how fast
competitors are moving is just as important as knowing where they are.
Best of Class
There is, in general, no way to know the best of the best. Companies generally
pick the best based on reputation through publications, speeches, news releases,
etc. A company might start out with four to ten best candidates and narrow them
down based on initial discussions.
Xerox looked at IBM and Kodak but also L.L. Bean, the catalog sales company,
known for effective and efficient warehousing and distribution of products. Addi-
tional benchmarking partners used by Xerox were:
Customer satisfaction, customer retention: USAA (Insurance Co.)
Financial stability and growth: A.G. Edwards & Sons
SPC and quality: Florida P&L
Customer care and training: Walt Disney
Milliken & Company, winner of the 1989 National Quality Award, provided the
following partial list of benchmarks:
Strategy
Safety: DuPont
Customer satisfaction: AT&T, IBM
Innovation: 3M, KBC
Education: IBM, Motorola
Strategic planning: Frito-Lay, IBM, AT&T
Time-based competition: Lenscrafters
Quality Process
Benchmarking: Xerox
Self-managed teams: Goodyear, P&G
Continuous improvement: Japanese companies
Heroic goals concept: Motorola
Role model evaluation: Xerox
Environmental practice: DuPont, Mobay, Ciba-Geigy
Statistical methods: Motorola
Flow charting: Sara Lee
Quality process: FP&L, Westinghouse, Motorola
Miscellaneous
Security: DuPont
Accounts payable: Mobay
Order handling: L.L. Bean
SELECTION CRITERIA
How do you know who is the best? Here are some ways to get that information:
Library search
Reputation
Consultants
Networking
Characteristics to be examined when seeking partners include:
Company size
Customer non-price reasons to buy
Industry critical success factors
Availability of data
Data collection costs
Innovation
Receptivity
One hundred percent accuracy of information is not required. You only need
enough to head you in the right direction.
SOURCES OF COMPETITIVE INFORMATION
Read everything and ask, "Has anyone faced this or a similar problem? What have
they done?"
Do not forget to ask people in your own organization, including:
Past employees of benchmark company
Family members
Market researchers
Sales and marketing
It is also helpful to make use of trade associations and consultants and to network.
Review studies in which people have identified the characteristics of best per-
formers. Good sources here are Clifford and Cavanagh (1988), Smith and Brown
(1986), and Berle (1991).
Another good source is the Encyclopedia of Business Information Sources,
published frequently by Gale Research, Detroit, Michigan. This source contains
references by subject to the following:
Abstracting and indexing services
Almanacs and yearbooks
Bibliography
Biographical sources
Directories
Encyclopedias
Financial ratios
Handbooks and manuals
Online databases
Periodicals and newsletters
Research centers and institutes
Trade associations/professional associations
Other
Additional sources may also be found in the John Wiley publication entitled
Where to Find Business Information, as well as the following:
Books and periodicals
Trade journals
Functional journals
F.W. Dodge reports
Technical abstracts
Local newspapers, national newspapers
Nielsen Market Share
Yellow Pages
Textbooks
Special interest books
City, region, state business reviews
Standard and Poor's industry surveys
Directories
Trade show directory
Directory of Associations
Brands and Their Companies
Who Runs the Corporate 1000
Corporate Technology Directory
American Firms in Foreign Countries
Corporate Affiliations
Foreign Manufacturers in U.S.
Directory of Company Histories
International Trade Names
Leading Private Companies
Marketing Economics Key Plants
Directory of Advertisers
Books of business lists
Thomas Register
Wards Directory
Lists of 9 Million Businesses (ABI)
Computer databases (CD-ROM or online)
Text databases
Business dateline articles
BusinessWire press releases
Intelligence Tracking Service (consumer trends)
Dow Jones Business and Financial Report
Newsearch
Trade and Industry Index
Statistical business information
BusinessLine
Cendata
Consumer Spending Forecast
Disclosure Database
CompuServe
Retail Scan Data
Moody's 5000 Plus
Demographic data
Census Projection 1989-1993
Donnelley demographics
Directories
Dun's Million Dollar Directory
Thomas Register
Company direct
Advertising
Benchmarking partner
Company newsletters
Minority interest partners
Speeches
Direct contact
Financial sources
Annual reports, 10k, proxies, 13D
Investment reports
Prospectus
Filings with regulatory agencies
Dun and Bradstreet, Robert Morris
Moody's Manuals
S&P Reports
Individuals
Company employees
Past employees/retirees
Social events
Construction contractors
Landlords, leasing agents
Salesmen
Service personnel
Focus groups
Professional societies
Professional society members
Trade shows/conventions
National associations
User groups
Seminars
Rating services
Newsletters
Government
Public bid openings
Proposals
National Technical Information Service
Freedom of Information Act
Occupational Safety and Health Administration (OSHA) filings
Environmental Protection Agency (EPA) filings
Commerce Business Daily
Government Printing Office Index
Federal depository libraries
Court records
Bank filings
Chamber of Commerce
Government Industrial Program reviews
Uniform Commercial Code filings
State corporate filings
County courthouse
U.S. Department of Commerce
Federal Reserve banks
Legislative summaries
The Federal Database Finder
Patents
Customers
New customers
Consumer groups
Industry members
Suppliers
Equipment manufacturers
Distributors
Buying groups
Testing firms
Snooping
Reverse engineering
Hire past employees
Interview current employees
Dummy purchases
Shopping
Request a proposal
Hire to do one job
Apply for a job
Mole
Site inspections
Trash
Chatting in bars
Surveillance equipment
Schools and universities
Directories of case studies
Industry studies
Consultants
Business schools on a consulting basis
Jointly sponsored studies
Information brokers
Industry studies
Market research studies
Seminars
GAINING THE COOPERATION OF THE BENCHMARK PARTNER
Without confidentiality, benchmarking will not work. Some items for consideration
in gaining this confidentiality and cooperation are:
Use consultants or trade associations or universities to ensure confidentiality.
Make sure that there is mutual sharing; it could cover different areas.
Be prepared to give and receive.
Focus on mutual learning and self-improvement.
Benefit of probing questions and debate
Opening up of a vision
Confirmation of good practice
Consider benchmarking a circuit of companies.
Important that all know in advance
Consider all security and legal implications of sharing data.
MAKING THE CONTACT
When making the contact for benchmarking, follow these steps:
Call and express interest in meeting.
Send/receive a detailed list of questions.
Make sure that you have prepared your questions carefully. (The quality
of the questions can be the signal for a worthwhile use of time.)
Follow up by telephone.
Visit; keep an open mind; document everything.
BENCHMARKING PERFORMANCE AND
PROCESS ANALYSIS
PREPARATION OF THE BENCHMARKING PROPOSAL
Factors to be considered in the preparation of the benchmarking proposal include:
Mission
Objective/scope
Statement of importance
Information available
Critical questions
Ethical and legal issues
Partner selection
Roles and responsibilities
Visit schedule
Data analysis requirements
Form of recommendation
ACTIVITY BEFORE THE VISIT
The approach that follows is very comprehensive. It might not be economical to
follow all the steps in every study. Let practical common sense be the guide to action.
Understanding Your Own Operations
You need to understand your own operations very thoroughly before comparing
them with the operations of others. Here are some steps you should take to make
sure that you understand your current methods:
Ask open-ended questions. For example, for who:
Who does it per the job description?
Who is actually doing it?
Who else could do it?
Who should be doing it?
Ask similar questions for what, where, when, why, and how.
Activity Analysis
Activity analysis consists of the following steps:
1. Define the Activity
Activities can be defined through a hierarchy of terms:
Function: Marketing and sales
Process: Sell products
Activity: These are the major action steps of a process. For example, make a
proposal.
Task: Prepare proposal draft
Operation: Type proposal
2. Determine the Triggering Event
Identify what happens to trigger the activity. Why does the activity get performed
at a specific time? What is the status of material or information before the activity
occurs? What documentation signifies that the activity is to start?
Example: Receive material; the triggering document is the material receipt document.
3. Document the Activity
Document how to perform the activity. Indicate what has to be done and the order
in which it is done. This will define all business procedures, policies, and controls.
Questions to ask include:
What are the key process variables?
What controls these variables?
What levels lead to optimum performance?
What are the causes of variation?
What are the limitations?
Activities should be classified in terms of repetitive versus non-repetitive, pri-
mary versus secondary, and required versus discretionary. It is important in this
analysis to determine limitations, sources of error, rejects, and delays.
Example: Raw material inspection; the documentation is the inspection process manual.
4. Determine the Resource Requirements
Identify the resources to perform the activity. Include factors such as direct material,
direct labor (hours and grade), equipment requirements, information requirements,
and space requirements. The resources might come from more than one department.
It is crucial to trace all of the resources required to perform the activity.
The resources can be determined by a careful analysis of the chart of accounts.
When making the cost analysis, carefully choose among using actual, budgeted,
standard, or planned cost information.
Example: Inspector, material handling equipment, inspection equipment, inspection
area, inspection manual
5. Determine the Activity Drivers
What are the factors external to the activity that cause more or less of the resources
to be used? What drives the need for the activity and the level of resources required?
Consider both efficiency and effectiveness, as follows:
1. Efficiency: Doing things right.
2. Effectiveness: Doing the right things.
Example: bad weather, poor product quality, automated equipment, workplace layout
6. Determine the Output of the Activity
What units can be used to measure the output of the activity? This will be a measure
of production level such as pieces produced, lots produced, invoices processed,
checks written, or standard hours earned.
Example: Lots of raw material inspected, pieces inspected, or material acceptance
forms completed
7. Determine the Activity Performance Measure
Identify that output measure that most closely controls the level of resources
required. For example, when looking at clerical activities, the number of invoices
is more significant than the dollar volume of the invoices. When moving material,
the tons moved is more significant than the number of invoices represented. In
general, the activity measure will be a resource input per unit of an output measure.
Examples:
Cost/lot
Pieces/hour
Cost/unit
Square foot per person
Patents per engineer
Drawings per engineer
Lines of code per programmer
Contact labor/company labor
Sales dollars/sales manager
Machine changeover % total
Model the Activity
Modeling an activity involves the following:
Define the process
Define the cost or resource requirements
Define the output variables
Determine the metric or resource per unit of output (This may require the
use of regression analysis or the design of experiments.)
Critical considerations are:
What is the relationship between fixed and variable costs?
What determines the capacity limitations of the process?
How much does overhead change with a change in the volume of business?
It is important to distinguish between the metric (resource per unit of output)
and the cost drivers.
The metric or activity measure for inserting pins might be cost per pin inserted.
However, the cost drivers might be the product design and the technology used. A
different design might require fewer insertions, and a different technology might
avoid the need for any insertions.
Examples of Modeling
The modeling of raw material cost per unit produced might consider the following
variables:
The number of parts to be produced
The standard raw material per part
The percent scrap produced
Raw material unit price
Raw material quantity discounts
Exchange rates
Note that a simple comparison of raw material cost as a percentage of sales
dollars provides little real basis for comparing costs and cost improvement.
The number of units sold of an item could be modeled as the number of potential
buyers times the percentage who become aware of the product if they can get it times
the percentage of potential buyers who can get the product times the percentage of triers
who will be repeat buyers times the number of units purchased by a repeat buyer.
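The chain of factors can be written directly as a small Python calculation; all of the input values below are assumptions chosen only to make the multiplication concrete.

```python
# A minimal sketch (assumed inputs) of the units-sold model described above as a
# chain of multiplications.

potential_buyers = 200_000
pct_aware = 0.40            # become aware of the product if they can get it
pct_can_get = 0.70          # can actually obtain the product
pct_repeat = 0.25           # triers who become repeat buyers
units_per_repeat_buyer = 6

units_sold = (potential_buyers
              * pct_aware
              * pct_can_get
              * pct_repeat
              * units_per_repeat_buyer)

print(f"Modeled units sold: {units_sold:,.0f}")
# Changing any single driver (awareness, availability, repeat rate) shows how much
# of the output it controls, which is what makes that factor a benchmarking candidate.
```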
When working with salaries and wages, it is necessary to take into consideration
factors such as headcount, rate by grade, straight time/overtime ratios, benefits, skill
level, age, education, union vs. non-union, and incentives. Salary and wage ratios
that can be benchmarked are:
Skilled/unskilled labor
Direct/indirect labor
Training cost per employee
Overtime hours/straight time hours
Flow Chart the Process
To determine the sales dollars from a new account, start by flow charting the steps
required to sell a new account. Start with cold calls and work through to a close.
Use of symbols in flow charting:
Start or stop (oval)
Flow lines (arrows)
Activity (rectangle)
Document (document symbol)
Decision (diamond)
Connector (circle)
Then ask some key questions:
What are the major activities?
What are the ratios required to forecast sales?
What factors affect the selling cost per rep or the revenue per rep? Does
looking at these ratios tell you very much? What would you benchmark?
Here is an example of activity performance measures for warehouse operations:
Picking operations
Orders filled per person per day
Line items per person per day
Pieces per person per day
Number of picks per order
Standard hours earned per day
Line items per order
Receiving operations
Number of trucks unloaded per shift
Number of pallets received per day
Number of cases received per day
Number of errors per day
Direct labor hours unloading trucks
Incoming QC operations
Number of inspections per period
Number of rejects per period
Direct inspection labor hours
Putaway/storage operations
Number of full pallet putaways per period
Number of loose case putaways per period
Direct labor hours putaway or storage
Cube utilization
Truck loading
Number of units loaded per truck per period
Number of trucks per period
Time per trailer
Customer service operations
Fill rate
Elapsed time between order and shipment
Error rate
Customer calls taken per day
Number of problems solved per call
Number requiring multiple calls
Number of credits issued
Number of backorders
At this stage we are ready to identify information required when meeting with
the benchmark partner. The following information is typical and may be used to
focus the meeting with the benchmark partner and to highlight information require-
ments:
1. Description of company activity and results:
2. Alternative ways of performing the activity:
Alternative 1:
Alternative 2:
Alternative 3:
3. The pros and cons of the alternatives are:
Pros Cons
Alternative 1:
Alternative 2:
Alternative 3:
4. Information required to reach a conclusion as to the best approach:
Review the assumptions for the study to make sure that the outcomes are
correlated to what you were studying. (At this stage, it is not unusual to find surprises.
That is, you may find items that you overlooked or you thought were unimportant
and so on.)
ACTIVITIES DURING THE VISIT
By far the most important characteristic of the visit is to:
Observe, question, analyze and learn
Make sure to notice:
What are they doing?
How is it different from what we are doing?
Why are they doing it that way?
How can the results be measured?
Ask open-ended questions, just as you did when observing your own operations.
For example, for who:
Who does it per the job description?
Who is actually doing it?
Who else could do it?
Who should be doing it?
Ask similar questions for what, where, when, why, and how.
Understand the Benchmark Partner's Activities
Follow the procedures described above for analyzing the company activities. You
may encounter some analytical difficulties because of the following factors:
Accounting differences
Account definitions vary in terms of what is included in the account. For
example, does the cost of raw material include the cost of freight in and
insurance? Where is scrap accounted for?
Cost allocations.
Identification of all multi-department costs.
Different economies of scale/learning curve
Specialization
Automation
Time/unit
Identification of All of the Factors Required for Success
Factors to consider when trying to determine if you have identified all the factors
required for success include the following:
Analysis and intuition
MRP and inventory cycle count
Salary and wage comparisons: are the jobs really comparable?
The use of manufacturing work cells (This may require a change in
socialization.)
Level of advertising per dollar of sales (Just knowing this may not be
very helpful. The relevant question is, How effectively are the adver-
tising dollars spent?)
Regression analysis
Warehouse study
Design of experiments
ACTIVITIES AFTER THE VISIT
Key activities after the visit include the following:
Be sure to send a thank you note promptly.
Document findings for each visit.
Summarize all findings: analysis and synthesis.
Compare current operations with findings.
Gather more specific data if required.
Identify opportunities for improvement: combine, eliminate, change
order, etc.
Develop team recommendation.
Distribute benchmark report.
Benchmarking Examples
1. Functional Analysis
A functional analysis compares the hours required per 1000 pieces, function by
function, against the best company and identifies the gap:

Hours/1000 pcs
Functions              Company    Best Company    Gap
Primary machining      .75        .50             .25
Heat treating
Grinding
Assembly
Packing

2. Cost Analysis
A cost analysis ranks each cost item by its share of total cost and compares the
company's cost per unit with the best performer:

Cost per Unit
Cost Item         % Total Cost    Cum % Total    Company    Best
Raw material      40              40             17.50      15.50
Direct labor      20              60              7.50       5.50

3. Technology Forecasting
The benchmarking of competitive technologies can be very critical. This is particularly
true when the product or technical life cycle is very short. Keys are:
Knowing the current technology and its limitations
Identifying the emerging technologies that become the new benchmarks.
Knowing what customers really buy and relating this to the emerging
technology.
Having the courage and foresight to change.
4. Financial Benchmarking
Financial benchmarking compares a company (or the major segments of a com-
pany) relative to the financial performance of other companies. The modified Du
Pont chart provides a convenient way to do this. The idea of the modified Du Pont
plan is to start with the return on equity and progressively calculate the return on
assets, profit margin, gross margin, sales, cost of goods sold (COGS), sales per
day, cost of goods sold per day, days inventory (COGS), days receivable (COGS),
and days payable (COGS). Company results can be compared with data provided
by:
Dun and Bradstreet: Industry Norms and Key Business Ratios
Robert Morris Associates: Annual Statement Summary
Prentice Hall: Almanac of Business and Industrial Financial Ratios
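A compact Python sketch of the modified Du Pont chain is shown below; the financial figures are invented, and days receivable is computed on sales per day in this sketch, which is one common convention, rather than on COGS as listed above.

```python
# A minimal sketch (assumed figures) of the modified Du Pont ratio chain.

sales = 10_000_000
cogs = 6_500_000
net_profit = 800_000
total_assets = 5_000_000
equity = 2_500_000
inventory = 900_000
receivables = 1_200_000
payables = 700_000

gross_margin = (sales - cogs) / sales
profit_margin = net_profit / sales
asset_turnover = sales / total_assets
return_on_assets = profit_margin * asset_turnover
return_on_equity = return_on_assets * (total_assets / equity)   # financial leverage

sales_per_day = sales / 365
cogs_per_day = cogs / 365
days_inventory = inventory / cogs_per_day
days_receivable = receivables / sales_per_day
days_payable = payables / cogs_per_day

print(f"ROE {return_on_equity:.1%}  ROA {return_on_assets:.1%}  "
      f"profit margin {profit_margin:.1%}  gross margin {gross_margin:.1%}")
print(f"days inventory {days_inventory:.0f}  days receivable {days_receivable:.0f}  "
      f"days payable {days_payable:.0f}")
# Each ratio can then be compared against the published industry norms listed above.
```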
5. Sales Promotion and Advertising
The comparison of company strategy versus industry strategy can lead to the need
for more specific benchmarking studies.
6. Warehouse Operations
The performance of units engaged in essentially the same type of activity can be
compared using statistical regression analysis. This technique can be used to deter-
mine the significant independent variables and their impact on costs. Exceptionally
good and bad performance can be identified, and this provides the basis for further
benchmarking studies.
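One way such a comparison might be set up is sketched below using numpy's least-squares fit; the cost drivers, the warehouse data, and the use of the largest residual as the flag are all illustrative assumptions, not a complete statistical treatment.

```python
# A minimal sketch (assumed data) of a regression comparison across warehouses
# performing essentially the same activity.

import numpy as np

# one row per warehouse: [orders_per_year, line_items_per_year]
drivers = np.array([
    [12_000, 48_000],
    [20_000, 60_000],
    [8_000,  40_000],
    [15_000, 52_000],
    [25_000, 95_000],
], dtype=float)
annual_cost = np.array([410_000, 560_000, 330_000, 455_000, 780_000], dtype=float)

# add an intercept column and fit cost = b0 + b1*orders + b2*line_items
X = np.column_stack([np.ones(len(drivers)), drivers])
coeffs, *_ = np.linalg.lstsq(X, annual_cost, rcond=None)

predicted = X @ coeffs
residuals = annual_cost - predicted
# The unit with the largest unexplained cost (or saving) is a candidate for
# a more detailed benchmarking study.
for i, r in enumerate(residuals, 1):
    note = "candidate for benchmarking" if abs(r) == max(abs(residuals)) else ""
    print(f"warehouse {i}: actual {annual_cost[i-1]:,.0f}  "
          f"predicted {predicted[i-1]:,.0f}  residual {r:+,.0f}  {note}")
```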
7. PIMS Analysis
The PIMS analysis is a further application of multiple regression analysis. It can be
used to identify the major determinants of company profitability.
8. Purchasing Performance Benchmarks
The Center for Advanced Purchasing Studies (Tempe, Arizona) has benchmarked
the purchasing activity for the petroleum, banking, pharmaceutical, food service,
telecommunication services, computer, semiconductor, chemical, and transportation
industries. For a wide variety of activity measures, the reports provide the average
value, the maximum, the minimum, and the median value.
Motorola Example
Perhaps one of the most famous examples of benchmarking in recent history is the
Motorola example. Motorola, through Operation Bandit, was able to cut the prod-
uct development time for a new pager in half to 18 months based on traveling the
world and looking for islands of excellence. These companies were in various
industries: cars, watches, cameras, and other technically intensive products. The
solution required a variety of actions:
Automated factories
Removing barriers in the workplace
Training of 100,000 employees
Technical sharing alliance with Toshiba
Motorola was particularly impressed by the P200 program of a Hitachi plant.
This stands for a 200% increase in productivity by year end. The plant set immutable
deadlines for the solutions to problems. All departments had six sigma goals.
GAP ANALYSIS
DEFINITION OF GAP ANALYSIS
There are at least two ways to view a gap.
1. Result Gap: A result gap is the difference between the company per-
formance and the performance of the best in class as determined by the
benchmarking process. This gap is defined in terms of the activity per-
formance measure. The gap can be positive or negative.
2. Practice or Process Gap: A practice or process gap is the difference
between what the company does in carrying out an activity and what the
best in class does as determined by the benchmarking process. This gap
is measured in terms of factors such as organizational structure, methods
used, technology used, or material used. The gap can be positive or
negative.
The determination of a gap can be a strong motivator toward the improvement
of performance. It can create the tension necessary for change to occur.
CURRENT VERSUS FUTURE GAP
It is critical to distinguish between the current gap and the likely future gap. Remem-
ber that the benchmark partner's performance will not remain at the current levels
nor will the expectations of the marketplace. More likely than not, performance will
improve as time goes on. A company that concentrates only on closing the current
gap will find itself in a constant game of catch-up.
The company that ignores likely improvements of the benchmark gets caught
in the Z trap. The Z trap, of course, is the step-wise improvement that is OK for
catching up but never good enough to be the best in class.
To summarize the benchmark findings, it is often helpful to make a tabulation
showing the current practice and metric and the expected future practice and metric
for the company, the competition, and the best in class. In order for the benchmarking
process to be effective, it is critical that management accept the validity of the gap
and provide the resources necessary to close the gap.
GOAL SETTING
GOAL DEFINITION
Two terms that are often used interchangeably are "objective" and "goal." There is,
of course, no one correct definition. As long as the terms are used consistently within
an organization, it does not really matter. For our purposes, however, objectives are
broad areas where something is to be accomplished, such as sales and marketing or
customer service. Goals, on the other hand, are specific and measurable and have a
time frame. For example, "Answer all inquiries within 2 hours by the 3rd quarter
of 2002."
GOAL CHARACTERISTICS
For best results, goals should be (a) tough (you need to stretch to attain them) and
(b) attainable (realistic).
When evaluating these two characteristics, always take into consideration the
current capabilities of the company versus the benchmark candidate now and pro-
jected. A good way to monitor progress towards attainment is through trend charting.
RESULT VERSUS EFFORT GOALS
Result goals define the specific performance measure to be achieved. For example,
"Sell $4 million of product x to company y in 2003."
Effort goals define specific accomplishments that are completely under the
control of the goal setter. They are necessary to achieve the result goals. They can
be thought of as action plans. For example, an effort goal would be, "Make x cold
calls a week to new departments of x company."
GOAL SETTING PHILOSOPHY
Best of the Best versus Optimization
There can be a clear difference between implementing an inventory control system
that ensures that a company never runs out of stock and an inventory control system
that optimizes the level of inventory. The optimum inventory balances the cost of
carrying the inventory against the cost of running out of stock.
A similar consideration is that of determining the optimum feature set for a
product, taking into consideration what specific market segments value and will pay
for. Differentiation that is not valued by the market could result in an unnecessary
expenditure of funds. The determination of value has to be based on the underlying
need of the customer. If this had not been done, there would be no need to have
produced a ballpoint pen, only a better fountain pen. Who asked for electricity, the
camera, or the copy machine before they existed? No one by name, but many in
terms of desire and underlying need.
There is a fundamental difference between working within the constraints facing
a business and removing the constraints. For example, a company can either (a)
optimize production given the setup time for a job or (b) reduce the setup time.
Optimization within the constraints leads to larger lots, higher inventory, perhaps
poorer quality, and delays. It is much more effective to remove the constraint. The
key to manufacturing excellence is to remove the constraints that cause the trade-
offs between cost and customer satisfaction.
Kaizen versus Breakthrough Strategies
The Kaizen philosophy of management stresses making small, constant improve-
ments as opposed to looking for the one magic silver bullet that will lead to success.
Which company is likely to be more innovative: (a) a company that is looking
for the one big idea or (b) a company that is constantly making small improvements?
Both are appropriate strategies depending on the specic situation. However, if a
company is in dire need of improvement there is no better way than to look at
benchmarking. The benchmarking in this case will be a true breakthrough. On the
other hand, the Kaizen approach tells us that we should not relax in our effort to be
the best. There is always something that we can do better.
GUIDING PRINCIPLE IMPLICATIONS
The decisions made regarding goals can have a profound interaction with the mission
statement of the company and/or the values as defined in the statement of guiding principles. The statement of guiding principles generally consists of:
1. Mission statement: a description of the product and markets served, or the who, what, and how
2. Values: those human and ethical principles that guide the conduct of the business
GOAL STRUCTURE
Cascading Goal Structure
A consistent goal structure can provide focus and direction to the entire organization.
In order to create this, start with the most important goal, as viewed by the president
or chief executive officer, and decompose each of these by functional area working
from one management level to the next. For example, starting with a return on equity
goal, what does this mean each department has to do? What does this suggest in the
way of specic benchmarking goals?
Interdepartmental Goals
One of the most elusive tasks of management is to get all departments to work
together toward a common set of goals. One way to manage this is to have each
department indicate its goals and what it requires in the way of performance from
other departments to reach those goals. A cross tabulation can then be used to develop
the total goals for a department or function.
ACTION PLAN IDENTIFICATION
AND IMPLEMENTATION
The benchmarking process has been used to identify the present and projected result
and performance gap. The actual solution to closing the gap may be the synthesis
of several of the benchmark partners' ideas. In order to creatively identify new
solutions, the following questions can be helpful:
Put to other uses?
New ways to use as is? Other uses if modified?
Adapt?
What else is this like? What other ideas does this suggest? Does the past
offer a parallel? What could I copy? Whom could I emulate?
Modify?
New twist? Change meaning, color, motion, sound, odor, form or shape?
Other changes?
Magnify?
What to add? More time? Greater frequency? Stronger? Higher? Longer?
Thicker? Extra value? Plus ingredients? Duplicate? Multiply? Exagger-
ate?
Minimize?
What to subtract? Smaller? Condensed? Miniature? Lower? Shorter?
Lighter? Omit? Streamline? Split up? Understate?
Substitute?
Who else instead? What else instead? Other ingredients? Other material?
Other process? Other power? Other place? Other approach? Other tone
of voice?
Rearrange?
Interchange components? Other pattern? Other layout? Other sequence?
Transpose cause and effect? Change place? Change schedule?
Reverse?
Transpose positive and negative? How about opposites? Turn it back-
wards? Turn it upside down? Reverse roles? Change shoes? Turn ta-
bles? Turn other cheek?
Combine?
How about a blend, an alloy, an assortment, an ensemble? Combine units?
Combine purpose? Combine appeals? Combine ideas?
A CREATIVE PLANNING PROCESS
It is highly desirable that more than one alternative way to achieve a goal be
identified. It is also critical that each viable alternative be fully evaluated on its own
merits and that a conscious choice be made to select the best alternative. For each
alternative, consider the following process:
1. Develop a vision or a dream of what you would like to have happen.
2. Identify the critical success factors for achieving the vision.
3. Determine the required action programs.
4. Match resource requirements and availability.
5. Determine if the vision is feasible and either implement the required action
programs or consciously decide to drop or modify the vision.
6. Implement the plan by assigning action plan responsibility.
7. Monitor performance versus expectations and revise the plan as required.
ACTION PLAN PRIORITIZATION
If more action plans are identified than can be implemented, it will be necessary to prioritize the action plans relative to the corporate goals and customers' needs, wants, and expectations. The process identified earlier in the discussion of prioritization of
benchmark alternatives may be used for this purpose.
One aspect of action plan prioritization is the determination of the most desirable
plan from a nancial point of view. Evaluations of this type often involve the
comparison of cash ows that occur in different years. Consequently, the time value
of money has to be taken into consideration when deciding which plan is most
desirable.
ACTION PLAN DOCUMENTATION
The action plan must be documented and the person(s) responsible for individual
tasks must be identified:
Use of Critical Path Scheduling Tools
Action plan format
Technique for sequencing activities using Post It Notes
Importance of identifying milestones, deliverables, and decision-mak-
ing roles
MONITORING AND CONTROL
A good way to maintain monitoring and control is through formalized periodic
reporting of performance versus plan. Issues to keep in mind are:
Need to assign responsibility for ongoing review and evaluation
Use of a control chart for each variable with the responsible person identified
Just because the official benchmarking study has been completed does not mean that you are done. To the contrary, you must be vigilant in monitoring your competitors' activities by tracking the competitive performance versus plan. This is because things change and modifications must be made to recalibrate the results.
Some items of interest are:
Benchmarks may need to be recalibrated.
Changes may occur in industry, customers, or competitors.
How fast are things moving and in what direction?
Critical success factors may change.
New competitors may enter the field.
Competition may be better or worse than expected.
FINANCIAL ANALYSIS
OF BENCHMARKING ALTERNATIVES
When comparing benchmarking alternatives, it is often necessary to take into con-
sideration the fact that cash is received and/or disbursed in different time periods
for each of the alternatives. Cash received in the future is not as valuable as cash
received today because cash received today can be reinvested and earn a return. In
order to compare the current value of cash received or disbursed in different periods,
it is necessary to convert a future dollar value to its present value.
For example, the present value of $1100 received a year from now is $1000 if
money can be invested at 10%. The alternative way to view this is to note that the
future value of $1000 invested for one year at 10% is equal to 1000 times 1.10 or
$1100.
The following table can be used to determine the present value of a future cash
flow depending upon the discount rate and the number of years from the present
that the investment is made. To relate to the discussion above, note that the Present
Value Factor for one year at 10% is .9091. Therefore, the present value of $1100
received a year from now is $1000, i.e., $1100 times .9091.
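As an aside (not part of the original text), the present value arithmetic above is easy to reproduce in a few lines of Python; the cash flows used below are the ones that appear later in Table 3.2, so the result can be checked against that table.

```python
# Illustrative sketch: present value factors and net present value.
def pv_factor(rate, year):
    """Present value of one dollar received 'year' years from now."""
    return 1.0 / (1.0 + rate) ** year

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is assumed to occur at the end of year 1."""
    return sum(cf * pv_factor(rate, t + 1) for t, cf in enumerate(cash_flows))

print(round(pv_factor(0.10, 1), 4))      # 0.9091, as in the factor table below
print(round(1100 * pv_factor(0.10, 1)))  # 1000, the example above

# Cash flows from Table 3.2 (end of years 1 through 4):
flows = [-1_000_000, 246_680, 597_764, 1_008_814]
print(round(npv(0.10, flows)))           # about 432,920 (Table 3.2 shows 432,919)
```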
A typical evaluation of a capital project or benchmark alternative is discussed in the following pages. The projected net income after tax as well as a summary of the investments made in the project, the after-tax salvage value, and the cash flow
associated with the project are indicated.
The assumptions used to generate the net income are indicated below the pro-
jection. Note the separation of fixed and variable cost and the relationship between specific assumptions and the level of capacity utilization. In this case, the investment is assumed to occur at the end of the first year. Also, there is no increase in working
capital associated with the construction of the plant.
The cash flow can be determined in one of two ways: (a) it is equal to the net
income after tax plus depreciation or (b) it is equal to revenue minus operating
expenses minus taxes. The net present value is indicated for several discount rates
(10 to 40%). The net present value at 10% is determined, for example, as in Table 3.2.
If the company cost of capital is 15%, then this project would be acceptable
because the net present value is positive at that discount rate. A similar analysis can
be used to determine a breakeven product price (see Table 3.3).
MANAGING BENCHMARKING FOR PERFORMANCE
To summarize this chapter, here are some do's and don'ts for successful benchmarking:
Requirements for success
Use goal-oriented management: measure and monitor everything; link to the compensation plan.
Start small and showcase.
Recognize that conflict is inevitable because of the need to share resources to reach conflicting goals. Management has to make tough decisions to resolve the healthy conflict.
Link goals to action plans.
Understand that adequate resources are necessary to ensure the success
of the plan.
Ensure continuing top management support with the recognition that
benchmarking does not necessarily supply a quick fix.
Place emphasis both on the result (what to do) and the process (how
to do it).
Accept the concept of constant, incremental change.
A blend of analytical and intuitive skills requiring the ability to syn-
thesize sometimes ambiguous data is needed.
Be willing to admit that change or improvement is possible and perhaps
desirable.
Focus on the needs of specic target market segments and business
strategy when setting the priorities to benchmark.
Present Value Factors
Discount Rate
Year 10% 20% 30% 40%
1 0.9091 0.8333 0.7692 0.7143
2 0.8264 0.6944 0.5917 0.5102
3 0.7513 0.5787 0.4552 0.3644
4 0.6830 0.4823 0.3501 0.2603
5 0.6209 0.4019 0.2693 0.1859
6 0.5645 0.3349 0.2072 0.1328
7 0.5132 0.2791 0.1594 0.0949
8 0.4665 0.2326 0.1226 0.0678
9 0.4241 0.1938 0.0943 0.0484
10 0.3822 0.1615 0.0725 0.0346
TABLE 3.2
An Example of Cash Flow and Present Value
End of Year Cash Flow Present Value Factor Present Value
1 -1,000,000 0.9091 -909,091
2 246,680 0.8264 203,868
3 597,764 0.7513 449,109
4 1,008,814 0.6830 689,034
Total Net Present Value 432,919
TABLE 3.3
Benchmark Project Evaluations
2001 2002 2003 2004
Sales (units) 21,000 49,000 66,500
Unit price 38.00 40.66 45.54
Revenue 798,000 1,992,340 3,028,357
Operating expense 420,200 1,029,400 1,397,000
Depreciation 50,000 50,000 50,000
Net income before tax 327,800 912,940 1,581,357
Tax 131,120 365,176 632,543
Net income after tax 196,680 547,764 948,814
Investment 1,000,000
Salvage value 10,000
Cash flow -1,000,000 246,680 597,764 1,008,814
Interest rate (%) 10 20 30 40
Net present value 432,919 170,404 2,030 -107,982
Assumptions
Plant capacity (units) 70,000
Unit price start 38.00
Tax rate (%) 40
Depreciation (%) 5
Capacity utilization (%) 30 70 95
Price increase (%) 7 7 12
Operating Expense
Units Fixed Variable
10,000 200 20.00
20,000 200 20.00
30,000 200 20.00
40,000 400 21.00
50,000 400 21.00
60,000 500 21.00
70,000 500 21.00
Create a corporate culture that thrives on learning and self-improve-
ment with constant, though gradual, change. Constantly apply the Plan,
Do, Check, Act cycle.
Use Statistical Process Control to determine when events, results, or
processes are out of control.
Change the role of middle management. The middle manager is no
longer the boss. Middle managers must encourage and enable work-
ers to think.
Common mistakes
Giving lip service to the process and not providing the resources to
get the job done properly
Failure to effectively communicate the benchmark findings and drive
them to implementation: all analysis and no action
Failure to precisely define the expected results of benchmark improve-
ment and to monitor actual performance (In the absence of this, no
organizational learning occurs.)
Lack of a comprehensive prioritization of the benchmarking projects
to ensure the best cost/benefit results
The expectation of quick results and a short-term focus on quarterly
earnings
Lack of constant purpose, focus, and direction
Failure to implement results in small size, meaningful modules with
specic deliverables; looking for the big win
Unwillingness to face the reality of a situation and recognize that
change is necessary and that hard choices have to be made
Not drawing the correct balance between required accuracy and the
practical ability to achieve better results; 100 percent accuracy, cer-
tainty, or performance is not required
Failure to recognize that the early follower is almost as profitable as
the pioneer and sometimes even more so
Reliance on executive ofce analysis versus observation of the hands-
on experience of others both within and outside the company
Focus on problem reduction and not problem avoidance
Failure to realize that, in most cases, benchmarking follows strategy
Failure to recognize the constantly rising level of expectations in the
marketplace
Lack of contingency planning
Failure to get participation at all levels and to break down interdepart-
mental barriers so that the total resources of the organization can be
focused on the solution to common problems
REFERENCES
Berle, G., Business Information Sourcebook, Wiley, New York, 1991.
Buzzell, R.D. and Gale B.T., The PIMS Principles, Free Press, New York, 1987.
Clifford, D.K. and Cavanagh, R.E., The Winning Performance: How America's High Growth
Midsize Companies Succeed, Bantam Books, New York, 1988.
Garvin, D.A., Managing Quality, Free Press, New York, 1988.
Hall, W.K., Survival Strategies in a Hostile Environment, Harvard Business Review, Sept./Oct.
1980, pp. 34-38.
Higgins, H. and Vincze, A., Strategic Management, Dryden Press, New York, 1989.
Smith, G.N. and Brown, P.B., Sweat Equity, Simon and Schuster, New York, 1986.
Stamatis, D.H., Total Quality Service, St. Lucie Press, Boca Raton, FL, 1996.
Stamatis, D.H., TQM Engineering Handbook, Marcel Dekker, New York, 1997.
SELECTED BIBLIOGRAPHY
Balm, G.J., Benchmarking: A Practitioners Guide for Becoming and Staying Best of the Best,
Quality & Productivity Management Association, Schaumburg, IL, 1992.
Barnes, B., Squeeze Play: Satisfaction Programs Are Key for Manufacturers Caught Between
Declining and Increasing Raw Material Costs, Quirk's Marketing Research Review, Oct. 2001, pp. 44-47.
Bosomworth, C., The Executive Benchmarking Guidebook, Management Roundtable, Boston,
MA, 1993.
Boxsvell, R.J., Jr., Benchmarking for Competitive Advantage, McGraw-Hill, New York, 1994.
Camp, R., Business Process Benchmarking: Finding and Implementing Best Practices, ASQC
Quality Press, Milwaukee, WI, 1995.
Chang, R.Y. and Kelly, P.K., Improving through Benchmarking, Richard Chang Associates,
Publications Division, Irvine, CA, 1994.
Karlof, B. and Ostblom, S., Benchmarking: A Signpost to Excellence in Quality and Produc-
tivity, John Wiley & Sons, New York, 1993.
Lewis, S., Cleaning Up: Ongoing Satisfaction Measurement Adds to Japanese Janitorial Firm's Bottom Line, Quirk's Marketing Research Review, Oct. 2001, pp. 20-21, 68-70.
4
Simulation
As companies continue to look for more efficient ways to run their businesses, improve work flow, and increase profits, they increasingly turn to simulation, which is used by best-in-class operations to improve their processes, achieve their goals, and gain a competitive edge. Simulation is used by some of the world's most successful companies, including Ford, Toyota, Honda, DaimlerChrysler, Volkswagen, Boeing, Delphi Automotive Systems, Dell Corp., Gorton Fish Co., and many
others. Both design and process simulations have become increasingly important
and integral tools as businesses look for ways to strip non-value-adding steps from
their processes and maximize human and equipment effectiveness, all parts of the
six sigma philosophy. The beauty of simulation is that, while it complements and
aids in the six sigma initiative, it can also stand alone to improve business processes.
In this chapter, we do not dwell on the mathematical justification of simulation;
rather, we attempt to explain the process and identify some of the key characteristics
in any simulation. Part of the reason we do not elaborate on the mathematical
formulas is the fact that in the real world, simulations are conducted via computers.
Also, readers who are interested in the theoretical concepts of simulation can refer
to the selected bibliography found both at the end of the chapter and at the end of
this volume.

WHAT IS SIMULATION?
Simulation is a technology that allows the analysis of complex systems through
statistically valid means. Through a software interface, the user creates a comput-
erized version of a design or a process, otherwise known as a model. The model
construction is a basic flow chart with great additional capabilities. It is the interface
a company uses to build a model of its business process.
Simulation technology has been around for a generation or more, with early
developments mostly in the area of programming languages. In the last 10 to 15
years, a number of off-the-shelf software packages have become available. More
recently, these tools have been simplified to the point that your average business
manager with no industrial engineering skills can effectively employ this technology
without requiring expert assistance. (Some companies have actually modified the commercial versions to adapt them into their own environments.)
Simplicity is the key to today's simulation software. The basic simulation structure is as follows: after flow charting the process, the user inputs information about how the process operates by simply filling in blanks. While completing a model,
the user answers three questions at each step of the process: how long does the step

take, how often does it happen, and who is involved? After the model is built and
verified, it can be manipulated to do two critical things: analyze current operations
to identify problem areas and test various ideas for improvement.
The latest improvements in simulation software have made it an excellent tool
for enhancing the design for six sigma (DFSS) process, which strives to eliminate
eight wastes: overproduction, motion, inventory, waiting, transportation, defects,
underutilized people, and extra processing. DFSS targets non-value-added
activities, the same activities that contribute to poor product quality.
In this chapter we are not going to discuss commercial packages; rather we are
going to introduce three methodologies that facilitate simulation: Monte Carlo, Finite Element Analysis, and Excel's Solver approach.

SIMULATED SAMPLING
The sampling method, known generally as Monte Carlo, is a simulation procedure
of considerable value.
Let us assume that a product is being assembled by a two-station assembly line.
There is one operator at each of the two stations. Operation A is the first of the two operations. The operator completes approximately the first half of the assembly and
then sets the half-completed assembly on a section of conveyor where it rolls down
to operation B. It takes a constant time of 0.10 minute for the part to roll down the
conveyor section and be available to operator B. Operator B then completes the
assembly. The average time for operation A is 0.52 minute per assembly and the
average time for operation B is 0.48 minute per assembly. We wish to determine
the average inventory of assemblies that we may expect (average length of the
waiting line of assemblies) and the average output of the assembly line. This may
be done by simulated sampling as follows:
1. The distributions of assembly time for operations A and B must be known
or procured.
Usually this is done through historical data, sometimes with surrogate data. A
study was taken for both operations, and two frequency distributions
were constructed (not shown here). In the case of operation A, the value
0.25 minute occurred three times, 0.30 occurred twice, and so on. For
operation A, the mean was 0.52 min with N = 167 and for operation B
the mean was 0.48 with N = 115. The two distributions do not neces-
sarily fit mathematical distributions but this is not important.
2. Convert the frequency distributions to cumulative probability distributions.
This is done by summing the frequencies that are less than or equal to each
performance time and plotting them. The cumulative frequencies are
then converted to percents by assigning the number 100 to the maxi-
mum value. The cumulative frequency distribution (not shown here)
for operation A began at the lowest time, 0.25 minute; there were three
observations. Three is plotted on the cumulative chart for the time 0.25
minute. For the performance time 0.30 minute, there were two observa-
tions, but there were five observations that measured 0.30 minute or

less, so the value five is plotted for 0.30 minute. For the performance
time 0.35 minute, there were 10 observations recorded, but there were
15 observations that measured 0.35 minute or less. When the cumula-
tive frequency distribution was completed, a cumulative percent scale
was constructed on the right, by assigning the number 100 to the max-
imum value, 167 in this case, and dividing the resulting scale into equal
parts. This results in a cumulative probability distribution. We can use
this distribution to say, for example, that 100 percent of the time values were 0.85 minute or less, 55.1 percent were 0.50 minute or less, and so on.
3. Sample at random from the cumulative distributions to determine specic
performance times to use in simulating the operation of the assembly line.
We do this by selecting numbers between 0 and 100 at random (represent-
ing probabilities or percents). The random numbers could be selected
by any random process, such as drawing numbered chips from a box,
using a random number table, or using computer-generated random
numbers. For small studies, the easiest way is to use a table of random
numbers.
The random numbers are used to enter the cumulative distributions in or-
der to obtain time values. In our example, we start with the random
number 10. A horizontal line is projected until it intersects the distribu-
tion curve; a vertical line projected to the horizontal axis gives the mid-
point time value associated with the intersected point on the
distribution curve, which happens to be 0.40 minute for the random
number 10. Now we can see the purpose behind the conversion of the
original distribution to a cumulative distribution. Only one time value
can now be associated with a given random number. In the original dis-
tribution, two values would result because of the bell shape of the
curve.
Sampling from the cumulative distribution in this way gives time values in
random order, which will occur in proportion to the original distribu-
tion, just as if assemblies were actually being produced. Table 4.1 gives
a sample of 20 time values determined in this way from the two distri-
butions.
4. Simulate the actual operation of the assembly line.
This is done in Table 4.2, which is very similar to waiting (queuing) line
problems. The time values for operation A (Table 4.1) are first used to
determine when the half-completed assemblies would be available to
operation B. The first assembly is completed by operator A in 0.40
minute. It takes 0.10 minute to roll down to operator B, so this point in
time is selected as zero. The next assembly is available 0.40 minute lat-
er, and so on. For the first assembly, operation B begins at time zero.
From the simulated sample, the first assembly requires 0.60 minute for
B. At this point, there is no idle time for B and no inventory. At time
0.40 the second assembly becomes available, but B is still working on
the first so the assembly must wait 0.20 minute. Operator B begins

work on it at 0.60. From Table 4.1, the second assembly requires 0.50
minute for B. We continue the simulated operation of the line in this
way.
The sixth assembly becomes available to B at time 2.40, but B was ready
for it at time 2.30. He therefore was forced to remain idle for 0.10
minute because of lack of work. The completed sample of 20 assem-
blies is progressively worked out (see Table 4.2).
The summary at the bottom of Table 4.2 shows the result in terms of the idle
time in operation B, the waiting time of the parts, the average inventory between
the two operations, and the resulting production rates. From the average times given
by the original distributions, we would have guessed that A would limit the output
of the line since it was the slower of the two operations. Actually, however, the line
production rate is less than that dictated by A (116.5 pieces per hour compared to
123 pieces per hour for A as an individual operation). The reason is that the interplay

TABLE 4.1
Simulated Samples of 20 Performance Time Values for Operations A and B
(Each row gives the random number and the performance time read from the cumulative distribution for operation A, then the same for operation B.)

Random No. (A)  Time A   Random No. (B)  Time B
10   0.40   79   0.60
22   0.40   69   0.50
24   0.45   33   0.40
42   0.50   52   0.45
37   0.45   13   0.35
77   0.60   16   0.35
99   0.85   19   0.35
96   0.75    4   0.30
89   0.65   14   0.35
85   0.65    6   0.30
28   0.45   30   0.40
63   0.55   25   0.35
 9   0.40   38   0.40
10   0.40    0   0.25
 7   0.35   92   0.70
51   0.50   82   0.60
 2   0.30   20   0.35
 1   0.25   40   0.40
52   0.50   44   0.45
 7   0.35   25   0.35
Totals      9.75             8.20
of performance times for A and B does not always match up very well, and sometimes
B has to wait for work. B's enforced idle time plus B's total work time actually
determine the maximum production rate of the line.
A little thought should convince us that, if possible, it would have been better
to redistribute the assembly work so that A is the faster of the two operations. Then
the probability that B will run out of work is reduced. This is demonstrated by Table 4.3, which assumes a simple reversal of the sequence of A and B. The same
TABLE 4.2
Simulated Operation of the Two-Station Assembly Line when Operation A Precedes Operation B
(Columns: time assembly becomes available for operation B; time operation B begins; time operation B ends; idle time in operation B; waiting time of assemblies; number of parts in line, excluding the assembly being processed in operation B)
0.00 0.00 0.60 0 0 0
0.40 0.60 1.10 0 0.20 1
0.85 1.10 1.50 0 0.25 1
1.35 1.50 1.95 0 0.15 1
1.80 1.95 2.30 0 0.15 1
2.40 2.40 2.75 0.10 0 0
3.25 3.25 3.60 0.50 0 0
4.00 4.00 4.30 0.40 0 0
4.65 4.65 5.00 0.35 0 0
5.30 5.30 5.60 0.30 0 0
5.75 5.75 6.15 0.15 0 0
6.30 6.30 6.65 0.15 0 0
6.70 6.70 7.10 0.05 0 0
7.10 7.10 7.35 0 0 0
7.45 7.45 8.15 0.10 0 0
7.95 8.15 8.75 0 0.20 1
8.25 8.75 9.10 0 0.50 1
8.50 9.10 9.50 0 0.60 2
9.00 9.50 9.95 0 0.50 2
9.35 9.95 10.30 0 0.60 2
Idle time in operation B = 2.10 minutes
Waiting time of parts = 3.15 minutes
Average inventory of assemblies between A and B = 3.15/9.35 = 0.34 assemblies
Average production rate of A = [20 × 60]/9.75 = 123 pieces/hour
Average production rate of B (while working) = [20 × 60]/8.20 = 146 pieces/hour
Average production rate of A and B together = [20 × 60]/10.30 = 116.5 pieces/hour
Note: In the above computations, 20 is the total number of completed assemblies; 9.75 is the total work time of operation A for 20 assemblies from Table 4.1; 8.20 is the total work time, exclusive of idle time, for operation B for 20 assemblies from Table 4.1.
sample times have been used and the simulated operation of the line has been
developed as before. With the faster of the two operations being first in the sequence,
the output rate of the line increases and approaches the rate of the limiting operation,
and the average inventory between the two operations increases. With the higher
average inventory there, the second operation in the sequence is almost never idle
owing to lack of work. Actually, this conclusion is a fairly general one with regard
to the balance of assembly lines; that is, the best labor balance will be achieved
when each succeeding operation in the sequence is slightly slower than the one
before it. This minimizes the idle time created when the operators run out of work
because of the variable performance times of the various operations. In practical

TABLE 4.3
Simulated Operation of the Two-Station Assembly Line when Operation B Precedes Operation A
(Columns: time assembly becomes available for operation A; time operation A begins; time operation A ends; idle time in operation A; waiting time of assemblies; number of parts in line, excluding the assembly being processed in operation A)
0.00 0.00 0.40 0 0 0
0.50 0.50 0.90 0.10 0 0
0.90 0.90 1.35 0 0 0
1.35 1.35 1.85 0 0 0
1.70 1.85 2.30 0 0.15 1
2.05 2.30 2.90 0 0.25 1
2.40 2.90 3.75 0 0.40 1
2.70 3.75 4.50 0 1.05 2
3.05 4.50 5.15 0 1.45 2
3.35 5.15 5.80 0 1.80 3
3.75 5.80 6.25 0 2.05 3
4.10 6.25 6.80 0 2.15 4
4.50 6.80 7.20 0 2.30 4
4.75 7.20 7.60 0 2.45 5
5.45 7.60 7.95 0 2.15 5
6.05 7.95 8.45 0 1.90 5
6.40 8.45 8.75 0 2.05 5
6.80 8.75 9.00 0 1.95 6
7.25 9.00 9.50 0 1.75 5
7.60 9.50 9.85 0 1.90 6
Idle time in operation A = 0.10 minute
Waiting time of parts = 25.75 minutes
Average inventory of assemblies between A and B = 25.75/7.60 = 3.4 assemblies
Average production rate of A (while working) = [20 × 60]/9.75 = 123 pieces/hour
Average production rate of B = [20 × 60]/8.20 = 146 pieces/hour
Average production rate of A and B together = [20 × 60]/9.85 = 122 pieces/hour
situations, it is common to find safety banks of assemblies between operations in order to absorb these fluctuations in performance.
We may have wanted to build a more sophisticated model of the assembly line.
Our simple model assumed that the performance times were independent of other
events in the process. Perhaps in the actual situation, the second operation in the
sequence would tend to speed up when the inventory began to build up. This effect
could have been included if we had knowledge of how inventory affected perfor-
mance time.
If we have followed this simulation example through carefully, we may be
convinced that it would work but that it would be very tedious for problems of
practical size. Even for our limited example, we would probably wish to have a
larger run on which to base conclusions, and there would probably be other alter-
natives to test. For example, there may be several alternative ways to distribute the
total assembly task between the two stations, or more than two stations could be
considered. Which of the several alternatives would yield the smallest incremental
cost of labor, inventory costs, etc.? To cope with the problem of tedium and excessive
person-hours to develop a solution, the computer may be used. If a computer were
programmed to simulate the operation of the assembly line, we would place the two
cumulative distributions in the memory unit of the computer. Through the program,
the computer would select a performance time value at random from the cumulative
distribution for A in much the same fashion as we did by hand. Then it would select
at random a time value from the cumulative distribution for B, make the necessary
computations, and hold the data in memory. The cycle would repeat, selecting new
time values at random, adding and subtracting to obtain the record that we produced
by hand. A large run could be made easily and with no more effort than a small run.
Various alternatives could be evaluated quickly and easily in the same manner.
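For readers who want to experiment, the hand procedure above translates directly into a short program. The sketch below is illustrative only: the two discrete time distributions are invented placeholders rather than the full frequency data behind Table 4.1, but the logic mirrors Tables 4.1 and 4.2, namely inverse-transform sampling from a cumulative distribution, tracking when each half-assembly reaches operator B, when B starts it, B's idle time, and the parts' waiting time (the constant 0.10-minute roll-down simply shifts all arrival times, so it is folded into the time origin as in the text).

```python
import random

# Illustrative placeholder distributions, not the text's data: value -> probability.
times_a = {0.40: 0.30, 0.45: 0.25, 0.50: 0.20, 0.60: 0.15, 0.85: 0.10}
times_b = {0.30: 0.20, 0.35: 0.35, 0.40: 0.25, 0.50: 0.15, 0.70: 0.05}

def sample(dist):
    """Inverse-transform sampling from a discrete cumulative distribution."""
    r, cum = random.random(), 0.0
    for value, prob in sorted(dist.items()):
        cum += prob
        if r <= cum:
            return value
    return max(dist)

def simulate(n_parts):
    arrival = 0.0          # first half-assembly reaches operator B at time zero
    b_free = 0.0           # time at which B finishes the part currently in hand
    idle_b = waiting = 0.0
    for i in range(n_parts):
        if i > 0:
            arrival += sample(times_a)           # next part arrives one A-cycle later
        idle_b += max(0.0, arrival - b_free)     # B starved, waiting for work
        waiting += max(0.0, b_free - arrival)    # part queued, waiting for B
        b_free = max(arrival, b_free) + sample(times_b)
    return idle_b, waiting, n_parts * 60.0 / b_free   # minutes, minutes, pieces/hour

random.seed(1)
print(simulate(10_000))
```

Because a run of 10,000 assemblies costs essentially nothing, alternatives such as reversing the order of the operations can be compared simply by swapping the two distributions.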

FINITE ELEMENT ANALYSIS (FEA)

This technique is not thought of as being a reliability improvement method, yet it
can contribute significantly to its enhancement. Finite Element Analysis (FEA) is a
technique of modeling a complex structure into a collection of structural elements
that are interconnected at a given number of nodes. The model is subjected to known
loads, whereby the displacement of the structure can be determined through a set
of mathematical equations that account for the element interactions. The reader is
encouraged to read Buchanan (1994) and Cook (1995) for a more complete and
easy understanding of the theoretical aspects of FEA.
In commercial use, FEA is a computer-based procedure for analyzing a complex
structure by dividing it into a number of smaller, interconnected pieces (the finite elements), each with easily definable load and deflection characteristics.

TYPES OF FINITE ELEMENTS
The library of finite elements available in general purpose codes can be subdivided
into the following categories:

1. Point elements: An example of a point element is a lumped mass element or an element specifically created to represent a particular constraint or loading present at that point.
2. Line elements: Truss links, rods, beams, pipes, cables, rigid links, springs, and gaps are examples of line elements. This type of element is usually characterized by two grid points or nodes at each end.
3. Surface elements: Membranes, plates, shells, and certain types of fluid and thermal elements fall into this category. The surface elements can be triangular or quadrilateral, and thin or thick; accordingly they are characterized by a connectivity of three or more grid points or nodes.
4. Solid elements: Examples of solid elements include wedges, prisms, cubes, parallelepipeds, and three-dimensional fluid and thermal elements. Elements in this category are usually defined using six or more grid points or nodes.
5. Special purpose elements: Combinations of springs, gaps, dampers, electrical conductors, acoustic, fluid, magnetic, mass, superelement, crack tips, radiation links, etc., are included in this category.
For example, commonly used elements in the automotive industry (body engi-
neering) are:
Beams
Rigid links
Thin plates: triangular and quadrilateral
Solid elements
Springs
Gaps (contact or interface elements)

TYPES OF ANALYSES
There are many combinations of analyses one may perform with FEA as the driving tool. However, the two predominant types are nonlinear and dynamic. Using these types, one may focus on specific analyses of, for example, the following types of nonlinearities:
Geometric
Stress less than yield strength
Euler (elastic) buckling
Examples: quarter panel under jacking and towing; hood following front crash
Material
Stress greater than yield strength or material is nonlinear elastic
Plastic flow
Examples: seat belt pull; door intrusion beam bending
Combination of geometric and material
Stress is greater than yield strength and buckling takes place
Crippling
Examples: rails during crash; roof crush
The reader should also recognize that combinations of these types exist as well,
for example, linear/static, which is the easiest and most economical. Most of the FEA
applications involve this kind of analysis. Examples include joint stiffness and door
sag. Nonlinear/static is less frequently used. Examples include door intrusion beam,
roof crush, and seat belt pull. Linear/dynamic is rarely used. Examples include
windshield wipers or latch mechanism. Nonlinear/dynamic is the most complex and
most expensive. Examples include knee bolster crash, front crash, and rear crash.
Let us look at these combinations a little more closely:

Linear static analysis: This is the simplest form of analytical application and is used most frequently for a wide range of structures. The desired results are usually the stress contours, deformed geometry, strain energy distribution, unknown reaction forces, and design optimization. Typical examples are door sag simulation, margin/fit problems, joint stiffness evaluation, high stress location search for all components, spot weld forces, and thermal stresses.
Euler buckling analysis: This analysis is also relatively simple to perform and is used to calculate critical buckling loads. Caution should be exercised when performing this analysis because it produces analytical results that are not conservative. In other words, the critical buckling load thus calculated is usually higher than the actual load that would be determined through testing. A typical application is hood buckling.
Normal modes analysis: This is an extremely useful technique for determining the natural frequencies (eigenvalues) of components and also the corresponding eigenvectors, which represent the modes of deformation. Strictly speaking, this category does not fall under dynamic analysis since the problem is not time dependent. Typical examples include instrument panels, total vehicle or component NVH evaluation, door inner panel flutter, and steering column shake.
Nonlinear static analysis: In general, all nonlinear analysis requires advanced methodology and is not recommended for use by inexperienced analysts. Usually, a graduate degree or several graduate level courses in the theory of elasticity, plasticity, vibrations, and solid and fluid mechanics are required to understand nonlinear behavior. Nonlinear FEA tends to be as much an art as it is a science, and familiarity with the subject structure is essential. Typical examples are seat back distortion, door beam bending rigidity studies, underbody components such as front and rear rails and wheel housings, bumper design, and crush analysis of several components.
Nonlinear dynamic analysis: This FEA category is the most advanced. It involves very complex ideas and techniques and has become practicable only due to the availability of super-high-speed computers. This class of analysis involves all the complexities of nonlinear static analysis as well as additional problems involved with iterative time step selection and contact simulation at impact. Typical applications are related to crash evaluation and energy management.
PROCEDURES INVOLVED IN FEA
The procedures involved in FEA include:
1. Problem definition: Specification of concerns and expected results
2. Planning of analysis: Making decisions regarding the applicability of FEA, which code to use, and the size and the type of model to be constructed
3. Digitizing: The translation of a drawing into line data that is available to the modeler
4. Modeling: Creating the desired finite element model as planned (Many sophisticated tools are available such as the PDGS-FAST system, PATRAN, and so on.)
5. Input of data: Creating, editing, and storing a formatted data file that includes a description of the model geometry, material properties, constraints, applied loading, and desired output
6. Execution: Processing the input data in either the batch or the interactive mode through the finite element code residing on the computer system and receiving the output in the form of a printout and/or post-processor data
7. Interpretation of output: A study of the output to check the validity of the input parameters as well as the solution of the structural problem
8. Feasibility considerations: Utilizing the output to make intelligent technical decisions about the acceptability of the structural design and the scope for design enhancement
9. Parametric studies: Redesign using parametric variation (The easiest changes to study are those involving different gages, materials, constraints, and loading. Geometric changes require repetition of steps 3 through 8; the same is true about remodeling of the existing geometry.)
10. Design optimization: An iterative process involving the repetition of steps 3 through 9 to optimize the design from considerations of weight, cost, manufacturing feasibility, and durability
STEPS IN ANALYSIS PROCEDURE
The steps in the analysis procedure are:
1. Establish objective.
2. What type of analysis? What program?
Statics
Mechanical Loads
Forces
Displacements
Pressure
Temperatures

Heat Transfer
Conduction
Convection
1-D radiation
Dynamics
Mode frequency
Mechanical load
Transient (direct or reduced) linear
Sinusoidal
Shock spectra
Heat transfer direct transient
Special features
Nonlinear
Buckling
Large displacement
Elasticity
Creep
Friction, gaps
Substructuring
3. What is minimum portion of system or structure required?
Known forces or displacements at a point
Allows for separation
Structural symmetry
Isolation through test data
Cyclic symmetry
4. What are loading and boundary conditions?
Loading known
Loading can be calculated from simplistic analysis
Loading to be determined from test data
Support of excluded part of system established on modeled portion
Test data taken to establish stiffness of partial constraints
5. Determine model grid.
Choose element types.
Establish grid size to satisfy cost versus accuracy criterion.
6. Develop bulk data.
Establish coordinate systems.
Number node or order elements to minimize cost.
Develop node coordinates and element connectivity description.
Code load and B.C. description.
Check geometry description by plotting.

OVERVIEW OF FINITE ELEMENT ANALYSIS SOLUTION PROCEDURE
The process of FEA may be summarized with a flow chart of linear static structural
analysis in seven steps. The steps are:

1. Represent continuous structure as a collection of discrete elements con-
nected by node points.
2. Formulate element stiffness matrices from element properties, geometry,
and material.
3. Assemble all element stiffness matrices into global stiffness matrix.
4. Apply boundary conditions to constrain model (i.e., remove certain
degrees of freedom).
5. Apply loads to model (forces, moments, pressure, etc.).
6. Solve matrix equation {F} = [K]{u} for displacements.
7. Calculate element forces and stresses from displacement results.
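As a toy illustration of these steps (not an MSC/NASTRAN or ANSYS model), the sketch below runs the whole sequence for two springs in series: node 1 is fixed, a force is applied at node 3, and the stiffnesses and load are made-up numbers.

```python
import numpy as np

k1, k2, f3 = 100.0, 200.0, 10.0       # hypothetical stiffnesses (N/mm) and load (N)

# Steps 2-3: element stiffness matrices assembled into the global matrix
K = np.zeros((3, 3))
for k, (i, j) in [(k1, (0, 1)), (k2, (1, 2))]:
    K[np.ix_([i, j], [i, j])] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])

# Step 4: boundary condition, node 1 fixed, so drop its degree of freedom
free = [1, 2]
# Step 5: loads on the remaining degrees of freedom
F = np.array([0.0, f3])
# Step 6: solve {F} = [K]{u} for the unknown displacements
u = np.zeros(3)
u[free] = np.linalg.solve(K[np.ix_(free, free)], F)
# Step 7: recover element forces from the displacements
forces = [k1 * (u[1] - u[0]), k2 * (u[2] - u[1])]

print(u)        # [0.   0.1  0.15]
print(forces)   # both elements carry the applied 10 N
```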

INPUT TO THE FINITE ELEMENT MODEL
Once the user is satisfied with the model subdivision, the following classes of input data must be prepared to provide a detailed description of the finite element model
to typical FEA software such as MSC/NASTRAN (1998):

Geometry: This refers to the locations of grid points and the orientations of coordinate systems that will be used to record components of displacements and forces at grid points.
Element connectivities: This refers to identification numbers of the grid points to which each element is connected.
Element properties: Examples of element properties are the thickness of a surface element and the cross-sectional area of a line element. Each element type has a specific list of properties.
Material properties: Examples of material properties are Young's modulus, density, and thermal expansion coefficient. There are several material types available in MSC/NASTRAN. Each has a specific list of properties.
Constraints: Constraints are used to specify boundary conditions, symmetry conditions, and a variety of other useful relationships. Constraints are essential because an unconstrained structure is capable of free-body motion, which will cause the analysis to fail.
Loads and enforced displacements: Loads may be applied at grid points or within elements.

OUTPUTS FROM THE FINITE ELEMENT ANALYSIS
Once the data describing the finite element model have been assembled and submit-
ted to the computer, they will be processed by a software package such as MSC/NAS-
TRAN to produce information requested by the user. The classes of output data are:
1. Components of displacements at grid points
2. Element data recovery: stresses, strains, strain energy, and internal forces
and moments
3. Grid point data recovery: applied loads, forces of constraint, and forces
due to elements

It is the responsibility of the user to verify the accuracy of the finite element
analysis results. Some suggested checks to perform are:
Generate plots to visually verify the geometry.
Verify overall model response for loadings applied.
Check input loads with reaction forces.
Perform hand checks of results whenever possible.
Review and check results.
Plot deformation and stress contour.
Check equilibrium and reaction forces.
Check concentration region for fineness of grid (compare calculated stress
distribution with assumed element distribution).
Check peak deflection and/or stress for ballpark accuracy.

Special note: How a structure actually behaves under loading is determined by four characteristics: (a) the shape of the structure, (b) the location and type of constraints that hold the structure in place, (c) the loads applied to the structure (their magnitude, location, and direction), and (d) the characteristics of the materials that comprise the structure. For example, glass, steel, and rubber have significantly
different characteristics and different stiffnesses.

ANALYSIS OF REDESIGNS OF REFINED MODEL
At this stage, generally a correlation is attempted even though it is very difficult
and presents many potential problems. These problems are about 60% associated
with analysis and 40% associated with the actual testing. Remember that correlations
at this stage commonly (over 50 projects) may run from 5 to 30%.
Obviously, the focus should be on testing and test-related correlation with real
world usage. Items of concern should be:
Loads:
Isolation of single component of assembly
Hard to put assumed load in controlled lab test (linear loads causing
moments)
Strain gages:
Gage locations and orientation
Single leg gages versus rosettes
Improper gage lead hookup
Non-linearities:
Plasticity
Pin joint clearance
Bolted joints
In a typical analysis, examples of the related correlation issues, problems, and concerns are:
SL3151Ch04Frame Page 181 Thursday, September 12, 2002 6:11 PM
182 Six Sigma and Beyond
Mesh size (for localized stress concentration, isolate concentration region
and refine mesh)
Element type
Load distribution and B.C. isolation
Input error/bad data
Weld details
Common problems that may be encountered in the FEA are:
Part not to size
Misunderstanding or interpretation of results
Therefore, to make sure that the FEA is worth the effort, the following steps are
recommended:
1. Initially, take simple, well-isolated components, with simple well-defined
loads.
2. Do not expect miracles.
3. Use a joint test/analysis program. It can improve the capabilities of each
step and serves as a check on techniques.
4. Work together. This is the key. The test results supplement weakness of
analysis and vice versa.
SUMMARY: FINITE ELEMENT TECHNIQUE AS A DESIGN TOOL
Proven tool: approximate but very accurate if applied properly.
Fine enough grid to match true strain field.
Need to know loads accurately.
Are supports rigid? What spring stiffness?
Do not let FEA become just a research tool searching for an absolute
answer. Use in all stages of design cycle as relative comparison tool in
conjunction with test.
FEA, if nothing else, forces someone to examine a component design in detail.
A check on geometry itself.
The experimenter must think in detail about loads and interaction with the rest of the system.
EXCEL'S SOLVER
Yet another simple simulation tool is found in the Tools (add-in) category of the Excel software program. Its simplicity is astonishing, and the results may indeed be phenomenal. What is required is the transformation function. Once that is identified, the experimenter defines the constraints and the rest is computed by Solver.
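Because Solver's setup is specific to the Excel interface, the same idea can be sketched outside Excel; the snippet below uses scipy.optimize.minimize as a stand-in, with an invented transformation function and constraint, simply to show the ingredients one would enter into Solver (the objective cell, the changing cells, and the constraints).

```python
from scipy.optimize import minimize

# Hypothetical transformation (transfer) function of two design settings.
def transfer(x):
    x1, x2 = x
    return (x1 - 3.0) ** 2 + (x2 - 2.0) ** 2 + x1 * x2

bounds = [(0.0, 5.0), (0.0, 5.0)]                                      # limits on the settings
constraints = [{"type": "ineq", "fun": lambda x: 10.0 - x[0] - x[1]}]  # x1 + x2 <= 10

result = minimize(transfer, x0=[1.0, 1.0], bounds=bounds, constraints=constraints)
print(result.x, result.fun)    # settings that minimize the transfer function
```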
DESIGN OPTIMIZATION
In dealing with DFSS, a frequently encountered term is design optimization. What is design optimization? Design optimization is a technique that seeks to determine an
SL3151Ch04Frame Page 182 Thursday, September 12, 2002 6:11 PM
Simulation 183
optimum design. By optimum design, we mean one that meets all specified requirements but with a minimum expense of certain factors such as weight, surface area, volume, stress, cost, and so on. In other words, the optimum design is one that is as efficient and as effective as possible.
To calculate an optimum design, many methods can be followed. Here, however,
we focus on the ANSYS program, as defined by Moaveni (1999), which performs a series of analysis-evaluation-modification cycles. That is, an analysis of the initial design
is performed, the results are evaluated against specied design criteria, and the design
is modied as necessary. This process is repeated until all specied criteria are met.
Design optimization can be used to optimize virtually any aspect of the design:
dimensions (such as thickness), shape (such as fillet radii), placement of supports,
cost of fabrication, natural frequency, material property, and so on. Actually, any
ANSYS item that can be expressed in terms of a parameter can be subjected to
design optimization. One example of optimization is the design of an aluminum
pipe with cooling fins where the objective is to find the optimum diameter, shape, and spacing of the fins for maximum heat flow.
Before describing the procedure for design optimization, we will define some of the terminology: design variables, state variables, objective function, feasible and infeasible designs, loops, and design sets. We will start with a typical optimization
problem statement:
Find the minimum-weight design of a beam of rectangular cross section subject
to the following constraints:
Total stress σ should not exceed σmax [σ ≤ σmax]
Beam deflection δ should not exceed δmax [δ ≤ δmax]
Beam height h is limited to hmax [h ≤ hmax]
Design Variables (DVs) are independent quantities that can be varied in order
to achieve the optimum design. Upper and lower limits are specified on the design
variables to serve as constraints. In the above beam example, width and height
are obvious candidates for DVs, since they both cannot be zero or negative, so their
lower limit would be some value greater than zero.
State Variables (SVs) are quantities that constrain the design. They are also
known as behavioral constraints and are typically response quantities that are
functions of the design variables. Our beam example has two SVs: σ (the total stress) and δ (the beam deflection). You may define up to 100 SVs in an ANSYS design
optimization problem.
The Objective Function is the quantity that you are attempting to minimize or
maximize. It should be a function of the DVs, i.e., changing the values of the DVs
should change the value of the objective function. In our beam example, the total
weight of the beam could be the objective function (to be minimized). Only one
objective function may be defined in a design optimization problem.
A design is simply a set of design variable values. A feasible design is one that
satisfies all specified constraints, including constraints on the SVs as well as constraints
SL3151Ch04Frame Page 183 Thursday, September 12, 2002 6:11 PM
184 Six Sigma and Beyond
on the DVs. If even one of the constraints is not satisfied, the design is considered
infeasible.
An optimization loop (or simply loop) is one pass through the analysis-evalua-
tion-modication cycle. Each loop consists of the following steps:
1. Build the model with current values of DVs and analyze.
2. Evaluate the analysis results in terms of the SVs and objective function.
3. Modify the design by calculating new values of DVs. These new values are
calculated by ANSYS and are used to define the new version of the model.
At the end of each loop, new values of DVs, SVs, and the objective function
are available and are collectively referred to as a design set (or simply set).
HOW TO DO DESIGN OPTIMIZATION
Design optimization requires a thorough understanding of the concept of ANSYS
parameters, which are simply user-named variables to which you can assign numeric
values. The model must be dened in terms of parameters (which are usually the DVs),
and results data must be retrieved in terms of parameters (for SVs and the objective
function). The usual procedure for design optimization consists of six main steps:
1. Initialize the design variable parameters.
2. Build the model parametrically.
3. Obtain the solution.
4. Retrieve the results data parametrically and initialize the state variable
and objective function parameters.
5. Declare optimization variables and begin optimization.
6. Review and verify optimum results.
Details of these steps are beyond the scope of this volume. However, the reader
may find the information in Moaveni (1999).
UNDERSTANDING THE OPTIMIZATION ALGORITHM
Understanding the algorithm used by a computer program is always helpful, and
this is particularly true in the case of design optimization. Perhaps one of the most
important issues is the notion of approximation.
For simple mathematical functions that are continuously differentiable, minima
can be found by analytical techniques such as solving for points of zero slope. The
mathematical relationship between an arbitrary objective function and the DVs,
however, is generally not known, so the program has to establish the relationship
by curve fitting. This is done by calculating the objective function for several sets of DV values (i.e., for several designs) and performing a least squares fit among the
data points. The resulting curve (or surface) is called an approximation. Each opti-
mization loop generates a new data point, and the objective function is updated. It
is this approximation that is minimized, not the actual objective function.
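A bare-bones illustration of this curve-fitting idea is sketched below: a handful of invented design points (one DV and the objective evaluated at each) are fit with a quadratic by least squares, and it is the minimum of that fitted approximation, not of the true objective, that the loop would use.

```python
import numpy as np

h = np.array([40.0, 60.0, 80.0, 100.0, 120.0])           # hypothetical DV values tried so far
w = np.array([3100.0, 1900.0, 1500.0, 1450.0, 1600.0])   # objective evaluated at each design

a, b, c = np.polyfit(h, w, 2)     # least squares quadratic approximation w ~ a*h**2 + b*h + c
h_star = -b / (2.0 * a)           # minimum of the approximation (valid since a > 0 for this data)
print(h_star, a * h_star ** 2 + b * h_star + c)
```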
State variables are handled in the same manner. An approximation is generated
for each state variable and updated at the end of each loop. (Because approximations
are used for the objective function and SVs, the optimum design will be only as
good as the approximations.)
CONVERSION TO AN UNCONSTRAINED PROBLEM
State variables and limits on design variables are used to constrain the design and
make the optimization problem a constrained one. The ANSYS program converts
this problem to an unconstrained optimization problem because minimization tech-
niques for the latter are more efficient. The conversion is done by adding penalties
to the objective function approximation to account for the imposed constraints. You
can think of penalties as causing an upturn of the objective function approximation
at the constraints. The ANSYS program uses extended interior penalty functions.
(For more information on penalty functions see sources in the selected bibliography
for this chapter.)
The search for a minimum is then performed on the unconstrained objective
function approximation using the Sequential Unconstrained Minimization Technique
(SUMT), which is explained in most texts on engineering design and optimization.
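The sketch below illustrates the penalty idea on a one-variable toy problem. It is only an illustration of the concept; the extended interior penalty functions and the SUMT implementation used by ANSYS differ in detail, and the objective, limit, and penalty weights here are invented.

import numpy as np

def objective(x):
    return (x - 3.0) ** 2                 # toy objective approximation (unconstrained minimum at x = 3)

def g(x):
    return 2.5 - x                        # the SV limit written as g(x) >= 0, i.e., x <= 2.5

def penalized(x, r):
    return objective(x) + (1e6 if g(x) <= 0 else r / g(x))   # interior penalty "upturns" near the limit

# SUMT-style sequence: shrink the penalty weight r and re-minimize each time.
xs = np.linspace(0.0, 5.0, 100001)
for r in (1.0, 0.1, 0.01, 0.001):
    values = np.array([penalized(x, r) for x in xs])
    x_star = xs[values.argmin()]
    print(f"r = {r:<6} minimum of penalized objective at x = {x_star:.3f}")
# As r shrinks, the unconstrained minimizer approaches the constrained optimum at x = 2.5.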
SIMULATION AND DFSS
In summary, simulation is of value in connection with DFSS because:
Design problems are discovered sooner.
   Shortens development time
   Provides better overall quality
   Permits early optimization of the design
Build-and-test is supplemented by computer simulations.
   Permits lower testing budgets
   Shortens development time
   Permits evaluation of alternative designs
   Minimizes overdesign by evaluating early in the cycle
Therefore, with the aid of simulation we achieve:
   Less time spent designing
   Less time spent testing
   Less time spent changing
The result: better products, in less time, at a lower cost.
And that is what DFSS is all about.
REFERENCES
Buchanan, G.R., Schaum's Outline of Finite Element Analysis, McGraw-Hill Professional
Publishing, New York, 1994.
Cook, R., Finite Element Modeling for Stress Analysis, Wiley, New York, 1995.
Moaveni, S., Finite Element Analysis: Theory and Applications with ANSYS, Prentice Hall,
Upper Saddle River, NJ, 1999.
Schaeffer, H.G., MSC/NASTRAN Primer: Static and Normal Modes Analysis, MSC, New
York, 1998.
SELECTED BIBLIOGRAPHY
Adams, V. and Askenazi, A., Building Better Products with Finite Element Analysis, OnWord
Press, New York, 1998.
Belytschko, T., Liu, W.K., and Moran, B., Nonlinear Finite Elements for Continua and
Structures, Wiley, New York, 2000.
Hughes, T.J.R., The Finite Element Method: Linear Static and Dynamic Finite Element
Analysis, Dover Publications, New York, 2000.
Malkus, D.S. et al., Concepts and Applications of Finite Element Analysis, 4th ed., Wiley,
New York, 2001.
Rieger, M. and Steele, J., Basic Course in FEA Modeling, Machine Design, June 6, 1981, pp. 7-8.
Rieger, M. and Steele, J., Basic Course in FEA Modeling, Machine Design, July 9, 1981, pp. 8-10.
Rieger, M. and Steele, J., Advanced Techniques in FEA Modeling, Machine Design, July 23, 1981, pp. 7-12.
Shih, R., Introduction to Finite Element Analysis Using I-DEAS Master Series 7, Schroff
Development Corp. Publications, New York, 1999.
Zienkiewicz, O.C. and Taylor, R.L., Finite Element Method: Volume 1, The Basis, Butterworth-Heinemann, London, 2000.
Zienkiewicz, O.C. and Taylor, R.L., Finite Element Method: Volume 2, Solid Mechanics, Butterworth-Heinemann, London, 2000.
Zienkiewicz, O.C. and Taylor, R.L., Finite Element Method: Volume 3, Fluid Dynamics, Butterworth-Heinemann, London, 2000.
5
Design for Manufacturability/Assembly (DFM/DFA or DFMA)
When we talk about design for manufacturability/assembly (DFM/DFA or DFMA),
we describe a methodology that is concerned with reducing the cost of a product
through simplification of its design. In other words, we try to reduce the number of individual parts that must be assembled and, ultimately, to increase the ease with which these parts can be put together.
By focusing on these two items we are able to:
1. Design for a competitive advantage
2. Design for manufacturability, assembliability
3. Design for testability, serviceability, maintainability, quality, reliability,
work-in-process (wip), cost, profitability, and so on.
This, of course, brings us to the objectives of DFM/DFA, which are:
To maximize
a. Simplicity of design
b. Economy of materials, parts, and components
c. Economy of tooling/fixtures, process, and methods
d. Standardization
e. Assembliability
f. Testability
g. Serviceability
h. Integrity of product features
To minimize
a. Unique processes
b. Critical, precise processes
c. Material waste, or scrap due to process
d. Energy consumption
e. Generation of pollution, liquid or solid
f. Waste
g. Limited available materials, components, and parts
h. Limited available, proprietary, or long lead time equipment
i. Degree of ongoing product and production support


Therefore, one may describe the DFM/DFA process as a common-sense approach consistent with the old maxim, "Get it done right the first time." In DFM/DFA, we strive to get it done right the first time with the most practical and affordable methods in order to meet the customer's expectations in terms of time, process, costs, value, needs, and wants. This approach is quite different from the old way of doing business. Figure 5.1 shows the old and new ways of design.
So, in a formal way, we can say that design for manufacturing and assembly is a way of focusing on designing the product with manufacturability and assembliability in mind, to ensure that the product can be produced with an affordable manufacturing effort and cost and also, after the manufacturing process, to ensure that the originally designed product reliability can be maintained, if not enhanced. This approach may seem time-consuming and not value added, but if we consider the possible alternatives we can appreciate the benefit of any DFM/DFA initiative. For example, consider the following:
What good is the design, if nobody can produce it?
What good is the design, if nobody can produce it with an affordable
effort (in terms of manufacturing cost, scrap, rework, production
cycle/turn-around, wip, and so on)?

FIGURE 5.1 Trade-off relationships between program objectives (balance design). [Panel (a), old design: goals of reliability and better performance, with trade-offs against producibility, performance, and life cycle costs. Panel (b), new design: goal of a balanced design across reliability, producibility, performance, low acquisition cost, and low support cost.]


What good is the product, if nobody can afford it?
What good is the product, if we cannot market it in time?
What good is the product, if it does not sell?
What good is the product, if it is not profitable?
What good is it, if it does not work?
By doing a DFM/DFA, we are able to take into consideration many inputs with
the intent of optimizing the design in terms of the following characteristics:
Design/development lead time vs. marketing time
Customer needs/wants vs. field application/performance vs. engineering specifications
Production launch efforts
Manufacturing cost
Flexibility and obsolescence of process and equipment
Maintainability/serviceability of product
Profitability
Specifically, we are looking for the:
1. DFA to minimize total product cost by targeting:
   a. Part count (the major product cost driver)
   b. Assembly time
   c. Part cost
   d. Assembly process
2. DFM to minimize part cost by:
   a. Optimizing the manufacturing process
   b. Optimizing material selection
   c. Evaluating tooling and fabrication strategies
   d. Estimating tooling costs

BUSINESS EXPECTATIONS AND THE IMPACT FROM A SUCCESSFUL DFM/DFA
Perhaps one of the major reasons why we do a DFMA is that in the final analysis we expect tremendous results with a measurable impact on the organization. Typical expectations are:
Product development time improvement by 50-75%
Product design cost reduction by 25-50%
Product reliability improvement by 10-25%
Product field performance closer to customer needs/wants
Product production launch time reduction by 25-50%
Total manufacturing cost reduction by 25-75%
Reduction or even elimination of additional tooling/fixture cost


Reduction, if not total elimination, of engineering change notices by 75-99%
Increase in engineering and technical personnel's work morale, also letting them feel and assume ownership
Ability to be competitive, be profitable, be successful
The impact, of course, becomes obvious. The entire organization is impacted for the better: it becomes business focused. For example, marketing becomes focused on the customer; engineering becomes focused on design; and manufacturing becomes focused on process. Specifically, the impact may be in the following areas:
Product closer to what the customer expects
Reduction of time to market
Enhanced product reliability, not just from the original product design point of view but also from a manufacturing process point of view
Improved profit margins by reducing product cost
Improved operating efficiency by reducing work-in-process
Enhanced return on assets
Reduced technical personnel turnover rate by improving group and individual satisfaction with the job/work
Making the organization profitable

Traditional Approach
In the past, product design/development, manufacturing process design/development, and equipment selection/capability assessment were typically discrete activities: a sequential and discrete approach. That approach is shown in Figure 5.2.

New Way
In order to let the manufacturing process and equipment have a head start, all three activities of design, process, and equipment occur simultaneously: a simultaneous engineering approach. This is where DFMA can help. This process is shown in Figure 5.3.
The business strategy here becomes a pursuit to articulate the customer needs, wants, and expectations into a product/process engineering specification by asking a series of specific questions such as:
What is the voice of the customer (VOC)?
What regulations have to be met?
What is the relative importance of each requirement?
Which product characteristics impact the VOC?
Which process characteristics impact the VOC?
What price and profit margin impacts are there in meeting the VOC?
Are there delivery schedule impacts?


Any competition? Targeted competitor?
Continuing improvement opportunity?
Future cost reduction opportunity to meet future customer price reduction
demands?
Figure 5.4 shows the modern way of addressing these concerns. The arrows
between product and process indicate possible alternatives. For example, if we

FIGURE 5.2 Sequential approach. [Flowchart: product selection and development assessment, design/development of the manufacturing process, and equipment selection and capability assessment occur one after another; marketing specification and function confirmation, engineering product design, manufacturing process design, manufacturing production, quality inspection, and product to customer follow in sequence over time.]
FIGURE 5.3 Simultaneous approach. [Flowchart: product design/development, manufacturing process design/development assessment, and equipment design capability proceed in parallel over time.]


examine the producibility for a textile component, we could look at the following
material considerations:
Natural
Synthetic
Properties
Processes
Applications
On the other hand, if we were to evaluate the manufacturing process we might
want to examine:
Pattern layout
Cutting
Sewing assembly
Types
Processes
Characteristics

THE ESSENTIAL ELEMENTS FOR SUCCESSFUL DFM/DFA
The very minimum requirements for a successful DFMA are:
1. Form a charter that includes all key functions.
2. Establish the product plan.
3. Define the product performance requirement.
4. Develop a realistic, agreed-upon engineering specification.
5. Establish the product's character/features.
6. Define the product architectural structure.
7. Develop a realistic, detailed project schedule.
8. Manage the project schedule, performance, and results.
9. Make efforts to reduce costs.
10. Plan for continuing improvement.

FIGURE 5.4 Tomorrow's approach, if not today's. [Diagram: the voice of the customer drives product alternative(s) and process alternative(s), which feed manufacturing production and quality and the business decision (cost and investment).]


The details of some of these elements are outlined below:
Form a DFMA charter
With any charter there are two primary responsibilities: (a) to identify the
roles and (b) to identify the functions.
i. Roles
A. Charter members: designer, manufacturing engineer, material/component engineer, product engineer, reliability/quality engineer, and purchasing.
B. Team leader: the program manager is a good candidate, but not necessarily. Any one of the charter members can be an adequate team leader. Some companies/organizations assign an integrator to be the DFMA leader.
ii. Charter's functions
A. Determining the character of the product, to see what it is and
thus, what design and production methods are appropriate
B. Subjecting the product to a product function analysis, so that all
design decisions can be made with full knowledge of how the
item is supposed to work and so that all team members under-
stand it well enough to contribute optimally
C. Carrying out a design for producibility, usability, and maintain-
ability study to determine if these factors can be improved with-
out impairing functionality
D. Designing an assembly process appropriate to the product's particular character (This involves creating a suitable assembly
sequence, identifying subassemblies, control plan, and designing
each part so that its quality is compatible with the assembly/man-
ufacturing method.)
E. Designing a factory system that fully involves workers in the
production strategy, operates on adequate inventory, and is integrated with suppliers'/vendors' capabilities and manufacturing
processes
Establish the product's character/features
QFD approach
Value analysis
Effectiveness study on function and appearance/cosmetic
Product character risk assessment
Define product architectural structure
a. Functional block approach
b. Hardware approach
c. Software approach
d. Component approach
Develop a project schedule
a. Agreed to by all functions on:
Tasks
Objectives


Duration
Responsibility
b. Specific performance tests:
Function
Appearance
Durability
c. Use project management techniques.
d. Concentrate on the concept of getting it done right the first time, not only doing it right the first time.
e. Focus on the high-leverage items: get some encouraging news first.
f. Locate and prioritize the resources.
g. Management commitment.
h. Individual commitment.
Manage the DFMA project
Ensure regular and formal review of the status by charter members.
Regularly prepare and formalize executive reports; get feedback.
Ensure total team inputs and contributions, not only involvement.
Utilize proven tools/methodologies.
Make adjustments with team consensus.
Ensure adequate resources with proper priorities.
Control the progress of the project.

THE PRODUCT PLAN

It is imperative that the following considerations, all of which have a major impact on the manufacturing process, be discussed and resolved as early as possible in the design cycle:
1. Nature of the program: crash program, perfect design, or some other alternative
2. Product design itself
3. Production volume
4. Product life cycle
5. Funding
6. Cost of goods sold

Product Design

The focuses of marketing, engineering, manufacturing, and business/finance are
quite different, yet they all push for the same interest for the organization. Our task
then is to make sure that we balance out the different interests and priorities among
the four functions of an organization. How do we do that?
To make a long story short: How do we decide between a crash program and a perfect product? When we talk about a perfect product, we mean it from a definitional perspective. There is no such thing as a perfect product, but because of the operating definition we choose, we can indeed call something a perfect product.

Criteria for Decision between Crash Program and Perfect Product

There are three issues here:
1. Opportunity cost
2. Development risk
3. Manufacturing risk
For a short life cycle product or a highly innovative product in a competitive environment that changes rapidly, a company must react quickly to each new product that enters the market. Getting the product to market fast is the name of the game. However, being fast to the market is no advantage if the company chooses inadequate technology, creates a product that cannot meet the potential customer's wants/needs/expectations, designs a product that cannot be manufactured, or must set the price so high that nobody can afford the product.
The opportunity cost of missing a fast-moving market window, the risk of entering a market with the wrong product, and the risk of introducing a product nobody can produce pull managers in opposite directions. So, the choice of a crash program (CP) or a perfect product (PP) approach is a necessary step prior to any product design taking place.
Two examples will make the point of a CP and a PP:

Case #1: Crash Program
Company: IBM
Product: Personal computer
Environment: Forecasted annual growth rate of 60%. Competitors, i.e., Apple and Tandy, are controlling market developments and are beginning to cut into IBM's traditional office market.
Analysis: Opportunity cost is high. Development cost is low ($10 million compared to IBM's equity value of $18 billion). The technology of design and process is stable and internally available.
Decision: Crash program approach: develop, design, manufacture, and market the product within 2 years.
Approach details: Deviate from the standard eight-phase design procedure. Give the development team complete freedom in product planning; keep interference to a minimum; and allow the use of a streamlined, relatively informal management system. Use a so-called zero procedure approach, focusing on development speed rather than risk reduction of product, manufacturing, and so on.
Results: Introduce the product within 2 years. Customer acceptance is good. Cost overrun by 15%. Cost of goods sold is about 5% unfavorable to the original estimate. Market share is questionable. Long term effects ???
(Does this sound familiar? Quite a few organizations take this approach and of course, they fail.)


Case #2: Perfect Product Design
Company: Boeing
Product: Boeing 727 replacement aircraft (767)
Environment: Replacement within ten years is inevitable (it may be speeded up to 5 years). Competitor, i.e., Airbus, has started its design. A new mid-range aircraft may take the 727 replacement market away due to the 727's operating/fuel inefficiency, comfortability, and Environmental Protection Agency (EPA) restraints.
Analysis: Opportunity cost is high. (There is a need for the 200-300 seat market; the 727 is becoming obsolete.) Development cost is high (an estimated $1.5 billion compared to entire company equity of $1.4 billion). Development and manufacturing risk is high. Technology and customer preferences are predictable but not yet crystallized. (Should it have two engines or three? Should its cockpit allow for two people or three? Cruise range? Fuel consumption? Pricing?)
Decision: Perfect product design approach. Complete the development of all new technologies of design and manufacturing processes in the early stages of research and development (R and D). Test everything in sight, and move the product to launch only when success is nearly guaranteed. Eight-year design lead time.
Approach details: Form an R and D team of 400 engineers/managers that includes designers, manufacturing engineers, quality, purchasing, and marketing. (The team member number goes up to 1000 right before go-ahead.) Apply concurrent engineering and the DFMA process fully in the product R and D stage.
Results: Introduce the 767 on schedule (which compares to the Airbus 310, eight months behind schedule). Although Boeing had missed the 300-350 seat market and lost some of the 727 replacement market to the Airbus 300, Boeing got to keep the 200-300 seat market with a successful 767. Development costs were within budget, and the cost of goods sold was 4% favorable to the original estimates. No recall record so far. Long term effects likely good.
Most likely, you are among the in-betweens. The other approaches (see Figure 5.5) include:
Quantum leap: parallel program
Acquisition
Joint venture
Leapfrog (Purchase a facility to maintain and manufacture current tech-
nology/design. Focus R and D on next generation technology/design.)

The Product Plan: Product Design Itself
Product design has dictated (whether one wants to admit it or not) the future of the product. About 95% of the material costs and 85% of the design/labor and
overhead costs are controlled by the design itself. Once the design is complete, about
85% of the manufacturing process has been locked in.
Design-related factors affecting the manufacturing process include:
Product size/weight
Reliability/quality requirement
Architectural structure
Fastener/joint methods
Parts/components/materials
Size, shape, and weight of parts/components
Appearance/cosmetic requirement
Other factors affecting the manufacturing process include:
Floor space
Material ow and process ow
Power, compressed air, a/c and heating, and facility
Quality plan
Manual operation mandatory
Mechanized operation or automation operation mandatory
System interfacing requirement
Manufacturing process concepts/philosophy: cpf (continuous production flow) vs. in-line vs. batch vs. cellular approaches

FIGURE 5.5 The product development map/guide. [Diagram: opportunity cost plotted against development and manufacturing risk, with the crash program, acquisition, joint venture, leapfrog/exit, and step-by-step design approaches positioned along these axes.]


Management commitment
Production volume
Volume requirements have the major influence on the choice of the man-
ufacturing process.
Product life cycle
As with volume requirements, product life has a significant influence on
the manufacturing process.
Funding
Since most mechanization and automation are heavily capitalized, fund-
ing plays a major role in determining the product plan, which has a sig-
nificant influence on the manufacturing process.
Cost of goods sold
What is affordable capital/tooling/fixture amortization?
What is the targeted cost of goods sold?

Define Product Performance Requirement

Minimum requirements are the collection and understanding of the following infor-
mation:
Customer wants vs. customer needs vs. customer expectations
Field condition and environment
Performance standards
Durability
The result of this understanding will facilitate the development of realistic and agreed-upon specification(s). Some of the specific items that will guide realistic specifications are:
Engineering interpretation of customer needs
Correlation between engineering specification and product specification
Reliability study in terms of MTBF
Manufacturing process reliability assessment in terms of maintaining the originally designed product standard
Manufacturing cost assessment
Option structure
Control plan
Qualification plan

AVAILABLE TOOLS AND METHODS FOR DFMA

A virtually infinite number of tools and methodologies may be used to accomplish the goal of a DFMA program. However, all of them fall into two categories: (a) approach alternatives and (b) mechanics. Some of the most important ones are listed below:
Approach alternatives:


1. Ongoing program/project manager approach
2. Manufacturing engineering sign-off approach
3. Design engineering use simulation software package approach
4. Simultaneous engineering approach
5. Concurrent engineering approach
6. Integrator approach
Mechanics:
1. Quality function deployment (QFD)
2. Design of experiments (DOE)
3. Potential failure mode and effects analysis (FMEA)
4. Value engineering and value analysis (VE/VA)
5. Group technology (GT)
6. Geometric dimensioning and tolerancing (GD&T)
7. Dimensional assembly analysis (DAA)
8. Process capability study (Cpk, Ppk, Cp, Cr, ppm indices; see the sketch after this list)
9. Just-in-time (JIT)
10. Qualitative assembly analysis (QAA)
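As a reminder of what item 8 above computes, here is a short Python sketch of the two most commonly quoted short-term capability indices; the measurement values and specification limits are invented for illustration only.

import statistics

def cp_cpk(data, lsl, usl):
    """Cp = (USL - LSL) / (6 * sigma);  Cpk = min(USL - mean, mean - LSL) / (3 * sigma)."""
    mean = statistics.mean(data)
    sigma = statistics.stdev(data)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)
    return cp, cpk

# Made-up measurements of a machined diameter with specification limits 9.95-10.05 mm
sample = [10.01, 9.99, 10.02, 10.00, 9.98, 10.01, 10.00, 10.02, 9.99, 10.01]
print(cp_cpk(sample, 9.95, 10.05))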

COOKBOOKS FOR DFM/DFA

There are no cookbooks for DFMA. However, three organized instruction manuals may come close to serving as guidelines for most engineers. They are:
1. Mitsubishi method
2. U-MASS method
3. MIL-HDBK-727 design guidance for producibility
All of the above methods utilize the principles of Taylor's motion economy, which have been proven to be quite helpful, especially in the DFA area. We identify some of these principles here that may be profitably applied to shop and office work alike. Although not all are applicable to every operation, they do form a basis or a code for improving efficiency and reducing fatigue in manual work:
1. Smooth continuous curved motions of the hands are preferable to straight-
line motions involving sudden and sharp changes in direction.
2. Ballistic movements are faster, easier, and more accurate than restricted
(fixation) or controlled movements.
3. Work should be arranged to permit easy and natural rhythm wherever
possible.

Use of the Human Body

4. The two hands should begin as well as complete their motions at the same
time.


5. The two hands should not be idle at the same time except during rest
periods.
6. Motions of the arms should be made in opposite and symmetrical direc-
tions and should be made simultaneously.
7. Hand and body motions should be confined to the lowest classification
with which it is possible to perform the work satisfactorily.
8. Momentum should be employed to assist the worker wherever possible,
and it should be reduced to a minimum if it must be overcome by muscular
effort.
9. Eye fixations should be as few and as close together as possible.

Arrangement of the Work Place

10. There should be a definite and fixed place for all tools and materials.
11. Tools, materials, and controls should be located close to the point of use.
12. Gravity feed bins and containers should be used to deliver material close
to the point of use.
13. Drop deliveries should be used wherever possible.
14. Materials and tools should be located to permit the best sequence of
motions.
15. Provisions should be made for adequate conditions for seeing. Good
illumination is the first requirement for satisfactory visual perception.
16. The height of the work place and the chair should preferably be arranged
so that alternate sitting and standing at work are easily possible.
17. A chair of the type and height to permit good posture should be provided
for every worker.

Design of Tools and Equipment

18. The hands should be relieved of all work that can be done more advan-
tageously by a jig, a fixture, or a foot-operated device.
19. Two or more tools should be combined wherever possible.
20. Tools and materials should be pre-positioned whenever possible.
21. Where each finger performs some specific movement, such as in typewriting, the load should be distributed in accordance with the inherent capacities of the fingers.
22. Levers, crossbars, and hand wheels should be located in such positions
that the operator can manipulate them with the least change in body
position and with the greatest mechanical advantage.

MITSUBISHI METHOD

The Mitsubishi method was developed and fine-tuned by Japanese engineers in Mitsubishi's Kobe shipyard. The primary principle is the combination of QFD and Taylor's motion economy. The Mitsubishi method is very popular in Japan's heavy industries, i.e., the shipbuilding, steel, and heavy equipment industries. There is also evidence of some application of this method in Japan's automotive, motorcycle, and office equipment industries. More effort is needed to promote and share these techniques, and some effort is needed to fine-tune the Mitsubishi method and make it practical to fit U.S. manufacturing companies' cultures and traditions.
The process is based on the following principles:
The Mitsubishi method focuses on the product design's reflection of the customer's desires and tastes. Thus, marketing people, design engineers, and manufacturing staff must work together from the time a product is first conceived.
The Mitsubishi method is a kind of conceptual map that provides the means for inter-functional planning and communications. People with different problems and responsibilities can thrash out design priorities while referring to patterns of evidence on the house's grid.
The method involves 12 steps for each part in design/manufacturing, as follows:
1. Customer attributes (CA) analysis, also called voice of customer (VOC) evaluation, is performed.
2. Relative-importance weights of CA are determined.
3. Data is collected on customer evaluations of competitive products.
4. Engineering characteristics tell how to change the product.
5. Relationship matrix shows how engineering decisions affect customer
perceptions.
6. Objective measures evaluate competitive products.
7. Roof matrix facilitates engineering creativity.
8. QFD is finalized.
9. Parts development is based on manufacturing process planning and
handling planning (i.e., start the basic manufacturing process with
materials in liquid state, feeding raw materials with elevator feeder,
handling the wip with center board hopper, and continuing the forth-
coming sequential operation with carousel assembly machine).
10. Manufacturing process and handling operation are based on the prin-
ciples of motion economy.
11. Process planning is guided by parts/component characteristics, which
are based on engineering characteristics, and the latter are based on
customer attributes (compare to step #9).
12. Integrator coordinates/controls the project.
Analysis procedure.
Continuing improvement:
Voice of customer, design alternative, and process alternative continue to inter-
face with each other. It is a dynamic situation: never-ending improvement.
Software package.
Table 5.1 shows an example of customer attributes (CAs) and bundles of CAs for a car door. An example of relative importance weights of customer attributes is shown in Table 5.2. An example of customer evaluations of competitive products is shown in Table 5.3.
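To show how the relative importance weights of Table 5.2 and the customer perception ratings of Table 5.3 can be combined, here is a small Python sketch that computes a weighted score per product; all ratings below are invented for illustration.

# Relative importance weights (as in Table 5.2) and perception ratings, 1 = worst to 5 = best (as in Table 5.3)
weights = {"Easy to close from outside": 7, "Stays open on a hill": 5,
           "Does not leak in rain": 3, "No road noise": 2}

perceptions = {   # attribute -> {product: rating}
    "Easy to close from outside": {"Ours": 3, "Competitor A": 4, "Competitor B": 2},
    "Stays open on a hill":       {"Ours": 4, "Competitor A": 3, "Competitor B": 3},
    "Does not leak in rain":      {"Ours": 5, "Competitor A": 4, "Competitor B": 4},
    "No road noise":              {"Ours": 2, "Competitor A": 3, "Competitor B": 3},
}

for product in ("Ours", "Competitor A", "Competitor B"):
    score = sum(weights[a] * perceptions[a][product] for a in weights)
    print(f"{product}: weighted score = {score}")

A higher weighted score flags the product that customers rate better on the attributes that matter most.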

U-MASS METHOD

The U-MASS method is named for the University of Massachusetts, where it was
developed by two professors, Geoffrey Boothroyd and Peter Dewhurst, and their
graduate students. It is the most common DFM/DFA approach used in the U.S. The

TABLE 5.1
Customer Attributes for a Car Door

Primary: Easy to open and close
   Secondary: Easy to close from outside; stays open on a hill; easy to open from outside; does not kick back; easy to close from inside; easy to open from inside
Primary: Good operation and use
   Secondary: Isolation
      Tertiary: Does not leak in rain; no road noise; does not leak in car wash; no wind noise; does not drip water or snow when open; does not rattle
   Secondary: Arm rest
      Tertiary: Soft, comfortable; in right position
   Secondary: Interior trim
      Tertiary: Material will not fade; attractive (non-plastic look)
Primary: Good appearance
   Secondary: Clean
      Tertiary: Easy to clean; no grease from door
   Secondary: Fit
      Tertiary: Uniform gaps between matching panels

TABLE 5.2
Relative Importance Weights

Bundle: Easy to open and close door
   Easy to close from outside: 7
   Stays open on a hill: 5
Bundle: Isolation
   Does not leak in rain: 3
   No road noise: 2
(A complete list totals 100%.)


primary principle is the conventional motion and time study, while keeping in mind
the component counts and motion economy.
This method is heavily promoted in academic communities or institute-related
manufacturing companies located in the New England area, such as Digital Equipment
Corp. and Westinghouse Electric Company. Other companies are using it as well, such
as Ford Motor Co., DaimlerChrysler, and many others. Its appeal seems to be the
availability of the software that may be purchased from Boothroyd and Dewhurst. (Some
practitioners find the software very time-consuming in design efficiency calculation and believe that more work is needed to fine-tune its efficiency, as well as make it more user friendly.) The process is based on the following principles:
1. Determine the theoretical minimum part count by applying minimum part
criteria.
2. Estimate actual assembly time using DFA database.
3. Determine DFA Index by comparing actual assembly time with theoretical
minimum assembly time.
4. Identify assembly difficulties and candidates for elimination that may lead
to manufacturing and quality problems.
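A commonly quoted form of the DFA (design efficiency) index multiplies the theoretical minimum part count by a nominal ideal assembly time per part (often taken as roughly 3 seconds) and divides by the estimated actual assembly time. The sketch below uses invented numbers and is an illustration of the ratio, not the Boothroyd and Dewhurst software itself.

def dfa_index(theoretical_min_parts, actual_assembly_time_s, ideal_time_per_part_s=3.0):
    """DFA index = (theoretical minimum part count * ideal time per part) / actual assembly time."""
    return (theoretical_min_parts * ideal_time_per_part_s) / actual_assembly_time_s

# Made-up example: 5 theoretically necessary parts, 60 s of estimated assembly time
print(f"DFA index = {dfa_index(5, 60.0):.2f}")   # 0.25, i.e., 25% design efficiency

An index well below 1 flags a design with many parts or difficult handling and insertion, i.e., a candidate for the elimination exercise in step 4.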

MIL-HDBK-727

This method was developed by the U.S. Army Materiel Command and published by the Naval Publications and Forms Center. The first edition was published in 1971, and the latest revision was published in April 1984. The primary principle is Taylor's motion economy plus some other design tools, i.e., DOE. This method is not too popular: not many people know about it, and it is not used very much outside of the military. Some updates and revisions are needed to make it more practical for general manufacturing companies.

TABLE 5.3
Customers' Evaluations of Competitive Products

Bundle: Easy to open and close door
   Easy to close from outside (relative importance 7): customer perception rated 1 (worst) to 5 (best)
   Stays open on a hill (relative importance 5): customer perception rated 1 (worst) to 5 (best)
Bundle: Isolation
   Does not leak in rain (relative importance 3): customer perception rated 1 (worst) to 5 (best)
   No road noise (relative importance 2): customer perception rated 1 (worst) to 5 (best)
(A complete list totals 100%. Comparison is based on individual attributes, rating our car door against Competitor A's, Competitor B's, and so on.)

FUNDAMENTAL DESIGN GUIDANCE

The core of the DFM/DFA process is to make sure that the design and assembly
are planned in terms of:
1. Simplicity (as opposed to complexity)
2. Standardization (commonality)
3. Flexibility
4. Capability
5. Suitability
6. Carryover
So, a designer designing a product should be cognizant of the effects on product
design. Some of these are:
Materials selection is based on the targeted manufacturing process.
The forms/shapes of parts are based on the targeted transportation, han-
dling, and parts feeding system.
Field environment can affect the production durability, which contributes
variation to the components/parts as well as the manufacturing process.
Shelf life.
Operating life.
Product MTBF and MTBR.
In the development of the primary design, consideration must be given to whether to start with a basic process or to start with a secondary process using purchased raw or semi-raw materials. If the decision is to start with a basic process, then the next question will be: what kind of materials to start with? There are three options:
1. Start with materials in liquid state, i.e., casting.
2. Start with materials in plastic state, i.e., forging.
3. Start with materials in solid state, i.e., roll forming (sheet), extrusion (rod,
sheet), electroforming (powder), automatic screw machine work (rod).
If a secondary process is needed, either as a sequential operation of a basic
process or a fresh starting point, consideration must be given to the selection of the
most favorable forming and sizing operations. A number of factors relating to a
given design that need to be considered include:
1. The shape desired
2. The characteristics of the materials
3. The tolerance required
4. The surface finish
5. The quantity to be produced
6. The average run size
7. The cost


The focus, then, of a product design is to:
1. Minimize parts/components: The fewer parts/components and the fewer manufacturing/assembly operations, the better, i.e.,
   Combine mating parts, unless isolation is needed.
   Eliminate screws and loose pieces. Replace screws with snap-on parts or rivets, if practical. If screws are a necessary evil, try to make them all the same type and size.
   Do not use a screw to locate. Remember that a screw is a fastener.
2. Use common/popular components/parts: Off-the-shelf components/parts usually are user friendly and less expensive. Tooling/setup charges, not to mention the pilot-try headache, can also be avoided, i.e.,
   Use fasteners with common/popular/standard lengths and diameters.
   Use common values of resistors, capacitors, diodes, etc.
   Use standard color chips of paints and coatings, if possible.
3. Design the parts to be symmetrical: If you must use customized unique parts, try to design the parts to be symmetrical, and use a jigless assembly method, if at all possible, i.e.,
   Avoid internal orientations.
   Design an external accentuated locating feature, if the part cannot be internally symmetrical.
4. Design the parts to be self-aligned, self-locating, and self-locking, i.e.,
Design locating pins and small snap protrusions on mating parts.
Chamfers and tapers.
Use mechanical entrapments and snap-on approach.
Connect necessary wires/harnesses directly and use locking connectors.
Make sure that parts are easy to grip.
Avoid flexible parts: the more rigid the part, the more easily it is handled and assembled.
Avoid cables, if practical.
Avoid complicated fastening processes, if practical.
(Special note: If screws must be used, remember these rules:
Shank-to-head ratio: l ≥ 1.5 (l ≥ 1.8 if tube feed)
Head design
Thread consideration:
Tapped holes?
Thread cutting screws?
Thread forming screws?
Quality screws)
5. Design for simple or no adjustment at all:
   Remember, adjustment is a non-value-added operation. If adjustment is necessary, it should be at most a minimal, one-hand operation.
6. Modularize sub-assembly design:
   Modularize sub-assemblies. Assemble and test them prior to final assembly.


THE MANUFACTURING PROCESS

Figure 5.6 shows a schematic of a manufacturing system. There are four categories
of manufacturing processes. They are:
1. Fabrication process, which can be further categorized as a basic process, a secondary process, or a finishing process. Typical types are:
   Single station
   Continuous production flow
   Pace production line
   Manufacturing cell approach
2. Assembly process, which can be further categorized as manual assembly, mechanical assembly, automatic assembly, or computer-aided assembly. Typical types are:
   Continuous transfer
   Intermittent transfer
   Indexing mechanisms
   Operator-paced free-transfer machine
3. Inspection or quality control process
   Inspection check point(s)
4. Material handling process
   Conveyors
   Tractors

FIGURE 5.6 Manufacturing system schematic. [Diagram: inputs (design drawings; specifications and standards; requirements; materials) feed the activities (manufacturing, controlling, planning, scheduling), which produce the output (products), subject to constraints (personnel policies, quality control/assurance, purchasing).]


Fork lifts
Parts/component feeding system:
Vibratory bowl feeder
Reciprocating tube hopper feeder
Centerboard hopper feeder
Reciprocating fork hopper feeder
External gate hopper feeder
Rotary disk feeder
Centrifugal hopper feeder
Revolving hook hopper feeder
Stationary hook hopper feeder
Bladed wheel hopper feeder
Tumbling barrel hopper feeder
Rotary centerboard hopper feeder
Magnetic disk feeder
Elevating hopper feeder
Magnetic elevating hopper feeder
Approaches to manufacturing processes include the job shop approach, the
assembly line approach, and the one in, one out approach. Details of these processes
are as follows:
Single station manufacturing process: job shop approach
Definition: Single fixture with one or more operations performed
Advantages:
   Capital investment: low
   Line balance: not needed
   Interference with other operations (downtime): minimum, if any
   Flexibility: easy to expand or rearrange
   Employment fulfillment: high
Disadvantages:
   Multiple tooling/fixture investment: high
   Material handling: high
   Material flow: easy to congest at in/out
   Operation cycle time: long
   Operator skills: moderate
Continuous production flow manufacturing process: assembly line approach
Definition: Continuous, sequential-motion assembly/manufacturing approach
Advantages:
   Work-in-process: low
   Manufacturing/assembly cycle time: low
   Material handling: very low, if not eliminated
   Material flow: good
   Operator skill/training: only in specialized areas


Disadvantages:
   Capital investment: high
   Preventative maintenance and corrective maintenance: absolute necessity (if one part breaks down, the entire line is down)
   Engineering, technician, and flow disciplines: absolute necessity
   Flexibility: low
   Production changeover: complicated
Pace production line: one in, one out
Definition: Same cycle time at all work stations, and likely all work pieces transfer at the same time
Advantages:
   Work-in-process: very low and can be calculated (see the sketch after this list)
   Material handling: automatic
   Material flow: good
   Productivity: best
Disadvantages:
   Capital investment: high
   Preventative maintenance and corrective maintenance: absolute necessity (if one part breaks down, the entire line is down)
   Engineering, technician, and flow disciplines: absolute necessity
   Flexibility: very low
   Production changeover: difficult
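One way the work-in-process of a paced line can be calculated is with the steady-state relation WIP = throughput x time in system (Little's law); on a paced line holding one piece per station this reduces to the station count. A small sketch with invented numbers:

def paced_line_wip(stations, cycle_time_s):
    """On a paced line each station holds one piece, so WIP equals the station count;
    equivalently (Little's law), WIP = throughput * time in system."""
    throughput_per_s = 1.0 / cycle_time_s          # one unit leaves every cycle
    time_in_system_s = stations * cycle_time_s     # each piece visits every station once
    return throughput_per_s * time_in_system_s     # = stations

print(paced_line_wip(stations=12, cycle_time_s=45))   # 12 pieces of WIP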
MISTAKE PROOFING
DEFINITION
Mistake proofing by definition is a process improvement system that prevents personal injury, promotes job safety, prevents faulty products, and prevents machine damage. It is also known as the Shingo method, poka-yoke, error proofing, fail-safe design, and by many other names.
THE STRATEGY
Establish a team approach to mistake proof systems that will focus on both internal
and external customer concerns with the intention of maximizing value. This will
include quality indicators such as on-line inspection and probe studies.
The strategy involves:
Concentrating on the things that can be changed rather than on the things
that are perceived as having to be changed to improve process performance
Developing the training required to prepare team members
Involving all the appropriate people in the mistake proof systems process
Tracking quality improvements using in-plant and external data collection
systems (before/after data)
Developing a core team to administer the mistake proof systems process
This core team will be responsible for tracking the status of the mistake
proof systems throughout the implementation stages.
Creating a communication system for keeping plant management, local
union committee, and the joint quality committee informed of all
progress as applicable
Developing a process for sharing the information with all other depart-
ments and/or plants as applicable
Establishing the mission statement for each team and objectives that will
identify the philosophy of mistake proof systems as a means to improve
quality
A typical mission statement may read: "To protect our customers by developing mistake proofing systems that will detect or eliminate defects while continuing to pursue variation reduction within the process."
Developing timing for completion of each phase of the process
Establishing cross-functional team involvement with your customer(s)
Typical objectives may be to:
Become more aware of quality issues that affect our customer
Focus our efforts on eliminating these quality issues from the production
process
Expose the conditions that cause mistakes
Understand source investigation and recognize its role in preventing
defects
Understand the concepts and principles that drive mistake prevention
Recognize the three functional levels of mistake proofing systems
Be knowledgeable of the relationships between mistake proof system
devices and defects
Recognize the key mistake proof system devices
Share the mistake proof system knowledge with all other facilities within
the organization
DEFECTS
Many things can and often do go wrong in our ever-changing and increasingly
complex work environment. Opportunities for mistakes are plentiful and often lead
to defective products. Defects are not only wasteful but result in customer dissatis-
faction if not detected before shipment.
The philosophy behind mistake proof systems suggests that if we are going to
be competitive and remain competitive in a world market we cannot accept any
number of defects as satisfactory.
In essence, not even one defect can be tolerated. Mistake proof systems are a
simple method for making this philosophy become a daily practice. Simple concepts
and methods are used to accomplish this objective.
Humans tend to be forgetful, and as a result, we make mistakes. In a system
where blame is practiced and people are held accountable for their mistakes and
mistakes within the process, we discourage the worker and lower the morale of the
individual, but the problem continues and remains unsolved.
MISTAKE PROOF SYSTEM IS A TECHNIQUE FOR AVOIDING ERRORS
IN THE WORKPLACE
The concept of error proof systems has been in existence for a long time, only we have not attempted to turn it into a formalized process. It has often been referred to as idiot proofing, goof proofing, fool proofing, and so on. These terms often have a negative connotation that appears to attack the intelligence of the individual involved and therefore are not used in today's work environment. For this reason we have selected the term "mistake proof system." The idea behind a mistake proof system is to reduce the opportunity for human error by taking over tasks that are repetitive or actions that depend solely upon memory or attention. With this approach, we allow the worker to maintain dignity and self-esteem without the negative connotation that the individual is an idiot, goof, or fool.
TYPES OF HUMAN MISTAKES
Forgetfulness
There are times when we forget things, especially when we are not fully concen-
trating or focusing. An example that can result in serious consequences is the failure
to lock out a piece of equipment or machine we are working on. To preclude this,
precautionary measures can be taken: post lock out instructions at every piece of
equipment and/or machine; have an ongoing program to continuously alert operators
of the danger.
Mistakes of Misunderstanding
Jumping to conclusions before we are familiar with the situation often leads to
mistakes. For example, visual aids are often prepared by engineers who are thor-
oughly familiar with the operation or process. Since the aid is completely clear from
their perspective, they may make the assumption (and often do) that the operator
fully understands as well. This may not be true. To preclude this, we may test this
hypothesis before we create an aid; provide training/education; standardize work
methods and procedures.
Identification Mistakes
Situations are often misjudged because we view them too quickly or from too far
away to clearly see them. One example of this type of mistake is misreading the
identification code on a component of a piece of equipment and replacing that
component with the wrong part. To prevent these errors, we might improve legibility
of the data/information; provide training; improve the environment (lighting); reduce
boredom of the job, thus increasing vigilance and attentiveness.
Amateur Errors
Lack of experience often leads to mistakes. Newly hired workers will not know the
sequence of operations to perform their tasks and often, due to inadequate training,
will perform those tasks incorrectly. To prevent amateur errors, provide proper
training; utilize skill building techniques prior to job assignment; use work stan-
dardization.
Willful Mistakes
Willful errors result when we choose to ignore the rules. One example of this type
of error is placing a rack of material outside the lines painted on the floor that clearly
designate the proper location. The results can be damage to the vehicle or the material
or perhaps an unsafe work condition. To prevent this situation, provide basic edu-
cation and/or training; require strict adherence to the rules.
Inadvertent Mistakes
Sometimes we make mistakes without even being aware of them. For example, a
wrong part might be installed because the operator was daydreaming. To minimize
this, we may standardize the work, through discipline if necessary.
Slowness Mistakes
When our actions are slowed by delays in judgment, mistakes are often the result.
For example, an operator unfamiliar with the operation of a fork lift might pull the
wrong lever and drop the load. Methods to prevent this might be: skill building;
work standardization.
Lack of Standards Mistakes
Mistakes will occur when there is a lack of suitable work standards or when workers
do not understand instructions. For example, two inspectors performing the same
inspection may have different views on what constitutes a reject. To prevent this,
develop operational definitions of what the product is expected to be that are clearly
understood by all; provide proper training and education.
Surprise Mistakes
When the function or operation of a piece of equipment suddenly changes without
warning, mistakes may occur. For example, power tools that are used to supply
specific torque to a fastener will malfunction if an adequate oil supply is not
maintained in the reservoir. Errors such as these can often be prevented by work
standardization; having a total productive maintenance system in place.
Intentional Mistakes
Mistakes are sometimes made deliberately by some people. These fall in the category
of sabotage. Disciplinary measures and basic education are the only deterrents to
these types of mistakes.
There are many reasons for mistakes to happen. However, almost all of these
can be prevented if we diligently expend the time and effort to identify the basic
conditions that allow them to occur, such as:
When they happen
Why they happen
and then determine what steps are needed to permanently prevent these mistakes from recurring.
The mistake proof system approach and the methods used give you an oppor-
tunity to prevent mistakes and errors from occurring.
DEFECTS AND ERRORS
Mistakes are generally the cause of defects. Can mistakes be avoided? To answer this
question requires us to realize that we have to look at errors from two perspectives:
1. Errors are inevitable: People will always make mistakes. Accepting this
premise makes one question the rationale of blaming people when mis-
takes are committed. Maintaining this blame attitude generally results
in defects. Also, quite often errors are overlooked when they occur in the
production process. To avoid blame, the discovery of defects is postponed
until the final inspection, or worse yet, until the product reaches the
customer.
2. Errors can be eliminated: If we utilize a system that supports (a) proper
training and education and (b) fostering the belief that mistakes can be
prevented, then people will make fewer mistakes. This being true, it is
then possible that mistakes by people can be eliminated.
Sources of mistakes may be any one of the six basic elements of a process:
1. Measurement
2. Material
3. Method
4. Manpower
5. Machinery
6. Environment
Each of these elements may have an effect on quality as well as productivity.
To make quality improvements, each element must be investigated for potential
mistakes of operation. To reduce defects, we must recognize that defects are a
consequence of the interaction of all six elements and the actual work performed in
the process. Furthermore, we must recognize that the role of inspection is to audit
the process and to identify the defects. It is an appraisal system and it does nothing
for prevention. Product quality is changed only by improving the quality of the
process. Therefore, the first step toward elimination of defects is to understand the
difference between defects and mistakes (errors):
Defects are the results.
Mistakes are the causes of the results.
Therefore, the underlying philosophy behind the total elimination of defects
begins with distinguishing between mistakes and defects. Examples of mistakes and
defects are shown in Table 5.4.
MISTAKE TYPES AND ACCOMPANYING CAUSES
The following categories with the associated potential causes are given as exam-
ples, rather than exhaustive lists:
Assembly mistakes
Inadequate training
Symmetry (parts mounted backwards)
Too many operations to perform
Multiple parts to select from with poor or no identification
Misread or unfamiliar with parts/products
Tooling broken and/or misaligned
New operator
Processing mistakes
Part of process omitted (inadvertent/deliberate)
Fixture inadequate (resulting in parts being set into incorrectly)
Symmetrical parts (wrong part can be installed)
TABLE 5.4
Examples of Mistakes and Defects

Mistake: Failure to put gasoline in the snow blower
   Resulting defect: Snow blower will not start
Mistake: Failure to close window of unit being tested
   Resulting defect: Seats and carpet are wet
Mistake: Failure to reset clock for daylight savings time
   Resulting defect: Late for work
Mistake: Failure to show operator how to properly assemble components
   Resulting defect: Defective or warped product
Mistake: Proper weld schedule not maintained on welding equipment
   Resulting defect: Bad welds, rejectable and/or scrap material
Mistake: Low charged battery placed in griptow
   Resulting defect: Griptow will not pull racks, resulting in lost production, downtime, etc.
Irregular shaped/sized part (vendor/supplier defect)
Tooling damaging part as it is installed
Carelessness (wrong part or side installed)
Process/product requirements not understood (holes punched in wrong lo-
cation)
Following instructions for wrong process (multiple parts)
Using incorrect tooling to complete operations (impact versus torque
wrench)
Inclusion of wrong part or item
Part codes wrong/missing
Parts for different products/applications mixing together
Similar parts confused
Misreading prints/schedules/bar codes etc.
Operations mistakes
Process elements assigned to too many operators
Operator error
Consequential results
Setup mistakes
Improper alignment of equipment
Process or instructions for setup not understood or out of date
Jigs and fixtures mislocated or loose
Fixtures or holding devices will accept mislocated components
Assembly omissions (missing parts)
Special orders (high or low volume parts missing)
No inspection capability (hidden parts omitted)
Substitutions (unexpected deviations from normal production)
Misidentified build parameters (heavy duty versus standard)
Measurement or dimensional mistakes
Flawed measuring device
Operator skill in measuring
Inadequate system for measuring
Using best guess system
Processing omissions
Operator fatigue (part assembled incorrectly/omitted)
Cycle time (incomplete/poor weld)
Equipment breakdown (weld omitted)
New operator
Tooling omitted
Automation malfunction
Instructions for operation incomplete/missing
Job not set up for changeover
Operator not trained/improper training
Sequence violation
Mounting mistakes
Symmetry (parts can be installed backwards)
Tooling wrong/inadequate
Operator dependency (parts installed upside down)
Fixtures or holding devices accept mispositioned parts
Miscellaneous mistakes
Inadequate standards
Material misidentified
No controls on operation
Counting system flawed/operating incorrectly
Print/specifications incorrect
SIGNALS THAT ALERT
Signals that alert are conditions present in a process that commonly result in
mistakes. Some signals that alert are:
Many parts/mixed parts
Multiple steps needed to perform operation
Adjustments
Tooling changes
Critical conditions
Lack of or ineffective standards
Infrequent production
Extremely high volume
Part symmetry
Asymmetry
Rapid repetition
Environmental
Housekeeping
Material handling
Poor lighting
Foreign matter and debris
Other
Ten of the most common types of mistakes are:
Assembly mistakes
Processing mistakes
Inclusion of wrong part or item
Operations mistakes
Setup mistakes
Assembly omissions (missing parts)
Measurement mistakes
Process omissions
Mounting mistakes
Miscellaneous
APPROACHES TO MISTAKE PROOFING
As we already mentioned, any mistake proofing system is a process that focuses on producing zero defects by eliminating the human element from assembly. There are two approaches to this (see Figure 5.7).
1. Reactive systems (defect detection)
This approach relies on halting production in order to sort out the good
from the bad for repair or scrap.
2. Proactive systems (defect prevention)
This approach seeks to eliminate mistakes so that defective products are
not produced, production downtime is reduced, costs are lowered, and
customer satisfaction is increased.
Major Inspection Techniques
Figure 5.8 shows major inspection techniques. Source inspection utilizing mistake proofing system devices is the most logical method of defect prevention.
Mistake Proof System Devices
Mistake proof system devices are simple and inexpensive. There are essentially
two types of devices used:
1. Detectors (sensors) to detect mistakes that have occurred or are about
to occur
2. Preventers to prevent mistakes from occurring
FIGURE 5.7 Approaches to mistake proofing. (The flow runs from Operation #1 to Operation #2 to shipment to the customer.)
Reactive systems: Focus on defect identification. They alert (signal) the operator that a failure has occurred, provide immediate feedback to the operator, point to the area of the cause of the defect, and point to the apparent cause (the symptom of the defect), stopping production until the defective item is removed or repaired. They protect the customer from receiving defective product but do not prevent mistakes or defects from recurring.
Proactive systems: Focus on defect prevention. They utilize source inspection to detect when a mistake is about to occur before a defect is produced, halt production before the mistake occurs, and utilize ideal mistake proofing methods that eliminate the possibility of mistakes so that defective product cannot be produced. They perform 100% inspection without inspection costs and prevent defects and mistakes from occurring.
Devices Used as Detectors of Mistakes
When used as detectors (sensors), these devices:
1. Provide prompt feedback (signals) to the operator that a mistake has
occurred or is about to occur
2. Initiate an action or actions to prevent further mistakes from occurring
Devices Used as Preventers of Mistakes
When used to prevent mistakes, these devices either stop the mistake from occurring outright or initiate an action or actions that prevent it from occurring.
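To make the distinction concrete, the short Python sketch below contrasts the two roles: a preventer check that refuses to release the cycle unless the setup is correct, and a detector check that signals the operator after a suspect condition is sensed. The part dimensions, weld-current limit, and function names are hypothetical illustrations of the logic only, not descriptions of any particular device in this chapter.

# Minimal sketch (assumed example): preventer vs. detector logic in a mistake
# proofing device. All readings, limits, and names are hypothetical.

def preventer_allows_cycle(part_seated: bool, part_length_mm: float) -> bool:
    # Preventer: block the operation unless the setup is correct
    # (a guide pin or template would enforce this physically; here it is logic only).
    return part_seated and 24.9 <= part_length_mm <= 25.1

def detector_check(weld_current_amps: float) -> str:
    # Detector: signal the operator that a mistake has occurred or is about to occur.
    if weld_current_amps < 180.0:  # hypothetical lower limit for an acceptable weld
        return "ALERT: low weld current - stop and inspect the last part"
    return "OK"

if __name__ == "__main__":
    print(preventer_allows_cycle(part_seated=True, part_length_mm=25.03))  # True: cycle may start
    print(detector_check(weld_current_amps=150.0))                         # alert: line stops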
FIGURE 5.8 Major inspection techniques. (The flow runs from Operation #1 to Operation #2 to shipment to the customer.)
Judgment inspection: Distinguishes good product from bad by inspecting the finished product and sorting good from bad (scrap or repair). This method prevents defective product from being delivered to the customer but does nothing to prevent production of defective products.
Informative inspection: Looks at the cause(s) of defects and feeds this information back to the appropriate personnel/process so that defects can be reduced/eliminated.
Source inspection: A defect is a result of a mistake. Source inspection looks at the cause(s) for the mistake, rather than the actual defect. By conducting inspection at the source, mistakes can be corrected before they become defects. Inspection utilizing mistake proofing system devices to automatically inspect for mistakes or defective operating conditions is an effective low-cost solution for eliminating defects and resulting defective product.
EQUATION FOR SUCCESS
To be successful with a mistake proofing initiative one must keep in mind the following equation:

Source inspection + Mistake proofing = Defect free system

However, to reach the state of a defect-free system, in addition to signals and inspection we must also incorporate appropriate sensors to identify, stop, and/or correct a problem before it goes to the next operation. Sensors are very important in mistake proofing, so let us look at them a little closer.
A sensor is an electrical device that detects and responds to changes in a given characteristic of a part, assembly, or fixture (see Figure 5.9). A sensor can, for example, verify with a high degree of accuracy the presence and position of a part on an assembly or fixture and can identify damage or wear. Some examples of types of sensors and typical uses are:
Welding position indicators: Determine changes in metallic composition, even
on joints that are invisible to the surface
Fiber sensors: Observe linear interruptions utilizing fiber optic beams
Metal passage detectors: Determine if parts have a metal content or mixed
metal content, for example in resin materials
Beam sensors: Observe linear interruptions using electronic beams
Trimetrons: Exclude or detect preset measurement values using a dial gauge
(Value limits can be set on plus or minus sides, as well as on nominal
values.)
Tap sensors: Identify incomplete or missing tap screw machining
Color marking sensors: Identify differences in color or colored marking
FIGURE 5.9 Functions of mistake-proofing devices. (The flow runs from Operation #1 to Operation #2 to shipment to the customer. First function: eliminates the mistake at the source before it occurs. Second function: detects mistakes as they are occurring, but before they result in defects. Third function: detects a defect that has occurred before it is sent to the next operation or shipped to the customer.)
Area sensors: Determine random interruptions over a fixed area
Double feed sensors: Identify when two products are fed at the same time
Positioning sensors: Determine correct/incorrect positioning
Vibration sensors: Identify product passage, weld position, broken wires, loose
parts, etc.
Displacement sensors: Identify thickness, height, warpage, surface irregularities, etc.
Typical Error Proofing Devices
Some of the most common mistake proofing devices used are:
1. Sensors
2. Sequence restrictors
3. Odd part out method
4. Limit or microswitches, proximity detectors
5. Templates
6. Guide rods or pins
7. Stoppers or gates
8. Counters
9. Standardized methods of operation and/or material usage
10. Detect delivery chute
11. Critical condition indicators
12. Probes
13. Mistake proof your mistake proof system
and so on
REFERENCES
Boothroyd, G. and Dewhurst, P., Product Design for Assembly, Boothroyd Dewhurst, Inc., Wakefield, RI, 1991.
MIL-HDBK-727, Design Guidance for Producibility, U.S. Army Material Command, Washington, DC, 1986.
Mitsubishi, Mitsubishi Design Engineering Handbook, Mitsubishi, Kobe, Japan, 1976.
Munro, A., S. Munro and Associates, Inc., Design for Manufacture, training manual, 1992.
SELECTED BIBLIOGRAPHY
Anon., How To Achieve Error Proof Manufacturing: Poka-Yoke and Beyond: A Technical
Video Tutorial, SAE International, undated (may be ordered online for $895 [$25
preview copy]).
Anon., 21st Century Manufacturing Enterprise Strategy, An Industry Led View, Volumes 1
and 2, Iacocca Institute, Lehigh University, PA, 1991.
Anon., Mistake-Proofing for Operators: The ZQC System, The Productivity Press Development Team, Productivity Press, Portland, OR, 1997.
Anon., Manufacturing Management Handbook for Program Manager, ABN Fort Belvoir, VA,
1982.
Anon., Product Engineering Design Manual, Litton Industries, Beverly Hills, CA, 1978.
Azuma, L. and Tada, M., A case history development of a foolproofing interface documentation system, IEEE Transactions on Software Engineering, 19, 765-773, 1993.
Bandyopadhyay, J.K., Poka Yoke systems to ensure zero defect quality manufacturing, International Journal of Management, 10(1), 29-33, 1993.
Barkers, R., Motion and Time Study: Design and Measurement of Work, Cot Loge Book
Company, Los Angeles, 1970.
Barkman, W.E., In-Process Quality Control for Manufacturing, Marcel Dekker, New York,
1989. (Preface and Chapter 3 are of particular interest.)
Bayer, P.C., Using Poka Yoke (mistake proofing devices) to ensure quality, IEEE 9th Applied Power Electronics Conference Proceedings, 1, 201-204, 1994.
Bodine, W.E., The Trend: 100 Percent Quality Verification, Production, June 1993, pp. 54-55.
Bosa, R., Despite fuzzy logic and neural networks, operator control is still a must, CMA, 69,
7, 995.
Boothroyd, G. and Murch, P., Automatic Assembly, Marcel Dekker, New York, 1982.
Brehmer, B., Variable errors set a limit to adaptation, Ergonomics 33, 1231-1239, 1990.
Brall, J., Product Design for Manufacturing, McGraw-Hill, New York, 1986.
Casey, S., Set Phasers on Stun and Other True Tales of Design, Technology, and Human
Error, Aegean, Santa Barbara, CA, 1993.
Chase, R.B. and Stewart, D.M., Make Your Service Fail-safe, Sloan Management Review, Spring 1994, pp. 35-44.
Chase, R.B. and Stewart, D.M., Designing Errors Out, Productivity Press, Portland, OR,
1995. Note of interest: Productivity Press has discontinued sales of this book (a very
sad outcome). Some copies may be available in local bookstores. It is both more
readable and broader in application than Shingo but does not have a catalog of
examples as Shingo does.
Damian, J., Agile Manufacturing Can Revive U.S. Competitiveness, Industry Study Says: A Modest Proposal, Electronics, Feb. 1992, pp. 34, 42-44.
Dove, R., Agile and Otherwise: Measuring Agility: The Toll of Turmoil, Production, Jan. 1995, pp. 12-15.
Dove, R., Agile and Otherwise: The Challenges of Change, Production, Feb. 1995, pp. 14-16.
Gross, N., This Is What the U.S. Must Do To Stay Competitive, Business Week, Dec. 1991, pp. 21-24.
Grout, J.R., Mistake-Proofing Production, working paper 752750333, Cox School of Business, Southern Methodist University, Dallas, 1995.
Grout, J.R. and Downs, B.T., An Economic Analysis of Inspection Costs for Failsafing Attributes, working paper 950901, Cox School of Business, Southern Methodist University, Dallas, 1995.
Grout, J.R. and Downs, B.T., Fail-safing and Measurement Control Charts, 1995 Proceedings, Decision Sciences Institute Annual Meetings, Boston, MA, 1995.
Henricks, M., Make No Mistake, Entrepreneur, Oct. 1996, pp. 86-89. (Last quote should read average net savings of around $2500 a piece... not average cost.)
Hinckley, C.M. and Barkan, P., The role of variation, mistakes, and complexity in producing nonconformities, Journal of Quality Technology 27(3), 242-249, 1995.
Jaikumar, R., Manufacturing a la Carte: Agile Assembly Lines, Faster Development Cycles, 200 Years to CIM, IEEE Spectrum, 76-82, Sept. 1993.
Kaplan, G., Manufacturing a la Carte: Agile Assembly Lines, Faster Development Cycles, Introduction, IEEE Spectrum, 46-51, Sept. 1993.
Kelly, K., Your Job Managing Error is Out of Control, Addison-Wesley, New York, 1994.
Kletz, T., Plant Design for Safety: A User-Friendly Approach, Hemisphere Publishing Corp.,
New York, 1991.
Lafferty, J.P., Cpk of 2 Not Good Enough for You? Manufacturing Engineering, Oct. 1992,
p. 10.
Ligus, R.G., Enterprise Agility: Jazz in the Factory, Industrial Engineering, Nov. 1994, pp. 19-23.
Lucas Engineering and Systems Ltd., Design for Manufacture Reference Tables, University
of Hull, Hull, England, Lucas Industries, Jan. 1994.
Manji, J.F., Sharpen Your Competitive Edge Today and Into the 21st Century, CALS El Journal, date unknown, pp. 56-61.
Marchwinski, C., Ed., Company Cuts the Risk of Defects During Assembly and Maintenance,
MfgNet: The Manufacturers Internet Newsletter, Productivity, Inc. Norwalk, CT, 1996.
Marchwinski, C., Ed., Mistake-proofing, Productivity, 17(3), 16, 1995.
Marchwinski, C., Ed., SPC vs. ZQC, Productivity, 18(1), 14, 1997. (Note: ZQC is another name for mistake proofing. It stands for Zero Quality Control.)
McClelland, S., Poka-Yoke and the Art of Motorcycle Maintenance, Sensor Review, 9(2), 63,
1989.
Monden, Y., Toyota Production System, Industrial Engineering and Management Press, Nor-
cross, GA, 1983, pp. 10, 137-154.
Munro, A., S. Munro and Associates, Inc., Design for Manufacture, training manual, 1994.
Munro, A., S. Munro and Associates, Inc., Trainers for Design for Manufacture, analysis,
undated.
Myers, M., Poka/Yoke-ing Your Way to Success, Network World, Sept. 11, 1995, p. 39.
Nakajo, T. and Kume, H., The principles of foolproofing and their application in manufacturing, Reports of Statistical Application Research, Union of Japanese Scientists and Engineers, 32(2), 10-29, 1985.
Niebel, C. and Baldwin, J., Designing for Production, Irwin, Homewood, IL, 1963.
Nieber, C. and Draper, G., Product Design and Process Engineering, McGraw-Hill, New
York, 1974.
Noaker, P.M., The Search for Agile Manufacturing, Manufacturing Engineering, Nov. 1994,
pp. 5763.
Norman, D.A., The Design of Everyday Things, Doubleday, New York, 1989.
OConnor, L., Agile Manufacturing in a Responsive Factory, Mechanical Engineering, July
1994, pp. 4346.
Otto, K. and Wood, K., Product Design: Techniques in Reverse Engineering and New Product
Development, Prentice Hall, Upper Saddle River, NJ, 2001.
Port, O., Moving Past the Assembly Line Agile Manufacturing Systems May Bring a
U.S. Revival, Business Week/Re-Inventing America, 1992, pp. 1720.
Reason, J., Human Error, Cambridge University Press, New York, 1990.
Robinson, A.G. and Schroeder, D.M., The limited role of statistical quality control in a zero
defects environment, Production and Inventory Management Journal, 31(3), 60-65,
1990.
Robinson, A.G., Ed., Modern Approaches to Manufacturing Improvement: The Shingo System,
Productivity Press, Portland, OR, 1991.
Shandle, J., Sandia Labs Launches Agile Factory Program, Electronics, Mar. 8, 1993, pp.
48-49.
Sheridan, J.H., A Vision of Agility, Industry Week, Mar. 21, 1994, pp. 22-24.
Shingo, S., Zero Quality Control: Source Inspection and the Poka-Yoke System, Trans. A.P.
Dillion, Productivity Press, Portland, OR, 1986.
Shingo S., A Study of the Toyota Production System from an Industrial Engineering Viewpoint,
Productivity Press, Portland, OR, 1989, online excerpts.
Spear, S. and Bowen, H.K., Decoding the DNA of the Toyota Production System, Harvard Business Review, Sept./Oct. 1999, pp. 97-106.
Texas Instruments, Design to Cost: An Introduction, Corporate Engineering Council, Texas
Instruments, Inc., Dallas, 1977.
Trucks, H.E., Designing for Economical Production, SME, Dearborn, MI, 1974.
Tsuda, Y., Implications of fool proofing in the manufacturing process, in Quality Through Engineering Design, Kuo, W., Ed., Elsevier, New York, 1993.
Vasilash, G.S., Re-engineering, Re-energizing, Objects and Other Issues of Interest, Production, Jan. 1995, pp. 38-41.
Vasilash, G.S., On training for mistake-proofing, Production, Mar. 1995, pp. 42-44.
Ward, C., What Is Agility? Industrial Engineering, Nov. 1994, pp. 38-44.
Warm, J.S., An introduction to vigilance, in Sustained Attention in Human Performance,
Warm, J.S., Ed., Wiley, New York, 1984.
Weimer, G., Is an American Renaissance at Hand? Industry Week, May 1992, pp. 14-17.
Weimer, G., U.S.A. 2006: Industry Leader or Loser, Industry Week, Jan. 20, 1992, pp. 31-34.
6
Failure Mode and Effect Analysis (FMEA)
This chapter has been developed to assist and instruct design, manufacturing, and
assembly engineers in the development and execution of a potential Failure Mode
and Effect Analysis (FMEA) for design considerations, manufacturing, assembly
processes, and machinery.
An FMEA is a methodology that helps identify potential failures and recommends corrective action(s) for fixing these failures before they reach the customer. A concept (system) FMEA is conducted as early as possible to identify serious problems with the potential concept or design. A design FMEA is conducted prior to production and involves the listing of potential failure modes and causes. An FMEA identifies actions required to prevent defects and thus keeps products that may fail or not be fit for use from reaching the customer. Its purpose is to analyze the product's design characteristics relative to the planned manufacturing or assembly process to ensure that the resultant product meets customer needs and expectations. When potential failure modes are identified, corrective action can be initiated to eliminate them or continuously reduce their potential occurrence. The FMEA also documents the rationale for the manufacturing or assembly process involved.
Changes in customer expectations, regulatory requirements, attitudes of the courts, and the industry's needs require disciplined use of a technique to identify and prevent potential problems. That disciplined technique is the FMEA.
A process FMEA is an analytical technique that identifies potential product-related process failure modes, assesses the potential customer effects of the failures, identifies the potential manufacturing or assembly process causes, and identifies significant process variables to focus controls for prevention or detection of the failure conditions. (Also, process FMEAs can assist in developing new machine or equipment processes. The methodology is the same; however, the machine or equipment being designed would be considered the product.)
A machinery FMEA is a methodology that helps in the identication of possible
failure modes and determines the cause for and effect of these failures. The focus
of the machinery FMEA is to eliminate any safety issues and to resolve them
according to specified procedures between customer and supplier. In addition, the
purpose of this particular FMEA is to review both design and process with the intent
to reduce risk.
All FMEAs utilize occurrence and detection probability in conjunction with
severity criteria to develop a Risk Priority Number (RPN) for prioritization of
corrective action considerations. This is a major departure in methodology from the
Failure Mode and Critical Analysis (FMCA), which focuses primarily on the severity
of the failure as a priority characteristic.
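Because the RPN is simply the product of the three ratings, the prioritization step can be illustrated with a minimal Python sketch. The failure modes and the 1-10 ratings below are hypothetical examples, not values taken from this chapter; the intent is only to show how severity, occurrence, and detection combine into a ranking.

# Minimal sketch (assumed example): RPN = severity x occurrence x detection,
# used to rank failure modes. Entries and ratings are hypothetical.

failure_modes = [
    # (failure mode, severity, occurrence, detection), each rated 1-10
    ("does not transfer ink",  7, 4, 3),
    ("ink leaks onto user",    8, 2, 5),
    ("cap cracks in assembly", 4, 6, 2),
]

def rpn(severity: int, occurrence: int, detection: int) -> int:
    return severity * occurrence * detection

for mode, s, o, d in sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True):
    print(f"{mode:24s} S={s} O={o} D={d} RPN={rpn(s, o, d)}")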

In its most rigorous form, an FMEA summarizes the engineer's thoughts while developing a process. This systematic approach parallels and formalizes the mental discipline that an engineer normally uses to develop processing requirements.

DEFINITION OF FMEA

FMEA is an engineering reliability tool that:
1. Helps to define, identify, prioritize, and eliminate known and/or potential failures of the system, design, or manufacturing process before they reach the customer, with the goal of eliminating the failure modes or reducing their risks
2. Provides structure for a cross-functional critique of a design or a process
3. Facilitates inter-departmental dialog (It is much more than a design review.)
4. Is a mental discipline that great engineering teams go through when critiquing what might go wrong with the product or process
5. Is a living document that reflects the latest product and process actions
6. Ultimately helps prevent and not react to problems
7. Identifies potential product- or process-related failure modes before they happen
8. Determines the effect and severity of these failure modes
9. Identifies the causes and probability of occurrence of the failure modes
10. Identifies the controls and their effectiveness
11. Quantifies and prioritizes the risks associated with the failure modes
12. Develops and documents action plans that will be put in place to reduce risk

TYPES OF FMEAS

There are many types of FMEAs (see Figure 6.1). However, the main ones are:


System/Concept (S/CFMEA). These are driven by system functions. A system is an organized set of parts or subsystems to accomplish one or more functions. System FMEAs are typically done very early, before specific hardware has been determined.
Design (DFMEA). A design FMEA is driven by part or component functions. A design/part is a unit of physical hardware that is considered a single replaceable part with respect to repair. Design FMEAs are typically done later in the development process when specific hardware has been determined.
Manufacturing or Process (PFMEA). A process FMEA is driven by process functions and part characteristics. A manufacturing process is a sequence of tasks that is organized to produce a product. A process FMEA can involve fabrication as well as assembly.
Machinery (MFMEA). A machinery FMEA is driven by low volume machinery and equipment where large-scale testing is impractical prior to production and manufacture of the machinery and equipment. The MFMEA focuses on design changes to lower life cycle costs by improving the reliability and maintainability of the machinery and equipment.

Note: Service, software, and environmental FMEAs are additional variations. However, in this chapter we will focus only on design, process, and machinery FMEAs. The other FMEAs follow the same rationale as the design and process FMEAs.

IS FMEA NEEDED?

If any answer to the following questions is positive, then you need an FMEA:
Are customers becoming more quality conscious?
Are reliability problems becoming a big concern?
Are regulatory requirements harder to meet?
Are you doing too much problem solving?
Are you addicted to problem solving?

FIGURE 6.1 Types of FMEA. (System FMEAs address components, subsystems, and the system; process FMEAs address machines, methods, material, manpower, measurement, and environment. System/design FMEA focus: minimize failure effects on the system; objective: maximize system quality, reliability, cost, and maintainability. Machinery FMEA focus: design changes to lower life cycle costs; objective: improve the reliability and maintainability of the machinery and equipment. Process FMEA focus: minimize production process failure effects on the system; objective: maximize the system quality, reliability, cost, maintainability, and productivity.)

Addiction to problem solving is a very important consideration in the appli-
cation of an active FMEA program. When the thrill and excitement of solving
problems become dominant, your organization is addicted to problem solving rather
than preventing the problem to begin with. A proper FMEA will help break your
addiction by:
Reducing the percentage of time devoted to problem solving
Increasing the percentage of time in problem prevention
Increasing the efficiency of resource allocation

Note:

The emphasis is always on reducing complexity and engineering changes.

BENEFITS OF FMEA

When properly conducted, product and process FMEAs should lead to:
1. Confidence that all risks have been identified early and appropriate actions have been taken
2. Priorities and rationale for product and process improvement actions
3. Reduction of scrap, rework, and manufacturing costs
4. Preservation of product and process knowledge
5. Reduction of field failures and warranty cost
6. Documentation of risks and actions for future designs or processes
By way of comparison of FMEA benefits and the quality lever, Figure 6.2 may help.
In essence, one may argue that the most important benefit of an FMEA is that it helps identify hidden costs, which are quite often greater than visible costs. Some of these costs may be identified through:
1. Customer dissatisfaction
2. Development inefficiencies
3. Lost repeat business (no brand loyalty)
4. High employee turnover
5. And so on

FMEA HISTORY

This type of thinking has been around for hundreds of years. It was first formalized
in the aerospace industry during the Apollo program in the 1960s. The initial
automotive adoption was in the 1970s in the area of safety issues. FMEA was
required by QS-9000 and the advanced product quality planning process in 1994
for all automotive suppliers. It has now been adopted by many other industries.


INITIATION OF THE FMEA

Regardless of the type, all FMEAs should be conducted as early as possible. FMEA
studies can be carried out at any stage during the development of a product or
process. However, the ideal time to start the FMEA is:
When new systems, designs, processes, or machines are being designed,
but before they are finalized
When system, design, process, or machine modifications are being contemplated
When new applications are used for the systems, designs, processes, or
machines
When quality concerns become visible
When safety issues are of concern

Note: Once the FMEA is initiated, it becomes a living document, is updated as necessary, and is never really complete.
Therefore:
FMEA-type thinking is central to reliability and continual improvement
in products and manufacturing processes to remain competitive in our
global marketplace. It must be understood that an FMEA conducted after
production serves as a reactive tool, and the user has not taken full
advantage of the FMEA process.

FIGURE 6.2 Payback versus effort. (Payback-to-effort ratios across the quality lever: a product design fix, 100:1; a process design fix, 10:1; a production fix, 1:1; a customer fix, 1:10; shown against the planning and definition, product design and development, manufacturing process design and development, and product and process validation phases.)


A typical system FMEA should begin even before the program approval
stage. The design FMEA should start right after program approval and
continue to be updated through prototypes. A process FMEA should begin
just before prototypes and continue through pilot build and sometimes
into product launching. As for the MFMEA, it should also start at the
same time as the design FMEA. It is imperative for a user of an FMEA
to understand that sometimes information is not always available. During
these situations, users must do the best they can with what they have,
recognizing that the document itself is indeed a living document and will
change as more information becomes available.
History has shown that a majority of product warranty campaigns and automotive recalls could have been prevented by thorough FMEA studies.

GETTING STARTED

Just as with anything else, before the FMEA begins there are some assumptions and
preparations that must be taken care of. These are:
1. Know your customers and their needs.
2. Know the function.
3. Understand the concept of priority.
4. Develop and evaluate conceptual designs/processes based on your customers' needs and business strategy.
5. Be committed to continual improvement.
6. Create an effective team.
7. Define the FMEA project and scope.

1. UNDERSTAND YOUR CUSTOMERS AND THEIR NEEDS

A product or a process may perform functions flawlessly, but if the functions are not aligned with the customers' needs, you may be wasting your time. Therefore, you must:
Determine all (internal or external) relevant customers.
Understand the customers' needs better than the customers understand their own needs.
Document the customers' needs and develop concepts. For example, customers need:
Chewable toothpaste
Smokeless cigarettes
Celery-flavored gum
????
In FMEA, a customer is anyone/anything that has functions/needs from your product or manufacturing process. An easy way to determine customer needs is to understand the Kano model (see Figure 6.3).


The model facilitates understanding of all the customer needs, including:
Excitement needs: Generally, these are the unspoken wants of the customer.
Performance needs: Generally, these are the spoken needs of the customer. They serve as the neutral requirements of the customer.
Basic needs: Generally, these are the unspoken needs of the customer. They serve as the very minimum of requirements.
It is important to understand that these needs are always in a state of change.
They move from basic needs to performance to excitement depending on the product
or expectation, as well as value to the customer. For example:
SYSTEM customers may be viewed as: other systems, whole product, gov-
ernment regulations, design engineers, and end user.
DESIGN customers may be viewed as: higher assembly, whole product, design
engineers, manufacturing engineers, government engineers, and end user.
PROCESS customers may be viewed as: the next operation, operators, design
and manufacturing engineering, government regulations, and end user.
MACHINE customers may be viewed as: higher assembly, whole product,
design engineers, manufacturing engineers, government regulations, and
end user.
Another way to understand the FMEA customers is through the FMEA team,
which must in no uncertain terms determine:
1. Who the customers are
2. What their needs are
3. Which needs will be addressed in the design/process

FIGURE 6.3 Kano model. (The vertical axis runs from dissatisfied to satisfied; the horizontal axis runs from "did not do it at all" to "did it very well." Curves for basic needs, performance needs, and excitement needs shift over time.)


The appropriate and applicable response will help in developing both the function and effects.

2. KNOW THE FUNCTION
The dictionary definition of a function is: the natural, proper, or characteristic action of anything. This is very useful because it implies performance. After all, it is performance that we are focusing on in the FMEA.
Specifically, a function from an FMEA perspective is the task that a system, part, or manufacturing process performs to satisfy a customer. To understand the function and its significance, the team conducting the FMEA must have a thorough list of functions to evaluate. Once this is done, the rest of the FMEA process is a mechanical task.
For machinery, the function may be analyzed through a variety of methodologies including but not limited to:
Describing the design intent either through a block diagram or a P-diagram
Identifying an iterative process in terms of what can be measured
Describing the ideal function (what the machine is supposed to do)
Identifying relationships in verb-noun statements (function tree analysis)
Considering environmental and safety conditions
Accounting for all R & M parameters
Accounting for the machine's performance conditions
Analyzing all other measurable engineering attributes

3. UNDERSTAND THE CONCEPT OF PRIORITY

One of the outcomes of an FMEA is the prioritization of problems. It is very important for the team to recognize the temptation to address all problems, just because they have been identified. That action, if taken, will diminish the effectiveness of the FMEA. Rather, the team should concentrate on the most important problems, based on performance, cost, quality, or any characteristic identified on an a priori basis through the risk priority number.

4. DEVELOP AND EVALUATE CONCEPTUAL DESIGNS/PROCESSES BASED ON CUSTOMER NEEDS AND BUSINESS STRATEGY

There are many methods to assist in developing concepts. Some of the most common are:
1. Brainstorming
2. Benchmarking
3. TRIZ (the theory of inventive problem solving)
4. Pugh concept selection (an objective way to analyze and select/synthesize alternative concepts)


Figure 6.4 shows what a Pugh matrix may look like for the concept of shaving, with a razor as the base (datum).
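Once each alternative has been rated better (+), worse (-), or the same (S) as the datum on every criterion, the first-pass tally of a Pugh matrix is mechanical. The sketch below uses made-up ratings for the shaving example; it simply counts the +, -, and S entries per concept, which is typically how the totals row is produced before any weighting or iteration.

# Minimal sketch (assumed example): tallying a Pugh matrix against the razor datum.
# '+' = better than the datum, '-' = worse, 'S' = same. Ratings are hypothetical.

criteria = ["stubble length", "pain level", "mfg. cost", "price/use"]
concepts = {
    "chemical":   ["S", "+", "-", "-"],
    "electric":   ["S", "S", "-", "+"],
    "laser beam": ["+", "-", "-", "-"],
}

def tally(ratings):
    # Count how many criteria are better, worse, or the same versus the datum.
    return {symbol: ratings.count(symbol) for symbol in "+-S"}

for name, ratings in concepts.items():
    counts = tally(ratings)
    print(f"{name:10s} plus={counts['+']} minus={counts['-']} same={counts['S']}")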

5. BE COMMITTED TO CONTINUAL IMPROVEMENT

Everyone in the organization and especially management must be committed to
continual improvement. In FMEA, that means that once recommendations have been
made to increase effectiveness or to reduce cost, defects, or any other characteristic,
a proper corrective action must be developed and implemented, provided it is sound
and it complements the business strategy.

6. CREATE AN EFFECTIVE FMEA TEAM

Perhaps one of the most important issues in dealing with the FMEA is that an FMEA must be done with a team. An FMEA completed by an individual is only that individual's opinion and does not meet the requirements or the intent of an FMEA.
The elements of an effective FMEA team are:
Expertise in subject (five to seven individuals)
Multi-level/consensus based

FIGURE 6.4 A Pugh matrix: shaving with a razor. (Evaluation criteria such as stubble length, pain level, manufacturing cost, and price per use are scored for each alternative concept against the razor datum: A, chemical; B, electric; C, electrolysis; D, duct tape; E, Epilady; F, laser beam; G, straight edge; H, other. Each entry is + for better than the basic razor requirement, - for worse, or S for the same, and the +, -, and S totals are compared across concepts.)


Representing all relevant stakeholders (those who have ownership)
Possible change in membership as work progresses
Cross-functional and multidisciplinary (One person's best effort cannot approach the knowledge of an effective cross-functional and multidisciplinary team.)
Appropriate and applicable empowerment
The structure of the FMEA team is based on:
Core team
The experts of the project and the closest to the project. They facilitate
honest communication and encourage active participation. Support
membership may vary depending on the stage of the project.
Champion/sponsor
Provides resources and support
Attends some meetings
Supports team
Promotes team efforts and implements recommendations
Shares authority/power with team
Kicks off team
Higher up in management the better
Team leader
A team leader is the watchdog of the project. Typically, this function
falls upon the lead engineer. Some of the ingredients of a good team
leader are:
Possesses good leadership skills
Is respected by team members
Leads but does not dominate
Maintains full team participation
Recorder
Keeps documentation of the team's efforts. The recorder is responsible for coordinating meeting rooms and times as well as distributing meeting minutes and agendas.
Facilitator
The watchdog of the process. The facilitator keeps the team on track and makes sure that everyone participates. In addition, it is the facilitator's responsibility to make sure that team dynamics develop in a positive environment. For the facilitator to be effective, it is imperative for the facilitator to have no stake in the project, possess FMEA process expertise, and communicate assertively.
Important considerations for a team include:
Continuity of members
Receptive and open-minded
Committed to success


Empowered by sponsor
Cross-functionality
Multidiscipline
Consensus
Positive synergy
Ingredients of a motivated FMEA team include:
Realistic agendas
Good facilitator
Short meetings
Right people present
Reach decisions based on consensus
Open minded, self initiators, volunteers
Incentives offered
Ground rules established
One individual responsible for coordination and accountability of the
FMEA project (Typically for the design, the design engineer is that person
and for the process, the manufacturing engineer has that responsibility.)
To make sure the effectiveness of the team is sustained throughout the project,
it is imperative that everyone concerned with the project bring useful information
to the process. Useful information may be derived due to education, experience,
training, or a combination of these.
At least two areas that are usually underutilized for useful information are
background information and surrogate data. Background information and supporting
documents that may be helpful to complete system, design, or process FMEAs are:
Customer specifications (OEMs)
Previous or similar FMEAs
Historical information (warranty/recalls etc.)
Design reviews and verification reports
Product drawings/bill of material
Process flow charts/manufacturing routing
Test methods
Preliminary control and gage plans
Maintenance history
Process capabilities
Surrogate data are data that are generated from similar projects. They may help
in the initial stages of the FMEA. When surrogate data are used, extra caution should
be taken.
Potential FMEA team members include:
Design engineers
Manufacturing engineers


Quality engineers
Test engineers
Reliability engineers
Maintenance personnel
Operators (from all shifts)
Equipment suppliers
Customers
Suppliers
Anyone who has a direct or indirect interest
In any FMEA team effort the individuals must have interaction with
manufacturing and/or process engineering while conducting a design
FMEA. This is important to ensure that the process will manufacture
per design specification.
On the other hand, interaction with design engineering while conduct-
ing a process or assembly FMEA is important to ensure that the design
is right.
In either case, group consensus will identify the high-risk areas that
must be addressed to ensure that the design and/or process changes
are implemented for improved quality and reliability of the product.
Obviously, these lists are typical menus to choose an appropriate team for your
project. The actual team composition for your organization will depend upon your
individual project and resources.
Once the team is chosen for the given project, spend 15-20 minutes creating a list of the biggest (however you define biggest) concerns for this product or process. This list will be used later to make sure you have a complete list of functions.

7. DEFINE THE FMEA PROJECT AND SCOPE

Teams must know their assignment. That means that they must know:
What they are working on (scope)
What they are not working on (scope)
When they must complete the work
Where and how often they will meet
Two excellent tools for such an evaluation are (1) the block diagram for system, design, and machinery and (2) the process flow diagram for process. In essence, part of the responsibility to define the project and scope has to do with the question "How broad is our focus?" Another way to say this is to answer the question "How detailed do we have to be?" This is much more difficult than it sounds and it needs some heavy discussion from all the members. Obviously, consensus is imperative. As a general rule, the focus is dependent upon the project and the experience or education of the team members.
Let us look at an example. It must be recognized that sometimes, due to the complexity of the system, it is necessary to narrow the scope of the FMEA. In other words, we must break down the system into smaller pieces (see Figure 6.5).


THE FMEA FORM

There are many forms to develop a typical FMEA. However, all of them are basically
the same in that they are made up of two parts, whether they are for system, design,
process, or machinery. A typical FMEA form consists of the header information and
the main body.
There is no standard information that belongs in the header of the form, but there are specific requirements for the body of the form.
In the header, one may find the following information (see Figure 6.7). However, one must remember that this information may be customized to reflect one's industry or even the organization:
Type of FMEA study
Subject description
Responsible engineer
FMEA team leader
FMEA core team members
Suppliers
Appropriate dates (original issue, revision, production start, etc.)
FMEA number
Assembly/part/detail number
Current dates (drawings, specifications, control plan, etc.)
The form may be expanded to include or to be used for such matters as:

Safety: Injury is the most serious of all failure effects. As a consequence, safety is handled either with an FMEA or a fault tree analysis (FTA) or critical analysis (FMCA).

FIGURE 6.5 Scope for a DFMEA of a braking system. (The brake system is broken down into the master cylinder: cylinder, fluid bladder, etc.; pedals and linkages: pedal, rubber cover, cotter pins, etc.; hydraulics: rubber hose, metal tubing, proportioning valve, fittings, etc.; back plate and hardware: back plate, springs, washers, clips, etc.; caliper system: pistons, cylinder, casting, plate, etc.; rotor and studs: rotor hat, rotor, studs, etc.; and pads and hardware: friction material, substrate, rivets, clips, etc. In this example the FMEA scope is narrowed to the pads and hardware.)


FIGURE 6.6 Scope for a PFMEA of a printed circuit board screen printing process. (Process steps such as set up machine, develop program, load screen, load squeegee, load tool plate, dispense paste, load board, apply paste, inspect print, wash board, and run, package, and ship are each marked low (L), medium (M), or high (H) risk, and the FMEA scope is narrowed to a portion of the flow. Note: just as in design FMEA, sometimes it is necessary to narrow the scope of the process FMEA.)

FIGURE 6.7 Typical FMEA header. (The worksheet header identifies the FMEA type (system, design, or process), FMEA number, subject, team leader, page, part/process ID number, original and revision dates, key date, and team members.)

FIGURE 6.8 Typical FMEA body. (The worksheet columns are: description (part name or process step and function); potential failure mode; potential effect of failure mode; severity (S); class; potential cause of failure mode; occurrence (O); current controls; detection (D); RPN; an action plan recording the recommended action and responsibility with target finish date; action results recording the actions taken, actual finish date, and revised S, O, D, and RPN; and remarks.)
In the traditional FTA, the starting point is the list of hazard or undesired events for which the designer must provide some solution. Each hazard becomes a failure mode and thus it requires an analysis.
Effect of downtime: The FMEA may incorporate maintenance data to study
the effects of downtime. It is an excellent tool to be used in conjunction
with total preventive maintenance.
Repair planning: An FMEA may provide preventive data to support repair
planning as well as predictive maintenance cycles.
Access: In the world of recycling and environmental conscience, the FMEA
can provide data for tear downs as well as information about how to get at
the failed component. It can be used with mistake proong for some very
unexpected positive results.
A typical body of an FMEA form may look like Figure 6.8. The details of this form will be discussed in the following pages. We begin with the first part of the form, that is, the description in the form of:
Part name/process step and function (verb/noun)
In this area the actual description is written in concise, exact, and simple language.
DEVELOPING THE FUNCTION
A fundamental principle in writing functions is the notion that they must be written
either in action verb format or as a measurable noun. Remember, a function is a
task that a component, subsystem, or product must perform, described in language
that everyone understands. Stay away from jargon. To identify appropriate functions,
leading questions such as the following may help:
What does the product/process do?
How does the product/process do that?
If a product feature or process step is deleted, what functions disappear?
If you were this task, what are you supposed to accomplish? Why do you
exist?
The priority of asking function questions for a system/part FMEA is:
1. A system view
2. A subsystem view
3. A component view
Typical functions are:
Position
Support
Seal in, out
Retain
Lubricate
ORGANIZING PRODUCT FUNCTIONS
After the brainstorming is complete, a function tree (see Figure 6.9) can be used to organize the functions. This is a simple tree structure to document and help organize the functions, as follows:
Purposes of the function tree
a. To document all the functions
b. To improve team communication
c. To document complexity and improve team understanding of all the
functions
Steps
a. Brainstorm all the functions.
b. Arrange functions into function tree.
c. Test for completeness of function (how/why).
Building the function tree
Ask:
What does the product/process do?
Which component/process step does that?
FIGURE 6.9 Function tree process. (The task function sits at the far left. Supporting functions, which ensure dependability and convenience, branch into primary, secondary, and tertiary supporting functions on the top half of the tree; enhancing functions, which please the senses and delight the customer, branch into primary, secondary, and tertiary enhancing functions on the bottom half. Asking "how?" moves to the right through the tree; asking "why?" moves back to the left.)
How does it do that?
Primary functions provide a direct answer to this question without conditions or ambiguity.
Secondary functions explain how primary functions are per-
formed.
Continue until the answer to how requires using a part name,
labor operation, or activity.
Ask why in the reverse direction.
Add additional functions as needed.
The function tree process can be summarized as follows:
1. Identify the task function.
Place on the far left side of a chart pad.
2. Identify the supporting functions.
Place on the top half of the pad.
3. Identify enhancing functions.
Place on the bottom half of the pad.
4. Build the function tree.
Include the secondary/tertiary functions.
Place these to the right of the primary functions.
5. Verify the diagram: Ask how and why.
For an example of a function tree for a ball point pen (tip), see Figure 6.10.
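Any outlining tool can hold the tree; the sketch below is one hypothetical way to capture verb-noun functions in code, where each child answers "how?" for its parent and walking back up answers "why?". The pen functions shown are abridged paraphrases of the Figure 6.10 example, not the full tree.

# Minimal sketch (assumed example): a function tree whose children answer "how?"
# and whose parents answer "why?". Functions are abridged from the pen example.

from dataclasses import dataclass, field

@dataclass
class Function:
    name: str                                      # verb-noun statement
    children: list = field(default_factory=list)   # answers to "how?"

    def add(self, child: "Function") -> "Function":
        self.children.append(child)
        return child

    def show(self, depth: int = 0) -> None:
        print("  " * depth + self.name)
        for child in self.children:
            child.show(depth + 1)

task = Function("make marks on varied surfaces")
transfer = task.add(Function("transfer ink to the marking surface"))
transfer.add(Function("rotate ball through the ink supply"))
transfer.add(Function("meter an ink film onto the ball surface"))
task.show()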
FAILURE MODE ANALYSIS
The second portion of the FMEA body form deals with the failure mode analysis.
A typical format is shown in Figure 6.11.
Understanding Failure Mode
Failure mode (a specific loss of a function) is the inability of a component/subsystem/system/process/part to perform to design intent. In other words, it may potentially fail to perform its function(s). Design failure mode is a technical description of how the system, subsystem, or part may not adequately perform its function. Process failure mode is a technical description of how the manufacturing process may not perform its function, or the reason the part may be rejected.
Failure Mode Questions
The process of brainstorming failure modes may include the following questions:
DFMEA
Considering the conditions in which the product will be used, how can
it fail to perform its function?
How have similar products failed in the past?
PFMEA
Considering the conditions in which the process will be used, what
could possibly go wrong with the process?
How have similar processes failed in the past?
What might happen that would cause a part to be rejected?
FIGURE 6.10 Example of a function tree for a ballpoint pen. (Task function: the Super pen makes marks on varied surfaces. The user grasps the barrel and moves the pen axially while simultaneously pressing down on the tip at a vector to the 180-degree plane. Vector force function: the end of the barrel and the barrel I.D. (tip end) simultaneously apply force to the tip system housing end and sheath; the tip assembly housing transmits the vector force to the O.D. of the ball housing; the ball housing transmits the vector force to the ball, which moves up into the ball housing, creating a gap between the ball and ball housing; the ink flows through the ink tube, contacting the ball surface. Axial force function: the inside diameter of the barrel tip end transmits axial force to the tip system housing sheath O.D.; the tip system housing tip retainer I.D. transmits axial force to the ball housing; the ball housing I.D. transmits axial force to the ball; the ball transmits axial force to the marking surface, which is stationary, causing the ball to rotate; the ball rotates through the ink supply, picking up a film of ink on the ball surface; the ink is transferred from the ball surface to the marking surface; the ink remains on the marking surface (3 mm width area), drying in 3 seconds.)
Determining Potential Failure Modes
Failure modes occur when the function is not fulfilled; they fall into the following major categories. Some of these categories may not apply. As a consequence, use these as thought provokers to begin the process and then adjust them as needed (a short sketch following the examples below shows how the categories can be used as guide words):
1. Absence of function
2. Incomplete, partial, or decayed function
3. Related unwanted surprise failure mode
4. Function occurs too soon or too late
5. Excess or too much function
6. Interfacing with other components, subsystems or systems. There are four
possibilities of interfacing. They are (a) energy transfer, (b) information
transfer, (c) proximity, and (d) material compatibility.
Failure mode examples using the above categories and applied to the pen case
include:
1. Absence of function:
DFMEA: Make marks
PFMEA: Inject plastic
2. Incomplete, partial or decayed function:
DFMEA: Make marks
PFMEA: Inject plastic
3. Related unwanted surprise failure mode
DFMEA: Make marks
PFMEA: Inject plastic
4. Function occurs too soon or too late
DFMEA: Make marks
PFMEA: Inject plastic
5. Excess or too much function
DFMEA: Make marks
PFMEA: Inject plastic
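As mentioned above, a short sketch can show how these categories serve as guide words: each function is paired with each category prompt to seed the brainstorm, and the team then keeps, rewords, or discards the candidates. The functions and prompt wording below are illustrative assumptions, not finished FMEA entries.

# Minimal sketch (assumed example): pairing each function with the failure mode
# categories as guide words to seed a team brainstorm.

functions = ["make marks", "inject plastic"]
guide_words = [
    "absence of function",
    "incomplete, partial, or decayed function",
    "unwanted surprise",
    "occurs too soon or too late",
    "excess or too much function",
    "interfacing problem (energy, information, proximity, material)",
]

for function in functions:
    for guide in guide_words:
        print(f"Candidate failure mode seed: '{function}' -> {guide}")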
FIGURE 6.11 FMEA body. (Columns: potential failure mode; potential effects of failure mode; severity; class; potential causes of failure mode; occurrence; current controls; detection; risk priority number (RPN). The first step is to identify the potential failure.)
General examples of failure modes include:
Design FMEA:
No power
Failed to open
Water leaking
Partial insulation
Open circuit
Loss of air
Releases too early
No spark
Noise
Insufficient torque
Vibration
Paper jams
Does not cut
And so on
Process FMEA:
Four categories of process failures:
1. Fabrication failures
2. Assembly failures
3. Testing failures
4. Inspection failures
Typical examples of these categories are:
Warped
Too hot
RPM too slow
Rough surface
Loose part
Misaligned
Poor inspection
Hole too large
Leakage
Fracture
Fatigue
And so on
Note: At this stage, you are ready to transfer the failure modes to the FMEA form (see Figure 6.12).
FAILURE MODE EFFECTS
A failure mode effect is a description of the consequence/ramication of a system,
part, or manufacturing process failure. A typical failure mode may have several
effects depending on which customer(s) are considered. Consider the effects/consequences on all the customers, as they are applicable, as in the following FMEAs:
SFMEA
System
Other systems
Whole product
Government regulations
End user
DFMEA
Part
Higher assembly
Whole product
Government regulations
End user
PFMEA
Part
Next operation
Equipment
Government regulations
Operators
End user
Effects and Severity Rating
Effects and severity are closely related items. As the effect increases, so does the severity. In essence, two fundamental questions have to be raised and answered:
1. What will happen if this failure mode occurs?
2. How will customers react if these failures happen?
Describe as specifically as possible what the customer(s) might notice once the failure occurs.
What are the effects of the failure mode?
How severe is the effect on the customers?
The progression of function, cause, failure mode, effect, and severity can be
illustrated by the following series of questions:
In function: What is the individual task intended by design?
In failure mode: What can go wrong with this function?
FIGURE 6.12 Transferring the failure modes to the FMEA form. (The potential failure mode column now lists entries such as "does not transfer ink," "partial ink," and so on, alongside the columns for potential effects of failure mode, severity, class, potential causes of failure mode, occurrence, current controls, detection, and risk priority number (RPN).)
In cause: What is the root cause of the failure mode?
In effect: What are the consequences of this failure mode?
In severity: What is the seriousness of the effect?
The following are some examples of DFMEA and PFMEA effects:
Customer gets wet
Loss of performance
System failure
Scrap
Loss of efficiency
Rework
Reduced life
Becomes loose
Degraded performance
Hard to load in next operation
Cannot assemble
Operator injury
Violate Gov. Reg. XYZ
Noise, rattles
Damaged equipment
And so on
Special Note: Please note that the effect remains the same for both DFMEA and
PFMEA.
Severity Rating (Seriousness of the Effect)
Severity is a numerical rating (see Table 6.1 for design and Table 6.2 for process) of the impact on customers. When multiple effects exist for a given failure mode, enter the worst-case severity on the worksheet to calculate risk. (This is the accepted method for the automotive industry and for the SAE J1739 standard. In cases where severity varies depending on timing, use the worst-case scenario.)
Note: There is nothing special about these guidelines. They may be changed to
reflect the industry, the organization, the product/design, or the process. For example,
the automotive industry has its own version and one may want to review its guidelines
in the AIAG (2001). To modify these guidelines, keep in mind:
1. List the entire range of possible consequences (effects).
2. Force rank the consequences from high to low.
3. Resolve the extreme values (rating 10 and rating 1).
4. Fill in the other ratings.
5. Use consensus.
At this point the information should be transferred to the FMEA form (see Figure 6.13). The column identifying the class is the location for the placement of the special characteristic. The appropriate response is only Yes or No. A Yes in this column indicates that the characteristic is special; a No indicates that the characteristic is not special. In some industries, special characteristics are of two types: (a) critical and (b) significant. Critical refers to characteristics associated with safety and/or government regulations, and significant refers to those that affect the integrity of the product. In design, all special characteristics are potential. In process they become critical or significant depending on the numerical values of severity and occurrence combinations.
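The severity/occurrence combinations that make a characteristic critical or significant are set by each company's (or customer's) guidelines rather than by this text. The sketch below therefore uses illustrative thresholds only (severity 9-10 treated as critical; severity 5-8 with occurrence of 4 or more treated as significant), which should be replaced with your own published criteria.

# Minimal sketch (assumed example): classifying a process characteristic from
# severity and occurrence. The thresholds are illustrative assumptions, not the
# criteria of this text or of any standard; substitute your company's guidelines.

def classify(severity: int, occurrence: int) -> str:
    if severity >= 9:
        return "critical (safety and/or government regulation)"
    if 5 <= severity <= 8 and occurrence >= 4:
        return "significant (product integrity)"
    return "not special"

print(classify(severity=9, occurrence=2))   # critical
print(classify(severity=6, occurrence=5))   # significant
print(classify(severity=3, occurrence=7))   # not special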
FAILURE CAUSE AND OCCURRENCE
The analysis of the cause and occurrence is based on two questions:
1. What design or process choices did we already make that may be responsible for the occurrence of a failure?
2. How likely is the failure mode to occur because of this?
For each failure mode, the possible mechanism(s) and cause(s) of failure are
listed. This is an important element of the FMEA since it points the way toward
preventive/corrective action. It is, after all, a description of the design or process deficiency that results in the failure mode. That is why it is important to focus on the global or root cause. Root causes should be specific and in the form of a characteristic that may be controlled or corrected. Caution should be exerted not to overuse "operator error" or "equipment failure" as a root cause even though they are both tempting and make it easy to assign blame.
You must look for causes, not symptoms of the failure. Most failure modes have
more than one potential cause. An easy way to probe into the causes is to ask:
What design choices, process variables, or circumstances could result in the failure
mode(s)?
TABLE 6.1
DFMEA Severity Rating

None (rating 1): No effect noticed by customer; the failure will not have any perceptible effect on the customer.
Very minor (rating 2): Very minor effect, noticed by discriminating customers; the failure will have little perceptible effect on discriminating customers.
Minor (rating 3): Minor effect, noticed by average customers; the failure will have a minor perceptible effect on average customers.
Very low (rating 4): Very low effect, noticed by most customers; the failure will have some small perceptible effect on most customers.
Low (rating 5): Primary product function operational, however at a reduced level of performance; customer is somewhat dissatisfied.
Moderate (rating 6): Primary product function operational, however secondary functions inoperable; customer is moderately dissatisfied.
High (rating 7): Failure mode greatly affects product operation; product or portion of product is inoperable; customer is very dissatisfied.
Very high (rating 8): Primary product function is non-operational but safe; customer is very dissatisfied.
Hazard with warning (rating 9): Failure mode affects safe product operation and/or involves nonconformance with government regulation, with warning.
Hazard with no warning (rating 10): Failure mode affects safe product operation and/or involves nonconformance with government regulation, without warning.
SL3151Ch06Frame Page 246 Thursday, September 12, 2002 6:09 PM
Failure Mode and Effect Analysis (FMEA) 247
DFMEA failure causes are typically specific system, design, or material characteristics.
PFMEA failure causes are typically process parameters, equipment characteristics, or environmental or incoming material characteristics.
TABLE 6.2
PFMEA Severity Rating

None (rating 1): No effect noticed by customer; the failure will not have any effect on the customer.
Very minor (rating 2): Very minor disruption to production line; a very small portion of the product may have to be reworked; defect noticed by discriminating customers.
Minor (rating 3): Minor disruption to production line; a small portion (much less than 5%) of product may have to be reworked on-line; process up but minor annoyances.
Very low (rating 4): Very low disruption to production line; a moderate portion (less than 10%) of product may have to be reworked on-line; process up but minor annoyances.
Low (rating 5): Low disruption to production line; a moderate portion (less than 15%) of product may have to be reworked on-line; process up but minor annoyances.
Moderate (rating 6): Moderate disruption to production line; a moderate portion (more than 20%) of product may have to be scrapped; process up but some inconveniences.
High (rating 7): Major disruption to production line; a portion (more than 30%) of product may have to be scrapped; process may be stopped; customer dissatisfied.
Very high (rating 8): Major disruption to production line; close to 100% of product may have to be scrapped; process unreliable; customer very dissatisfied.
Hazard with warning (rating 9): May endanger operator or equipment; severely affects safe process operation and/or involves noncompliance with government regulation; failure will occur with warning.
Hazard with no warning (rating 10): May endanger operator or equipment; severely affects safe process operation and/or involves noncompliance with government regulation; failure occurs without warning.

Popular Ways (Techniques) to Determine Causes
Ways to determine failure causes include the following:
Brainstorm
Five whys
Fishbone diagram
Fault Tree Analysis (FTA; a model that uses a tree to show the cause-and-
effect relationship between a failure mode and the various contributing
causes. The tree illustrates the logical hierarchy branches from the failure
at the top to the root causes at the bottom.)
Classic five-step problem-solving process
a. What is the problem?
b. What can I do about it?
c. Put a star on the best plan.
d. Do the plan.
e. Did your plan work?
Kepner-Tregoe (is/is-not analysis)
Discipline GPS (see Volume II)
Experience
Knowledge of physics and other sciences
Knowledge of similar products
Experiments (when many causes are suspect or the specific cause is unknown)
Classical
Taguchi methods

FIGURE 6.13 Transferring severity and classification to the FMEA form. [FMEA form excerpt for the ballpoint pen example, showing the failure modes Does not transfer ink and Partial ink with their effects (e.g., pen does not work and customer scraps the pen; customer has to retrace), worst-case severities of 8 and 7, and N entries in the Class column.]
Occurrence Rating
The occurrence rating is an estimated number of frequencies or cumulative number
of failures (based on experience) that will occur in our design concepts for a given
cause over the intended life of the design. For example: cause of staples falling out
= soft wood. The likelihood of occurrence is a 9 if we pick balsa wood but a 2 if
we choose oak.
Just as with severity, there are standard tables for occurrence (see Table 6.3 for design and Table 6.4 for process) for each type of FMEA. The ratings on these tables are estimates based on experience or similar products or processes. Non-standard occurrence tables may also be used, based on specific characteristics. However, reliability expertise is needed to construct occurrence tables. (Typical characteristics may be historical failure frequencies, Cpk values, theoretical distributions, and reliability statistics.)
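As a rough sketch (not part of the original text), the frequency bands of Table 6.3 can be applied mechanically; the function name and the reading of each band as a lower bound are assumptions:

    def dfmea_occurrence_rating(failures_per_unit):
        """Map an estimated failure frequency over the design life to the
        DFMEA occurrence bands of Table 6.3 (1 = remote, 10 = almost inevitable)."""
        bands = [  # (lower bound of band, rating), most frequent band first
            (1 / 2, 10), (1 / 3, 9), (1 / 8, 8), (1 / 20, 7), (1 / 80, 6),
            (1 / 400, 5), (1 / 2000, 4), (1 / 15000, 3), (1 / 150000, 2),
        ]
        for lower_bound, rating in bands:
            if failures_per_unit >= lower_bound:
                return rating
        return 1  # remote: fewer than 1 in 1,500,000

    print(dfmea_occurrence_rating(1 / 10))        # 7 (repeated failures)
    print(dfmea_occurrence_rating(1 / 1000000))   # 1 (remote)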
At this point the data for causes and their ratings should be transferred to the
FMEA form (see Figure 6.14).
Current Controls and Detection Ratings
Design and process controls are the mechanisms, methods, tests, procedures, or
controls that we have in place to prevent the cause of the failure mode or detect the
failure mode or cause should it occur. (The controls currently exist.)
Design controls prevent or detect the failure mode prior to engineering release.
Process controls prevent or detect the failure mode prior to the part or assembly
leaving the area.
TABLE 6.3
DFMEA Occurrence Rating

Remote: Failure is very unlikely. Frequency less than 1 in 1,500,000 (rating 1).
Low: Relatively few failures. Frequency 1 in 150,000 (rating 2) or 1 in 15,000 (rating 3).
Moderate: Occasional failures. Frequency 1 in 2,000 (rating 4), 1 in 400 (rating 5), or 1 in 80 (rating 6).
High: Repeated failures. Frequency 1 in 20 (rating 7) or 1 in 8 (rating 8).
Very high: Failure is almost inevitable. Frequency 1 in 3 (rating 9) or greater than 1 in 2 (rating 10).
A good control prevents or detects causes or failure modes.
As early as possible (ideally before production or prototypes)
As early as possible
Using proven methods
So, the next step in the FMEA process is to:
Analyze planned controls for your system, part, or manufacturing process
Understand the effectiveness of these controls to detect causes or failure
modes
Detection Rating
Detection rating (see Table 6.5 for design and Table 6.6 for process) is a numerical rating of the probability that a given set of controls will discover a specific cause or failure mode to prevent bad parts from leaving the operation/facility or getting to the ultimate customer. Assuming that the cause of the failure did occur, assess the capabilities of the controls to find the design flaw or prevent the bad part from leaving the operation/facility. In the first case, the DFMEA is at issue. In the second case, the PFMEA is of concern.
When multiple controls exist for a given failure mode, record the best (lowest)
to calculate risk. In order to evaluate detection, there are appropriate tables for
both design and process. Just as before, however, if there is a need to alter them,
remember that the change and approval must be made by the FMEA team with
consensus.
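A small sketch of the best-control rule above (the control names and ratings are hypothetical); note that when no control exists at all, the detection tables assign the worst rating of 10:

    # Detection ratings already assigned to the controls addressing one cause
    controls = {"Life test": 4, "Design review": 7, "Prototype test # XY": 3}

    # Record the best (lowest) rating; with no control at all, default to 10
    detection = min(controls.values()) if controls else 10
    print(detection)  # 3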
At this point, the data for current controls and their ratings should be transferred
to the FMEA form (see Figure 6.15). There should be a current control for every
cause. If there is not, that is a good indication that a problem might exist.
TABLE 6.4
PFMEA Occurrence Rating

Remote: Failure is very unlikely; no failures associated with similar processes. Frequency less than 1 in 1,500,000; Cpk greater than 1.67 (rating 1).
Low: Few failures; isolated failures associated with like processes. Frequency 1 in 150,000, Cpk 1.50 (rating 2); 1 in 15,000, Cpk 1.33 (rating 3).
Moderate: Occasional failures associated with similar processes, but not in major proportions. Frequency 1 in 2,000, Cpk 1.17 (rating 4); 1 in 400, Cpk 1.00 (rating 5); 1 in 80, Cpk 0.83 (rating 6).
High: Repeated failures; similar processes have often failed. Frequency 1 in 20 (rating 7) or 1 in 8 (rating 8); Cpk 0.67.
Very high: Process failure is almost inevitable. Frequency 1 in 3, Cpk 0.51 (rating 9); greater than 1 in 2, Cpk 0.33 (rating 10).
UNDERSTANDING AND CALCULATING RISK
Without risk, there is very little progress. Risk is inevitable in any system, design,
or manufacturing process. The FMEA process aids in identifying significant risks, then helps to minimize the potential impact of risk. It does that through the risk priority number or, as it is commonly known, the RPN index. In the analysis of the RPN, make sure to look at risk patterns rather than just a high RPN. The RPN is the product of severity, occurrence, and detection, or:
Risk = RPN = S × O × D

FIGURE 6.14 Transferring causes and occurrences to the FMEA form. [FMEA form excerpt for the pen example: causes such as ball housing I.D. deformation due to manufacturing variation, ink viscosity too high, and debris build-up are listed against each failure mode, with their occurrence ratings entered next to the severity and class columns.]
Obviously, the higher the RPN, the more the concern. A good rule-of-thumb analysis to follow is the 95% rule. That means that you will address all failure modes with 95% confidence. With the worst case of S = 10, O = 10, and D = 10, the maximum RPN is 10 × 10 × 10 = 1000, and 1000 − (1000 × .95) = 50, so the magic number turns out to be 50. This number of course is only relative to what the total FMEA is all about, and it may change as the risk increases in all categories and in all causes.
Special risk priority patterns require special attention, through specific action plans that will reduce or eliminate the high risk factor. They are identified through:
1. High RPN
2. Any RPN with a severity of 9 or 10 and occurrence > 2
3. Area chart
The area chart (Figure 6.16) uses only severity and occurrence and therefore is a more conservative approach than the priority risk pattern mentioned previously.
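Pulling the pieces together, here is a minimal sketch of the RPN calculation and the risk patterns described above (the 50-point cutoff comes from the 95% rule; the sample ratings echo the pen example but are otherwise hypothetical):

    def rpn(severity, occurrence, detection):
        return severity * occurrence * detection

    def needs_attention(severity, occurrence, detection, cutoff=50):
        """Flag an item with a high RPN (95% rule: 1000 - 1000 * 0.95 = 50)
        or with severity 9-10 combined with occurrence greater than 2."""
        return (rpn(severity, occurrence, detection) > cutoff
                or (severity >= 9 and occurrence > 2))

    causes = {
        "Ball housing I.D. deform": (8, 2, 2),
        "Ink viscosity too high": (8, 9, 10),
    }
    for name, (s, o, d) in causes.items():
        print(name, rpn(s, o, d), needs_attention(s, o, d))
    # Ball housing I.D. deform 32 False
    # Ink viscosity too high 720 True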
At this stage, let us look at our FMEA project and calculate and enter the RPN (see Figure 6.17). It must be noted here that this is only one approach to evaluating risk. Another possibility is to evaluate the risk based on the degree of severity first,
in which case the engineer tries to eliminate the failure; evaluate the risk based on a combination of severity (values of 5 to 8) and occurrence (greater than 3) second, in which case the engineer tries to minimize the occurrence of the failure through a redundant system; and to evaluate the risk through the detection of the RPN third, in which case the engineer tries to control the failure before the customer receives it.

TABLE 6.5
DFMEA Detection Table

Almost certain (rating 1): Design control will almost certainly detect the potential cause of subsequent failure modes.
Very high (rating 2): Very high chance the design control will detect the potential cause of subsequent failure mode.
High (rating 3): High chance the design control will detect the potential cause of subsequent failure mode.
Moderately high (rating 4): Moderately high chance the design control will detect the potential cause of subsequent failure mode.
Moderate (rating 5): Moderate chance the design control will detect the potential cause of subsequent failure mode.
Low (rating 6): Low chance the design control will detect the potential cause of subsequent failure mode.
Very low (rating 7): Very low chance the design control will detect the potential cause of subsequent failure mode.
Remote (rating 8): Remote chance the design control will detect the potential cause of subsequent failure mode.
Very remote (rating 9): Very remote chance the design control will detect the potential cause of subsequent failure mode.
Very uncertain (rating 10): There is no design control, or the control will not or cannot detect the potential cause of subsequent failure mode.
ACTION PLANS AND RESULTS
The third portion of the FMEA form deals with the action plans and results analysis.
A typical format is shown in Figure 6.18.
The idea of this third portion of the FMEA form is to generate a strategy that
reduces severity and occurrence and makes the detection effective to reduce the total
RPN:
Reducing the severity rating (or reducing the severity of the failure mode effect)
Design or manufacturing process changes are necessary.
This approach is much more proactive than reducing the detection
rating.
Reducing the occurrence rating (or reducing the frequency of the cause)
Design or manufacturing process changes are necessary.
This approach is more proactive than reducing the detection rating.
Reducing the detection rating (or increasing the probability of detection)
Improving the detection controls is generally costly, reactive, and does not do much for quality improvement, but it does reduce risk.
Increased frequency of inspection, for example, should only be used as a last resort. It is not a proactive corrective action.

TABLE 6.6
PFMEA Detection Table

Almost certain (rating 1): Process control will almost certainly detect or prevent the potential cause of subsequent failure mode.
Very high (rating 2): Very high chance the process control will detect or prevent the cause of subsequent failure mode.
High (rating 3): High chance the process control will detect or prevent the potential cause of subsequent failure mode.
Moderately high (rating 4): Moderately high chance the process control will detect or prevent the potential cause of subsequent failure mode.
Moderate (rating 5): Moderate chance the process control will detect or prevent the potential cause of subsequent failure mode.
Low (rating 6): Low chance the process control will detect or prevent the potential cause of subsequent failure mode.
Very low (rating 7): Very low chance the process control will detect or prevent the potential cause of subsequent failure mode.
Remote (rating 8): Remote chance the process control will detect or prevent the potential cause of subsequent failure mode.
Very remote (rating 9): Very remote chance the process control will detect or prevent the potential cause of subsequent failure mode.
Very uncertain (rating 10): There is no process control, or the control will not or cannot detect the potential cause of subsequent failure mode.
FIGURE 6.15 Transferring current controls and detection to the FMEA form. [FMEA form excerpt for the pen example: current controls such as a life test, design review, prototype test # XY, and test # X are listed against each cause with their detection ratings; causes with no current control receive the worst rating of 10.]

CLASSIFICATION AND CHARACTERISTICS
Different industries have different criteria for classification. However, in all cases the following characteristics must be classified according to risk impact:
Severity 9, 10: Highest classification (critical)
These product- or process-related characteristics:
May affect compliance with government or federal regulations (EPA, OSHA, FDA, FCC, FAA, etc.)
May affect safety of the customer
Require specific actions or controls during manufacturing to ensure 100% compliance
Severity between 5 and 8 and occurrence greater than 3: Secondary classification (significant)
These product- or process-related characteristics:
Are non-critical items that are important for customer satisfaction (e.g., fit, finish, durability, appearance)
Should be identified on drawings, specifications, or process instructions to ensure acceptable levels of capability
High RPN: Secondary classification (see Table 6.7)
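A rough sketch of the design-side classification logic just listed (the function name is an assumption; the cutoffs follow the criteria above and the design rows of Table 6.7):

    def design_classification(severity, occurrence):
        """Potential special-characteristic class entered on a DFMEA."""
        if severity >= 9:
            return "YC"  # potential critical: safety or government regulation
        if 5 <= severity <= 8 and occurrence >= 4:
            return "YS"  # potential significant: customer-satisfaction items
        return ""        # blank: not a special characteristic

    print(design_classification(10, 2))  # YC
    print(design_classification(7, 5))   # YS
    print(design_classification(4, 8))   # (blank)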
Product Characteristics/Root Causes
Examples include size, form, location, orientation, or other physical properties such
as color, hardness, strength, etc.
Process Parameters/Root Causes
Examples include pressure, temperature, current, torque, speeds, feeds, voltage,
nozzle diameter, time, chemical concentrations, cleanliness of incoming part, ambi-
ent temperature, etc.
FIGURE 6.16 Area chart. [A severity (1 to 10) versus occurrence (1 to 10) grid divided into low-, medium-, and high-priority regions.]

DRIVING THE ACTION PLAN
For each recommended action, the FMEA team must:
Plan for implementation of recommendations
Make sure that recommendations are followed, demonstrate improvement,
and are completed
Implementation of action plans requires answering the classic questions
WHO (will take the lead)
WHAT (specifically is to be done)
WHERE (will the work get done)
WHY (this should be obvious)
WHEN (should the actions be done)
HOW (will we start)

FIGURE 6.17 Transferring the RPN to the FMEA form. [FMEA form excerpt for the pen example in which severity, occurrence, and detection are multiplied into RPN values such as 32, 720, 280, 140, 490, and 70.]
Additional points concerning the action plan include the following:
Accelerate implementation by getting buy-in (ownership).
It is important to draw out and address objections.
When plans address objections in a constructive way, stakeholders feel own-
ership in plans and actions. Ownership aids in successful implementation.
FIGURE 6.18 Action plans and results analysis. [The action plan and action results portion of the FMEA form, with columns for Recommended Actions and Responsibility, Target Finish Date, Actual Finish Date, Actions Taken, the revised S, O, D, and RPN values, and Remarks.]

TABLE 6.7
Special Characteristics for Both Design and Process

Design, YC: A potential critical characteristic (initiate PFMEA). Criteria: severity = 9 to 10. Control: does not apply.
Design, YS: A potential significant characteristic (initiate PFMEA). Criteria: severity = 5 to 8 and occurrence = 4 to 10. Control: does not apply.
Design, blank: Not a potential critical or significant characteristic. Criteria: severity < 5. Control: does not apply.
Process, inverted delta: A critical characteristic. Criteria: severity = 9 to 10. Control: required.
Process, SC: A significant characteristic. Criteria: severity = 5 to 8 and occurrence = 4 to 10. Control: required.
Process, HI: High impact. Criteria: severity = 5 to 8 and occurrence = 4 to 10. Control: emphasis.
Process, OS: Operator safety. Criteria: severity = 9 to 10. Control: safety sign-off.
Process, blank: Not a special characteristic. Criteria: other. Control: does not apply.
Typical questions that begin a fruitful discussion are:
Why are we?
Why not this?
What about this?
What if?
Timing and actions must be reviewed on a regular basis to:
Maintain a sense of urgency
Allow for ongoing facilitation
Ensure work is progressing
Drive team members to meet commitments
Surface new facts that may affect plans
Fill in the actions taken.
The Action Taken column should not be filled out until the actions are totally complete.
Record final outcomes in the Action Plan and Action Results sections of the FMEA form. Remember, because of the actions you have taken you should expect changes in severity, occurrence, detection, RPN, and new characteristic designations. Of course, these changes may be individual or in combination. The form will look like Figure 6.19.
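As a small illustration (hypothetical ratings in the spirit of the pen example), the action results columns simply re-enter the revised ratings and recompute the RPN once the action is totally complete:

    # Ratings for one cause before and after the recommended action was completed
    before = {"S": 8, "O": 9, "D": 10}   # original line item, RPN = 720
    after = {"S": 8, "O": 1, "D": 10}    # occurrence reduced by a design change

    rpn_before = before["S"] * before["O"] * before["D"]
    rpn_after = after["S"] * after["O"] * after["D"]
    print(rpn_before, rpn_after)  # 720 80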
LINKAGES AMONG DESIGN AND PROCESS FMEAS
AND CONTROL PLAN
FMEAs are not islands unto themselves. They have continuity, and the information must be flowing throughout the design and process FMEAs as well as to the control plan. A typical linkage is shown in Figure 6.20.
In addition to the control plan, the FMEA is also linked with robustness. To
appreciate these linkages in FMEA, we must recall that design for six sigma (DFSS)
must be a robust process. In fact, to see the linkages of this robustness we may begin
with a P diagram (see Volume V) and identify its components. It turns out that the
robustness in the FMEA usage is to make sure that the part, subsystem, or system
is going to perform its intended function, in spite of problems in both manufacturing
and environment. Of particular interest are the error states, control factors, and noise
factors. Error states may help in identifying the failures, noise factors may help us
in identifying causes, and control factors may help us in identifying the recommen-
dations. The signal and response become the functions or the starting point of the
FMEA.
The linkages then help generate the inputs and outputs of the FMEA. Typical
inputs are:
System (concept) inputs
P diagram
Boundary diagram
Interface matrix
Potential design verification tests
Surrogate data for reliability and robustness considerations
Corporate requirements
Benchmarking results
Customer functionality in terms of engineering specifications
Regulatory requirements review
Design inputs
P diagram
Boundary diagram
Interface matrix
Customer functionality in terms of engineering specifications
Regulatory requirements review
Process inputs
P diagram
Process flow diagram
Special characteristics from the DFMEA
Process characteristics
Regulatory requirements review
Machinery inputs
P diagram
Boundary diagram
Interface matrix
Customer functionality in terms of engineering specifications
Regulatory requirements review

FIGURE 6.19 Transferring action plans and action results on the FMEA form. [FMEA form excerpt for the pen example: recommended actions such as a Taguchi DOE to optimize ink viscosity and development of an accelerated thermal/vibration test, with responsibilities, target and actual finish dates, actions taken, and the revised severity, occurrence, detection, and RPN values.]
FIGURE 6.20 FMEA linkages. [Information flows from Quality Function Deployment and the system design specifications into the design FMEA (function, failure, effect, severity, class, cause, controls, recommended action), then into the process FMEA, the design verification plan and report, the dynamic control plan, the part drawing with inverted delta and special characteristics, and the sign-off report.]

GETTING THE MOST FROM FMEA
Common team problems that may make it difficult to get the most from FMEA include:
Poor team composition (not cross-functional or multidisciplinary)
Low expertise in FMEA
Not multi-level
Low experience/expertise in product
One-person FMEA
Lack of management support
Not enough time
Too detailed, could go on forever
Arguments between team members (Opinions should be based on facts
and data.)
Lack of team enthusiasm/motivation
Difficulty in getting team to start and stay with the process
Proactive vs. reactive (a before-the-event, not an after-the-fact, exercise)
Doing it for the wrong reason
Common procedural problems include:
Confusion about poorly defined or incomplete functions, failure modes, effects, or causes
Subgroup discussion
Using symptoms or superficial causes instead of root causes
Confusion about ratings as estimates and not absolutes (It will take time
to be consistent.)
Confusion about the relationship between causes, failure modes, and
effects
Using customer dissatised as failure effect
Shifting design concerns to manufacturing and vice-versa
Doing FMEAs by hand
Dependent on the engineer's printing skills
RPNs or criticality cannot be ranked easily
Hard to update
Much space taken up by complicated FMEAs
Time consuming
Resistance to being the recorder when done manually
Inefficient means of storing and retrieving info
Note: With FMEA software these are all eliminated.
Working non-systematically on the form (It is suggested that the failure
analysis should progress from left to right, with each column being com-
pleted before the next is begun.)
Resistance of individuals to taking responsibility for recommended
actions
Doing a reactive FMEA as opposed to a proactive FMEA (FMEAs are
best applied as a problem prevention tool, not problem solving tool,
although one may use them for both. However, the value of a reactive
FMEA is much less.)
Not having robust FMEA terminology (A robust communication pro-
cess is one that delivers its function [imparting knowledge and under-
standing] without being affected by noise factors [varying degrees
of training]. Simply stated, the process should be as clear as possible
with minimum possibility for misunderstanding.)
Institutionalizing FMEA in your company is challenging, and its success is
largely dependent upon the culture in the organization as well as the reason it is
being utilized. Below are some main considerations:
Selecting pilot projects (Start small and build successes.)
Identifying team participants
Developing and promoting FMEA successes
Developing templates (databases of failure modes, functions, controls,
etc.)
Addressing training needs
Figure 6.21 shows the learning stages (the direction of the arrows indicates the
increasing level) in a company that is developing maturity in the use of FMEA.
SYSTEM OR CONCEPT FMEA
A concept FMEA is used to analyze concepts at very early stages with new ideas.
Concept FMEAs can be design, process, or even machinery oriented. However, in
practical terms, most of them are done on a system or subsystem level.
The process of the system or concept FMEA is practically the same as that of
a design FMEA. In fact, the evaluation guidelines are exactly the same as those of
DFMEA. The difference is that in the system FMEA a great effort is made to identify
gross failures with high severities. If these problems cannot be overcome, then the project most likely will be killed. If the failures can be fixed through reasonable design changes, then the project moves to a second stage and the design FMEA
takes over.
DESIGN FAILURE MODE AND EFFECTS ANALYSIS
(DFMEA)
The Design Failure Mode and Effects Analysis (Design FMEA) is a method for iden-
tifying potential or known failure modes and providing follow-up and corrective actions.
FIGURE 6.21 The learning stages. [Stages of learning, in increasing order: unconscious incompetence, conscious incompetence, conscious competence, unconscious competence. Corresponding stages of FMEA maturity, in increasing order: never heard of FMEA; we talked about it; customer made us do it; some small successes; proper and regular use.]
OBJECTIVE
The design FMEA is a disciplined analysis of the part design with the intent to
identify and correct any known or potential failure modes before the manufacturing
stage begins. Once these failure modes are identied and the cause and effects are
determined, each failure mode is then systematically ranked so that the most severe
failure modes receive priority attention. The completion of the design FMEA is the
responsibility of the individual product design engineer. This individual engineer is
the most knowledgeable about the product design and can best anticipate the failure
modes and their corrective actions.
TIMING
The design FMEA is initiated during the early planning stages of the design and is
continually updated as the program develops. The design FMEA must be totally completed prior to the first production run.
REQUIREMENTS
The requirements for a design FMEA include:
1. Forming a team
2. Completing the design FMEA form
3. FMEA risk ranking guidelines
DISCUSSION
The effectiveness of an FMEA is dependent on certain key steps in the analysis
process, as follows:
Forming the Appropriate Team
A typical team for conducting a design FMEA is the following:
Design engineer(s)
Test/development engineer
Reliability engineer
Materials engineer
Field service engineer
Manufacturing/process engineer
Customer
A design and a manufacturing engineer are required to be team members. Others
may participate as needed or as the project calls for their knowledge or experience.
The leader for the design FMEA is typically the design engineer.
Describing the Function of the Design/Product
There are three types of functions:
1. Task functions: These functions describe the single most important reason
for the existence of the system/product. (Vacuum cleaner? Windshield
wiper? Ballpoint pen?)
2. Supporting functions: These are the sub functions that are needed in
order for the task function to be performed.
3. Enhancing functions: These are functions that enhance the product and
improve customer satisfaction but are not needed to perform the task
function.
After completing the function tree or a block diagram, transfer the functions to the FMEA worksheet or some other form of a worksheet to retain them. Add the extent of each function (range, target, specification, etc.) to test the measurability of the function.
Describing the Failure Mode Anticipated
The team must pose the question to itself, How could this part, system or design
fail? Could it break, deform, wear, corrode, bind, leak, short, open, etc.? The team
is trying to anticipate how the design being considered could possibly fail; at this
point, it should not make the judgment as to whether it will fail but should concentrate
on how it could fail.
The purpose of a design FMEA (DFMEA) is to analyze and evaluate a design on
its ability to perform its functions. Therefore, the initial assumption is that parts are
manufactured and assembled according to plan and in compliance with specifications.
Once failure modes are determined under this assumption, then determine
other failure modes due to purchased materials, components, manufac-
turing processes, and services.
Describing the Effect of the Failure
The team must describe the effect of the failure in terms of customer reaction or in
other words, e.g., What does the customer experience as a result of the failure mode
of a shorted wire? Notice the specificity. This is very important, because this will
establish the basis for exploratory analysis of the root cause of the function. Would
the shorted wire cause the fuel gage to be inoperative or would it cause the dome
light to remain on?
Describing the Cause of the Failure
The team anticipates the cause of the failure. Would poor wire insulation cause the
short? Would a sharp sheet metal edge cut through the insulation and cause the
short? The team is analyzing what conditions can bring about the failure mode. The
more specific the responses are, the better the outcome of the FMEA.
The purpose of a design FMEA (DFMEA) is to analyze and/or evaluate a design
on its ability to perform its functions (part characteristics). Therefore, the initial
assumption in determining causes is that parts are made and assembled according
to plan and in compliance with specifications, including purchased materials, components, and services. Then and only then, determine causes due to purchased
materials, components, and services.
Some cause examples include:
Brittle material
Weak fastener
Corrosion
Low hardness
Too small of a gap
Wrong bend angle
Stress concentration
Ribs too thin
Wrong material selection
Poor stitching design
High G forces
Part interference
Tolerance stack-up
Vibration
Oxidation
And so on
Estimating the Frequency of Occurrence of Failure
The team must estimate the probability that the given failure is going to occur. The
team is assessing the likelihood of occurrence, based on its knowledge of the system,
using an evaluation scale of 1 to 10. A 1 would indicate a low probability of
occurrence whereas a 10 would indicate a near certainty of occurrence.
Estimating the Severity of the Failure
In estimating the severity of the failure, the team is weighing the consequence
of the failure. The team uses the same 1 to 10 evaluation scale. A 1 would indicate
a minor nuisance, while a 10 would indicate a severe consequence such as loss of
brakes or stuck at wide open throttle or loss of life.
Identifying System and Design Controls
Generally, these controls consist of tests and analyses that detect failure modes or
causes during early planning and system design activities. Good system controls
detect faults or weaknesses in system designs. Design controls consist of tests and
analyses that detect failure causes or failure modes during design, verification, and
validation activities. Good design controls detect faults or weaknesses in component
designs.
Special notes:
Just because there is a current control in place that does not mean that it
is effective. Make sure the team reviews all the current controls, especially
those that deal with inspection or alarms.
To be effective (proactive), system controls must be applied throughout
the pre-prototype phase of the Advanced Product Quality Planning
(APQP) process.
To be effective (proactive), design controls must be applied throughout
the pre-launch phase of the APQP process.
To be effective (proactive), process controls should be applied during the
post-pilot build phase of APQP and continue during the production phase.
If they are applied only after production begins, they serve as reactive
plans and become very inefficient.
Examples of system and design controls include:
Engineering analysis
Computer simulation
Mathematical modeling/CAE/FEA
Design reviews, verification, validation
Historical data
Tolerance stack studies
Engineering reviews, etc.
System/component level physical testing
Breadboard, analog tests
Alpha and beta tests
Prototype, fleet, accelerated tests
Component testing (thermal, shock, life, etc.)
Life/durability/lab testing
Full scale system testing (thermal, shock, etc.)
Taguchi methods
Design reviews
Estimating the Detection of the Failure
The team is estimating the probability that a potential failure will be detected before
it reaches the customer. Again, the 1 to 10 evaluation scale is used. A 1 would
indicate a very high probability that a failure would be detected before reaching the
customer. A 10 would indicate a very low probability that the failure would be
detected, and therefore, be experienced by the customer. For instance, an electrical
connection left open preventing engine start might be assigned a detection number
of 1. A loose connection causing intermittent no-start might be assigned a detection
number of 6, and a connection that corrodes after time causing no start after a period
of time might be assigned a detection number of 10.
Detection is a function of the current controls. The better the controls, the more
effective the detection. It is very important to recognize that inspection is not a very
effective control because it is a reactive task.
Calculating the Risk Priority Number
The product of the estimates of occurrence, severity, and detection forms a risk
priority number (RPN). This RPN then provides a relative priority of the failure
mode. The higher the number, the more serious is the mode of failure considered.
From the risk priority numbers, a critical items summary can be developed to
highlight the top priority areas where actions must be directed.
Recommending Corrective Action
The basic purpose of an FMEA is to highlight the potential failure modes so that
the responsible engineer can address them after this identification phase. It is imperative that the team provide sound corrective actions or provide impetus for others
to take sound corrective actions. The follow-up aspect is critical to the success of
this analytical tool. Responsible parties and timing for completion should be desig-
nated in all corrective actions.
Strategies for Lowering Risk: (System/Design) High Severity or Occurrence
To reduce risk, you may change the product design to:
Eliminate the failure mode cause or decouple the cause and effect
Eliminate or reduce the severity of the effect
Make the cause less likely or impossible to occur
Eliminate function or eliminate part (functional analysis)
Some tools to consider:
Quality Function Deployment (QFD)
Fault Tree Analysis (FTA)
Benchmarking
Brainstorming
TRIZ, etc.
Evaluate ideas using Pugh concept selection. Some specific examples:
Change material, increase strength, decrease stress
Add redundancy
Constrain usage (exclude features)
Develop fail safe designs, early warning system
Strategies for Lowering Risk: (System/Design) High Detection Rating
Change the evaluation/verification/tests to:
Make failure mode easier to perceive
Detect causes prior to failure
Some tools to consider:
Benchmarking
Brainstorming
Process control (automatic corrective devices)
TRIZ, etc.
Evaluate ideas using Pugh concept selection. Some specific examples:
Change testing and evaluation procedures
Increase failure feedback or warning systems
Increase sampling in testing or instrumentation
Increase redundancy in testing
PROCESS FAILURE MODE AND
EFFECTS ANALYSIS (FMEA)
The Process Failure Mode and Effects Analysis (process FMEA) is a method for
identifying potential or known processing failure modes and providing problem
follow-up and corrective actions.
OBJECTIVE
The process FMEA is a disciplined analysis of the manufacturing process with the
intent to identify and correct any known or potential failure modes before the first production run occurs. Once these failure modes are identified and the cause and
effects are determined, each failure mode is then systematically ranked so that the
most severe failure modes receive priority attention. The completion of the process
FMEA is the responsibility of the individual product process engineer. This individ-
ual process engineer is the most knowledgeable about the process structure and can
best anticipate the failure modes and their effects and address the corrective actions.
TIMING
The process FMEA is initiated during the early planning stages of the process before
machines, tooling, facilities, etc., are purchased. The process FMEA is continually
updated as the process becomes more clearly defined. The process FMEA must be totally completed prior to the first production run.
REQUIREMENTS
The requirements for a process FMEA are as follows:
1. Form team
2. Complete the process FMEA form
3. FMEA risk ranking guidelines
DISCUSSION
The effectiveness of an FMEA on a process is dependent on certain key steps in the
analysis, including the following:
Forming the Team
A typical team for the process/assembly FMEA is the following:
Design engineer
Manufacturing or process engineer
Quality engineer
Reliability engineer
Tooling engineer
Responsible operators from all shifts
Supplier
Customer
A design engineer, a manufacturing engineer, and representative operators are
required to be team members. Others may participate as needed or as the project
calls for their knowledge or experience. The leader for the process FMEA is typically
the process or manufacturing engineer.
Describing the Process Function
The team must identify the process or machine and describe its function. The team
members should ask of themselves, What is the purpose of this operation? State
concisely what should be accomplished as a result of the process being performed.
Typically, there are three areas of concern. They are:
1. Creating/constructing functions: These are the functions that add value to
the product. Examples include cutting, forming, painting, drying, etc.
2. Improving functions: These are the functions that are needed in order to
improve the results of the creating function. Examples include deburring,
sanding, cleaning, etc.
3. Measurement functions: These are functions that measure the success of
the other functions. Examples include SPC, gauging, inspections, etc.
Manufacturing Process Functions
Just as products have functions, manufacturing processes also have functions. The
goal is to concisely list the function(s) for each process operation. The first step in improving any process is to make the current process visible by developing a process flow diagram (a sequential flow of operations by people and/or equipment). This helps the team understand, agree, and define the scope. Three important questions
exist for any existing process:
1. What do you think is happening?
2. What is actually happening?
3. What should be happening?
Special reminder for manufacturing process functions: Remember, if the process
flow diagram is too extensive for a timely FMEA, a risk assessment may be done
on each process operation to narrow the scope.
The PFMEA Function Questions
Each manufacturing step typically has one or more functions. Determine what
functions are associated with each manufacturing process step and then ask:
1. What does the process step do to the part?
2. What are you doing to the part/assembly?
3. What is the goal, purpose, or objective of this process step?
For example, consider the pen assembly process (see Figure 6.22), which involves
the following steps:
1. Inject ink into ink tube (0.835 cc)
2. Insert ink tube into tip assembly housing (12 mm)
3. Insert tip assembly into tip assembly housing (full depth until stop)
4. Insert tip assembly housing into barrel (full depth until stop)
5. Insert end cap into barrel (full depth until stop)
6. Insert barrel into cap (full depth until stop)
7. Move to dock (to dock within 8 seconds)
8. Package and ship (12 pens per box)
Note: At the end of this function analysis you are ready to transfer the informa-
tion to the FMEA form.
Remember that another way to reduce the complexity or scope of the FMEA is
to prioritize the list of functions and then take only the ones that the team collectively
agrees are the biggest concerns.
Describing the Failure Mode Anticipated
The team must pose the question to itself, How could this process fail to complete
its intended function? Could the resulting workpiece be oversize, undersize, rough,
eccentric, misassembled, deformed, cracked, open, shorted, leaking, porous, dam-
aged, omitted, misaligned, out of balance, etc.? The team members are trying to
anticipate how the workpiece might fail to meet engineering requirements; at this
point in their analysis they should stress how it could fail and not whether it will fail.
The purpose of a process FMEA (PFMEA) is to analyze and evaluate a process
on its ability to perform its functions. Therefore, the initial assumptions are:
1. The design intent meets all customer requirements.
2. Purchased materials and components comply with specifications.
Once failure modes are determined under these assumptions, then determine
other failure modes due to:
1. Design flaws that cause or lead to process problems
2. Problems with purchased materials, components, or services
FIGURE 6.22 Pen assembly process. [Exploded view of the pen (tip assembly, tip assembly housing, ink tube, ink, end cap, barrel, and cap) alongside the assembly steps listed above, from injecting ink into the ink tube through packaging and shipping.]

Describing the Effect(s) of the Failure
The team must describe the effect of the failure on the component or assembly. What will happen as a result of the failure mode described? Will the component or assembly be inoperative, intermittently operative, always on, noisy, inefficient, surging, not durable, inaccurate, etc.? After considering the failure mode, the engineer
determines how this will manifest itself in terms of the component or assembly
function. The open circuit causes an inoperative gage. The rough surface will cause
excessive bushing wear. The scratched surface will cause noise. The porous casting
will cause external leaks. The cold weld will cause reduced strength, etc. In some
cases the process engineer (the leader) must interface with the product design
engineer to correctly describe the effect(s) of a potential process failure on the
component or total assembly.
Describing the Cause(s) of the Failure
The engineer anticipates the cause of the failure. The engineer is describing what
conditions can bring about the failure mode. Locators are not flat and parallel. The handling system causes scratches on a shaft. Inadequate venting and gaging can cause misruns, porosity, and leaks. Inefficient die cooling causes die hot spots.
Undersize condition can be caused by heat treat shrinkage, etc.
The purpose of a process FMEA (PFMEA) is to analyze or evaluate a process
on its ability to perform its functions (part characteristics). Therefore, the initial
assumptions in determining causes are:
The design intent meets all customer requirements.
Purchased materials, components, and services comply with specifications.
Then and only then, determine causes due to:
Design flaws that cause or lead to process problems
Problems with purchased materials, components, or services
Typical causes associated with process FMEA include:
Fatigue
Poor surface preparation
Improper installation
Low torque
Improper maintenance
Inadequate clamping
Misuse
High RPM
Abuse
Inadequate venting
Unclear instructions
Tool wear
Component interactions
Overheating
And so on
Estimating the Frequency of Occurrence of Failure
The team must estimate the probability that the given failure mode will occur. This
team is assessing the likelihood of occurrence, based on their knowledge of the
process, using an evaluation scale of 1 to 10. A 1 would indicate a low probability
of occurrence, whereas a 10 would indicate a near certainty of occurrence.
Estimating the Severity of the Failure
In estimating the severity of the failure, the team is weighing the consequence (effect)
of the failure. The team uses the same 1 to 10 evaluation scale. A 1 would indicate
a minor nuisance, while a 10 would indicate a severe consequence such as motor
inoperative, horn does not blow, engine seizes, no drive, etc.
Identifying Manufacturing Process Controls
Manufacturing process controls consist of tests and analyses that detect causes or
failure modes during process planning or production. Manufacturing process controls
can occur at the specic operation in question or at a subsequent operation. There
are three types of process controls, those that:
1. Prevent the cause from happening
2. Detect causes then lead to corrective actions
3. Detect failure modes then lead to corrective actions
Manufacturing process controls should be based on process dominance factors.
Dominance factors are process elements that generate signicant process variation.
Dominance factors are the predominant factors that contribute to problems in a
process. Most processes have one or two dominant sources of variation. Depending
on the source, there are tools that may be used to track these as well as monitor
them. Table 6.8 gives a cross reference of the dominance factors and the tools that
may be used for tracking them. The following list provides some very common
dominance factors:
Setup
Machine
Operator
Component or material
Tooling
Preventive maintenance
Fixture/pallet/work holding
Environment
Special note: Controls should target the dominant sources of variation.
Manufacturing process control examples include:
Statistical Process Control (SPC)
X-bar/R control charts (variable data)
Individual X-moving range charts (variable data)
p; n; u; c charts (attribute data)
Non-statistical control
Check sheets, checklists, setup procedures, operational definitions/
instruction sheets
Preventive maintenance
Tool usage logs/change programs (PM)
Mistake proofing/error proofing/Poka Yoke
Training and experience
Automated inspection
Visual inspection
It is very important to recognize that inspection is not a very effective control
because it is a reactive task.
TABLE 6.8
Manufacturing Process Control Matrix

Setup: attribute data - check sheet, checklist; variable data - X-bar/R chart, X-MR chart.
Machine: attribute data - p or c chart, check sheet, run chart; variable data - X-bar/R chart, X-MR chart.
Operator: attribute data - check sheet, run chart; variable data - X-bar/R chart, X-MR chart.
Component/material: attribute data - check sheet, supplier information; variable data - check sheet, supplier information.
Tool: attribute data - tool logs, check sheet, p or c chart; variable data - tool logs, capability study, X-MR chart.
Preventive maintenance: attribute data - time to failure chart, supplier information; variable data - time to failure chart, supplier information, X-MR chart.
Fixture/pallet/work holding: attribute data - time to failure chart, check sheet, p or c chart; variable data - time to failure chart, X-bar/R chart, X-MR chart.
Environment: attribute data - check sheet; variable data - run chart, X-MR chart.

Estimating the Detection of the Failure
The detection is directly related to the controls available in the process. So the better the controls, the better the detection. The team in essence is estimating the probability
that a potential failure will be detected before it reaches the customer. The team
members use the 1 to 10 evaluation scale. A 1 would indicate a very high probability
that a failure would be detected before reaching the customer. A 10 would indicate
a very low probability that the failure would be detected, and therefore, be experi-
enced by the customer. For instance, a casting with a large hole would be readily
detected and would be assessed as a 1. A casting with a small hole causing leakage
between two channels only after prolonged usage would be assigned a 10. The team
is assessing the chances of finding a defect, given that the defect exists.
Calculating the Risk Priority Number
The product of the estimates of occurrence, severity, and detection forms a risk
priority number (RPN). This RPN then provides a relative priority of the failure
mode. The higher the number, the more serious is the mode of failure considered.
From the risk priority numbers, a critical items summary can be developed to
highlight the top priority areas where actions must be directed.
Recommending Corrective Action
The basic purpose of an FMEA is to highlight the potential failure modes so that
the engineer can address them after this identification phase. It is imperative that
the engineer provide sound corrective actions or provide impetus for others to take
sound corrective actions. The follow-up aspect is critical to the success of this
analytical tool. Responsible parties and timing for completion should be designated
in all corrective actions.
Strategies for Lowering Risk: (Manufacturing) High Severity or Occurrence
Change the product or process design to:
Eliminate the failure cause or decouple the cause and effect
Eliminate or reduce the severity of the effect (recommend changes in
design)
Some tools to consider:
Benchmarking
Brainstorming
Mistake proofing
TRIZ, etc.
Evaluate ideas using Pugh concept selection. Some specific examples:
Developing a robust design (insensitive to manufacturing variations)
Changing process parameters (time, temperature, etc.)
Increasing redundancy, adding process steps
Altering process inputs (materials, components, consumables)
Using mistake proofing (Poka Yoke), reducing handling
Strategies for Lowering Risk: (Manufacturing) High Detection Rating
Change the process controls to:
Make failure mode easier to perceive
Detect causes prior to failure mode
Some tools to consider:
Benchmarking
Brainstorming, etc.
Evaluate ideas using Pugh concept selection. Some specific examples:
Change testing and inspection procedures/equipment.
Improve failure feedback or warning systems.
Add sensors/feedback or feed forward systems.
Increase sampling and/or redundancy in testing.
Alter decision rules for better capture of causes and failures (i.e., more
sophisticated tests).
At this stage, now you are ready to enter a brief description of the recommended
actions, including the department and individual responsible for implementation, as
well as both the target and finish dates, on the FMEA form. If the risk is low and no action is required, write no action needed.
For each entry that has a designated characteristic in the class[ification] column, review the issues that impact cause/occurrence, detection/control, or failure mode.
Generate recommended actions to reduce risk. Special RPN patterns suggest that certain
characteristics/root causes are important risk factors that need special attention.
Guidelines for process control system:
1. Select the process.
2. Conduct the FMEA on the process.
3. Conduct gage system analysis.
4. Conduct process potential study.
5. Develop control plan.
6. Train operators in control methods.
7. Implement control plan.
8. Determine long-term process capability.
9. Review the system for continual improvement.
10. Develop audit system.
11. Institute improvement actions.
After FMEA:
1. Review the FMEA.
2. Highlight the high-risk areas based on the RPN.
3. Identify the critical and major characteristics based on your classification
criteria.
4. Ensure that a control plan exists and is being followed.
5. Conduct capability studies.
6. Work on processes that have a Cpk less than or equal to 1.33.
7. Work on processes that have a Cpk greater than 1.33 to reduce variation and reach a Cpk of greater than or equal to 2.0.
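A minimal sketch of the capability check behind steps 6 and 7, assuming a roughly normal characteristic with two-sided specification limits (the helper name and sample numbers are assumptions); Cpk = min(USL - mean, mean - LSL) / (3 * sigma):

    def cpk(mean, sigma, lsl, usl):
        """Process capability index for a two-sided specification."""
        return min(usl - mean, mean - lsl) / (3 * sigma)

    value = cpk(mean=10.02, sigma=0.05, lsl=9.85, usl=10.15)
    print(round(value, 2))  # 0.87
    print("work on this process" if value <= 1.33
          else "reduce variation further, toward Cpk >= 2.0")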
MACHINERY FMEA (MFMEA)
A machinery FMEA is a systematic approach that applies the traditional tabular
method to aid the thought process used by simultaneous engineering teams to identify
the machine's potential failure modes, potential effects, and potential causes of the
potential failure modes and to develop corrective action plans that will remove or
reduce the impact of the potential failure modes. Generally, the delivery of an MFMEA is the responsibility of the supplier, who generates a functional MFMEA for system and subsystem levels. This is in contrast to a DFMEA, where the responsibility is still on the supplier but the focus is on items such as transfer mechanisms, spindles, switches, and cylinders, exclusive of assembly-level equipment.
A typical MFMEA follows a hierarchical model in that it divides the machine
into subsystems, assemblies, and lowest replaceable units. For example:
Level 1: System level generic machine
Level 2: Subsystem level electrical, mechanical, controls
Level 3: Assembly level fixtures/tools, material handling, drives
Level 4: Component level
And so on
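For record keeping, the same hierarchy can be held in a simple nested structure so that failure modes are later attached at the lowest replaceable unit. The sketch below is only an illustration; the subsystem, assembly, and component names are assumed, not taken from any particular machine.

    # Illustrative MFMEA hierarchy: system -> subsystems -> assemblies -> components
    machine = {
        "electrical": {"drives": ["motor", "encoder"]},
        "mechanical": {
            "fixtures/tools": ["clamp", "locating pin"],
            "material handling": ["conveyor", "gripper"],
        },
        "controls": {"operator panel": ["e-stop switch", "display"]},
    }

    # List every lowest replaceable unit so each one receives its own FMEA entries.
    for subsystem, assemblies in machine.items():
        for assembly, components in assemblies.items():
            for component in components:
                print(f"{subsystem} > {assembly} > {component}")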
IDENTIFY THE SCOPE OF THE MFMEA
Use the boundary diagram. Once the diagram has been completed, you can focus
the MFMEA on the low MTBF and reliability values.
IDENTIFY THE FUNCTION
Define the function in terms of an active verb and a noun. Use a functional diagram
or the P-diagram to find the ideal function. Always focus on the intent of the system,
subsystem, or component under investigation.
FAILURE MODE
A failure is an event when the equipment/machinery is not capable of producing
parts at specific conditions when scheduled or is not capable of producing parts or
performing scheduled operations to specifications. Machinery failure modes can
occur in three ways:
Component defect (hard failure)
Failure observation (potential)
Abnormality of performance (constitutes the equipment as failed)
POTENTIAL EFFECTS
The consequence of a failure mode on the subsystem is described in terms of safety
and the big seven losses. (The big seven losses may be identied through warranty
or historical data.)
Describe the potential effects in terms of downtime, scrap, and safety issues. If
a functional approach is used, then list the causes first before developing the effects
listing. Associated with the potential effects is the severity, which is a rating corre-
sponding to the seriousness of the effect of a potential machinery failure mode.
Typical descriptions are:
Downtime
Breakdowns: Losses that are a result of a functional loss or function
reduction on a piece of machinery requiring maintenance intervention.
Setup and adjustment: Losses that are a result of set procedures. Adjust-
ments include the amount of time production is stopped to adjust
process or machine to avoid defect and yield losses, requiring operator
or job setter intervention.
Startup losses: Losses that occur during the early stages of production
after extended shutdowns (weekends, holidays, or between shifts),
resulting in decreased yield or increased scrap and defects.
Idling and minor stoppage: Losses that are a result of minor interrup-
tions in the process flow, such as a process part jammed in a chute or
a limit switch sticking, etc., requiring only operator or job setter inter-
vention. Idling is a result of process flow blockage (downstream of the
focus operation) or starvation (upstream of the focus operation). Idling
can only be resolved by looking at the entire line/system.
Reduced cycle: Losses that are a result of differences between the ideal
cycle time of a piece of machinery and its actual cycle time.
Scrap
Defective parts: Losses that are a result of process part quality defects
resulting in rework, repair, or scrap.
Tooling: Losses that are a result of tooling failures/breakage or dete-
rioration/wear (e.g., cutting tools, fixtures, welding tips, punches, etc.).
Safety
Safety considerations: Immediate life or limb threatening hazard or
minor hazard.
SEVERITY RATING
Severity comprises three components:
Safety of the machinery operator (primary concern)
Product scrap
Machinery downtime
A rating should be established for each effect listed. Rate the most serious effect.
Begin the analysis with the function of the subsystem that will affect safety, gov-
ernment regulations, and downtime of the equipment. A very important point here
is the fact that a reduction in severity rating may be accomplished only through a
design change. A typical rating is shown in Table 6.9.
It should be noted that these guidelines may be modified to reflect specific
situations. Also, the basis for the criteria may be changed to reflect the specificity
of the machine and its real world usage.
CLASSIFICATION
The classification column is not typically used in the MFMEA process but should
be addressed if related to safety or noncompliance with government regulations.
Address the failure modes with a severity rating of 9 or 10. Failure modes that affect
worker safety will require a design change. Enter "OS" in the class column. OS
(operator safety) means that this potential effect of failure is critical and needs to
be addressed by the equipment supplier. Other notations can be used but should be
approved by the equipment user.
POTENTIAL CAUSES
The potential causes should be identified as design deficiencies. These could translate as:
Design variations, design margins, environmental, or defective components
Variation during the build/install phases of the equipment that can be
corrected or controlled
Identify first-level causes that will cause the failure mode. Data for the devel-
opment of the potential causes of failure can be obtained from:
Surrogate MFMEA
Failure logs
Interface matrix (focusing on physical proximity, energy transfer, material,
information transfer)
Warranty data
Concern reports (things gone wrong, things gone right)
Test reports
Field service reports
TABLE 6.9
Machinery Guidelines for Severity, Occurrence, and Detection
(Columns: Effect; Criteria: Severity; Rank; Probability of Failure; Criteria for Occurrence; Rank; Alternate Criteria for Occurrence; Detection; Criteria for Detection; Rank. Severity, occurrence, and detection entries on the same row share the same rank.)

Rank 10
Effect: Hazardous without warning. Severity criteria: Very high severity: affects operator, plant, or maintenance personnel safety and/or effects noncompliance with government regulations without warning.
Occurrence: Failure occurs every hour; R(t) < 1 or some MTBF; alternate criterion: 1 in 1.
Detection: Very low. Present design controls cannot detect potential cause, or no design control available.

Rank 9
Effect: Hazardous with warning. Severity criteria: High severity: affects operator, plant, or maintenance personnel safety and/or effects noncompliance with government regulations with warning.
Occurrence: Failure occurs every shift; R(t) = 5%; alternate criterion: 1 in 8.
Detection: Team's discretion depending on machine and situation.

Rank 8
Effect: Very high. Severity criteria: Downtime of 8+ hours or the production of defective parts for over 2 hours.
Occurrence: Failure occurs every day; R(t) = 20%; alternate criterion: 1 in 24.
Detection: Team's discretion depending on machine and situation.

Rank 7
Effect: High. Severity criteria: Downtime of 2-4 hours or the production of defective parts for up to 2 hours.
Occurrence: Failure occurs every week; R(t) = 37%; alternate criterion: 1 in 80.
Detection: Low. Machinery control will isolate the cause and failure mode after the failure has occurred, but will not prevent the failure from occurring.

Rank 6
Effect: Moderate. Severity criteria: Downtime of 60-120 min or the production of defective parts for up to 60 min.
Occurrence: Failure occurs every month; R(t) = 60%; alternate criterion: 1 in 350.
Detection: Team's discretion depending on machine and situation.

Rank 5
Effect: Low. Severity criteria: Downtime of 30-60 min with no production of defective parts, or the production of defective parts for up to 30 min.
Occurrence: Failure occurs every 3 months; R(t) = 78%; alternate criterion: 1 in 1000.
Detection: Medium. Machinery controls will provide an indicator of imminent failure.

Rank 4
Effect: Very low. Severity criteria: Downtime of 15-30 min with no production of defective parts.
Occurrence: Failure occurs every 6 months; R(t) = 85%; alternate criterion: 1 in 2500.
Detection: Team's discretion depending on machine and situation.

Rank 3
Effect: Minor. Severity criteria: Downtime up to 15 min with no production of defective parts.
Occurrence: Failure occurs every year; R(t) = 90%; alternate criterion: 1 in 5000.
Detection: High. Machinery controls will prevent an imminent failure and isolate the cause.

Rank 2
Effect: Very minor. Severity criteria: Process parameter variability not within specification limits; adjustments may be done during production; no downtime and no defects produced.
Occurrence: Failure occurs every 2 years; R(t) = 95%; alternate criterion: 1 in 10,000.
Detection: Team's discretion depending on machine and situation.

Rank 1
Effect: None. Severity criteria: Process parameter variability within specification limits; adjustments may be performed during normal maintenance.
Occurrence: Failure occurs every 5 years; R(t) = 98%; alternate criterion: 1 in 25,000.
Detection: Very high. Machinery controls not required; design controls will detect a potential cause and subsequent failure almost every time.
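Where the downtime bands of Table 6.9 are applied routinely, a simple lookup can suggest the severity rank. The sketch below (Python) is illustrative only: it covers the downtime criteria alone, and ranks 9 and 10 (safety or regulatory effects) as well as the defective-parts distinctions remain a team judgment.

    def severity_rank_from_downtime(downtime_min):
        # Downtime bands taken from Table 6.9; team judgment governs borderline cases.
        if downtime_min >= 480:     # 8+ hours
            return 8
        if downtime_min >= 120:     # 2-4 hour band (4-8 hours is not tabulated)
            return 7
        if downtime_min >= 60:      # 60-120 min
            return 6
        if downtime_min >= 30:      # 30-60 min
            return 5
        if downtime_min >= 15:      # 15-30 min
            return 4
        if downtime_min > 0:        # up to 15 min
            return 3
        return 2                    # no downtime: parameter drift only

    print(severity_rank_from_downtime(90))   # prints 6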
OCCURRENCE RATINGS
Occurrence is the rating corresponding to the likelihood of the failure mode occurring
within a certain period of time (see Table 6.8). The following should be considered
when developing the occurrence ratings:
Each cause listed requires an occurrence rating.
Controls can be used that will prevent or minimize the likelihood that the
failure cause will occur but should not be used to estimate the occurrence
rating.
Data to establish the occurrence ratings should be obtained from:
Service data
MTBF data
Failure logs
Maintenance records
Surrogate MFMEAs
Current Controls
Current controls are described as being those items that will be able to detect the
failure mode or the causes of failure. Controls can be either design controls or
process controls.
A design control is based on tests or other mechanisms used during the design
stage to detect failures. Process controls are those used to alert the plant personnel
that a failure has occurred. Current controls are generally described as devices to:
Prevent the cause/mechanism failure mode from occurring
Reduce the rate of occurrence of the failure mode
Detect the failure mode
Detect the failure mode and implement corrective design action
Detection Rating
Detection rating is the method used to rate the effectiveness of the control to detect
the potential failure mode or cause. The scale for ranking these methods is based
on a 1 to 10 scale (see Table 6.8).
RISK PRIORITY NUMBER (RPN)
The RPN is a method used by the MFMEA team to rank the various failure modes
of the equipment. This ranking allows the team to attack the highest probability of
failure and remove it before the equipment leaves the supplier's floor.
The RPN typically:
Has no value or meaning (Ratings and RPNs in themselves have no value
or meaning. They should be used only to prioritize the machine's potential
design weakness [failure mode] for consideration of possible design
actions to eliminate the failures or make them maintainable.)
Is used to prioritize potential design weaknesses (root causes) for consid-
eration of possible design actions
Is the product of severity, occurrence, and detection (RPN = S × O × D)
Special note on risk identication: Whereas it is true that most organizations
using FMEA guidelines use the RPN for identifying the risk priority, some do not
follow that path. Instead, they use a three path approach based on:
Step 1: severity
Step 2: criticality
Step 3: detection
This means that regardless of the RPN, the priority is based on the highest
severity first, especially if it is a 9 or a 10, followed by the criticality, which is the
product of severity and occurrence, and then the RPN.
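The same prioritization can be written out directly. The sketch below (Python) computes RPN = S × O × D and orders the entries by severity first, then criticality (S × O), then RPN; the failure modes and ratings shown are hypothetical.

    failure_modes = [
        # (description, severity, occurrence, detection) - hypothetical ratings
        ("spindle bearing seizure", 9, 3, 4),
        ("coolant leak",            6, 7, 5),
        ("limit switch sticking",   5, 6, 8),
    ]

    def priority_key(fm):
        _, s, o, d = fm
        return (s, s * o, s * o * d)       # severity, then criticality, then RPN

    for desc, s, o, d in sorted(failure_modes, key=priority_key, reverse=True):
        flag = "  <- address before engineering release" if s >= 9 else ""
        print(f"{desc}: S={s} O={o} D={d} criticality={s*o} RPN={s*o*d}{flag}")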
RECOMMENDED ACTIONS
Each RPN value should have a recommended action listed.
The actions are designed to reduce severity, occurrence, and detection
ratings.
Actions should address in order the following concerns:
Failure modes with a severity of 9 or 10
Failure mode/cause that has a high severity × occurrence rating
Failure mode/cause/design control that has a high RPN rating
When a failure mode/cause has a severity rating of 9 or 10, the design
action must be considered before the engineering release to eliminate
safety concerns.
DATE, RESPONSIBLE PARTY
Document the person, department, and date for completion of the recom-
mended action.
Always place the responsible party's name in this area.
ACTIONS TAKEN/REVISED RPN
After each action has been taken, document the action.
Results of an effective MFMEA will reduce or eliminate equipment down-
time.
The supplier is responsible for updating the MFMEA. The MFMEA is a
living document. It should reflect the latest design level and latest design
actions.
Any equipment design changes need to be communicated to the MFMEA
team.
REVISED RPN
Recalculate S, O, and D after the action taken has been completed. Always
remember that only a change in design can change the severity. Occurrence
may be changed by a design change or a redundant system. Detection
may be changed by a design change or better testing or better design
control.
The MFMEA team needs to review the new RPN and determine if
additional design actions are necessary.
SUMMARY
In summary, the steps in conducting the FMEA are as follows:
1. Select a project and scope.
2. If DFMEA, construct a block diagram.
3. If PFMEA, construct a process flow diagram.
4. Select an entry point based on the block or process flow diagram.
5. Collect the data.
6. Analyze the data.
7. Calculate results (results must be data driven).
8. Evaluate/confirm/measure the results.
Better off
Worse off
Same as before
9. Do it all over again.
SELECTED BIBLIOGRAPHY
Chrysler Corporation, Ford Motor Company, and General Motors Corporation, Potential
Failure Mode and Effect Analysis (FMEA) Reference Manual, 2nd ed., distributed by
the Automotive Industry Action Group (AIAG), Southfield, MI, 1995.
Chrysler Corporation, Ford Motor Company, and General Motors Corporation, Advanced
Product Quality Planning and Control Plan, distributed by the Automotive Industry
Action Group (AIAG), Southfield, MI, 1995.
Chrysler Corporation, Ford Motor Company, and General Motors Corporation, Potential
Failure Mode and Effect Analysis (FMEA) Reference Manual, 3rd ed., Chrysler
Corporation, Ford Motor Company, and General Motors Corporation. Distributed by
the Automotive Industry Action Group (AIAG), Southfield, MI, 2001.
The Engineering Society for Advancing Mobility Land Sea Air and Space, Potential Failure
Mode and Effects Analysis in Design FMEA and Potential Failure Mode and Effects
Analysis in Manufacturing and Assembly Processes (Process FMEA) Reference Man-
ual, SAE: J1739, The Engineering Society for Advancing Mobility Land Sea Air and
Space, Warrendale, PA, 1994.
The Engineering Society for Advancing Mobility Land Sea Air and Space, Reliability and
Maintainability Guideline for Manufacturing Machinery and Equipment, SAE Prac-
tice Number M-110, The Engineering Society for Advancing Mobility Land Sea Air
and Space, Warrendale, PA, 1999.
Ford Motor Company, Failure Mode Effects Analysis: Training Reference Guide, Ford Motor
Company Ford Design Institute. Dearborn, MI, 1998.
Kececioglu, D., Reliability Engineering Handbook, Vols. 1-2, Prentice Hall, Englewood Cliffs,
NJ, 1991.
Stamatis, D.H., Advanced Quality Planning, Quality Resources, New York, 1998.
Stamatis, D.H., Failure Mode and Effect Analysis: FMEA from Theory to Execution, Quality
Press, Milwaukee, 1995.

7
Reliability

Reliability, n.: may be relied on; trustworthiness, authenticity, consistency; infallibility, suggesting the complete absence of error, breakdown, or poor performance.
In other words, when we speak of a reliable product, we usually expect such adjectives as dependable and trustworthy to apply. But to measure product reliability, we must have a more exact definition. The definition of reliability, then, is: the probability that a product will perform its intended function in a satisfactory manner for a specified period of time when operating under specified conditions.

Thus, the reliability of a system expresses the length of failure-free time that
can be expected from the equipment. Higher levels of reliability mean less failure
of the system and consequently less downtime. To measure reliability it is necessary to:
Relate probability to a precise definition of success or satisfactory performance
Specify the time base or operating cycles over which such performance is to be sustained
Specify the environmental or use conditions that will prevail
Note: Theoretically, every product has a designed-in reliability function. This reliability function (or curve) expresses the system reliability at any point in time. As time increases the curve must drop, eventually reaching zero.
PROBABILISTIC NATURE OF RELIABILITY

We cannot say exactly when a particular product will fail, but we can say what
percentage of the products in use will fail by certain times. This is analogous to the
reasoning used by insurance companies in defining mortality. We can state reliability
in various ways:
The probability that a product will be performing its intended function at
5000 hours of use is 0.95.
The reliability at 5000 hours is 0.95 or 95%.
If we place 1000 units in use, 950 will still be operating with no failures
at 5000 hours.
Or to cite another example:
The reliability at 8000 hours is 0.80.
The unreliability at 8000 hours is 0.20.


From a service point of view, we may be interested in repair frequency and then
we say that 20% of the units will have to be repaired by 8000 hours. Or the repair
per hundred units (R/100) is 20 at 8000 hours. The important point is that the
reliability is a metric expressing the probability of maintaining intended function
over time and is measurable as a percentage.

PERFORMING THE INTENDED
FUNCTION SATISFACTORILY

A product fails when it ceases to function in a way that is satisfactory to the customer.
Products rarely fail suddenly in the way that a light bulb does. Rather, they deteriorate
over time. This eventually leads to unsatisfactory performance from the customer's
standpoint. Unsatisfactory performance can result from:
Excess vibrations
Excess noise
Intermittent operation
Drift
Catastrophic failure
And many other possibilities
Unsatisfactory performance must be clearly spelled out. The customer's perspective must be recognized in this process. There will usually be various levels of failure based on the customer's perceived level of severity. The levels of severity are frequently grouped into two categories such as:
Major
Minor
The severity of the failure to the customer must be documented and recognized in a Failure Definition and Scoring Criterion that precisely delineates how each incident on a system or equipment will be handled in regards to reliability and maintainability calculations. Such documents should be developed early in a design and development program so that all concerned are aware of the consequences of incidents that occur during product testing and in field use.
The design team must be able to use the failure definition and scoring criterion to address product trade-offs. If the severity of a failure to the customer can be lowered by design changes, the failure definition and scoring criterion should promote such trade-offs.

SPECIFIED TIME PERIOD
Products deteriorate with use and even with age when dormant. Longer lengths of
usage imply lower reliability. For design purposes, target usage periods must be
identified. Typical usage periods are:


1. Warranty period(s): A warranty is a contract supplied with the product providing the user with a certain amount of protection against product failure.
2. Expected customer life: Customers have a reasonably consistent belief as to how long a product should last. This belief can be determined through a market survey.
3. Durability life: This is a measure of useful life, defining the number of operating hours (or cycles) until overhaul is required.

SPECIFIED CONDITIONS
Different environments promote different failure modes and different failure rates
for a product. The environmental factors that the product will encounter must be
clearly defined. The levels (and rate of change) at which we want to address these
environmental factors must also be defined.

ENVIRONMENTAL CONDITIONS PROFILE
The environmental profile must include the level and rate of change for each envi-
ronmental factor considered. Environmental factors include but are not limited to:
Temperature
Humidity
Vibration
Shock
Corrosive materials
Immersion
Pressures, vacuum
Salt spray
Dust
Cement floors/basements
Ice/snow
Lubricants
Perfumes
Magnetic fields
Nuclear radiation
Weather
Contamination
Antifreeze
Gasoline fumes
Rust inhibitors/under coatings
Rain
Soda pop/hot coffee
Sunlight
Electrical discharges
And so on


Not all of these environmental conditions would be appropriate for a particular
product. Each product must be considered in its individual operating environment
and scenario. Environment must consider the environment induced from operating
the product, the environment induced from external factors, and the environment
induced by delivering the product to the customer.

RELIABILITY NUMBERS
The reliability number attached to a product changes with:
Usage and environmental conditions
Customer's perception of satisfactory performance
At any product age (t) for a population of N products, the reliability at time t, denoted by R(t), is
R(t) = Number of survivors/N, which is equal to
R(t) = 1 - (Number of failures/N) = 1 - Unreliability
This is the reliability of this population of products at time t. The real world estimation of reliability is usually much more difficult due to products being sold over time with each having a different usage profile. Calendar time is known but product life on each product is not, while warranty systems monitor and record only failure.

INDICATORS USED TO QUANTIFY PRODUCT RELIABILITY
Several metrics are in common use to indicate product reliability. Some of these actually quantify unreliability. Some of the metrics follow:
MTBF: The mean time between failures; also MTTF, MMBF, MCTF. MTBF = 120 hours means that on the average a failure will occur with every 120 hours of operation.
Failure rate (λ): The rate of failures per unit of operating time. λ = 0.05/hour means that one failure will occur with every 20 hours of operation, on the average.
R/100 (or R/1000): The number of warranty claims per 100 (or 1000) products sold. R/100 = 7 means that there are seven warranty claims for every 100 products sold.
Reliability number: The reliability of the product at some specific time. R = 90% means that 9 out of 10 products work successfully for the specified time.
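All of these indicators reduce to simple counts and operating times, so they can be computed directly from field records. The sketch below (Python) is illustrative; the fleet size, hours, and failure counts are assumed, and the λ = 1/MTBF relationship holds only under a constant failure rate.

    units_in_service = 1000              # N, assumed
    fleet_operating_hours = 240_000      # accumulated hours, assumed
    failures = 50                        # distinct failed units, assumed
    warranty_claims = 70                 # assumed

    mtbf = fleet_operating_hours / failures        # mean time between failures
    failure_rate = 1 / mtbf                        # failures per hour (constant-rate assumption)
    r_per_100 = 100 * warranty_claims / units_in_service
    reliability = (units_in_service - failures) / units_in_service   # R(t) = survivors / N

    print(f"MTBF = {mtbf:.0f} h, failure rate = {failure_rate:.5f}/h")
    print(f"R/100 = {r_per_100:.1f}, R(t) = {reliability:.2f}")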


RELIABILITY AND QUALITY

Customers and product engineers frequently use the terms reliability and quality
interchangeably. Ultimately, the customer defines quality. Customers want products
that meet or exceed their needs and expectations, at a cost that represents value.
This expectation of performance must be met throughout the customers expected
life for the particular product. Quality is usually recognized as a more encompassing
term including reliability. Some quality characteristics are:
Psychological
Taste
Beauty, style
Status
Technological
Hardness
Vibration
Noise
Materials (bearings, belts, hoses, etc.)
Time-oriented
Reliability
Maintainability
Contractual
Warranty
Ethical
Honesty of repair shop
Experience and integrity of sales force

PRODUCT DEFECTS
Quality defects are defined as those that can be located by conventional inspection techniques. (Note: for legal reasons, it is better to identify these defects as nonconformances.) Reliability defects are defined as those that require some stress applied over time to develop into detectable defects.
What causes product failure over time? Some possibilities are:
Design
Manufacturing
Packaging
Shipping
Storage
Sales
Installation
Maintenance
Customer duty cycle

SL3151Ch07Frame Page 291 Thursday, September 12, 2002 6:07 PM

292

Six Sigma and Beyond

C

USTOMER

S

ATISFACTION

The ultimate goal of a product is to satisfy a customer from all aspects of cost,
performance, reliability, and maintainability. The customer trades off these param-
eters when making a decision to buy a product. Assuming that we are designing a
product for a certain market segment, cost is determined within limits. The trade-
offs are as follows:
1. Performance parameters are the designed-in system capabilities such as acceleration, top speed, rate of metal removal, gain, ability to carry a 5-ton payload up a 40 degree grade without overheating, and so on.
2. The reliability of equipment expresses the length of failure-free time that can be expected from the equipment. Higher levels of reliability mean less failure of the equipment and consequently less downtime and loss of use. Although we will attach reliability numbers to products, it should be recognized that the customer's perspective interprets reliability as the ability of a product to perform its intended function for a given period of time without failure. This concept of failure-free operation is becoming more and more fixed in the mind of the customer. This is true whether the customer is purchasing an automobile, a machine tool, a computer system, a refrigerator, or an automatic coffee maker.
3. Maintainability is defined as the probability that a failed system is restored to operable condition in a specified amount of downtime.
4. Availability is the probability that at any time, the system is either operating satisfactorily or is ready to be operated on demand, when used under stated conditions. The availability might also be looked at as the ability of equipment, under combined aspects of its reliability, maintainability, and maintenance support, to perform its required function at a stated instant of time. This availability includes the built-in equipment features as well as the maintenance support function. Availability combines reliability and maintainability into one measure. There are different kinds of availability that are calculated in different ways (see Von Alven (1964) and ANSI/IEEE (1988)). The most popular availabilities are achieved availability and inherent availability. (A small computational sketch follows this list.)
a. Achieved availability includes all diagnostic, repair, administrative, and logistic times. This availability is dependent on the maintenance support system. Achieved availability can be calculated as
A = Operating Time/(Operating Time + Unscheduled Time)
b. Inherent availability only includes operating time and active repair time, addressing the built-in capabilities of the equipment. Inherent availability is calculated as
A = MTBF/(MTBF + MTTR)
where MTTR = mean time-to-repair and the MTTR is for the active repair time.
5. Active repair time is that portion of downtime when the technicians are working on the system to repair the failure situation. It must be understood that the different availabilities are defined for various time-states of the system.
6. Serviceability is the ease with which machinery and equipment can be repaired. Here repair includes diagnosis of the fault, replacement of the necessary parts, tryout, and bringing the equipment back on line. Serviceability is somewhat qualitative and addresses the ease by which the equipment, as designed, can be diagnosed and repaired. It involves factors such as accessibility to test points, ease of removal of the failed components, and ease of bringing the system back on line.
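The two availability formulas above are easy to evaluate side by side. The sketch below (Python) uses assumed figures for operating time, unscheduled downtime, MTBF, and MTTR; it is an illustration, not a prescribed calculation.

    operating_time = 2000.0     # hours the equipment actually ran (assumed)
    unscheduled_time = 120.0    # unscheduled downtime in the same period (assumed)
    mtbf = 400.0                # mean time between failures, hours (assumed)
    mttr = 2.5                  # mean active repair time, hours (assumed)

    achieved = operating_time / (operating_time + unscheduled_time)   # includes support delays
    inherent = mtbf / (mtbf + mttr)                                   # built-in capability only

    print(f"Achieved availability = {achieved:.3f}")
    print(f"Inherent availability = {inherent:.3f}")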

PRODUCT LIFE AND FAILURE RATE
Let us assume that we have released a population of products to the marketplace. The failure rate is observed as the products age. The shape of the failure rate is referred to as a bathtub curve (see Figure 7.1). Here we have overemphasized the different parts of the curve for illustration.
[FIGURE 7.1  Bathtub curve: failure rate plotted against time, showing the infant mortality, normal life, and wear out regions.]
This bathtub curve has three distinct regions:
1. Infant mortality period: During the infant mortality period the population
exhibits a high failure rate, decreasing rapidly as the weaker products fail.
Some manufacturers provide a burn-in period for their products to help
eliminate infant mortality failures. Generally, infant mortality is associated
with manufacturing issues. Examples are:
Poor welds
Contamination
Improper installation
And so on
2. Useful life period: During this period the population of products exhibits
a relatively low and constant failure rate. It is explained using the stress-strength inference model for reliability. This model identifies the stress distribution that represents the combined stressors acting on a system at some point in time. The strength distribution represents the piece-to-piece variability of components in the field. The inference area is indicative of a potential failure when stresses exceed the strength of a component. In other words, any failure in this period is a factor of the designed-in reliability. Examples are:
Low safety factors
Abuse
Misapplication
Product variability
And so on
3. Wear out period: At the onset of wear out, the failure rate starts to increase
rapidly. When the failure rate becomes high, replacement or major repair
must be performed if the product is to be left in service. Wear out is due
to a number of forces such as:
Frictional wear
Chemical change
Maintenance practices
Fatigue
Corrosion or oxidation
And so on
In conjunction with the bathtub curve there are two more items of concern. The first one is the hazard rate (or the instantaneous failure rate) and the second, the ROCOF plot.
The hazard rate is the probability that the product will fail in the next interval of time (or distance or cycles). It is assumed the product has survived up to that time. For example, there is a one in twenty chance that it will crack, break, bend, or fail to function in the next month. Typically, hazard rate is shown as

h(t) = f(t)/[1 - F(t)] = f(t)/R(t)

where h(t) = hazard rate; f(t) = probability density function [PDF: f(t) = λe^(-λt)]; F(t) = cumulative distribution function [CDF: F(t) = 1 - e^(-λt)]; and R(t) = reliability at time t [R(t) = 1 - F(t) = 1 - (1 - e^(-λt)) = e^(-λt)].
The Rate of Change of Failure or Rate of Change of Occurrence of Failure (ROCOF), on the other hand, is a visual tool that helps the engineer to analyze situations where a lot of data over time has been accumulated. Essentially, its purpose is the same as that of the reliability bathtub curve, that is, to understand the life stages of a product or process and take the appropriate action. A typical ROCOF plot (for a warranty item) will display an early (decreasing rate) and useful life (constant rate) performance. If wear out is detected, it should be investigated. Knowing what is happening to a product from one region of the bathtub curve to the next helps the engineer specify what failed hardware to collect and aids with calibrating the severity of development tests.
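For the constant failure rate case written out above, the four functions can be evaluated numerically. The sketch below (Python) simply restates f(t), F(t), R(t), and h(t) for an assumed λ and time; for a constant rate the hazard equals λ at every t.

    import math

    lam = 0.0005      # assumed failure rate, failures per hour
    t = 1000.0        # operating hours of interest

    f = lam * math.exp(-lam * t)     # PDF  f(t) = lambda * e^(-lambda*t)
    F = 1 - math.exp(-lam * t)       # CDF  F(t) = 1 - e^(-lambda*t)
    R = 1 - F                        # reliability R(t) = e^(-lambda*t)
    h = f / R                        # hazard rate h(t) = f(t) / (1 - F(t))

    print(f"R({t:.0f} h) = {R:.3f}, h(t) = {h:.6f}/h (equals lambda)")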

If the number of failures is small, the ROCOF plot approach may be difficult to
interpret. When that happens, it is recommended that a smoothing approach be
taken. The typical smoothing methodology is to use log paper for the plotting.
Obviously, many more ways and more advanced techniques exist. It must be noted
here that most statistical software provides this smoothing as an option for the data
under consideration. See Volume III for more details on smoothing.

PRODUCT DESIGN AND DEVELOPMENT CYCLE

Developing a product that can be manufactured economically and consistently to be
delivered to the marketplace in quantity and that will work satisfactorily for the
customer takes a well established and precisely controlled design and development
cycle. Events must be scheduled to occur at precise times to phase the product into
the marketplace. To develop a new internal combustion engine for an automobile
takes about a three-year design cycle (down recently from five years), while a new
minicomputer takes about 18 months. Although the timing may be different for
different companies, the activities comprising a design and development cycle are
similar. The following is representative of the activities in a product development
cycle:
Market research
Forecast need.
Forecast sales.
Understand who the customer is and how the product will be used.
Set broad performance objectives.
Establish program cost objectives.
Establish technical feasibility.
Establish manufacturing capacity.
Establish reliability and maintainability (R&M) requirements.
Understand governmental regulations.
Understand corporate objectives.
Concept phase
Formulate project team.
Formulate design requirements.
Establish real world customer usage profile.
Develop and consider alternatives.
Rank alternatives considering R&M requirements.
Review quality and reliability history on past products.
Assess feasibility of R&M requirements.
Design phase
Prepare preliminary design.
Perform design calculations.
Prepare rough drawings.
Compare alternatives to pursue.
Evaluate manufacturing feasibility of design approach (design for
manufacturability and assembly).


Complete detailed design.
Perform a design failure mode and effect analysis (FMEA).
Complete detailed design package.
Update FMEA to reect current design and details.
Develop design verication plan.
Develop R&M model for product.
Estimate product R&M using current design approach.
Prototype program
Build components and prototypes.
Write test plan.
Perform component/subsystem tests.
Perform system test.
Eliminate design weaknesses.
Estimate reliability using growth techniques.
Manufacturing engineering
Process planning
Assembly planning
Capability analyses
Process FMEA
Finalized design
Consider test results.
Consider manufacturing engineering inputs (design for manufactura-
bility/assembly).
Make design changes.
Freeze design
Release to manufacturing
Engineering changes
Manufacturing experience
Field experience

R

ELIABILITY



IN

D

ESIGN

The cost of unreliability is:
High warranty costs
Field campaigns
Loss of future sales
Cost of added eld service support
It has been demonstrated in the marketplace that highly reliable products (failure
free) gain market share. A very classic example of this is the American automotive
market. In the early 1960s, American manufacturers were practically the only game
in town with GM capturing some 60% of the market. Since then, progressively and
on a yearly basis the market has shifted to the point where Flint (2001) reports that
now GM has a shade over 25% without trucks and Saab, Ford 14.7% without Volvo
and Jaguar, and Chrysler about 5%. The projections for the 2002 model year are


not any better with GM capturing only 25%, Ford 15%, and Chrysler 6%. The sad
part of the automotive scene is that GM, Ford, and DaimlerChrysler have lost market
share, and sales are continually nudging down with no end in sight. That is, as Flint
(2001, p. 21) points out, "they are not going to recover that market share, not in the
short term, not in the next five to ten years."
The evidence suggests that the mission of a reliability program is to estimate,
track, and report the reliability of hardware before it is produced. The reliability of
the equipment must be reported at every phase of design and development in a
consistent and easy-to-understand format. Warranty cost is an expensive situation
resulting from poor manufacturing quality and inadequate reliability. For example,
the chairman and chief executive of Ford Motor Company, Jacques Nasser, in the
1st quarter of 2001 leadership cascading meeting made the statement that in 1999,
there were 2.1 times as many vehicles recalled as were sold. In 2000, there were
six times as many. By way of comparison:
In 1994, according to an article in USA Today, the cost of warranty for a
Chrysler automobile was as high as $850 per vehicle. From the same article,
one could deduce that the cost per vehicle for General Motors was about
$350 and for Ford $650. This would be to cover the 36,000 mile warranty
in effect at that time.
In 2000, the warranty cost for Chrysler was about $1,300, GM about $1,200,
and Ford about $850 (Mayne et al., 2001).
For each car sold, the manufacturer must collect and retain this expense in a
warranty account.

COST OF ENGINEERING CHANGES AND PRODUCT LIFE CYCLE

The stage of product development/manufacturing and the cost of an engineering
change have been estimated many times by many different industries and various
trade magazines as a cost that grows by a factor of five to ten as one moves from
early design to manufacturing. Typical figures for this high cost are
Prototype stage: <$20,000
After start of production: >$100,000
Therefore, reliability can play an important role in designing products that will
satisfy the customer and will prove durable in the real world usage application. The
focus of reliability is to design, identify, and detect early potential concerns at a
point where it is really cost effective to do so.
Reliability must be valued by the organization and should be a primary consid-
eration in all decision making. Reliability techniques and disciplines are integrated
into system and component planning, design, development, manufacturing, supply,
delivery, and service processes. The reliability process is tailored to fit individual
business unit requirements and is based on common concepts that are focused on
producing reliable products and systems, not just components.


Any organization committed to satisfying the customer's expectations for reliability
(and value) throughout the useful life of its product must be concerned with reli-
ability. For without it, the organization is doomed to fail. The total reliability process
includes robustness concepts and methods that are integrated into the organization's
timing schedule and overall business system. Cross-functional teams and empowered
individuals are key to the successful implementation of any reliability program.
Reliability concepts and methods are generally thought of as a proprietary
domain of only the product development department or community. That is not
completely true. Reliability may be used anywhere there is a need for design and
development work, such as manufacturing and tooling. However, it does not address
actions specically targeted at manufacturing and assembly. This is the reason why
under Design for Six Sigma (DFSS), reliability becomes very important from the
get go. To be sure, reliability currently does not include all the elements of the
Advanced Product Quality Plan (APQP), but it is compatible with APQP. It outlines
the three quality and reliability phases that all program teams and supporting orga-
nizations should go through in the product development process to achieve a more
reliable and robust product. The three phases stress useful life reliability, focusing
specically on the deployment of customer-driven requirements, designing in robust-
ness, and verifying that the designs meet the requirements.

RELIABILITY IN THE TECHNOLOGY DEPLOYMENT PROCESS
Technology is ever changing on all fronts. Customers expect increased reliability
and better quality for a reasonable cost. Reliability may indeed play a major role in
bringing technology, customer satisfaction, and lower cost into reality. Let us then
try to understand the process of support and the cascading of requirements through-
out the Technology Deployment Process (TDP).
Understanding the TDP begins with the recognition that this process has three
phases and each phase has specific requirements. The three phases are pre-deployment
process, core engineering process, and quality support. In the pre-deployment
process, there are three stages with very specific inputs and outputs. In core engi-
neering, the development of generic requirements begins, and in quality support, the
best reliability practices are developed.
1. Pre-Deployment Process
Three stages are involved here. They are:
1. Identify/select new technologies: The main function of this stage is to
identify and select technology for reliable and robust products that meet
future customer needs or wants. In essence, here we are to develop and
understand:
Customer wants process
Competitive analysis
Technology strategy/roadmap
2. Develop/optimize technology to achieve concept readiness: The main
function of this stage is to sufficiently develop and prove through analyt-
ical and/or surrogate testing that the technology meets the functional and
reliability requirements for customer wants or needs under real world
usage conditions. In essence, here we are to generate, understand, and
develop readiness through:
Reviewing quality history of similar systems/concepts
Understanding real world usage prole
Defining functional requirements of system
Planning for robustness
Reviewing quality/reliability/durability reports or worksheets
3. Develop/optimize technology to achieve implementation readiness: The
main function of this stage is to optimize the technology to meet functional
and/or reliability requirements. Additionally, the aim is to demonstrate
that the technology is robust and reliable under real world usage condi-
tions. In essence, here we are to further understand the requirements by:
Refining design requirements
Designing for robustness
Verifying the design
Reviewing quality/reliability/durability reports or worksheets
2. Core Engineering Process
Develop generic requirements for forward models by providing product lines with
generic information on system robust design, such as case studies, system P-dia-
grams, measurement of ideal functions, etc. In this stage, we also conduct compet-
itive technical information analysis to our potential product lines through test-the-
best and reliability benchmarking. Some of the specic tools we may use are:
System design specication guidelines
Real world usage demographics
Failure mode and effect analysis
Key life testing
Fault tree analysis
Design verication process
And so on
The idea here is to be able to develop common-cause problem resolution, that
is, to be able to identify common-cause problems/root causes across the product
line(s) and champion corrective action by following reliability disciplines. In essence
then, core engineering should:
Prioritize concerns
Identify root causes
Determine/incorporate corrective action
Validate improvements
Champion implementation across product line(s)
3. Quality Support
Identify best reliability practices and lead the process standardization and simplification.
Develop a toolbox and provide reliability consultation.
RELIABILITY MEASURES TESTING
The purpose of performing a reliability test is to answer the question, "Does the
item meet or exceed the specified minimum reliability requirement?" Reliability
testing is used to:
Determine whether the system conforms to the specified, quantitative
reliability requirements
Evaluate the system's expected performance in the warranty period and
its compliance to the useful life targets as defined by corporate policy
Compare performance of the system to the goal that was established earlier
Monitor and validate reliability growth
Determine design actions based on the outcomes of the test
In addition to their other uses, the outcomes of reliability testing are used as a
basis for design qualication and acceptance. Reliability testing should be a natural
extension of the analytical reliability models, so that test results will clarify and
verify the predicted results, in the customer's environment.
WHAT IS A RELIABILITY TEST?
A reliability test is effectively a sampling test in that it involves a sample of objects
selected from a population. From the sample data, some statement(s) are made
about the population parameter(s). In reliability testing, as in any sampling test:
The sample is assumed to be representative of the population.
The characteristics of the sample (e.g., sample mean) are assumed to be
an estimate of the true value of the population characteristics (e.g., pop-
ulation mean).
A key factor in reliability test planning is choosing the proper sample size. Most
of the activity in determining sample size is involved with either:
1. Achieving the desired confidence that the test results give the correct
information
2. Reducing the risk that the test results will give the wrong information
WHEN DOES RELIABILITY TESTING OCCUR?
Prior to the time that hardware is available, simulation and analysis should be used
to find design weaknesses. Reliability testing should begin as soon as hardware is
available for testing. Ideally, much of the reliability testing will occur on the bench
with the testing of individual components. There is good reason for this: The effect
of failure on schedule and cost increases progressively with the program timeline.
The later in the process that the failure and corrective action are found, the more it
costs to correct and the less time there is to make the correction. Some key points
to remember regarding test planning:
Develop the reliability test plan early in the design phase.
Update the plan as requirements are added.
Run the formal reliability testing according to the predetermined proce-
dure. This is to ensure that results are not contaminated by development
testing or procedural issues.
Develop the test plan in order to get the maximum information with the
fewest resources possible.
Increase test efficiency by understanding stress/strength and acceleration
factor relationships. This may require accelerated testing, such as AST
(Accelerated Stress Test), which will increase the information gained from
a test program.
Make sure your test plan shows the relationship between development testing
and reliability testing. While all data contribute to the overall knowledge
about a system, other functional development testing is an opportunity to
gain insight into the reliability performance of your product.
Note: A control sample should be maintained as a reference throughout the
reliability testing process. Control samples should not be subjected to any stresses
other than the normal parametric and functional testing.
RELIABILITY TESTING OBJECTIVES
When preparing the test plan, keep these objectives in mind:
Test with regard to production intent. Make sure the sample that is tested
is representative of the system that the customer will receive. This means
that the test unit is representative of the final product in all areas including
materials (metals, fasteners, weight), processes (machining, casting, heat
treat), and procedures (assembly, service, repair). Of course, consider that
these elements may change or that they may not be known. However, use
the same production intent to the extent known at the time of the test plan.
Determine performance parameters before testing is started. It is often
more important in reliability evaluations to monitor the percentage change
in a parameter rather than the performance to specication.
Duplicate/simulate the full range of the customer stresses and environ-
ments. This includes testing to the 95th percentile customer. (For most
organizations this percentile is the default. Make sure you identify what
is the exact percentile for your organization.)
Quantify failures as they relate to the system being tested. A failure results
when a system does not perform to customer expectations, even if there
is no actual broken part.
Remember,
Customer requirements include the specications and requirements of inter-
nal customers and regulatory agencies as well as the ultimate purchaser.
You should structure testing to identify hardware interface issues as they
relate to the system being tested.
Sudden-Death Testing
Sudden-death testing allows you to obtain test data quickly and reduces the number
of test xtures required. It can be used on a sample as large as 40 or more or as
small as 15. Sudden-death testing reduces testing time in cases where the lower
quartile (lower 25%) of a life distribution is considerably lower than the upper
quartile (upper 25%). The philosophy involved in sudden-death testing is to test
small groups of samples to a first failure only and use the data to determine the
Weibull distribution of the component. The method is as follows:
1. Choose a sample size that can be divided into three or more groups with
the same number of items in each group. Divide the sample into three or
more groups of equal size and treat each group as if it were an individual
assembly.
2. Test all items in each group concurrently until there is a first failure in
that group. Testing is then stopped on the remaining units in that group
as soon as the first unit fails, hence the name "sudden death."
3. Record the time to first failure in each group.
4. Rank the times to failure in ascending order.
5. Assign median ranks to each failure based on the sample size equal to
the number of groups. Median rank charts are used for this purpose.
6. Plot the times to failure vs. median ranks on Weibull paper.
7. Draw the best fit line. (Eye the line or use the regression model.) This
line represents the sudden-death line.
8. Determine the life at which 50% of the first failures are likely to occur
(B50 life) by drawing a horizontal line from the 50% level to the sudden-
death line. Drop a vertical line from this point down.
9. Find the median rank for the first failure when the sample size is equal
to the number of items in each subgroup. Again, refer to the median rank
charts. Draw a horizontal line from this point until it intersects the vertical
line drawn in the previous step.
10. Draw a line parallel to the sudden-death line passing through the inter-
section point from step 9. This line is called the population line and
represents the Weibull distribution of the population.
Sudden-death testing is a good method to use to determine the failure distribution
of the component. (Note: Only common failure mechanisms can be used for each
Weibull distribution. Care must be taken to determine the true root cause of all
failures. Failure must be related to the stresses applied during the test.)
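The median ranks called for in steps 5 and 9 normally come from a table such as Table 7.2; Benard's approximation, (i - 0.3)/(n + 0.4), reproduces those values closely and is used below only as a stand-in for the table. The sketch (Python) carries a set of first-failure times through the ranking steps; the times are the ones used in the worked example that follows.

    def median_rank_pct(i, n):
        # Benard's approximation to the median rank (stand-in for Table 7.2)
        return (i - 0.3) / (n + 0.4) * 100.0

    first_failures_hours = [120, 65, 155, 300, 200]    # one first failure per group
    n_groups = len(first_failures_hours)

    for i, hours in enumerate(sorted(first_failures_hours), start=1):
        print(f"order {i}: {hours} h, median rank = {median_rank_pct(i, n_groups):.2f}%")

    # Step 9: median rank of the first failure in a subgroup of eight (about 8.3%)
    print(f"first-of-8 median rank = {median_rank_pct(1, 8):.2f}%")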
EXAMPLE
Assume you have a sample of 40 parts from the same production run available for
testing purposes. The parts are divided into five groups of eight parts as shown below:
All parts in each group are put on test simultaneously. The test proceeds until any
one part in each group fails. At that time, testing stops on all parts in that group.
In the test, we experience the following first failures in each group:
Failure data are arranged in ascending hours to failure, and their median ranks are
determined based on a sample size of N = 5. (There are five failures, one in each of
five groups.) The chart in Table 7.1 illustrates the data. The median rank percentage
for each failure is derived from the median rank (Table 7.2) for five samples.
If the life hours and median ranks of the five failures are plotted on Weibull paper,
the resulting line is called the sudden-death line. The sudden-death line represents
TABLE 7.1
Failure Rates with Median Ranks
Failure Order Number    Life Hours    Median Ranks, %
1    65     12.95
2    120    31.38
3    155    50.00
4    200    68.62
5    300    87.06
Group 1    1 2 3 4 5 6 7 8
Group 2    1 2 3 4 5 6 7 8
Group 3    1 2 3 4 5 6 7 8
Group 4    1 2 3 4 5 6 7 8
Group 5    1 2 3 4 5 6 7 8
Group 1 Part #3 fails at 120 hours
Group 2 Part #4 fails at 65 hours
Group 3 Part #1 fails at 155 hours
Group 4 Part #5 fails at 300 hours
Group 5 Part #7 fails at 200 hours
the cumulative distribution that would result if five assemblies failed, but it actually
represents five measures of the first failure in eight of the population. The median
life point on the sudden-death line (point at which 50% of the failures occur) will
correspond to the median rank for the first failure in a sample of eight, which is
8.30%. The population line is drawn parallel to the sudden-death line through a point
plotted at 8.30% and at the median life to first failure as determined above. This
estimate of the population's minimum life is just as reliable as the one that would
have been obtained if all 40 parts were tested to failure.
TABLE 7.2
Median Ranks
Rank
Sample size
Order 1 2 3 4 5 6 7 8 9 10
1 50.0 29.3 20.6 15.9 12.9 10.9 9.4 8.3 7.4 6.7
2 70.7 50.0 38.6 31.4 26.4 22.8 20.1 18.0 16.2
3 79.4 61.4 50.0 42.1 36.4 32.1 28.6 25.9
4 84.1 68.6 57.9 50.0 44.0 39.3 35.5
5 87.1 73.9 63.6 56.0 50.0 45.2
6 89.1 77.2 67.9 60.7 54.8
7 90.6 79.9 71.4 64.5
8 91.7 82.0 74.1
9 92.6 83.8
10 93.3
Rank
Sample Size
Order 11 12 13 14 15 16 17 18 19 20
1 6.1 5.6 5.2 4.8 4.5 4.2 4.0 3.8 3.6 3.4
2 14.8 13.6 12.6 11.7 10.9 10.3 9.7 9.2 8.7 8.3
3 23.6 21.7 20.0 18.6 17.4 16.4 15.4 14.6 13.8 13.1
4 32.4 29.8 27.5 25.6 23.9 22.5 21.2 20.0 19.0 18.1
5 41.2 37.9 35.0 32.6 30.4 28.6 26.9 25.5 24.2 23.0
6 50.0 46.0 42.5 39.5 37.0 34.7 32.7 30.9 29.3 27.9
7 58.8 54.0 50.0 46.5 43.5 40.8 38.5 36.4 34.5 32.8
8 67.6 62.1 57.5 53.5 50.0 46.9 44.2 41.8 39.7 37.7
9 76.4 70.2 65.0 60.5 56.5 53.1 50.0 47.3 44.8 42.6
10 85.2 78.3 72.5 67.4 63.0 59.2 55.8 52.7 50.0 47.5
11 93.9 86.4 80.0 74.4 69.5 65.3 61.5 58.2 55.2 52.5
12 94.4 87.4 81.4 76.1 71.4 67.3 63.6 60.3 57.4
13 94.8 88.3 82.6 77.5 73.1 69.1 65.5 62.3
14 95.2 89.1 83.6 78.8 74.5 70.7 67.2
15 95.5 89.7 84.6 80.0 75.8 72.1
16 95.8 90.3 85.4 81.0 77.0
17 96.0 90.8 86.2 81.9
18 96.2 91.3 86.9
19 96.4 91.7
20 96.6
Accelerated Testing
Accelerated testing is another approach that may be used to reduce the total test
time required. Accelerated testing requires stressing the product to levels that are
more severe than normal. The results that are obtained at the accelerated stress levels
are compared to those at the design stress or normal operating conditions. We will
look at examples of this comparison during this section.
We use accelerated testing to:
Generate failures, especially in components that have long life under
normal conditions
Obtain information that relates to life under normal conditions
Determine design/technology limits of the hardware
Accelerated testing is accomplished by reducing the cycle time, such as by:
Compressing cycle time by reducing or eliminating idle time in the normal
operating cycle
Overstressing
There are some pitfalls in using accelerated testing:
Accelerated testing can cause failure modes that are not representative.
If there is little correlation to real use, such as aging, thermal cycling,
and corrosion, then it will be difficult to determine how accelerated testing
affects these types of failure modes.
ACCELERATED TEST METHODS
There are many test methods that can be used for accelerated testing. This section covers:
Constant-stress testing
Step-stress testing
Progressive-stress testing
AST/PASS testing
Before we discuss the methods, keep in mind that any product may be subjected
to multiple stresses and combinations of stresses. The stresses and combinations are
identified very early in the design phase. When accelerated tests are run, ensure that
all the stresses are represented in the test environment and that the product is exposed
to every stress.
CONSTANT-STRESS TESTING
In constant-stress testing, each test unit is run at constant high stress until it fails or
its performance degrades. Several different constant stress conditions are usually
employed, and a number of test units are tested at each condition. Some products
run at constant stress, and this type of test represents actual use for those products.
Constant stress will usually provide greater accuracy in estimating time to failure.
Also, constant-stress testing is most helpful for simple components. In systems and
assemblies, acceleration factors often differ for different types of components.
STEP-STRESS TESTING
In step-stress testing, the item is tested initially at a normal, constant stress for a
specified period of time. Then the stress is increased to a higher level for a specified
period of time. Increases continue in a stepped fashion.
The main advantage of step-stress testing is that it quickly yields failure, because
increasing stress ensures that failures occur. A disadvantage is that failure modes
that occur at high stress may differ from those at normal use conditions. Quick
failures do not guarantee more accurate estimates of life or reliability. A constant-
stress test with a few failures usually yields greater accuracy in estimating the actual
time to failure than a shorter step-stress test; however, we may need to do both to
correlate the results so that the results of the shorter test can be used to predict the
life. (Always remember that failures must be related to the stress conditions to be
valid. Other test discrepancies should be noted and repaired and the testing continued.)
PROGRESSIVE-STRESS TESTING
Progressive-stress testing is step-stress testing carried to the extreme. In this test,
the stress on a test unit is continuously increased, rather than being increased in
steps. Usually, the accelerating variable is increased linearly with time.
Several different rates of increase are used, and a number of test units are tested
at each rate of increase. Under a low rate of increase of stress, specimens tend to
live longer and to fail at lower stress because of the natural aging effects or cumu-
lative effects of the stress on the component. Progressive-stress testing has some of
the same advantages and disadvantages as step-stress testing.
ACCELERATED-TEST MODELS
The data from accelerated tests are interpreted and analyzed using different models.
The model that is used depends upon the:
Product
Testing method
Accelerating variables
The models give the product life or performance as a function of the accelerating
stress. Keep these two points in mind as you analyze accelerated test data:
1. Units run at a constant high stress tend to have shorter life than units run
at a constant low stress.
2. Distribution plots show the cumulative percentage of the samples that fail
as a function of time. Over time, the smoothed S-shaped curve is the
estimate of the actual cumulative percentage failing as a function of time.
Two common models (although appropriate for component-level testing)
that deal specifically with accelerated tests are:
1. Inverse Power Law Model
2. Arrhenius Model
Inverse Power Law Model
The inverse power law model applies to many failure mechanisms as well as to
many systems and components. This model assumes that at any stress, the time to
failure is Weibull distributed. Thus:
The Weibull shape parameter has the same value for all the stress levels.
The Weibull scale parameter is an inverse power function of the stress.
The model assumes that the life at rated stress divided by the life at accelerated
stress is equal to the quantity, accelerated stress divided by rated stress, raised to
the power n, where: n = acceleration factor determined from the slope of the S-N
diagram on the log-log scale.
Using the above information, we can say that:

theta_u = theta_a [Accelerated stress/Rated stress]^n

where theta_u = life at the rated (usage) stress level; theta_a = life at the accelerated stress
level; and n = acceleration factor determined from the slope of the S-N diagram on
the log-log scale.
EXAMPLE
Let us assume we tested 15 incandescent lamps at 36 volts until all items in the
sample failed. A second sample of 15 lamps was tested at 20 volts. Using these data,
we will determine the characteristic life at each test voltage and use this information
to determine the characteristic life of the device when operated at 5 volts.
From the accelerated test data:

theta_20 volts = 11.7 hours
theta_36 volts = 2.3 hours

Since we know these two factors, we can determine the acceleration factor, n. We
have the following relationship:
[Life at rated stress/Life at accelerated stress] = [Accelerated stress/Rated stress]^n

This relationship becomes

theta_20 volts / theta_36 volts = [36 volts/20 volts]^n

Substituting the values for theta_20 v and theta_36 v, we have

11.7 hrs / 2.3 hrs = [36 v/20 v]^n

Therefore,

n = 2.767

Now we can use the following equation to determine the characteristic life at 5 volts:

theta_5 v = theta_36 v [36/5]^n = 2.3 [36/5]^2.767 = 542 hours
The characteristic life at 5 volts is 542 hours.
The reader must be very careful here because not all electronic parts or assem-
blies will follow the inverse power law model. Therefore, its applicability must
usually be verified experimentally before use.
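For reference, the lamp example can be reproduced with a few lines of Python. This is only a sketch of the arithmetic; the variable names are ours.

```python
import math

theta_20v = 11.7   # characteristic life at 20 volts (hours)
theta_36v = 2.3    # characteristic life at 36 volts (hours)

# Acceleration factor n from the two accelerated-test points
n = math.log(theta_20v / theta_36v) / math.log(36.0 / 20.0)

# Inverse power law: life at the rated (usage) stress of 5 volts
theta_5v = theta_36v * (36.0 / 5.0) ** n

print(f"n = {n:.3f}")                      # about 2.767
print(f"theta_5v = {theta_5v:.0f} hours")  # about 542 hours
```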
Arrhenius Model
The Arrhenius relationship for reaction rate is often used to account for the effect
of temperature on electrical/electronic components. The Arrhenius relationship is as
follows:
Reaction rate = A exp[-E_a/(K_B T)]

where: A = normalizing constant; K_B = Boltzmann's constant (8.63 x 10^-5 eV/degree
K); T = ambient temperature in degrees Kelvin; and E_a = activation energy type
constant (unique for each failure mechanism).
In those situations where it can be shown that the failure mechanism rate follows
the Arrhenius rate with temperature, the following Acceleration Factor (AF) can be
developed:
Rate_use = A exp[-E_a/(K_B T_use)]

Rate_accelerated = A exp[-E_a/(K_B T_accelerated)]

Acceleration Factor = AF = Rate_accelerated/Rate_use = exp[(E_a/K_B)(1/T_u - 1/T_a)]

where T_a = acceleration test temperature in degrees Kelvin and T_u = actual use
temperature in degrees Kelvin.
EXAMPLE
Assume we have a device that has an activation energy of 0.5 and a characteristic
life of 2750 hours at an accelerated operating temperature of 150°C. We want to find
the characteristic life at an expected use temperature of 85°C. (Remember that the
conversion factor for Celsius to Kelvin is K = C + 273. You may want to review
Volume II.)
Therefore:

T_a = 150 + 273 = 423 K and T_u = 85 + 273 = 358 K

The E_a = 0.5. Our calculations would look like:
AF = exp[(E_a/K_B)(1/T_u - 1/T_a)] = exp[(0.5/(8.63 x 10^-5))(1/358 - 1/423)]

AF = exp[2.49] = 12. Therefore, the acceleration factor is 12. To determine life at
85°C, multiply the acceleration factor times the characteristic life at the accelerated
test level of 150°C.

Characteristic life at 85°C = (12) (2750 hours) = 33,000 hours
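A short Python sketch of the same Arrhenius calculation (the names are ours; the constant is the value quoted above, and the rounded results reproduce the worked example):

```python
import math

K_B = 8.63e-5          # Boltzmann's constant as quoted in the text (eV/K)
E_a = 0.5              # activation energy (eV)
T_a = 150 + 273        # accelerated test temperature (K)
T_u = 85 + 273         # use temperature (K)

# Acceleration factor between the two temperatures
AF = math.exp((E_a / K_B) * (1.0 / T_u - 1.0 / T_a))

# Characteristic life at 85 C from the 2750-hour life at 150 C
life_85C = AF * 2750

print(f"AF = {AF:.1f}")                         # about 12
print(f"Life at 85 C = {life_85C:.0f} hours")   # about 33,000 hours
```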
AST/PASS
HALT (Highly Accelerated Life Test) and HASS (Highly Accelerated Stress Screens)
are two types of accelerated test processes used to simulate aging in manufactured
products. The HALT/HASS process was invented by Dr. Gregg Hobbs in the early
1980s. It has since been used with much success in various military and commercial
applications. The HALT/HASS methods and tools are still in the development phase
and will continue to evolve as more companies embrace the concept of accelerated
testing. Many companies use this type of testing, which they call AST (Accelerated
Stress Test) and PASS (Production Accelerated Stress Screen).
The goal of accelerated testing is to simulate aging. If the stress-strength rela-
tionships are plotted, the design strength and field stress are distributed around
means. Let us assume the stress and strength distributions are overlapped (the right
tail of the stress curve is overlapped with the left tail of the strength curve). When
that happens, there is an opportunity for the product to fail in the field. This area of
overlap is called interference.
Many products, including some electronic products, have a tendency to grow
weaker with age. This is reflected in a greater overlap of the curves, thus increasing
the interference area. Accelerated testing attempts to simulate the aging process so
that the limits of design strength are identified quickly and the necessary design
modifications can be implemented.
PURPOSE OF AST
AST is a highly accelerated test designed to fail the target component or module.
The goal of this process is to cause failure, discover the root cause, fix it, and retest
it. This process continues until the limit of technology is reached and all the
components of one technology (i.e., capacitors, diodes, resistors) fail. Once a design
reaches its limit of technology, the tails of the stress-strength distribution should
have minimal overlap.
The AST method uses step-stress techniques to discover the operating and
destruct limits of the component or module design. This method should be used in
the pre-prototype and/or pre-bookshelf phase of the product development cycle or
as soon as the first parts are available. Let us look at an example:
We want to discover the operating and destruct limits of a component/module
design for minimum temperature. The unit is placed in a test chamber, stabilized at
-40°C, then powered up to verify the operation. The unit is then unpowered, the
temperature lowered to -45°C, and the unit allowed to stabilize at that temperature.
It is then powered on and verified. This process is repeated as the temperature is
lowered in 5°C increments.
At -70°C, the unit fails. The unit is warmed to -65°C to see if it recovers.
Normally, it will recover. The temperature of -65°C is said to be its operational
limit. The test continues to determine the destruct limit. The temperature is lowered to
-75°C, stabilized, powered to see if it operates, then returned to -65°C to see if
it recovers. If, when this unit is taken down to -95°C and returned to -65°C, it does
not recover, the minimum temperature destruct limit for this module is determined
to be -95°C. The failed module is then analyzed to determine the root cause of the
failure.
The team must then determine if the failure mode is the limit of technology or
if it is a design problem that can be fixed. Experience has shown that 80% of the
failures are design problems accelerated to failure using the AST or similar accel-
erated stress test methods.
AST PRE-TEST REQUIREMENTS
Before AST is run on a product, the product development team should verify that:
The component/module meets specification requirements at minimum and
maximum temperature.
The vibration evaluation test (sine-sweep) is complete.
Data are available for review by the reliability engineer.
A copy of all schematics is available for review.
The product development team will provide the component/module monitoring
equipment used during AST and will work with the reliability engineer to define
what constitutes a failure during the test.
OBJECTIVE AND BENEFITS OF AST
The objective of AST is to discover the operational and destruct limits of a design
and to verify how close these limits are to the technological limits of the components
and materials used in the design. It also verifies that the component/module is strong
enough to meet the requirements of the customer and product application. These
requirements must be balanced with reasonable cost considerations. The benefits of
AST include:
Easier system and subsystem validation due to:
Elimination of component-/module-related failures
Verification of worst-case stress analysis and derating requirements
A list of failure modes and corrections to be shared with the design team
and incorporated into future designs
Products that allow the manufacturing team to use PASS and to eliminate
the in-process build and check types of tests
The failure modes from the AST and PASS are used by the manufacturing team
to ensure that they do not see any of these problems in their products.
PURPOSE OF PASS
PASS is incorporated into a process after the design has been first subjected to AST.
The purpose of PASS is to take the process flaws created in the component/module
from latent (invisible) to patent (visible). This is accomplished by severely stressing
a component enough to make the flaws visible to the monitoring equipment. These
flaws are called outliers, and they result from process variation, process changes,
and different supplier sources. The goal of PASS is to find the outliers, which will
assist in the determination of the root cause and the correction of the problem before
the component reaches the customer. This process offers the opportunity for the
organization to eliminate module conditioning and burn-in.
PASS development is an iterative process that starts when the pre-pilot units
become available in the pre-pilot phase of the product development cycle. The initial
PASS screening test limits are the AST operational limits and will be adjusted
accordingly as the components/modules fail and the root cause determinations indi-
cate whether the failures are limits of technology or process problems. The PASS
also incorporates ndings from process failure mode and effect analysis (PFMEA)
regarding possible signicant process failure modes that must be detected if
present.
When PASS development is complete, a strength-of-PASS test is performed to
verify that the PASS has not removed too much useful life from the product. A
sample of 12 to 24 components is run through 10 to 20 PASS cycles. These samples
are then tested using the design verification life test. If the samples fail this test, the
screen is too strong. The PASS will be adjusted based on the root cause analysis,
and the strength-of-PASS will be rerun.
OBJECTIVE AND BENEFITS OF PASS
The objective of PASS is to precipitate all manufacturing defects in the compo-
nent/module at the manufacturing facility, while still leaving the product with sub-
stantially more fatigue life after screening than is required for survival in the normal
customer environment. The benets of PASS include:
Accelerated manufacturing screens
Reduced facility requirements
Improved rate of return on tester costs
CHARACTERISTICS OF A RELIABILITY
DEMONSTRATION TEST
Eight characteristics are important in reliability demonstration testing. These are:
1. Specified reliability, R_s: This value is sometimes known as the customer
reliability. Traditionally, this value is represented as the probability of
success (i.e., 0.98); however, other measures may be used, such as a
specified MTBF.
2. Confidence level of the demonstration test: While customers desire a
certain reliability, they want the demonstration test to prove the reliability
at a given confidence level. A demonstration test with a 90% confidence
level is said to demonstrate with 90% confidence that the specified
reliability requirement is achieved.
3. Consumer's risk, beta: Any demonstration test runs the risk of accepting bad
product or rejecting good product. From the consumer's point of view,
the risk is greatest if bad product is accepted. Therefore, the consumer
wants to minimize that risk. The consumer's risk is the risk that a test can
accept a product that actually fails to meet the reliability requirement.
Consumer's risk can be expressed as: beta = 1 - confidence level
4. Probability distribution: This is the distribution that is used for the number
of failures or for time to failure. These are generally expressed as normal,
exponential, or Weibull.
5. Sampling scheme
6. Number of test failures to allow
7. Producer's risk, alpha: From the producer's standpoint, the risk is greatest if
the test rejects good product. Producer's risk is the risk that the test will
reject a product that actually meets the reliability requirement.
8. Design reliability, R_d: This is the reliability that is required in order to
meet the producer's risk, alpha, requirement at the particular sample size
chosen for the test. Small test sample sizes will require a high design
reliability in order to meet the producer's risk objective. As the sample
size increases, the design reliability requirement will become smaller in
order to meet the producer's risk objective.
THE OPERATING CHARACTERISTIC CURVE
The relationship between the probability of acceptance and the population reliability
can be shown with an operating characteristic (OC) curve. An operating characteristic
curve can also be used to show the relationship between the probability of acceptance
and MTBF or failure rate. Given an OC curve, then, one may calculate the:
1. Producer's risk, alpha
2. Consumer's risk, beta
3. Probability of acceptance at any other population reliability or MTBF or
failure rate
Obviously, a specific OC curve will apply for each test situation and will depend
on the number of pieces tested and the number of failures allowed.
ATTRIBUTES TESTS
If the components being tested are merely being classied as acceptable or unac-
ceptable, the demonstration test is an attributes test. Attributes tests:
May be performed even if a probability distribution of the time to failure
is not known
May be performed if a probability distribution such as normal, exponen-
tial, or Weibull is assumed by dichotomizing the life distribution into
acceptable and unacceptable time to failure
Are usually simpler and cheaper to perform than variables tests
Usually require larger sample sizes to achieve the same confidence or
risks as variables tests
VARIABLES TESTS
Variables tests are used when more information is required than whether the unit
passed or failed, for example, What was the time to failure? The test is a variables
test if the life of the items under test is:
Recorded in time units
Assumed to have a specific probability distribution such as normal, expo-
nential, or Weibull
FIXED-SAMPLE TESTS
When the required reliability and the test confidence/risk are known, statistical theory
will dictate the precise number of items that must be tested if a fixed sample size
is desired.
SEQUENTIAL TESTS
A sequential test may be used when the units are tested one at a time and the
conclusion to accept or reject is reached after an indeterminate number of observa-
tions. In a sequential test:
1. The average number of samples required to reach a conclusion will
usually be lower than in a fixed-sample test. This is especially true if the
population reliability is very good or very poor.
2. The required sample size is unknown at the beginning of the test and can
be substantially larger than that in the fixed-sample test in certain cases.
3. The test time required is much longer because samples are tested one at
a time (in series) rather than all at the same time (in parallel), as in fixed-
sample tests.
Now that you are familiar with the four test types, let us look at the test methods.
Note that the four test types are not mutually exclusive. We can have fixed-sample
or sequential-attributes tests as well as fixed-sample or sequential-variables tests.
RELIABILITY DEMONSTRATION TEST METHODS
Attributes tests can be used when:
The accept/reject criterion is a go/no-go situation.
The probability distribution of times to failure is unknown.
Variables tests are found to be too expensive.
SMALL POPULATIONS FIXED-SAMPLE TEST
USING THE HYPERGEOMETRIC DISTRIBUTION
When items from a small population are tested and the accept/reject decision is based
on attributes, the hypergeometric distribution is applicable for test planning. The defi-
nition of successfully passing the test will be that an item survives the test. The parameter
to be evaluated is the population reliability. The estimation of the parameter is based
on a fixed sample size and testing without repair. The method to use is described below:
1. Define the criteria for success/failure.
2. Define the acceptance reliability, R_s.
3. Specify the confidence level or the corresponding consumer's risk, beta.
4. Specify, if desired, producer's risk, alpha. (Producer's risk can be used to
calculate the design reliability target, R_d, needed in order to meet the
requirements.)
The process consists of a trial-and-error solution of the hypergeometric equation
until the conditions for the probability of acceptance are met.
The equation that is used is:
Pr(x <= f) = sum (from x = 0 to f) of [C(N(1 - R), x) C(NR, n - x)] / C(N, n)

where C(a, b) denotes the number of combinations, C(a, b) = a!/[b!(a - b)!];
Pr(x <= f) = probability of acceptance; f = maximum number of failures to
be allowed; x = observed failures in sample; R = reliability of population; N =
population size; and n = sample size.
LARGE POPULATION FIXED-SAMPLE TEST
USING THE BINOMIAL DISTRIBUTION
When parts from a large population are tested and the accept/reject decision is based
on attributes, the binomial distribution can be used. Note that for a large N (one in
which the sample size will be less than 10% of the population), the binomial
distribution is a good approximation for the hypergeometric distribution. The bino-
mial attribute demonstration test is probably the most versatile for use on product
components for several reasons:
1. The population is large.
2. The time-to-failure distribution for the parts is probably unknown.
3. Pass/fail criteria are usually appropriate.
As with the hypergeometric distribution, the procedure begins by identification of:
1. Specified reliability, R_s
2. Confidence level or consumer's risk, beta
3. Producer's risk, alpha (if desired)
The process consists of a trial-and-error solution of the binomial equation until
the conditions for the probability of acceptance are met. The equation that is used is:
Pr(x <= f) = sum (from x = 0 to f) of C(n, x) (1 - R)^x R^(n - x)

where Pr(x <= f) = probability of acceptance; f = maximum number of failures to be
allowed; x = observed failures in sample; R = reliability of population; and n =
sample size.
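The trial-and-error search described above is easy to automate. The sketch below is our own Python illustration (the function name and the requirement of R_s = 0.90 demonstrated at 90% confidence with one allowed failure are hypothetical); it finds the smallest sample size whose probability of acceptance at R_s does not exceed the consumer's risk and then reports the producer's risk at a candidate design reliability.

```python
from math import comb

def p_accept(n, f, R):
    """Binomial probability of seeing f or fewer failures in n trials
    when each unit survives with probability R."""
    return sum(comb(n, x) * (1 - R) ** x * R ** (n - x) for x in range(f + 1))

# Hypothetical requirement: demonstrate R_s = 0.90 at 90% confidence (beta = 0.10)
R_s, beta, f = 0.90, 0.10, 1      # allow one failure

n = f + 1
while p_accept(n, f, R_s) > beta:
    n += 1
print(f"Sample size n = {n}, P(accept | R = {R_s}) = {p_accept(n, f, R_s):.3f}")

# Producer's risk alpha at a hypothetical design reliability R_d = 0.98
R_d = 0.98
alpha = 1 - p_accept(n, f, R_d)
print(f"Producer's risk at R_d = {R_d}: alpha = {alpha:.3f}")
```

The same loop illustrates the point made earlier: a small sample size forces a high design reliability if the producer's risk is to stay low.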
LARGE POPULATION FIXED-SAMPLE TEST
USING THE POISSON DISTRIBUTION
The Poisson distribution can be used as an approximation of both the hypergeometric
and the binomial distributions if:
The population, N, is large compared to the sample size, n.
The fractional defective in the population is small (R_population > 0.9).
The process consists of a trial-and-error solution using the following equation
or Poisson tables, R_s, R_d, and various sample sizes until the conditions of
alpha and beta are satisfied.

Pr(x <= f) = sum (from x = 0 to f) of [lambda_poi^x e^(-lambda_poi)] / x!

where Pr(x <= f) = probability of acceptance; f = maximum number of failures to be
allowed; x = observed failures in sample; lambda_poi = (n)(1 - R) (the reader should note
that lambda_poi is the Poisson mean and does not relate to the failure rate); R = reliability
of population; and n = sample size.
SUCCESS TESTING
Success testing is a special case of binomial attributes testing for large populations
where no failures are allowed. Success testing is the simplest method for demon-
strating a required reliability level at a specified confidence level. In this test case,
n items are subjected to a test for the specified time of interest, and the specified
reliability and confidence levels are demonstrated if no failures occur. The method
uses the following relationship:
R = (1 - C)^(1/n) = beta^(1/n)

where R = reliability required; n = number of units tested; C = confidence level;
and beta = consumer's risk.
The necessary sample size to demonstrate the required reliability at a given
confidence level is:

n = ln(1 - C) / ln R = ln beta / ln R
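This is a one-line calculation, shown here as a small Python sketch with a hypothetical target of R = 0.95 demonstrated at 90% confidence (the function name is ours):

```python
import math

def success_test_sample_size(R, C):
    """Units needed, with zero failures allowed, to demonstrate reliability R
    at confidence level C."""
    return math.ceil(math.log(1.0 - C) / math.log(R))

# Hypothetical target: demonstrate R = 0.95 at 90% confidence
print(success_test_sample_size(0.95, 0.90))   # 45 units
```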
SEQUENTIAL TEST PLAN FOR THE BINOMIAL DISTRIBUTION
The sequential test is a hypothesis testing method in which a decision is made after
each sample is tested. When sufficient information is gathered, the testing is discon-
tinued. In this type of testing, sample size is not fixed in advance but depends upon
the observations. Sequential tests should not be used when the exact time or cost of
the test must be known beforehand or is specified. This type of test plan may be
useful when the:
1. Accept/reject criterion for the parts on test is based on attributes
2. Exact test time available and sample size to be used are not known or
specied
The test procedure consists of testing parts one at a time and classifying the
tested parts as good or defective. After each part is tested, calculations are made
based on the test data generated to that point. The decision is made as to whether
the test has been passed or failed or if another observation should be made. A
sequential test will result in a smaller average number of parts tested when the
population tested has a reliability close to either the specified or design reliability.
The method to use is described below:
Determine R_s, R_d, alpha, and beta.
Calculate the accept/reject decision points using:

A = (1 - beta)/alpha  and  B = beta/(1 - alpha)

As each part is tested, classify it as either a failure or success. Evaluate the
following expression for the binomial distribution:

L = [(1 - R_s)/(1 - R_d)]^F (R_s/R_d)^S
where F = total number of failures and S = total number of successes.
If L >= A = (1 - beta)/alpha, the test is failed.
If L <= B = beta/(1 - alpha), the test is passed.
If B < L < A, the test should be continued.
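A minimal Python sketch of this sequential evaluation follows (the function and parameter values are our own illustration, not a prescribed plan):

```python
def sequential_binomial(results, R_s, R_d, alpha, beta):
    """Wald sequential test on pass/fail results (True = success, False = failure).

    Returns 'pass', 'fail', or 'continue' after the last observation."""
    A = (1 - beta) / alpha        # reject (fail) limit
    B = beta / (1 - alpha)        # accept (pass) limit
    f = s = 0
    for ok in results:
        if ok:
            s += 1
        else:
            f += 1
        L = ((1 - R_s) / (1 - R_d)) ** f * (R_s / R_d) ** s
        if L >= A:
            return "fail"
        if L <= B:
            return "pass"
    return "continue"

# Hypothetical plan: R_s = 0.90, R_d = 0.98, alpha = 0.05, beta = 0.10
outcomes = [True] * 20 + [False] + [True] * 30
print(sequential_binomial(outcomes, 0.90, 0.98, 0.05, 0.10))
```

Each success pulls L down toward the accept limit and each failure pushes it up toward the reject limit, which is why populations that are clearly good or clearly bad reach a decision quickly.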
GRAPHICAL SOLUTION
A graphical solution for critical values of f and s is possible by solving the following
equations:

ln[(1 - beta)/alpha] = f ln[(1 - R_s)/(1 - R_d)] + s ln(R_s/R_d)

and

ln[beta/(1 - alpha)] = f ln[(1 - R_s)/(1 - R_d)] + s ln(R_s/R_d)
VARIABLES DEMONSTRATION TESTS
This section deals with demonstration tests where you can test by variables. Rather
than being a straight accept/reject, the variables test will determine whether the
product meets other reliability criteria.
FAILURE-TRUNCATED TEST PLANS FIXED-SAMPLE TEST
USING THE EXPONENTIAL DISTRIBUTION
This test plan is used to demonstrate life characteristics of items whose failure times
are exponentially distributed and when the test will be terminated after a pre-assigned
number of failures. The method to use is as follows:
First, obtain the specified reliability (R_s), failure rate (lambda_s), or MTBF (theta_s), and the
test confidence. Remember that for the exponential distribution:

R_s = e^(-lambda_s t) = e^(-t/theta_s)
Then, solve the following equation for various sample sizes and allowable
failures:

theta = 2 [sum (i = 1 to n) of t_i] / chi-square(alpha, 2f)

where theta = MTBF demonstrated; t_i = hours of testing for unit i; f = number of failures;
chi-square(alpha, 2f) = the percentage point of the chi-square distribution for 2f degrees of
freedom; and alpha = 1 - confidence level.
TIME-TRUNCATED TEST PLANS FIXED-SAMPLE TEST USING THE
EXPONENTIAL DISTRIBUTION
This type of test plan is used when:
1. A demonstration test is constrained by time or schedule.
2. Testing is by variables.
3. Distribution of failure times is known to be exponential.
The method to use will be the same as with the failure-truncated test. In this case:

theta = 2 [sum (i = 1 to n) of t_i] / chi-square(alpha, 2(f + 1))

where theta = MTBF demonstrated; t_i = hours of testing for unit i; f = number of failures;
chi-square(alpha, 2(f + 1)) = the percentage point of the chi-square distribution for 2(f + 1) degrees
of freedom; and alpha = 1 - confidence level.
For the time-truncated test, the test is stopped at a specific time and the number
of observed failures (f) is determined. Because the time at which the next
failure would have occurred after the test was stopped is unknown, it is assumed
to occur in the next instant after the test is stopped. This is the reason that one
is added to the number of failures in the degrees of freedom for chi-square.
EXAMPLE
How many units must be checked on a 2000-hour test if zero failures are allowed
and theta_s = 32,258 hours? A 75% confidence level is required.
From the information, we know that:

alpha = 1 - 0.75 = 0.25
2(f + 1) = 2(0 + 1) = 2

Therefore:

theta = 2 [sum of t_i] / chi-square(0.25, 2) = 2 [sum of t_i] / 2.772 = 32,258

By rearranging this equation, we see that:

sum of t_i = (2.772)(32,258)/2 = 44,709.59 hours

Since no failures are allowed, all units must complete the 2000-hour test and:

sum of t_i = (n)(2000) = 44,709.59

Solving for n:

n = 44,709.59/2000 = 22.35, or 23 units. We can say that if we place 23 units on test
for 2000 hours and have no failures, we can be 75% confident that the MTBF is
equal to or greater than 32,258 hours. (Note: This assumes that the test environment
duplicates the use environment such that one hour on test is equal to one hour of
actual use.)
Failure-truncated and time-truncated demonstration test plans for the exponential
distribution can also be designed in terms of theta_s, theta_d, alpha, and beta by using methods
covered in the sources listed in the references and selected bibliography.
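The zero-failure case of the time-truncated plan is easy to compute directly, because the chi-square percentage point for 2 degrees of freedom is exactly -2 ln(alpha). The sketch below is a Python illustration (the function name is ours) that reproduces the worked example above:

```python
import math

def units_for_zero_failure_test(mtbf_target, test_hours, confidence):
    """Units needed on a time-truncated test with zero failures allowed.

    With f = 0 the chi-square statistic has 2 degrees of freedom, and its upper
    percentage point is exactly -2 * ln(alpha), so no tables are needed."""
    alpha = 1.0 - confidence
    chi2 = -2.0 * math.log(alpha)               # chi-square(alpha, 2)
    total_hours = chi2 * mtbf_target / 2.0      # required sum of unit test hours
    return math.ceil(total_hours / test_hours)

# The worked example: MTBF of 32,258 hours at 75% confidence on a 2000-hour test
print(units_for_zero_failure_test(32258, 2000, 0.75))   # 23 units
```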
WEIBULL AND NORMAL DISTRIBUTIONS
Fixed-sample tests using the Weibull distribution and the normal distribution
have also been developed. If you are interested in pursuing the tests for either of
these distributions, see the sources listed in the selected bibliography.
SEQUENTIAL TEST PLANS
Sequential test plans can also be used for variables demonstration tests. The sequen-
tial test leads to a shorter average number of part hours of test exposure if the
population MTBF is near theta_s or theta_d (i.e., close to the specified or design MTBF).
EXPONENTIAL DISTRIBUTION SEQUENTIAL TEST PLAN
This test plan can be used when:
1. The demonstration test is based upon time-to-failure data.
2. The underlying probability distribution is exponential.
The method to be used for the exponential distribution is to:
1. Identify theta_s, theta_d, alpha, and beta.
2. Calculate the accept/reject decision points:

A = (1 - beta)/alpha  and  B = beta/(1 - alpha)

Evaluate the following expression for the exponential distribution:

L = (theta_d/theta_s)^n exp[-(1/theta_s - 1/theta_d) sum (i = 1 to n) of t_i]

where t_i = time to failure of the ith unit tested and n = number tested.

If L >= A, the test is failed.
If L <= B, the test is passed.
If B < L < A, the test should be continued.
A graphical solution can also be used by plotting decision lines:
nb - h_1  and  nb + h_2
where n = number tested; b = (1/D) ln(theta_d/theta_s); D = (1/theta_s) - (1/theta_d);
h_1 = (1/D) ln[(1 - beta)/alpha]; and h_2 = (1/D) ln[(1 - alpha)/beta].
Let t_i equal the time to failure for the ith item. Make conclusions based on the
following:

If sum of t_i < nb - h_1, the test has failed.
If sum of t_i >= nb + h_2, the test is passed.
If nb - h_1 <= sum of t_i < nb + h_2, continue the test.
EXAMPLE
Assume you are interested in testing a new product to see whether it meets a specified
MTBF of 500 hours with a consumer's risk of 0.10. Further, specify a design MTBF
of 1000 hours for a producer's risk of 0.05. Run tests to determine whether the
product meets the criteria.
Determine D based on the known criteria:

D = (1/theta_s) - (1/theta_d) = (1/500) - (1/1000) = 0.001

Then calculate

h_1 = (1/0.001) ln[(1 - 0.10)/0.05], which is approximately 2890
h_2 = (1/0.001) ln[(1 - 0.05)/0.10], which is approximately 2251

Now solve for b:

b = (1/0.001) ln(1000/500), which is approximately 693
Using these results, we can determine at which points we can make a decision, by
using the following:
If sum of t_i < nb - h_1 = 693n - 2890, the test has failed.
If sum of t_i >= nb + h_2 = 693n + 2251, the test is passed.
If 693n - 2890 <= sum of t_i < 693n + 2251, continue the test.
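A small Python sketch of this graphical-solution bookkeeping follows (the function name and the trial values of accumulated hours and failures are our own illustration):

```python
import math

def sequential_exponential_status(total_hours, n, theta_s, theta_d, alpha, beta):
    """Status of an exponential sequential test after n failures and
    total_hours of accumulated unit test time (graphical-solution form)."""
    D = 1.0 / theta_s - 1.0 / theta_d
    b = math.log(theta_d / theta_s) / D
    h1 = math.log((1 - beta) / alpha) / D
    h2 = math.log((1 - alpha) / beta) / D
    if total_hours <= n * b - h1:
        return "fail"
    if total_hours >= n * b + h2:
        return "pass"
    return "continue"

# The worked example: theta_s = 500 h, theta_d = 1000 h, beta = 0.10, alpha = 0.05.
# Suppose 3 failures have occurred after 5000 accumulated unit-hours of testing.
print(sequential_exponential_status(5000, 3, 500, 1000, 0.05, 0.10))   # 'pass'
```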
WEIBULL AND NORMAL DISTRIBUTIONS
Sequential test methods have also been developed for the Weibull distribution and
for the normal distribution. If you are interested in pursuing the sequential tests for
either of these distributions, see the selected bibliography.
INTERFERENCE (TAIL) TESTING
Interference demonstration testing can sometimes be used when the stress and
strength distributions are accurately known. If a random sample of the population
is obtained, it can be tested at a point stress that corresponds to a specific percentile
of the stress distribution. By knowing the stress and strength distributions, the
required reliability, the desired confidence level, and the number of allowable fail-
ures, it is possible to determine the sample size required.
RELIABILITY VISION
Reliability is valued by the organization and is a primary consideration in all decision
making. Reliability techniques and disciplines are integrated into system and com-
ponent planning, design, development, manufacturing, supply, delivery, and service
processes. The reliability process is tailored to fit individual business unit require-
ments and is based on common concepts that are focused on producing reliable
products and systems, not just components.
RELIABILITY BLOCK DIAGRAMS
Reliability block diagrams are used to break down a system into smaller elements and
to show their relationship from a reliability perspective. There are three types of reli-
ability block diagrams: series, parallel, and complex (combination of series and parallel).
1. A typical series block diagram is shown in Figure 7.2 with each of the
three components having R1, R2, and R3 reliability respectively.
FIGURE 7.2 A series block diagram (R1, R2, and R3 in series).
The system reliability for the series is

R_total = (R_1)(R_2)(R_3) ... (R_n)
EXAMPLE
If the reliability for R1 = .80, R2 = .99, and R3 = .99, the system reliability is: R_total =
(.80)(.99)(.99) = .78. Please notice that the total reliability is no more than that of the weakest
component in the system. In this case, the total reliability is less than R1.
2. A parallel reliability block diagram shows a system that has built-in
redundancy. A typical parallel system is shown in Figure 7.3.
The system reliability is

R_total = 1 - [(1 - R_1(t))(1 - R_2(t))(1 - R_3(t)) ... (1 - R_n(t))]
EXAMPLE
If the reliability for R1 = .80, R2 = .90, and R3 = .99, the system reliability is: R_total =
1 - [(1 - .80)(1 - .90)(1 - .99)] = .9998. Please notice that the total reliability is more
than that of the strongest component in the system. In this case, the total reliability
is more than R3.
3. Complex reliability block diagrams show systems that combine both series
and parallel situations. A typical complex system is shown in Figure 7.4.
The system reliability for this system is calculated in two steps:
Step 1. Calculate the parallel reliability.
Step 2. Calculate the series reliability which becomes the total reli-
ability.
EXAMPLE
If the reliability for R1 = .80, R2 = .90, R3 = .95, R4 = .98, and R5 = .99, what is
the total reliability for the system?
FIGURE 7.3 A parallel reliability block diagram (R1, R2, and R3 in parallel).
Step 1. The parallel reliability for R3 and R4 is

R_3&4 = 1 - [(1 - R_3(t))(1 - R_4(t))] = 1 - [(1 - .95)(1 - .98)] = .999
Step 2. The series reliability for R1, R2, (R3 & R4), and R5 is

R_total = (R_1)(R_2)(R_3&4)(R_5) = (.80)(.90)(.999)(.99) = .712
Please notice that the parallel reliability was actually converted into a single reliability
and that is why it is used in the series as a single value.
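The three calculations above can be captured in two small helper functions. This is a Python sketch with our own names, shown only to make the arithmetic explicit:

```python
def series(*rs):
    """Reliability of components in series: the product of the reliabilities."""
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel(*rs):
    """Reliability of redundant (parallel) components."""
    out = 1.0
    for r in rs:
        out *= (1.0 - r)
    return 1.0 - out

# The three worked examples above
print(round(series(0.80, 0.99, 0.99), 3))                        # 0.784 (the text rounds to .78)
print(round(parallel(0.80, 0.90, 0.99), 4))                      # 0.9998
print(round(series(0.80, 0.90, parallel(0.95, 0.98), 0.99), 3))  # 0.712
```

Note how the complex diagram is handled exactly as in Step 1 and Step 2: the parallel pair is first collapsed into a single equivalent reliability and then treated as one block in the series calculation.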
WEIBULL DISTRIBUTION INSTRUCTIONS FOR PLOTTING AND ANALYZING
FAILURE DATA ON A WEIBULL PROBABILITY CHART
This technique is useful for analyzing test data and graphically displaying it on
Weibull probability paper. The technique provides a means to estimate the percent
failed at specific life characteristics together with the shape of the failure distribution.
The following procedure presents a manual method of conducting the analysis, but
many computer programs can do the same calculations and also plot the Weibull
curve. Weibull analysis is one of the simpler analytical methods, but it is also one
of the most beneficial. The technique can be utilized for other than just analyzing
failure data. It can be used for comparing two or more sets of data such as different
designs, materials, or processes. Following are the steps for conducting a Weibull
analysis.
1. Gather the failure data (it can be in miles, hours, cycles, number of parts
produced on a machine, etc.), then list in ascending order. For example:
We conduct an experiment and the following failures (sample size of 10
failures) are identied (actual hours to failure): 95, 110, 140, 165, 190,
205, 215, 265, 275, and 330.
2. Using the table of median ranks (Table 7.2), find the column correspond-
ing to the number of failures in the sample tested. In our example we
have a sample size of ten, so we use the sample size 10 column. The
% Median Ranks are then read directly from the table.
FIGURE 7.4 A complex reliability block diagram (R1 and R2 in series, followed by R3 and
R4 in parallel, followed by R5).
3. Match the hours (or some other failure characteristic that is measured)
with the median ranks from the sample size selected. For example:
4. In constructing the Weibull plot, label the Life on the horizontal log
scale on the Weibull graph in the units in which the data were measured.
Try to center the life data close to the center of the horizontal scale
(Figure 7.5).
5. Plot each pair of actual hours to failure (on the horizontal logarithmic
scale) and % median rank (on the vertical axis, which is a log-log scale)
on the graph. The matching points are shown as dots on Figure 7.5.
Draw a line of best fit (generally a straight line) as close to the data
pairs as possible. Half the data points should be on one side of the line,
and the other half should be on the other side. No two people will generate
the exact same line, but analysts should keep in mind that this is a visual
estimate. (If the line is computer generated, it is actually calculated based
on the best-fit regression line.)
6. After the line of best fit is drawn, the life at a specific point can be
found by going vertically to the Weibull line and then going horizontally to
the Cumulative % Failed. In other words, this is the percent that is
expected to fail at the life that was selected. In the example, 100 was
selected as the life; going up to the line and then across, we can see
the expected % failed to be 10%. In this case, the life at 100 hours is also
known as the B_10 life (or 90% reliability) and is the value at which we
would expect 10% of the parts to fail when tested under similar conditions.
(Please note that there is nothing secret about the B_10 life. Any B_x life can
be identified. It just happens that the B_10 is the conventional life that most
engineers are accustomed to using.) In addition, we can plot the 5% and
the 95% confidence limits using Tables 7.3 and 7.4, respectively.
The confidence lines are drawn for our example in Figure 7.5. The reader
will notice that the confidence lines are not straight. That is because, as we
move toward the fringes of the distribution, we are less confident about the results.
Actual Hours to Failure    % Median Rank
          95                    6.7
         110                   16.2
         140                   25.9
         165                   35.5      (sample size of 10 failures)
         190                   45.2
         205                   54.8
         215                   64.5
         265                   74.1
         275                   83.8
         330                   93.3
FIGURE 7.5 The Weibull distribution for the example.
TABLE 7.3
Five Percent Rank Table
Sample Size (n)
j 1 2 3 4 5 6 7 8 9 10
1 5.000 2.532 1.695 1.274 1.021 0.851 0.730 0.639 0.568 0.512
2 22.361 13.535 9.761 7.644 6.285 5.337 4.639 4.102 3.677
3 36.840 24.860 18.925 15.316 12.876 11.111 9.775 8.726
4 47.237 34.259 27.134 22.532 19.290 16.875 15.003
5 54.928 41.820 34.126 28.924 25.137 22.244
6 60.696 47.820 40.031 34.494 30.354
7 65.184 52.932 45.036 39.338
8 68.766 57.086 49.310
9 71.687 60.584
10 74.113
Sample Size (n)
j 11 12 13 14 15 16 17 18 19 20
1 0.465 0.426 0.394 0.366 0.341 0.320 0.301 0.285 0.270 0.256
2 3.332 3.046 2.805 2.600 2.423 2.268 2.132 2.011 1.903 1.806
3 7.882 7.187 6.605 6.110 5.685 5.315 4.990 4.702 4.446 4.217
4 13.507 12.285 11.267 10.405 9.666 9.025 8.464 7.969 7.529 7.135
5 19.958 18.102 16.566 15.272 14.166 13.211 12.377 11.643 10.991 10.408
6 27.125 24.530 22.395 20.607 19.086 17.777 16.636 15.634 14.747 13.955
7 34.981 31.524 28.705 26.358 24.373 22.669 21.191 19.895 18.750 17.731
8 43.563 39.086 35.480 32.503 29.999 27.860 26.011 24.396 22.972 21.707
9 52.991 47.267 42.738 39.041 35.956 33.337 31.083 29.120 27.395 25.865
10 63.564 56.189 50.535 45.999 42.256 39.101 36.401 34.060 32.009 30.195
11 76.160 66.132 58.990 53.434 48.925 45.165 41.970 39.215 36.811 34.693
12 77.908 68.366 61.461 56.022 51.560 47.808 44.595 41.806 39.358
13 79.418 70.327 63.656 58.343 53.945 50.217 47.003 44.197
14 80.736 72.060 65.617 60.436 56.112 52.420 49.218
15 81.896 73.604 67.381 62.332 58.088 54.442
16 82.925 74.988 68.974 64.057 59.897
17 83.843 76.234 70.420 65.634
18 84.668 77.363 71.738
19 85.413 78.389
20 86.089
TABLE 7.4
Ninety-ve Percent Rank Table
Sample Size (n)
j 1 2 3 4 5 6 7 8 9 10
1 95.000 77.639 63.160 52.713 45.072 39.304 34.816 31.234 28.313 25.887
2 97.468 86.465 75.139 65.741 58.180 52.070 47.068 42.914 39.416
3 98.305 90.239 81.075 72.866 65.874 59.969 54.964 50.690
4 98.726 92.356 84.684 77.468 71.076 65.506 60.662
5 98.979 93.715 87.124 80.710 74.863 69.646
6 99.149 94.662 88.889 83.125 77.756
7 99.270 95.361 90.225 84.997
8 99.361 95.898 91.274
9 99.432 96.323
10 99.488
Sample Size (n)
j 11 12 13 14 15 16 17 18 19 20
1 23.840 22.092 20.582 19.264 18.104 17.075 16.157 15.332 14.587 13.911
2 36.436 33.868 31.634 29.673 27.940 26.396 25.012 23.766 22.637 21.611
3 47.009 43.811 41.010 38.539 36.344 34.383 32.619 31.026 29.580 28.262
4 56.437 52.733 49.465 46.566 43.978 41.657 39.564 37.668 35.943 34.366
5 65.019 60.914 57.262 54.000 51.075 48.440 46.055 43.888 41.912 40.103
6 72.875 68.476 64.520 60.928 57.744 54.835 52.192 49.783 47.580 45.558
7 80.042 75.470 71.295 67.497 64.043 60.899 58.029 55.404 52.997 50.782
8 86.492 81.898 77.604 73.641 70.001 66.663 63.599 60.784 58.194 55.803
9 92.118 87.715 83.434 79.393 75.627 72.140 68.917 65.940 63.188 60.641
10 96.668 92.813 88.733 84.728 80.913 77.331 73.989 70.880 67.991 65.307
11 99.535 96.954 93.395 89.595 85.834 82.223 78.809 75.604 72.605 69.805
12 99.573 97.195 93.890 90.334 86.789 83.364 80.105 77.028 74.135
13 99.606 97.400 94.315 90.975 87.623 84.366 81.250 78.293
14 99.634 97.577 94.685 91.535 88.357 85.253 82.269
15 99.659 97.732 95.010 92.030 89.009 86.045
16 99.680 97.868 95.297 92.471 89.592
17 99.699 97.989 95.553 92.865
18 99.715 98.097 95.783
19 99.730 98.193
20 99.744
7. The graph can be used for estimating the cumulative % failure at a
specified life, or it can be used for determining the estimated life at a
cumulative % failure. In the example, we would expect 63.2% of the test
units to fail at 222 hours. This value at 63.2% is also known as the
characteristic life or the mean time between failures (MTBF) for the
example distribution. Or, looking at the chart another way, we would like
to estimate the failure hours at a specified % failure. For example, at 95%
cumulative % failed, the hours to failure are 325 hours. Once the Weibull
plot is determined, an analyst can go either way.
8. The Weibull graph can also be used to estimate the reliability at a given
life, using the equation R(t) = 1 - F(t). A designer who wishes to
estimate the reliability of life at 200 hours would go vertically to the
Weibull line, then go horizontally to 52%, which is the percent expected
to fail. The estimated reliability at 200 hours would be 1 - 0.52 = 0.48,
or 48%. At 80 hours it would be 1 - 0.056 = 0.944, or 94.4%. The slope
is obtained by drawing a line parallel to the Weibull line on the Weibull
slope scale that is in the upper left corner of the chart.
9. If a computer program is used, the calculation for the line of best fit is
determined by the computer. Some programs draw the graph and show
the paired points, the line of best fit (using the least squares method or
the maximum likelihood method), the reliability at a specified hour (or
other designated parameter), and the slope of the line. (A computational
sketch of these calculations follows this list of instructions.)
10. One of the interesting observations regarding the Weibull graph is the
interpretations that can be made about the distribution by the portrayal of
the slope. When the slope is:
Less than 1, this indicates a decreasing failure rate, early life, or infant
mortality
Approximately 1, the distribution indicates a nearly constant failure
rate (useful life or a multitude of random failures)
Exactly 1, the distribution has an exponential pattern
Greater than 1, the start of wear out
Approximately 3.55, a normal distribution pattern.
11. Weibull plots can be made if test data also include test samples that have
not failed. Parts that have not failed (for whatever reason during the
testing) can be included in the calculations together with the failed parts
or assemblies. The non-failed data are referred to as suspended items. The
method of determining the Weibull plot is shown in the next set of
instructions.
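The sketch below carries out steps 1 through 8 numerically for the ten-failure data set. It is our own Python illustration: it uses Benard's approximation for the median ranks and a least-squares fit on the Weibull scales, so its results only approximate the values read from the hand-drawn graph above.

```python
import math

def weibull_fit(failures):
    """Median-rank regression: returns (slope b, characteristic life eta)."""
    n = len(failures)
    ranks = [(j - 0.3) / (n + 0.4) for j in range(1, n + 1)]   # Benard approximation
    xs = [math.log(t) for t in sorted(failures)]
    ys = [math.log(-math.log(1 - F)) for F in ranks]
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    eta = math.exp(xbar - ybar / b)
    return b, eta

hours = [95, 110, 140, 165, 190, 205, 215, 265, 275, 330]
b, eta = weibull_fit(hours)
B10 = eta * (-math.log(0.90)) ** (1 / b)       # life at 10% cumulative failures
R200 = math.exp(-(200 / eta) ** b)             # reliability at 200 hours
print(f"b = {b:.2f}, eta = {eta:.0f} h, B10 = {B10:.0f} h, R(200) = {R200:.2f}")
```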
INSTRUCTIONS FOR PLOTTING FAILURE AND SUSPENDED ITEMS DATA
ON A WEIBULL PROBABILITY CHART
1. Gather the failure and suspended items data, then including the suspended
items, list in ascending order.
2. Calculate the mean order number of each failed unit. The mean order
numbers before the rst suspended item are the respective item numbers
in the order of occurrence, i.e., 1, 2, 3, and 4. The mean order numbers
after the suspended items are calculated by the following equations.
Mean order number = (previous mean order number) + (new increment)

where

new increment = [(N + 1) - (previous mean order number)] / [1 + (number of items beyond the present suspended item)]

and N = total sample size.
For example, to compensate for S1 (first suspended item), new increment =
[(13 + 1) - 4]/(1 + 8) = 1.111 and the mean order number of F5 (fifth
failed item) = 4 + 1.111 = 5.111.
Note: Only one new increment is found each time a suspended item is en-
countered. Mean order number of F6 = 5.111 + 1.111 = 6.222.
New increment for mean order number of F7 = [(13 + 1) - 6.222]/(1 + 5) =
1.296.
Item Number    Hours to Failure or Suspension    Failure or Suspension Code*
     1                      95                            F1
     2                     110                            F2
     3                     140                            F3
     4                     165                            F4
     5                     185                            S1
     6                     190                            F5
     7                     205                            F6
     8                     210                            S2
     9                     215                            F7
    10                     265                            F8
    11                     275                            F9
    12                     330                            F10
    13                     350                            S3

* Code items as failed (F) or suspended (S). Sample size = 13: 10 failures, 3 suspensions.
Then, the mean order number of F7 (seventh failed item) is 6.222 + 1.296 =
7.518 (and so on for F8, F9, and F10).
This new increment also applies to mean order numbers:
3. A rough check on the calculations can be made by adding the last incre-
ment to the final mean order number. If the value is close to the total
sample size, the numbers are correct. In our example, 11.407 + [11.407
- 10.111] = 11.407 + 1.296 = 12.702, which is a close approximation to
the sample size of 13.
4. Using the table of median ranks for a sample size of 13 we can determine
the median rank for the first four failures, or we can use the approximate
median rank formula:

Median rank = [J - 0.3]/[N + 0.4]

where J = mean order number and N = total sample size.
For example, the median rank of F5 is:

(5.111 - 0.3)/(13 + 0.4) = 0.359

and, for the remainder of the failures:

(6.222 - 0.3)/(13 + 0.4) = 0.442
(7.518 - 0.3)/(13 + 0.4) = 0.539

and so on.
Item Number    Hours to Failure or Suspension    Failure or Suspension Code    Mean Order Number
     1                      95                            F1                          1
     2                     110                            F2                          2
     3                     140                            F3                          3
     4                     165                            F4                          4
     5                     185                            S1
     6                     190                            F5                          5.111
     7                     205                            F6                          6.222
     8                     210                            S2
     9                     215                            F7                          7.518
    10                     265                            F8                          8.815
    11                     275                            F9                         10.111
    12                     330                            F10                        11.407
    13                     350                            S3
5. Label the Life on the horizontal log scale on the Weibull graph in the
units in which the data were measured. Try to center the life data close
to the center of the horizontal scale.
6. Plot each pair of actual hours to failure (on the horizontal scale) and
% median rank (on the vertical scale) on the graph. Draw a line of
best fit (generally a straight line) as close to the data pairs as possible.
Half the data points should be on one side of the line, and the other half
should be on the other side.
7. Once the line is drawn, the life at a specific point can be found by going
vertically to the Weibull line then going horizontally to the Cumulative
% failed. In other words, this is the percent that is expected to fail at the
life that was selected. In the example, 200 hours was selected as the life,
then going up to the line and then across, we can see the expected %
failed to be 40%.
8. Other reliability parameters that can be read from the Weibull plot are:
MTBF = 240 hours
B_10 = 105 hours
b (Weibull slope) = 2.5
Reliability at 100 hours is 1 - 0.09 = 0.91 reading from the graph, or, using
the Weibull equation,

R = e^-(t/MTBF)^b = e^-(100/240)^2.5 = 0.9038

9. Comparing the two examples shows that the analysis with suspended items
results in slightly higher reliability characteristics, using the same
failure data plus the three suspended items. (A computational sketch follows
the table below.)
Item Number    Hours to Failure or Suspension    Failure or Suspension Code    Mean Order Number    % Median Rank
     1                      95                            F1                          1                  5.2
     2                     110                            F2                          2                 12.6
     3                     140                            F3                          3                 20.0
     4                     165                            F4                          4                 27.5
     5                     185                            S1
     6                     190                            F5                          5.111             35.9
     7                     205                            F6                          6.222             44.2
     8                     210                            S2
     9                     215                            F7                          7.518             53.9
    10                     265                            F8                          8.815             63.5
    11                     275                            F9                         10.111             73.2
    12                     330                            F10                        11.407             82.9
    13                     350                            S3
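The mean-order-number bookkeeping for failures mixed with suspensions can also be automated. The sketch below is our own Python illustration; it applies the approximate median rank formula throughout, so the first four values differ slightly from the exact table values used above.

```python
def median_ranks_with_suspensions(items):
    """items: list of (hours, 'F' or 'S') in ascending order of hours.

    Returns (hours, % median rank) for the failed items, using the
    mean-order-number method described above."""
    N = len(items)
    out = []
    order = 0.0        # running mean order number
    increment = 1.0    # current increment between failures
    for idx, (hours, code) in enumerate(items, start=1):
        if code == 'S':
            # Recompute the increment each time a suspension is encountered
            remaining = N - idx                      # items beyond the suspended item
            increment = (N + 1 - order) / (1 + remaining)
        else:
            order += increment
            rank = (order - 0.3) / (N + 0.4)         # approximate median rank
            out.append((hours, round(100 * rank, 1)))
    return out

data = [(95, 'F'), (110, 'F'), (140, 'F'), (165, 'F'), (185, 'S'),
        (190, 'F'), (205, 'F'), (210, 'S'), (215, 'F'), (265, 'F'),
        (275, 'F'), (330, 'F'), (350, 'S')]
for hours, rank in median_ranks_with_suspensions(data):
    print(hours, rank)
```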
ADDITIONAL NOTES ON THE USE OF THE WEIBULL
1. Weibull plotting is an invaluable tool for analyzing life data; however,
some precautions should be taken. Goodness-of-fit is one concern. This
can be tested with various tests such as the Kolmogorov-Smirnov or Chi-
square. The use of an adequate sample size is another concern. Generally
a sample size should be greater than ten, but if the failure rate is in a tight
pattern (with relatively low variability), this generality may be relaxed.
Be suspicious of a curved line that best fits the data. This may indicate a
mixed sample of failures or inappropriate sampling.
2. If the Weibull plot is made and a curvilinear relation develops for the
connecting points, it usually indicates that two or more distributions are
making up the data. This may be due to infant mortality failures being
mixed with the data, failures due to components from two different
machines or assembly operations, or some other underlying cause. If a
curved relationship is indicated, the analyst should revisit the data and try
to determine if the data are made up of two or more distributions and then
manage each distribution separately.
3. There is another parameter in the Weibull analysis that was not discussed.
Besides the shape or slope (b) of the Weibull line and the scale or characteristic
life (the mean life or MTBF at the 63.2% cumulative percentage), there is
the location parameter. In most cases it is usually zero and should be of
little concern. In effect, it states that the distribution of failure times starts at
zero time, which is more often the case because it is difficult to imagine
otherwise. The characteristic life splits the distribution into two areas: 0.632
before and 0.368 (R = e^-1 = 0.368) after.
4. One of the advantages of using the Weibull is that it is very flexible in its
interpretations. A wealth of information can be derived from it. If the
Weibull slope is equal to one, the distribution is the same as the exponen-
tial, or a constant failure rate. If the slope is in the vicinity of 3.5, it is a
near normal distribution. If the slope is greater than one, the plot starts
to represent a wear out distribution, or an increasing hazard rate. A slope
less than one generally indicates a decreasing hazard rate, or an infant
mortality distribution.
5. Analysts should be careful about extrapolating beyond the data when
making predictions. Remember that the failure points fall within certain
bounds and that the analyst should have a valid reason when venturing
beyond these bounds. When making projections over and above these
confines, sound engineering judgment, statistical theory, and experience
should all be taken into consideration.
6. The three-parameter Weibull is a distribution with non-zero minimum life.
This means that the population of products goes for an initial period of
time without failure. The reliability function for the three-parameter
Weibull is given by

R(t) = exp{-[(t - delta)/(theta - delta)]^beta},  t >= delta
where t = time to failure (t >= delta); delta = minimum life parameter (delta >= 0); beta =
Weibull slope (beta > 0); and theta = characteristic life (theta > delta).
For a given reliability R,

t = delta + (theta - delta)[ln(1/R)]^(1/beta)

and the B_10 life is

B_10 = delta + (theta - delta)[ln(1/0.90)]^(1/beta)
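A small Python sketch of these two formulas follows (the function names and the parameter values are purely hypothetical, chosen only to illustrate the arithmetic):

```python
import math

def reliability_3p_weibull(t, delta, theta, beta):
    """R(t) for the three-parameter Weibull (delta = minimum life)."""
    if t < delta:
        return 1.0
    return math.exp(-((t - delta) / (theta - delta)) ** beta)

def b_life_3p_weibull(x, delta, theta, beta):
    """Life at which a fraction x (e.g., 0.10 for B10) of the population has failed."""
    return delta + (theta - delta) * (math.log(1.0 / (1.0 - x))) ** (1.0 / beta)

# Hypothetical parameters: 200 h minimum life, 1500 h characteristic life, slope 2.0
print(reliability_3p_weibull(500, 200, 1500, 2.0))
print(b_life_3p_weibull(0.10, 200, 1500, 2.0))     # B10 life
```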
DESIGN OF EXPERIMENTS
IN RELIABILITY APPLICATIONS
Certainly we can use DOE in passive observation of the covariates in the tested
components. We can also use DOE in directed experimentation as part of our
reliability improvement. Covariates are usually called factors in the experimentation
framework. Two main technical problems arise in the reliability area, however, when
standard methods of experimental design are employed.
1. Failure time data are rarely normally distributed, so standard analysis tools
that rely on symmetry, e.g., normal plots, do not work too well.
2. Censoring.
The first problem can be overcome by considering a transformation of the fail
times to make them approximately normal; the log transformation is usually a
good choice. The exact form of the fail time distribution is not important because
we are looking for effects that improve reliability, rather than exact predictions of
the reliability itself.
The second problem of censoring is a little bit trickier but can be dealt with by
iteration as follows:
1. Choose a basic model to fit to the data.
2. Fit the model to the data, treating the censor times as failure times.
3. Using this model, make a conditional prediction for the unobserved fail
times for each censored observation. The prediction is conditional because
the actual failure time must be consistent with the censoring mechanism.
4. Replace censor times with the fail time predictions from step 3.
5. Go back to step 2.
Eventually this process will converge, i.e., the predictions for the fail times of
the censorings will stop changing from one iteration to the next. If necessary, the
process can be tried with several model choices for step 1. In fact, the algorithm of
the five steps leads to the same results as maximum likelihood estimation.
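The following minimal sketch (ours, not from the text) shows what this iteration can look like for right-censored data under a simple normal model on the log fail times; it assumes NumPy and SciPy are available, and the data are invented for illustration.

```python
import numpy as np
from scipy import stats

def impute_censored(log_times, censored, n_iter=50):
    """Iteratively replace right-censored log fail times with conditional predictions."""
    y = log_times.astype(float).copy()
    for _ in range(n_iter):
        mu, sigma = y.mean(), y.std(ddof=1)       # step 2: fit a simple model to the current data
        z = (log_times[censored] - mu) / sigma
        # step 3: conditional prediction -- mean of the fitted normal truncated below at the censor time
        cond_mean = mu + sigma * stats.norm.pdf(z) / stats.norm.sf(z)
        y[censored] = cond_mean                   # step 4: replace censor times with the predictions
    return y                                      # step 5: repeat until the predictions stop changing

# Toy data: three observed failures plus two units still running (censored) at 55 hours
log_t = np.log([20.0, 35.0, 60.0, 55.0, 55.0])
cens = np.array([False, False, False, True, True])
print(np.exp(impute_censored(log_t, cens)))       # imputed failure times, hours
```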
RELIABILITY IMPROVEMENT
THROUGH PARAMETER DESIGN
Two special categories of covariates in any parameter design are design parameters
(or control factors) and error variables (or noise factors). The terms in parentheses
are the equivalent terms within the context of robustness, which we already have
discussed in Volume V of this series.
The achievement of higher reliability can also be viewed as an improvement to
robustness. Robustness is defined as reduced sensitivity to noise factors. In most
industries, noise factors have five main categories:
1. Piece to piece variation
2. Changes to component characteristics over time
3. Customer duty cycle
4. Environmental conditions
5. Interfacing (environment created by neighboring components in the sys-
tem)
Typically, noises in categories 3, 4, and 5 can induce noises in category 2. If
the function of the component can be made robust to noises in category 2, then the
component will, by denition, be more reliable. Often, noise category 1 contributes
to infant mortality, category 2 to degradation, and categories 3, 4, and 5 to useful
life problems. Recognizing this pattern of noises, we can relate them to the bathtub
curve (see Figure 7.1) for the hazard function.
Often, knowing the type of failure rate that is acting on our component can give
a clue as to the offending noise factor and hence lead to a root cause analysis of the
failure mechanism. Components can be made robust to noises by experimenting
with control factors. The idea (as in robustness generally) is to look for interactions
between control and noise factors. The reliability connection is made if there is a
time lag between the extremes of the noise space, denoted N- and N+, say (see
Figure 7.6).
Note that the functional measure is not failure time, but some ideal function of
the system. C1 and C2 represent two settings of a control factor. A design with C2
is more robust to noise than one with C1 and is therefore more reliable. Note: A
closely related area to robustness in reliability studies is Accelerated Degradation
Testing (ADT), which is closely associated with Accelerated Life Testing (ALT).
A parameter design layout in reliability applications follows the pattern for
parameter design studies, as in the example shown in Figure 7.7.
The idea of experimental layouts of this type is to look for interactions between
control factors and noise factors, which lead to configurations with minimum
difference between the y values.
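A small sketch of how the responses from a layout like Figure 7.7 might be screened is shown below; the numbers are invented, and in practice the analysis would be tied back to the actual control-factor columns of the design.

```python
import numpy as np

# Hypothetical responses for the eight configurations of a layout like Figure 7.7:
# column 0 = response at noise N- (new), column 1 = response at noise N+ (old)
y = np.array([
    [12.1, 9.8], [11.9, 11.5], [12.3, 10.1], [12.0, 11.8],
    [11.8, 9.9], [12.2, 12.0], [12.1, 10.0], [12.0, 11.9],
])

spread = np.abs(y[:, 1] - y[:, 0])   # sensitivity to noise: a small spread means robust
level = y.mean(axis=1)               # mean level of the functional measure

for i, (s, m) in enumerate(zip(spread, level), start=1):
    print(f"configuration {i}: mean = {m:.2f}, spread across N-/N+ = {s:.2f}")
# Configurations with a small spread (and an acceptable mean) point to the
# control-factor settings that interact favorably with the noise factors.
```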
DEPARTMENT OF DEFENSE RELIABILITY
AND MAINTAINABILITY STANDARDS
AND DATA ITEMS
Table 7.5 provides very useful information about reliability and maintainability
(R&M) standards and data items used in reliability.
FIGURE 7.6 Control factors and noise interactions. [Plot of the functional measure against time for two control-factor settings, C1 and C2, between the noise extremes N- and N+.]

FIGURE 7.7 An example of a parameter design in reliability usage. [An eight-configuration layout crossing control factors A, B, C, ..., G with the noise extremes N- (new) and N+ (old); each configuration yields a pair of responses, Y1- and Y1+ through Y8- and Y8+.]
TABLE 7.5
Department of Defense Reliability and Maintainability Standards and
Data Items
Standard Explanation
General Design Standards
MIL-STD-454M
MIL-HDBK-727
MIL-STD-810E
MIL-STD-1629A
MIL-STD-1686A
MIL-E-4158E-(USAF)
MIL-E-5400T
MIL-HDBK-251
MIL-HDBK-263A
MIL-HDBK-338A
Standard General Requirements for Electronic Equipment
Design Guidance for Producibility
Environmental Test Methods & Engineering Guidelines
Procedures for Performing a Failure Mode Effects & Criticality Analysis
Electrostatic Discharge Control Program for Protection of Electrical &
Electronic Parts, Assemblies & Equipment
General Specification for Ground Electronic Equipment
General Specification for Aerospace Electronic Equipment
Reliability/Design Thermal Applications
Electrostatic Discharge Handbook for Protection of Electrical &
Electronic Parts, Assemblies & Equipment
Electronic Reliability Design Handbook
Reliability Standards
MIL-STD-721C
MIL-STD-756B
MIL-STD-781 D
MIL-STD-785B
MIL-STD-1543B-(USAF)
MIL-STD-2155-(AS)
MIL-STD-2164-(EC)
MIL-Q-9858A
MIL-HDBK-189
MIL-HDBK-217F
MIL-HDBK-781
DoD-HDBK-344-(USAF)
Definitions of Terms for Reliability & Maintainability
Reliability Modeling & Prediction
Reliability Testing for Engineering Development Qualification &
Production
Reliability Program Systems & Equipment Development & Production
Reliability Program Requirements for Space & Launch Vehicles
Failure Reporting Analysis & Corrective Action System
Environmental Stress Screening Process for Electronic Equipment
Quality Program Requirements
Reliability Growth Management
Reliability Prediction of Electronic Equipment
Reliability Test Methods, Plans & Environments for Engineering
Development, Qualification & Production
Environmental Stress Screening of Electronic Equipment
Maintainability Standards
MIL-STD-470B
MIL-STD-471A
MIL-STD-2084-(AS)
MIL-STD-2165
MIL-HDBK-472
Maintainability Program for Systems & Equipment
Maintainability Demonstration
General Requirements for Maintainability
Testability Program for Electronic Systems & Equipment
Maintainability Prediction
Major Parts Standards
MIL-STD-198E
MIL-STD-199E
MIL-STD-202E
MIL-STD-701N
MIL-STD-750C
Selection & Use of Capacitors
Selection & Use of Resistors
Test Methods for Electronic & Electrical Component Parts
Lists of Standard Semiconductor Devices
Test Methods for Semiconductor Devices
MIL-STD-790E
MIL-STD-883D
MIL-STD-965A
MIL-STD-983
MIL-STD-1546A-(USAF)
MIL-STD-1547A-(USAF)
MIL-STD-1556B
MIL-STD-1562W
MIL-STD-1772B
MIL-S-19500H + QPL
MIL-M-38510J + QPL
MIL-H-38534A + QML
MIL-I-38535A + QML
MIL-HDBK-339-(USAF)
MIL-HDBK-780A
MIL-BUL-103J
Reliability Assurance Program for Electronic Part Specifications
Test Methods & Procedures for Microelectronics
Parts Control Program
Substitution List for Microcircuits
Parts, Materials & Processes Control Program for Space & Launch
Vehicles
Electronic Parts, Materials & Processes for Space & Launch Vehicles
Government/Industry Data Exchange Program (GIDEP) Contractor
Participation Requirements
Lists of Standard Microcircuits
Certification Requirements for Hybrid Microcircuit Facility & Lines
General Specification for Semiconductor Devices
General Specification for Microcircuits
General Specification for Hybrid Microcircuits
General Specification for Integrated Circuits (Microcircuits)
Manufacturing
Custom LSI Circuit Development & Acquisition for Space Vehicles
Standardized Military Drawings
List of Standardized Military Drawings (SMDs)
Reliability Analysis Center Publications
DSR
FMD
FTA
MFAT-1
MFAT-2
NONOP-1
NPRD
NPS-1
PRIM
RDSC-1
RMST
SOAR-2
SOAR-4
SOAR-6
SOAR-7
SOAR-8
VZAP
Discrete Semiconductor Device Reliability
Failure Mode/Mechanism Distributions
Fault Tree Analysis
Microelectronics Failure Analysis Techniques: A Procedural Guide
GaAs Characterization & Failure Analysis Techniques
Nonoperating Reliability Data
Nonelectronic Parts Reliability Data
Analysis Techniques for Mechanical Reliability
A Primer for DoD Reliability, Maintainability, Safety and Logistics
Standards
Reliability Sourcebook
Reliability and Maintainability Software Tools
Practical Statistical Analysis for the Reliability Engineer
Confidence Bounds for System Reliability
ESD Control in the Manufacturing Environment
A Guide for Implementing Total Quality Management
Process Action Team (PAT) Handbook
Electrostatic Discharge Susceptibility Data
Computer Formats
NPRD-P
NRPS
VZAP-P
Nonelectronic Parts Reliability Data (IBM PC database)
Nonoperating Reliability Prediction Software (Includes NONOP-1)
VZAP Data (IBM PC database)
Rome Laboratory Technical Reports
Rome Laboratory (formerly Rome Air Development Center) has published hundreds of useful R&M
technical reports that are available from the Defense Technical Information Center and the National
Technical Information Service. Call RAC for a list. [Address at publication time: Reliability Analysis
Center * 201 Mill Street * Rome, NY 13440-6916 * Telephone: 315.337.0900]
Data Item Descriptions
MIL-STD-756 Reliability Modeling and Prediction
DI-R-7081
DI-R-7082
DI-R-7094
DI-R-7095
DI-R-7100
B Mathematical Model(s)
B Predictions Report(s)
B Block Diagrams & Math. Models Report
B Predict. & Doc. of Support. Material
B Report for Explor. Advanced Develop.
MIL-STD-781 Reliability Test Methods, Plans, and Environments for engineering development,
Qualification and Production
DI-RELI-80247
DI-RELI-80248
DI-RELI-80249
DI-RELI-80250
DI-RELI-80251
DI-RELI-80252
DI-RELI-80253
DI-RELI-80254
DI-RELI-80255
Thermal Survey Report
Vibration Survey Report
ESS Report
B Test Plan
B Test Procedures
B Test Report
Failed Item Analysis Report
Corrective Action Plan
Failure Summary and Analysis Report
MIL-STD-785 Reliability Program for Systems and Equipment Development and Production and
MIL-STD-1543 Reliability Program Requirements for Space and Launch Vehicles
DI-R-7079
DI-R-7084
DI-R-7086
DI-A-7088
DI-A-7089
DI-OCIC-80125
DI-OCIC-80126
DI-RELI-80249
DI-RELI-80250
DI-RELI-80251
DI-RELI-80252
DI-RELI-80253
DI-RELI-80255
DI-RELI-80685
DI-RELI-80686
DI-RELI-80687
R Program Plan
Elect. Parts/Circuits Tol. Analysis Report
FMECA Plan
Conference Agenda
Conference Agenda
ALERT/SAFE ALERT
Response to ALERT/SAFE ALERT
ESS Report
Test Plan
Test and Demo. Procedures
Test Reports
Failed Item Analysis Report
Report, Failure Summary and Analysis
Critical Item List
Allocat., Assess. & Analysis Report
Report, FMECA
MIL-STD-2155 FRACA System
DI-E-2178
DI-R-21597
DI-R-21598
DI-R-21599
Computer Software Trouble Report
FRACA System Plan
Failure Report
Develop. & Product. Failure Summary Report
MIL-STD-2164 ESS Process for Electronic Equipment
DI-ENVR-80249 Environmental Stress Screening Report
DOD-HDBK-344 ESS of Electronic Equipment
DI-ENVR-80249 Environmental Stress Screening Report
MIL-STD-810 Environmental Test Methods and Engineering Guidelines
DI-ENVR-80859
DI-ENVR-80860
DI-ENVR-80861
DI-ENVR-80862
DI-ENVR-80863
Environmental Management Plan
Life Cycle Environmental Profile
Environmental Design Test Plan
Operational Environment Verif. Plan
Environmental Test Report
MIL-STD-1629 Procedures for Performing a FMECA
DI-R-7085
DI-R-7086
FMECA Report
FMECA Plan
MIL-STD-1686 ESD Control Program for Protection of Electrical and Electronic Parts, Assemblies
and Equipment
DI-RELI-80669
DI-RELI-80670
DI-RELI-80671
ESD Control Program Plan
Reporting Results of ESD Sensitivity Tests of Electrical & Electronic Parts
Handling Procedure for ESD Sensitive Items
MIL-STD-1546 Parts, Materials, and Processes Control Program for Space and Launch Vehicles
DI-A-7088
DI-A-7089
DI-MI SC-80526
DI-MISC-80072
DI-MISC-80071
Conference Agenda
Conference Minutes
Parts Control Program Plan
Program Parts Selection List (PPSL)
Part Approval Requests
MIL-STD-1556 GIDEP Contractor Participation Requirements
DI-QCIC-80125
DI-QCIC-80126
DI-QCIC-80127
ALERT/SAFE-ALERT
Response to an ALERT/SAFE-ALERT
GIDEP Annual Progress Report
REFERENCES
Anon., Warranty Cost Issue Hurts Chrysler, USA Today, Oct. 24, 1994, p. 3B.
ANSI/IEEE Standard 100-1988, 4th ed., IEEE Standard Dictionary of Electrical and Electronic Terms, The Institute of Electrical and Electronic Engineers, Inc., New York, 1988.
Flint, J., It Is Time To Get Realistic, Ward's AutoWorld, Oct. 2001, p. 21.
Mayne, E. et al., Quality Crunch, Ward's AutoWorld, July 2001, pp. 14-18.
VonAlven, W.H., Ed., Reliability Engineering, Prentice Hall, Inc., Englewood Cliffs, NJ, 1964.
MIL-STD-470 Maintainability Program for Systems and Equipment
DI-R-2129
DI-R-7085
DI-MNTY-80822
DI-MNTY-80823
DI-MNTY-80824
DI-MNTY-80825
DI-MNTY-80826
DI-MNTY-80827
DI-MNTY-80828
DI-MNTY-80829
DI-MNTY-80830
DI-MNTY-80831
DI-MNTY-80832
M Demo. Plan (MIL-STD-470A, Task 301 only)
FMECA Report
Program Plan
M Status Report
Data Collect., Anal. & Correct. Action System
M Modeling Report
M Allocations Report
M Predictions Report
M Analysis Report
M Design Criteria Plan
Inputs to the Detailed Maintenance Plan & LSA
M Testability Demo. Test Plan
M Testability Demo. Test Report
MIL-STD-471 Maintainability Demonstration
DI-R-2129
DI-MNTY-80831
DI-MNTY-80832
DI-MNTY-81188
DI- QCIC-81187
M Demonstration Plan
M Testability Demo. Test Plan
M Testability Demonstration Report
Verif., Demo., Assess. & Evaluation Plan
Quality Assessment Report
MIL-STD-2165 Testability Program for Electronic Systems and Equipments
DI-E-5423
DI-T-7198
DI-T-7199
DI-MNTY-80824
DI-MNTY-80831
DI-MNTY-80832
Design Review Data Package
Testability Program Plan
Testability Analysis Report
Data Collect., Anal. & Correct. Act. System Plan
M/Testability Demo. Test Plan
M/Testability Demo. Report
MIL-HDBK-472 Maintainability Prediction
DI-MNTY-80827 M Predictions Report
Note: Only data items specied in the Contract Data Requirements List (CDRL) are deliverable.
SELECTED BIBLIOGRAPHY
Aitken, M., A note on the regression analysis of censored data, Technometrics, 23, 161-163, 1981.
Box, G.E.P. and Meyer, R.D., Finding the active factors in fractionated screening experiments, Journal of Quality Technology, 25, 94-105, 1993.
Cox, D.R. and Oakes, D., Analysis of Survival Data, Chapman and Hall, London, 1984.
Grove, D.M. and Davis, T.P., Engineering, Quality, and Experimental Design, Longman, Harlow, England, 1992.
Hamada, M. and Wu, C.F.J., Analysis of censored data from highly fractionated experiments, Technometrics, 33, 253, 1991.
Hamada, M. and Wu, C.F.J., Analysis of designed experiments with complex aliasing, Journal of Quality Technology, 23, 130-137, 1992.
Kalbfleisch, J.D. and Prentice, R.L., The Statistical Analysis of Failure Time Data, Wiley, New York, 1980.
Kapur, K.C. and Lamberson, L.R., Reliability in Engineering Design, Wiley, New York, 1977.
Kececioglu, D., Reliability Engineering Handbook, Vols. 1 and 2, Prentice Hall, Englewood Cliffs, NJ, 1991.
Lawless, J.F., Statistical Models and Methods for Lifetime Data, Wiley, New York, 1982.
McCormick, N.J., Reliability and Risk Analysis, Academic Press, New York, 1981.
Nelson, W., Theory and applications of hazard plotting for censored failure data, Technometrics, 14, 945-966, 1972.
Schmee, J. and Hahn, G., A simple method of regression analysis with censored data, Technometrics, 21, 417-432, 1979.
Smith, R.L., Weibull regression models for reliability data, Reliability Engineering and System Safety, 34, 55-57, 1991.

8
Reliability and Maintainability
As the world moves towards building more competitive products, it is important to
put additional emphasis on reliability and maintainability (R&M), which support
reduction of inventories and build to schedule targets.
The Quality Systems Requirements, Tooling & Equipment (TE) Supplement to
QS-9000 was developed by Chrysler, Ford, General Motors, and Riviera Die & Tool
to enhance quality systems while eliminating redundant requirements, facilitating
consistent terminology, and reducing costs. It is important that everyone involved
in the design or purchase of machinery be aware of this supplement and their
responsibilities as outlined in the QS-9000 process. It is also important that everyone
understand that the TE supplement defines machinery as tooling and equipment
combined. Machinery is a generic term for all hardware, including necessary oper-
ational software, which performs a manufacturing process.
The TE goal is to improve the quality, reliability, maintainability, and durability
of products through development and implementation of a fundamental quality
management system. The supplement communicates additional common system
requirements unique to the manufacturers of tooling and equipment as applied to
the QS-9000 requirements. This particular chapter will emphasize the reliability and
maintainability areas. Quality operating systems (QOS) and durability are equally
important subjects but are beyond the scope of this work. The reader is encouraged
to review the material on machine acceptance in Volume IV.

WHY DO RELIABILITY AND MAINTAINABILITY?

Due to a lack of confidence in the performance of our equipment, we have traditionally
purchased excessive facilities and tooling in order to meet production objectives.
It is estimated that approximately 73% of the total cost in a program development
through launching, in the automotive industry for example, is in this area.
Additionally, capital spent on insurance-type spare tooling hidden for unplanned
breakdowns shows a lack of confidence in production equipment. Operational effects
of production shortfall and the inability to predict downtime are countless. They
include unplanned overtime, unplanned and increasing maintenance requirements
and costs, and excessive work in process around constraint operations.
The R&M process builds confidence in predicting performance of machinery,
and, through this process, we can improve the expected and demonstrated levels of
machinery performance. Properly predicting and improving performance contributes
to lower total cost and improved profits for the organization.


The R&M process consists of five phases that form a continuous loop. The five
phases are: (1) concept; (2) design and development; (3) machinery build and
installation; (4) machinery operation, continuous improvement, performance analy-
sis; and (5) conversion concept of next cycle. As the loop continues, each generation
of machinery improves.
In this chapter we will concentrate on the first three phases of the loop, not
because they are more important, but because they are the major focus of this
planning effort of the design for six sigma (DFSS) campaign. The last two phases
should be well documented in each organization for they are facility dependent.

OBJECTIVES

The emphasis of all R&M is focused on three objectives:

Reliability: The probability that machinery and equipment can perform continuously, without failure, for a specified interval of time (when operating under stated conditions)

Maintainability: A characteristic of design, installation, and operation, usually expressed as the probability that a machine can be retained in, or restored to, specified operable conditions within a specified interval of time (when maintenance is performed in accordance with prescribed procedures)

Durability: Ability to perform intended function over a specified period (under normal use with specified maintenance) without significant deterioration

MAKING RELIABILITY AND
MAINTAINABILITY WORK

Machinery reliability and maintainability should be considered an integral part of
all facilities and tooling (F&T) purchases. However, the appropriate degree of time
and effort dedicated to R&M engineering must be individually applied for each
unique application and purchase situation. Each project engineering manager should
consider the value proposition of applying varying degrees of R&M engineering for
the unique circumstances surrounding each equipment purchase.
For example, we may choose to apply a large amount of R&M engineering
resources to a project that includes a large quantity of single design machines. The
value proposition would show that investing up-front resources on a single design
that can be leveraged beyond a single application would offer a large payoff. We
would also consider applying high-level R&M engineering to equipment critical to
a continuous operation. On the other hand, we may choose to apply a minimal level
of R&M engineering on a purchase of equipment that has a mature design and
minimal demonstrated field problems.
Some of the issues to consider when determining appropriate levels of R&M
engineering for a project include:


1. Review the availability of existing machines in the organization that may
be idle. This is a good opportunity for reusability.
2. How many units are we ordering with identical or leverageable design?
3. What is the condition of the existing machinery that will be rehabilitated?
4. What is the status of the operating conditions? Are they extremely
demanding?
5. What is the cycle plan for the machinery? Does it require continuous or
intermittent duty? For how many years is the equipment expected to
produce?
6. Where is the machinery in the manufacturing process? Is it a constraint
(bottleneck) operation?
7. How well documented and complete is the root cause analysis for the
design? Will it decrease up-front work?
8. How much data exist to support known design problems?

WHO'S RESPONSIBLE?

Full realization of R&M benets requires consistent application of the process.
Simultaneous engineering (SE) teams, together with the plants and the supply base,
must align their efforts and objectives to provide quality machinery designed for
R&M. Reliability and maintainability engineering is the responsibility of everyone
involved in machinery design, as much as the collection and maintenance of oper-
ational data are the responsibility of those operating and maintaining the equipment
day to day.
The R&M process places responsibility on the groups possessing the skills or
knowledge necessary to efficiently and accurately complete a given set of tasks. It
turns out that much of the expertise is in the supply base, and as such, the suppliers
must take the lead role and responsibility in R&M efforts. The R&M process
encourages the organization and suppliers to lock into budget costs based on Life
Cycle Costing (LCC) analysis of options and cost targets. Warranty issues should
be considered in the LCC analysis so that design helps decrease excessive warranty
costs after installation. The focus places responsibility for correcting design defects
on the machinery designers.
Facility and tooling producers who practice R&M will ultimately reduce the
cost (such as warranty) of their product and will become more competitive over
time. Further, suppliers that practice R&M will qualify as QS-9000 certified, pre-
ferred, global sourcing partners. Engineers and program managers who practice and
encourage R&M will reduce operational costs over time. In doing so, they will meet
manufacturing and cost objectives for their projects or programs.

TOOLS

There are many R&M tools. The ones mentioned here are required in the Design
and Development Planning (4.4.2) section of the TE Supplement. Many others
beyond the few that are addressed here are available and can improve reliability.


Mean Time Between Failure (MTBF) is defined as the average time between
failure occurrences. It is simply the sum of the operating time of a machine divided
by the total number of failures. For example, if a machine runs for 100 hours and
breaks down four times, the MTBF is 100 divided by 4 or 25 hours. As changes are
made to the machine or process, we can measure the success by comparing the new
MTBF with the old MTBF and quantify the action that has been taken.

Mean Time to Repair (MTTR) is defined as the average time to restore machinery
or equipment to its specied conditions. This is accomplished by dividing the total
repair time by the number of failures. It is important to note that the MTTR
calculation is based on repairing one failure and one failure only. The length of time
it takes to repair each failure directly affects up-time, up-time %, and capacity. For
example, if a machine runs 100 hours and has eight failures recorded with a total
repair time of four hours, the MTTR for this machine would be four hours divided
by eight failures or .5 hours. This is the mean time it takes to repair each failure.

Fault Tree Analysis (FTA) is an effect-and-cause diagram. It is a method used
to identify the root causes of a failure mode using symbols developed in the defense
industry. The FTA is a great prescriptive method for determining the root causes
associated with failures and can be used as an alternative to the Ishikawa Fish Bone
Diagram. It complements the Machinery Failure Mode and Effects Analysis
(MFMEA) by representing the relationship of each root cause to other failure-mode
root causes. Some feel the FTA is better suited than the FMEA to providing an
understanding of the layers and relationships of causes. An FTA also aids in establishing
a troubleshooting guide for maintenance procedures. It is a top down approach.

Life Cycle Costs (LCC) are the total costs of ownership of the equipment or
machinery during its operational life. A purchased system must be supported during
its total life cycle. The importance of life cycle costs related to R&M is based on
the fact that up to 95% of the total life cycle costs are determined during the early
stages of the design and development of the equipment. The first three phases of
the equipment's life cycle are typically identified as non-recurring costs. The remain-
ing two phases are associated with the equipment's support costs.

SEQUENCE AND TIMING

The R&M process is a generic model of logically sequenced events that guides the
simultaneous engineering team through the main drivers of good design for R&M
engineering. The amount of time budgeted for each activity or task should vary
depending on the circumstances surrounding the equipment or processes in design.
However, regardless of the unique conditions, all of the steps in the R&M process
need to be considered in their logical sequence and applied as needed.
In Table 8.1, we identify different activities that you may consider in the first
three phases of the R&M process. These phases are divided into main areas for
consideration; then, various activities are listed for each area. This list is not com-
plete, but it focuses the reader on the type of activities that should occur during each
time period. This list also helps identify the sequence in which these activities may
be completed, depending on the project.


To determine timing for the R&M process, you may use the following procedure:
1. Determine deadline dates to meet production requirements.
2. Check relevance of R&M activities with regard to achieving pro-
gram/project targets.
3. Plan relevant R&M activities by working backwards from deadline dates,
estimating time required for completion of each activity.
4. Set appropriate start dates for each activity/stage based on requirements
and timing.
5. Determine and assign responsibility for stage-based deliverables.
6. Continually track progress of your plan, within and at the conclusion of
each stage.

CONCEPT
BOOKSHELF DATA

Activities associated with the bookshelf data stage include:
1. Identify good design practices.
2. Collect machinery things gone right/things gone wrong (TGR/TGW).
3. Document successful machinery R&M features.
4. Collect similar machinery history of mean time between failures (MTBF).
5. Collect similar standardized component history of mean time between
failures (MTBF).
6. Collect similar machinery history of mean time to repair (MTTR).
7. Collect similar machinery history of overall equipment effectiveness
(OEE).
8. Collect similar machinery history of reliability growth.
9. Collect similar machinery history of root cause analyses.
At this point it is important to ask and answer this question: Have we collected
all of the relevant historical data from similar operations or designs and documented
them for use during the process selection and design stages?

TABLE 8.1
Activities in the First Three Phases of the R&M Process

Concept: bookshelf data; manufacturing process selection; R&M and production needs analysis
Design/Development: R&M planning; process design for R&M; machinery FMEA; design review
Build and Installation: equipment run-off; operation of machinery


MANUFACTURING PROCESS SELECTION

Activities associated with the manufacturing process selection stage include:
1. Identify general life cycle costs to drive the manufacturing process selec-
tion.
2. Establish OEE targets including availability, quality, and performance
efficiency numbers that drive the manufacturing process selection.
3. Establish broad R&M target ranges that drive the manufacturing process
selection.
4. Establish manufacturing assumptions based on cycle plan, including vol-
umes and dollar targets.
5. Identify simultaneous engineering (SE) partners for project.
6. Select manufacturing process based on demonstrated performance and
expected ability to meet established targets.
7. Search for other surplus equipment to be considered for reuse.
8. If surplus machinery has not been identied for reuse, identify a supplier,
based on manufacturing process selection (evaluate R&M capability).
9. Generate detailed life cycle costing analysis on selected manufacturing
process.
At this point it is important to ask and answer these questions: Have broad, high
level R&M targets been set to drive detailed process trade-off decisions? Is the life
cycle cost analysis complete for the selected manufacturing process? Do the projec-
tions support the budget per the affordable business structure?

R&M AND PREVENTIVE MAINTENANCE (PM) NEEDS ANALYSIS

Activities associated with the R&M and PM needs analysis stage include:
1. Establish a clear definition of failure by using all known operating con-
ditions and unique circumstances surrounding the process.
2. Establish R&M requirements for the unique operating conditions sur-
rounding the chosen manufacturing process.
3. Establish/issue R&M engineering requirements for the project to the
designers of the machinery.
4. Identify PM requirements for maintainability.
At this point it is important to ask and answer this question: Have specific R&M
targets been set to support the unique operating conditions and PM program objectives?

DEVELOPMENT AND DESIGN
R&M PLANNING

Activities associated with the R&M planning stage include:


1. Conduct process concept review.
2. Identify design effects for other related equipment (automation, integra-
tion, processing, etc.).
3. Standardize fault diagnostics (controls, software, interfaces, level of diag-
nosis, etc.).
4. Develop R&M/PM plan (process/machinery FMEA, mechanical/electri-
cal derating, materials compatibility, thermal analyses, finite element anal-
ysis to support machine condition signature analysis, R&M predictions,
R&M simulations, design for maintainability, etc.).
5. Establish R&M/PM testing requirements (burn-in testing, voltage cycling,
probability ratio sequential testing, design of experiments for process opti-
mization, environmental stress screening, life testing, test-analyze-fix, etc.).
At this point it is important to ask and answer these questions: Does the R&M
plan address each project target? Is the R&M plan sufficient to meet project targets?

PROCESS DESIGN FOR R&M

Activities associated with the process design for R&M stage include:
1. Conduct process design review.
2. Develop process flow chart.
3. Develop process simulation model.
4. Conduct process design simulation for multiple scenarios by analyzing
operational effects of various R&M design trade-offs.
5. Develop life cycle costing analysis on process-related equipment.
6. Review process FMEA.
7. Complete final process review and simultaneous engineering team input.
At this point it is important to ask and answer this question: Is the process FMEA
complete, and have causes of potentially common failure modes been addressed and
redesigned?

MACHINERY FMEA DEVELOPMENT

Activities associated with the machinery FMEA development stage include:
1. Develop plant floor computer data collection system (activity tracking,
downtime, reliability growth curves).
2. Establish machinery data feedback plan (crisis maintenance, MTBF,
MTTR, tool lives, OEE, production report, etc.).
3. Verify completion of machinery FMEA on all critical machinery. Confirm
design actions, maintenance burdens, things gone wrong, root cause anal-
yses, etc.
4. Develop fault diagnostic strategy (built in test equipment, rapid problem
diagnosis, control measures).


5. Review equipment and material handling layouts (panels, hydro, coolant
systems).
At this point it is important to ask and answer these questions: Is the machinery
FMEA complete, and have causes of potentially common failure modes been
addressed and redesigned? Is the data collection plan complete?

DESIGN REVIEW

Activities associated with the design review stage include:
1. Conduct machinery design review (field history, machinery FMEA, test
or build problems, R&M simulation and reliability predictions, maintain-
ability, thermal/mechanical/electrical analyses, etc.).
2. Provide R&M requirements to tier two suppliers (levels, root cause anal-
yses, standardized component applications, testing, etc.).
At this point it is important to ask and answer this question: Have the R&M
plan requirements been incorporated in the machinery design?

BUILD AND INSTALL
EQUIPMENT RUN-OFF

Activities associated with the equipment run-off stage include:
1. Conduct machinery run-off (perform root cause analysis, Failure Reporting,
Analysis, and Corrective Action System [FRACAS], complete testing,
verify R&M and TPM requirements, validate diagnostic logic and data
collection).
2. Complete preventative maintenance/predictive maintenance manuals and
review maintenance burden.
At this point it is important to ask and answer this question: Has the plant
maintenance department devised a maintenance plan based on expected machine
performance?

OPERATION OF MACHINERY

Activities associated with the operation of machinery stage include:
1. Implement and utilize machinery data feedback plan.
2. Implement and utilize FRACAS.
3. Evaluate PM program.
4. Update FMEA and reliability predictions.
5. Conduct reliability growth curve development and analysis.
At this point it is important to ask and answer this question: Have design practices
been documented for use by the next generation design teams? (Also note that as


the machinery begins to operate, the continuous improvement cycle phases begin to
lead the R&M effort in phases four and ve.)

OPERATIONS AND SUPPORT

After the equipment has been installed and the run-off has been performed, the
Durability phase of the cycle begins. The PM program now begins to utilize the
R&M team member more as a team leader than a participant. Durability, as defined
in the TE supplement, is the ability to perform intended function over a specified
period under normal use (with specified maintenance, without significant deteriora-
tion). As the machinery begins to acquire additional operation hours, PM personnel
identify issues and take corrective action. These issues and corrections are fed back
to FMEA personnel and R&M planners as lessons learned for the next generation
of machinery. Whether these corrections involve the design of the machinery or the
maintenance schedule/tasks, each must be incorporated into the continuous improve-
ment loop.

CONVERSION/DECOMMISSION

Conversion is one of the key elements of the investment efciency loop. The R&M
process for reuse of equipment is very similar to the purchase of new equipment
except that you have more limitations on the concept of the new process. The data
are collected and phase one is repeated, often, with more specific direction as the
current equipment may limit some of the other concepts.
While decommission may be the process of equipment disposal, it is necessary
to verify and record R&M data from this equipment to help identify the best design
practices. It is also important to make note of those design practices that did not
work as well as planned.
As plans for decommission become firm, it is important to generate forecasts
for equipment availability. These forecasts should then be entered into a database
for future forecasted and available machinery and equipment. Maintenance data,
including condition, operation description, and reason for availability should be
included. This will assist engineers evaluating surplus machinery and equipment for
reuse in their programs.

TYPICAL R&M MEASURES
R&M MATRIX

Perhaps the most important document in the R&M process is the R&M matrix. This
matrix identifies the requirements of the customer on a per phase basis. Three major
categories of tasks are usually identified. They are:
R&M programmatic tasks
Engineering tasks
R&M continuous improvement


RELIABILITY POINT MEASUREMENT

This may be expressed by

R(t) = e^(-t/MTBF)

where R(t) = reliability point estimate during a constant failure rate period; e = the base of the natural logarithm, 2.718281828; t = schedule time or mission time of the equipment or machinery; and MTBF = mean time between failure.

Special note: This calculation may be performed only when the machine has reached the bottom of the bathtub curve.

EXAMPLE

A water pump is scheduled (mission time) to operate for 100 hours. The MTBF for
this pump is also rated at 100 hours and the MTTR is 2 hours. The probability that
the pump will not fail during the mission is:

R(t) = e^(-t/MTBF) = e^(-100/100) = e^(-1) = .37 or 37%.

This means that the pump will have a 37% chance of not breaking down during the
100-hour mission time.
Conversely, the unreliability of the pump can be calculated as:

Unreliability = 1 - R = 1 - .37 = .63 or 63%.

This means that the pump has a 63% chance of failing during the 100 hour mission.
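A two-line helper makes the same calculation easy to repeat for other mission times; this is only a sketch using the pump numbers from the example, and it applies only during the constant failure rate period.

```python
import math

def reliability(t, mtbf):
    """Point reliability during the constant failure rate (bottom of the bathtub) period."""
    return math.exp(-t / mtbf)

r = reliability(100.0, 100.0)                        # 100-hour mission, 100-hour MTBF
print(f"R = {r:.2f}, unreliability = {1 - r:.2f}")   # R = 0.37, unreliability = 0.63
```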

MTBE

Mean time between event can be calculated as:
MTBE = Total Operating Time/N
where Total Operating Time = the total scheduled production time when machinery
or equipment is powered and producing parts and N = the total number of downtime
events, scheduled and unscheduled.

EXAMPLE

The total operating time for a machine is 550 hours. In addition, the machine
experiences 2 failures, 2 tool changes, 2 quality checks, 1 preventive maintenance
meeting, and 5 lunch breaks. What is the MTBE?

MTBE = Total Operating Time/N = 550/12 = 45.8 hours

MTBF

Mean time between failure is the average time between failure occurrences and is
calculated as:
MTBF = Operating Time/N
where Operating Time = scheduled production time and N = total number of failures
observed during the operating period.

EXAMPLE

If machinery is operating for 400 hours and there are eight failures, what is the
MTBF?
MTBF = Operating Time/N = 400/8 = 50 hours. (Special note: Sometimes C (cycles)
is substituted for T. In that case, we calculate the MCBF. The steps are identical to
those of the MTBF calculation.)

FAILURE RATE

Failure rate estimates the number of failures in a given unit of time, events, cycles, or
number of parts. It is the probability of failure within a unit of time. It is calculated as:
Failure rate = 1/MTBF

EXAMPLE

The failure rate of a pump that experiences one failure within an operating time
period of 2000 hours is:

Failure rate = 1/MTBF = 1/2000 = .0005 failures per hour.

This means that there is a .0005 probability that a failure will occur with every hour
of operation.

MTTR

Mean time to repair is a calculation based on one failure and one failure only. The
longer each failure takes to repair, the more the equipment's cost of ownership goes
up. Additionally, MTTR directly affects uptime, uptime percent, and capacity. It is
calculated as:
MTTR = Σt/N

where Σt = total repair time and N = total number of repairs.

EXAMPLE

A pump operates for 300 hours. During that period there were four failure events
recorded. The total repair time was 5 hours. What is the MTTR?

MTTR = Σt/N = 5/4 = 1.25 hours

AVAILABILITY

Availability is the measure of the degree to which machinery or equipment is in an
operable and committable state at any point in time. Availability is dependent upon
(a) breakdown loss, (b) setup and adjustment loss, and (c) other factors that may
prevent machinery from being available for operation when needed. When calculat-
ing this metric, it is assumed that maintenance starts as soon as the failure is reported.
(Special note: Think of the measurement of R&M in terms of availability. That is,
MTBF is reliability and MTTR is maintainability.) Availability is calculated as:
Availability = MTBF/(MTBF + MTTR)

EXAMPLE

What is the availability for a system that has an MTBF of 50 hours and an MTTR
of 1 hour?

Availability = MTBF/(MTBF + MTTR) = 50/(50 + 1) = .98 or 98%
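The basic metrics chain together naturally; the sketch below (ours, not the author's code) recomputes the MTBF, MTTR, failure rate, and availability figures used in the examples above.

```python
def mtbf(operating_hours, failures):
    return operating_hours / failures

def mttr(total_repair_hours, repairs):
    return total_repair_hours / repairs

def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

m_up = mtbf(400, 8)                 # 50 hours between failures
m_fix = mttr(5, 4)                  # 1.25 hours per repair
print(1 / m_up)                     # failure rate, failures per hour
print(availability(50, 1))          # 0.98, as in the example above
```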

OVERALL EQUIPMENT EFFECTIVENESS (OEE)

Overall equipment effectiveness (OEE) is a measure of three variables. They are:
1. Availability = percent of time a machine is available to produce
2. Performance efficiency = actual speed of the machine as related to the
design speed of the machine
3. Quality rate = percent of resulting parts that are within specifications
A good OEE is considered to be 85% or higher.
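Because OEE is the product of the three variables (see the OEE entry in the key definitions later in this chapter), a one-line function is enough to sketch it; the percentages below are hypothetical.

```python
def oee(availability, performance_efficiency, quality_rate):
    """OEE as the product of its three components, each expressed as a fraction."""
    return availability * performance_efficiency * quality_rate

print(oee(0.95, 0.95, 0.98))   # about 0.88, above the 85% rule of thumb
```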

LIFE CYCLE COSTING (LCC)

Life cycle costing (LCC) is the total cost over the life of the machine or equipment.
It is calculated based on the following:
LCC = Acquisition costs (A) + Operating costs (O) + Maintenance costs (M) +
Conversion and/or decommission costs (C)

EXAMPLE

What is the LCC for the two machines shown in Table 8.2 and which one is a better deal?
The reader should notice that before the decision is made all costs should be eval-
uated. In this case, machine A has a higher acquisition cost than machine B, but it
turns out that machine A has a lower LCC than machine B. Therefore, machine A
is the better deal.
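A sketch of the comparison, using the figures from Table 8.2 (conversion/decommission costs taken as zero because none are listed there):

```python
def life_cycle_cost(acquisition, operating, maintenance, conversion=0.0):
    return acquisition + operating + maintenance + conversion

machine_a = life_cycle_cost(2000.00, 9360.00, 7656.00)
machine_b = life_cycle_cost(1520.00, 10870.00, 9942.00)
print(machine_a, machine_b)   # 19016.0 and 22332.0 -- machine A has the lower LCC
```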

TOP 10 PROBLEMS AND RESOLUTIONS
This list allows the designer to see the major sources of downtime associated with
the current equipment. Once the list items are identified, a root cause analysis or
problem resolution should be conducted on each of the failures. If the design is
known, the designer can then modify the design to reflect the changes. (Sometimes
the top ten problems are based on historical data and must be adjusted to reflect
current design considerations.)
THERMAL ANALYSIS
This analysis is conducted to help the designer develop the appropriate and
applicable heat transfer approach (Table 8.3). The actual analysis is conducted by following
these six steps:
1. Develop a list of all electrical components in the enclosure.
2. Identify the wattage rating for each component located in the enclosure.
3. Sum the total wattage for the enclosure.
4. Add in any external heat generating sources.
5. Calculate the surface area of the enclosure that will be available for
cooling.
6. Calculate the thermal rise above ambient.
EXAMPLE
The electrical enclosure is 5 ft. tall by 4 ft. wide by 2 ft. deep. The surface area for this enclosure
is calculated as follows:
TABLE 8.2
Cost Comparison of Two Machines

Costs                                        Machine A      Machine B
Acquisition costs (A)                        $2,000.00      $1,520.00
Operating costs (O)                          $9,360.00      $10,870.00
Maintenance costs (M)                        $7,656.00      $9,942.00
Conversion and/or decommission costs (C)
Total LCC                                    $19,016.00     $22,332.00
Front and back = 5 ft. × 4 ft. × 2 = 40 sq. ft.
Sides = 2 ft. × 5 ft. × 2 = 20 sq. ft.
Enclosure top = 2 ft. × 4 ft. = 8 sq. ft.
Bottom is ignored due to the fact that heat rises.
Total surface area = 40 + 20 + 8 = 68 sq. ft.
To calculate the thermal rise (T) we use the following formula:

Thermal rise (T) = Thermal resistance (θCA, cabinet to ambient) × Power (W)

θCA = 1/(Thermal conductivity × Cooling area)

The thermal conductivity value is found in the catalog of the National Electrical Manufacturers Association (NEMA).

θCA = 1/[(.25 W/degree F) × (square footage)] = 1/(.25 × 68) = .0588
Thus, .25 W/degree F is the thermal conductivity value for a NEMA 12 enclosure.
If the equipment inside the enclosure generates 234.7 watts, then the thermal rise is
TABLE 8.3
Thermal Calculation Values

Component Name          Quantity    Individual Wattage (Maximum)    Total Wattage
Internal
  Relay                     4                  2.5                       10.0
  A18 contactor             1                  1.7                        1.7
  A25 contactor             2                  2                          4.0
  PS27 power supply         1                 71                         71.0
  Monochrome monitor        1                 85                         85.0
  Subtotal wattage                                                      171.7
External
  Servo transformer         1                450                         63.0
  Subtotal wattage                                                       63.0
Total enclosure wattage                                                 234.7

Note: The servo transformer is mounted externally and next to the enclosure. Therefore, only 14% of the total wattage is estimated to radiate into the enclosure.
T = θCA × wattage = .0588 × 234.7 = 13.8°F.
If the ambient temperature is 100°F, then the enclosure temperature will reach
113.8°F. If the enclosure temperature is specified as 104°F, then the design exceeds
the specification by approximately 9.8°F. The enclosure must be increased in size,
the load must be reduced, or active cooling techniques need to be applied. (Special
note: Remember that a 10% rise in temperature decreases the reliability by about
50%. Also, the method just mentioned in this example is not valid for enclosures that
have other means of heat dissipation such as fans, for those made of heavier metal,
or if the material were changed. This specific calculation assumes that the heat is
being radiated through convection to the outside air.)
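The arithmetic of steps 5 and 6 is easy to wrap in a short function; this sketch (ours) reproduces the enclosure numbers above and is valid only for the simple convection case described in the special note.

```python
def thermal_rise(total_watts, cooling_area_sqft, conductivity=0.25):
    """Temperature rise above ambient for a sealed, convection-cooled enclosure.

    conductivity is in W per degree F per sq. ft.; 0.25 is the NEMA 12 value used above.
    """
    theta_ca = 1.0 / (conductivity * cooling_area_sqft)   # cabinet-to-ambient thermal resistance
    return theta_ca * total_watts

area = 40 + 20 + 8                   # sq. ft.; the bottom is ignored because heat rises
print(thermal_rise(234.7, area))     # about 13.8 degrees F above ambient
```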
ELECTRICAL DESIGN MARGINS
Design margins in electrical engineering of the equipment are referred to as derating.
On the other hand, mechanical design margins are referred to as safety margins. A
rule of thumb for derating is about 20% for electrical components. However, the
actual calculation is
% derating = (1 - I_T/I_S) × 100

where I_T = total circuit current draw and I_S = total supply current.
EXAMPLE
During a design review, the question arose as to whether the 24 V power supply for
a motor was adequately derated. The power supply takes 480 VAC three phase with
a 2 A circuit breaker and has a rated output of 10 A. An examination of the system
reveals that 24 V power is delivered to the load through three circuit breakers (A =
.477 A, B = .73 A, and C = 5.53 A. The total for the three circuits is therefore 6.737
A.) When these circuit breakers are combined, 11 A of current ow to the load. This
situation may not happen, but further investigation is required.
% derating = (1 - 6.737/10.0) × 100 = 32.63%
This means that in this case the power supply will not be overloaded and the circuit
breakers are generously oversized. In other words, the circuit breakers should not be
tripped due to false triggers.
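The derating check itself is a one-liner; the sketch below simply repeats the arithmetic of the example.

```python
def percent_derating(circuit_draw_amps, supply_rating_amps):
    """Electrical design margin: how far the worst-case draw sits below the supply rating."""
    return (1.0 - circuit_draw_amps / supply_rating_amps) * 100.0

draw = 0.477 + 0.73 + 5.53                # worst-case draw through the three breakers, amps
print(percent_derating(draw, 10.0))       # about 32.6%, above the 20% rule of thumb
```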
SAFETY MARGINS (SM)
For mechanical components, SM are generally defined as the amount of strength of
a mechanical component relative to the applied stress. A rule of thumb for SM with
a normally distributed stress-load relationship is that the safety margin should always
be greater than or equal to three. However, the actual calculation for the SM is
SM = (U_STRENGTH - U_LOAD)/√(Sv² + Lv²)

where SM = safety margin; U_STRENGTH = mean strength; U_LOAD = mean load; Lv² = load variance; and Sv² = strength variance.
EXAMPLE
A robot's arm has a mean strength of 80 kg. The maximum allowable stress applied
by the end of arm tooling is 50 kg. The strength variance is 8 kg and the stress
variance is 7 kg. What is the SM?
SM = (80 - 50)/√(8² + 7²) = 30/10.63 = 2.822
(A low SM may indicate the need to assign another size robot or redesign the tooling
material.)
INTERFERENCE
Once the SM is calculated, it can be used to calculate the interference and reliability
of the components under investigation. Interference may be thought of as the overlap
between the stress and the strength distributions. In more formal terms, it is the
probability that a random observation from the load distribution exceeds a random
observation from the strength distribution. To calculate interference, we use the SM
equation and treat the computed SM as a standard normal z value:

z = (U_STRENGTH - U_LOAD)/√(Sv² + Lv²)
EXAMPLE
If we use the answer from the previous example (z = 2.822), we can use the z table
(in this case the tail area beyond z = 2.822 is .0024). This means that there exists a
.0024 or .24% probability of failure.
Reliability, on the other hand, may be calculated as
R = 1 - interference

R = 1 - .0024 = .9976 or 99.76%.
This means that the strength and load distributions overlap with only a very low (.24%)
probability of failure, so the reliability of the system is very high, at 99.76%.
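The safety margin, interference, and reliability calculations can be strung together as below; this sketch treats the 8 kg and 7 kg figures as standard deviations, which is how the worked arithmetic above uses them, and it relies only on the normal stress-strength assumption.

```python
from statistics import NormalDist

def safety_margin(mean_strength, mean_load, strength_sd, load_sd):
    """SM = (Ustrength - Uload) / sqrt(Sv^2 + Lv^2) under a normal stress-strength model."""
    return (mean_strength - mean_load) / (strength_sd**2 + load_sd**2) ** 0.5

sm = safety_margin(80, 50, 8, 7)           # robot-arm numbers from the example above
interference = 1 - NormalDist().cdf(sm)    # probability that a random load exceeds a random strength
reliability = 1 - interference
print(round(sm, 3), round(interference, 4), round(reliability, 4))   # 2.822 0.0024 0.9976
```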
CONVERSION OF MTBF TO FAILURE RATE AND VICE VERSA
The relationship between these two metrics is
MTBF = 1/FR and FR = 1/MTBF
RELIABILITY GROWTH PLOTS
This plot is an effective method to track continual improvement for R&M as well
as to predict reliability growth of machinery from one machine to the other. The
steps to generate this plot are:
Step 1. Collect data on the machine and calculate the cumulative MTBF value
for the machine.
Step 2. Plot the data on log-log paper. (An increasing slope indicates reliability
growth; flatness indicates that the machine has achieved its inherent level of
MTBF and cannot get any better.)
Step 3. Calculate the slope, using regression analysis or a best-fit line. Once the
slope (the beta value) is calculated, we can apply the Duane model interpretation.
The guidelines for the interpretation are given in Table 8.4.
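Steps 1 through 3 can be sketched in a few lines; the failure times below are hypothetical, and the fitted slope would be read against the guideline ranges in Table 8.4.

```python
import numpy as np

# Hypothetical cumulative operating hours at which successive failures occurred
failure_times = np.array([80, 175, 290, 460, 690, 1000], dtype=float)
cum_mtbf = failure_times / np.arange(1, len(failure_times) + 1)   # cumulative MTBF after each failure

# Duane model: log(cumulative MTBF) is roughly linear in log(cumulative time); the slope is beta
beta, intercept = np.polyfit(np.log(failure_times), np.log(cum_mtbf), 1)
print(round(beta, 2))   # about 0.3 for these numbers
```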
MACHINERY FMEA
Machinery FMEA is a systematic approach that applies the tabular method to aid
the thought process used by simultaneous engineering teams to identify the
machine's potential failure modes, potential effects, and potential causes and to
develop corrective action plans that will remove or reduce the impact of the failure
modes. Perhaps the most important use of the machinery FMEA is to identify and
correct all safety issues. A more detailed discussion will be given in Chapter 6.
TABLE 8.4
Guidelines for the Duane Model

Slope (beta)    Recommended Actions
0 to .2         No priority is given to reliability improvement; failure data not analyzed; corrective action taken for important failure modes, but with low priority
.2 to .3        Routine attention to reliability improvement; corrective action taken for important failure modes
.3 to .4        Priority attention to reliability improvement; normal (typical stresses) environment utilization; well-managed analysis and corrective action for important failure modes
.4 to .6        Eliminating failures takes top priority; immediate analysis and corrective action for all failures
KEY DEFINITIONS IN R&M
The following terms are commonly encountered in R&M:
Accelerated life testing: Verification of machine and equipment design relationship much sooner than if operated typically. Intended especially for new technology, design changes, and ongoing development.
Derating: The practice of limiting stresses that may be applied to a component to levels below the specified maxima in order to enhance reliability. Derating values of electrical stress are expressed as ratios of applied stress to rated maximum stress. The applied stress is taken as the maximum likely to be applied during worst-case operating conditions. Thermal derating is expressed as a temperature value.
Design of experiments (DOE): A technique that focuses on identifying factors that affect the level or magnitude of a product/process response, examining the response surface, and forming the mathematical prediction model.
Design review: A review providing in-depth detail relative to the evolving design supported by drawings, process flow descriptions, engineering analyses, reliability design features, and maintainability design considerations.
Dry run: The rehearsal or cycling of machinery, normally with the intent of not processing the work piece, to verify function, clearances, and construction stability.
Durability: Ability to perform intended function over a specified period under normal use with specified maintenance, without significant deterioration.
Equipment: The portion of process machinery that is not specific to a component or subassembly.
Failure: An event when machinery/equipment is not available to produce parts under specified conditions when scheduled or is not capable of producing parts or performing scheduled operations to specifications. For every failure, an action is required.
Failure mode and effects analysis (FMEA): A technique to identify each potential failure mode and its effect on machinery performance.
Failure reporting, analysis, and corrective action system (FRACAS): An orderly system of recording and transmitting failure data from the supplier's plant to the end user's that fits into a unitary database. The database allows identification of pattern failures and rapid resolution of problems through rigorous failure analysis.
Fault tree analysis (FTA): A top-down approach to failure analysis starting with an undesirable event and determining all the ways it can happen.
Feasibility: A determination that a process, design, procedure, or plan can be successfully accomplished in the required time frame.
Finite element analysis (FEA): A computational structure analysis technique that quantifies a structure's response to applied loading conditions.
Total productive maintenance (TPM): Natural cross-functional groups working together in an optimal balance to improve the overall effectiveness of their equipment and processes within their work areas. TPM implementation vigorously benchmarks, measures, and corrects all losses resulting from inefficiencies.
Life cycle The sequence through which machinery and equipment pass
from conception through decommission.
Life cycle costs (LCC) The sum of all cost factors incurred during the
expected life of machinery.
Machine condition signature analysis (MCSA) The application of mechanical signature (vibration) analysis techniques to characterize
machinery and equipment at a systems level in order to significantly improve reliability and maintainability.
Machinery Tooling and equipment combined. A generic term for all hard-
ware (including necessary operational software) that performs a manufac-
turing process.
Maintainability A characteristic of design, installation, and operation,
usually expressed as the probability that a machine can be retained in, or
restored to, specified operable condition within a specified interval of time
when maintenance is performed in accordance with prescribed procedures.
Mean time between failures (MTBF) The average time between failure
occurrences: the sum of the operating time of a machine divided by the
total number of failures. Predominantly used for repairable equipment.
Mean time to failure (MTTF) The average time to failure for a specific
equipment design. Used predominantly for non-repairable equipment.
Mean time to repair (MTTR) The average time to restore machinery or
equipment to specified conditions.
Overall equipment effectiveness (OEE) The product of the percentage of
time the machinery is available (Availability), how fast the machinery is running
relative to its design cycle (Performance efficiency), and the percentage of the
resulting product within quality specifications (Yield); that is, OEE = Availability × Performance efficiency × Yield.
Perishable tooling Tooling which is consumed over time during a manu-
facturing operation.
Plant floor information system (PFIS) An information gathering system
used on the plant floor to gather data relating to plant operations, including
maintenance activities.
Predictive maintenance (PdM) A portion of scheduled maintenance dedicated to inspection for the purpose of detecting incipient failures.
Preventative maintenance (PM) A portion of scheduled maintenance dedicated to taking planned actions for the purpose of reducing the frequency
or severity of future failures, including lubrication, filter changes, and part
replacement dictated by analytical techniques and predictive maintenance
procedures.
Probability ratio sequential testing (PRST) A reliability qualification test
to demonstrate whether the machinery/equipment satisfies a specified MTBF
requirement and that the MTBF is not lower than an acceptable minimum (MIL-STD-781).
Process Any operation or sequence of operations that contributes to the
transformation of raw material into a finished part or assembly.
Product In relation to tooling and equipment suppliers, the term product
refers to the end item produced (e.g., machine, tool, die, etc.).
Production In relation to tooling and equipment suppliers, the term pro-
duction refers to the process required to produce the product.
R&M plan A reliability and maintainability (R&M) plan shall establish a
clear implementation strategy for design assurance techniques, reliability
testing and assessment, and R&M continuous improvement activities during
the machinery/equipment life cycle.
R&M targets The range of values that MTBF and MTTR are expected to
fall between plus an improvement factor that leads to MTBF and MTTR
requirements.
Reliability The probability that machinery and equipment can perform
continuously, without failure, for a specified interval of time when operating
under stated conditions.
Reliability growth Machine reliability improvement as a result of identi-
fying and eliminating machinery or equipment failure causes during
machine testing and operations.
Root cause analysis (RCA) A logical, systematic approach to identifying
the basic reasons (causes, mechanisms, etc.) for a problem, failure, non-
conformance, process error, etc. The result of root cause analysis should
always be the identication of the basic mechanism by which the problem
occurs and a recommendation for corrective action.
Simultaneous engineering (SE) Product engineering that optimizes the
final product by the proper integration of requirements, including product
function, manufacturing and assembly processing, service engineering, and
disposal.
Things gone right/things gone wrong (TGR/TGW) An evolving pro-
gram-level compilation of lessons learned that capture successful and
unsuccessful manufacturing engineering activity and equipment/perfor-
mance for feedback to an organization and its suppliers for continuous
improvement.
Tooling The portion of the process machinery that is specific to a component or subassembly.
DFSS AND R&M
R&M's goal is to make sure that the machinery/tool delivered to the customer meets
or exceeds its requirements. DFSS, on the other hand, is the methodology that
controls the process for satisfying the customer's expectations early in the product
development cycle. This is very important since in R&M the reliability matrix actually
attempts to quantify the initial product vision against the customer's requirements.
Having said that, we must also recognize that quite often in product development
we do not have all the answers. In fact, quite often we are on a "fuzzy front end."
This is where DFSS offers its greatest contribution. That is, with the process knowledge of DFSS, the engineer not only will be aware of this but also will make sure that the
appropriate design fits within both the customer's and the organization's goals.
DFSS may be applied in an original design, which involves elaborating original
solutions for a given task; adaptive design, which involves adapting a known system
to a changed task or evolving a significant subsystem of a current product; variant
design, which involves varying parameters of certain aspects of a product to develop
a new or more robust design; and redesign, which implies any of the items just
mentioned. A redesign is not merely a variant design; rather, it implies that a product already
exists that is perceived to fall short on some criteria, and a new solution is needed.
The new solution can be developed through any of the above approaches. In fact, it
is often difficult to argue against the maxim that all design is redesign (Otto and
Wood, 2001).
REFERENCES
Otto, K. and Wood, K., Product Design, Prentice Hall, Upper Saddle River, NJ, 2001.

9
Design of Experiments

SETTING THE STAGE FOR DOE

Design of Experiments (DOE) is a way to efficiently plan and structure an investigatory testing program. Although DOE is often perceived to be a problem-solving
tool, its greatest benefit can come as a problem avoidance tool. In fact, it is this
avoidance that we emphasize in design for six sigma (DFSS).
This chapter is organized into nine sections. The user who is looking for a basic
DOE introduction in order to participate with some understanding in a problem-solving group is urged to study and understand the first two sections or go back and
review Volume V of this series. The remaining sections discuss more complex topics
including problem avoidance in product and process design, more advanced experimental layouts, and understanding the analysis in more detail.

WHY DOE (DESIGN OF EXPERIMENTS) IS A VALUABLE TOOL

DOE is a valuable tool because:
1. DOE helps the responsible group plan, conduct, and analyze test programs
more efficiently.
2. DOE is an effective way to reduce cost.
Usually the term DOE brings to mind only the analysis of experimental data.
The application of DOE necessitates a much broader approach that encompasses the
total process involved in testing. The skills required to conduct an effective test
program fall into three main categories:
1. Planning/organizational
2. Technical
3. Analytical/statistical
The planning of the experiment is a critical phase. If the groundwork laid in the
planning phase is faulty, even the best analytic techniques will not salvage the
disaster. The tendency to run off and conduct tests as soon as a problem is found,
without planning the outcome, should be resisted. The benets from up-front plan-
ning almost always outweigh the small investment of time and effort. Too often,
time and resources are wasted running down blind alleys that could have been
avoided. Section 2 of this chapter contains a more detailed discussion of planning
and the techniques used to ensure a well-planned experiment.


DOE can be a powerful tool in situations where the effect on a measured output
of several factors, each at two or more levels, must be determined. In the traditional
"one factor at a time" approach, each test result is used in a small number of
comparisons. In DOE, each test is used in every comparison. A simplified example
follows.

EXAMPLE

A problem-solving brainstorming group suspects 7 factors (named A, B, C, D, E, F,
and G), each at two levels (level 1 and level 2), of influencing a critical, measurable
function of the design. The group wants to determine the best settings of these factors
to maximize the measured test results (see Table 9.1). Two evaluations (a and b)
are run at each test configuration rather than a single evaluation in order to attain a
higher confidence in the difference between factor levels (this assumes no need for
a tie breaker). The group makes comparisons as shown in Table 9.2. Sixteen total
tests are run, and four tests are used to determine the difference between levels for
each factor. The best combination of factors is (1, 2, 1, 2, 2, 1, 1) for factors A
through G.
However, using DOE the group runs test configurations as shown in Table 9.3.
The group makes comparisons as shown in Table 9.4. Eight total tests are run, and
eight tests are used to determine the difference between levels for each factor. This
can be done because each level of every factor equally impacts the determination of
the average response at all levels of all of the other factors (i.e., of the four tests run
at A = 1, two were run at B = 1 and two were run at B = 2; this is also true of the
four tests run at A = 2). This relationship is called orthogonality. This concept is
very important, and the reader should work through the relationships between the
levels of at least two other factors to better understand the use of orthogonality in
this testing matrix. The best level is [1, (1 or 2), 1, 2, (1 or 2), 1, 1] for A through
G. Factors B and E are not significant and may be set to the least expensive level.

TABLE 9.1
One Factor at a Time

The group tests configurations containing the following combinations of the factors:

Test      Level of Factor (1 and 2 Indicate the Different Levels)      Results
Number    A    B    C    D    E    F    G                              a        b
1         1    1    1    1    1    1    1                              271.4    266.3
2         2    1    1    1    1    1    1                              215.0    211.2
3         1    2    1    1    1    1    1                              275.3    271.1
4         1    2    2    1    1    1    1                              235.2    231.5
5         1    2    1    2    1    1    1                              296.6    301.6
6         1    2    1    2    2    1    1                              305.2    301.1
7         1    2    1    2    1    2    1                              278.8    275.3
8         1    2    1    2    1    1    2                              251.9    254.3

SL3151Ch09Frame Page 368 Thursday, September 12, 2002 6:05 PM

Design of Experiments

369

TABLE 9.2
Test Numbers for Comparison

          Test Numbers Used to Determine:
Factor    Level 1     Level 2     Difference Between Levels
A         1a, 1b      2a, 2b      55.8
B         1a, 1b      3a, 3b      4.4
C         3a, 3b      4a, 4b      39.9
D         3a, 3b      5a, 5b      25.7
E         5a, 5b      6a, 6b      4.3
F         6a, 6b      7a, 7b      26.1
G         6a, 6b      8a, 8b      50.1

TABLE 9.3
The Group Runs Using DOE Configurations

Test      Level of Factor (1 and 2 Indicate the Different Levels)
Number    A    B    C    D    E    F    G         Result
1         1    1    1    1    1    1    1         270.7
2         1    1    1    2    2    2    2         223.8
3         1    2    2    1    1    2    2         158.2
4         1    2    2    2    2    1    1         263.1
5         2    1    2    1    2    1    2         129.3
6         2    1    2    2    1    2    1         175.1
7         2    2    1    1    2    2    1         195.4
8         2    2    1    2    1    1    2         194.6

TABLE 9.4
Comparisons Using DOE

          Test Numbers Used to Determine:
Factor    Level 1        Level 2        Difference Between Levels
A         1, 2, 3, 4     5, 6, 7, 8     55.4
B         1, 2, 5, 6     3, 4, 7, 8     3.1
C         1, 2, 7, 8     3, 4, 5, 6     39.7
D         1, 3, 5, 7     2, 4, 6, 8     25.8
E         1, 3, 6, 8     2, 4, 5, 7     3.3
F         1, 4, 5, 8     2, 3, 6, 7     26.3
G         1, 4, 6, 7     2, 3, 5, 8     49.6


For a comparison of the two methods, see Table 9.5. Half as many tests are
required using a DOE approach, and the estimate at each level is better (four tests
per factor level versus two). This is almost like getting something for nothing. The
only thing that is required is that the group plan out what is to be learned before
running any of the tests. The savings in time and testing resources can be significant.
Direct benefits include reduced product development time, improved problem correction response, and more satisfied customers. And that is exactly what DFSS should
be aiming at.

This approach to DOE is also very flexible and can accommodate known or
suspected interactions and factors with more than two levels. A properly structured
experiment will give the maximum amount of information possible. An experiment
that is less well designed will be an inefficient use of scarce resources.
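The arithmetic behind Table 9.4 is easy to retrace. The short sketch below is our illustration (written in Python; it is not part of the original test program): it recomputes the level averages and differences directly from the Table 9.3 matrix and results, and shows how every one of the eight tests enters every comparison.

# Minimal sketch: recompute the Table 9.4 level averages and differences
# from the Table 9.3 test matrix and results.
matrix = [
    (1, 1, 1, 1, 1, 1, 1), (1, 1, 1, 2, 2, 2, 2),
    (1, 2, 2, 1, 1, 2, 2), (1, 2, 2, 2, 2, 1, 1),
    (2, 1, 2, 1, 2, 1, 2), (2, 1, 2, 2, 1, 2, 1),
    (2, 2, 1, 1, 2, 2, 1), (2, 2, 1, 2, 1, 1, 2),
]
results = [270.7, 223.8, 158.2, 263.1, 129.3, 175.1, 195.4, 194.6]

for i, factor in enumerate("ABCDEFG"):
    # average of the four results at each level of this factor
    avg = {lvl: sum(r for row, r in zip(matrix, results) if row[i] == lvl) / 4
           for lvl in (1, 2)}
    print(factor, round(avg[1], 2), round(avg[2], 2),
          "difference:", round(abs(avg[1] - avg[2]), 2))

The printed differences agree with Table 9.4 after rounding, and the larger of the two averages for each factor points to the best levels quoted in the text.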

TAGUCHI'S APPROACH

Here it is appropriate to summarize Dr. Taguchi's approach, which is to minimize
the total cost to society. He uses the "Loss Function" (Section 4) to evaluate the
total cost impact of alternative quality improvement actions. In Dr. Taguchi's view,
we all have an important societal responsibility to minimize the sum of the internal
cost of producing a product and the external cost the customer incurs in using the
product. The customer's cost includes the cost of dissatisfaction. This responsibility
should be in harmony with every company's objectives when the long-term view of
survival and customer satisfaction is considered. Profits may be maximized in the
short run by deceiving today's customers or trading away the future.
Traditionally, the next quarter's or next year's bottom line has been the driving
force in most corporations. Times have changed, however. Worldwide competition
has grown, and customers have become more concerned with the total product cost.
In this environment, survival becomes a real issue, and customer satisfaction must
be a part of the cost equation that drives the decision process.
Dr. Taguchi uses the signal-to-noise (S/N) ratio as the operational way of incorporating the loss function into experimental design. Experiment S/N is analogous
to the S/N measurement developed in the audio/electronics industry. S/N is used to
ensure that designs and processes give desired responses over different conditions
of uncontrollable noise factors. S/N is introduced in Section 4 and developed in
examples in later sections.

TABLE 9.5
Comparison of the Two Means

                        Number of Tests    Estimate at the Best Levels    Confidence Interval at 90% Confidence
One factor at a time    16                 301.1                          ± 3.7
DOE                     8                  299.6                          ± 3.3


There are three basic types of product design activity in Dr. Taguchi's approach:
1. System design
2. Parameter design
3. Tolerance design

System design involves basic research to understand nature. System design
involves scientific principles, their extension to unknown situations, and the development of highly structured basic relationships. Parameter and tolerance design
involve optimizing the system design using empirical methods. Taguchi's methods
are most useful in parameter and tolerance designs. The rest of this chapter will
discuss these applications.
Parameter design optimizes the product or process design to reach the target
value with minimum possible variability with the cheapest available components.
Note the emphasis on striving to satisfy the requirements in the least costly manner.
Parameter design is discussed in Section 8.
Tolerance design only occurs if the variability achieved with the least costly
components is too large to meet product goals. In tolerance design, the sensitivity
of the design to changes in component tolerances is investigated. The goal is to
determine which components should be more tightly controlled and which are not
as crucial. Again, the driving force is cost. Tolerance design is discussed in Section 9.
Problem resolution might appear to be another type of product design. If targets
are set correctly, however, and parameter and tolerance design occur, there will be
little need for problem resolution. When problems do arise, they are attacked using
elements of both parameter and tolerance design, as the situation warrants.

MISCELLANEOUS THOUGHTS

A tremendous opportunity exists when the basic relationships between components
are defined in equation form in the system design phase. This occurs in electrical
circuit design, finite element analysis, and other situations. In these cases, once the
equations are known, testing can be simulated on a computer and the "best" component values and appropriate tolerances obtained. It might be argued that the true
"best" values would not be located using this technique; only the local maxima would
be obtained. The equations involved are generally too complex to solve to the true
"best" values using calculus. Determining the local best values in the region that the
experienced design engineer considers most promising is generally the best available
approach. It definitely has merit over choosing several values and solving for the
remaining ones. The cost involved is computation time, and the benefit is a robust
design using the widest possible tolerances.
Those readers who have some experience in classical statistics may wonder
about the differences between the classical and Taguchi approaches. Although there
are some operational differences, the biggest difference is in philosophical
emphasis (see Volume V of this series). Classical statistics emphasizes the producer's risk. This means a factor's effect must be shown to be significantly different
from zero at a high confidence level to warrant a choice between levels. Taguchi
uses percent contribution as a way to evaluate test results from a consumer's risk
standpoint. The reasoning is that if a factor has a high percent contribution, more
often than not it is worth pursuing. In this respect, the Taguchi approach is less
conservative than the classical approach. Dr. Taguchi uses orthogonal arrays extensively in his approach and has formulated them into a "cookbook" approach that is
relatively easy to learn and apply. Classical statistics has several different ways of
designing experiments, including orthogonal arrays. In some cases, another approach
may be more efficient than the orthogonal array. However, the application of these
methods may be complex and is usually left to statisticians. Dr. Taguchi also
approaches uncontrollable noise differently. He emphasizes developing a design
that is "robust" over the levels of noise factors. This means that the design will perform
at or near target regardless of what is happening with the uncontrollable factors.
Classical statistics seeks to remove the noise factors from consideration by "blocking" the noise factors.
In certain cases, the approaches Taguchi recommends may be more complicated
than other statistical approaches or may be questioned by classical statisticians. In
these cases, alternative approaches are presented as supplemental information at the
end of the appropriate section. Additional analysis techniques are also presented in
section supplements.
The reader is encouraged to thoroughly analyze the data using all appropriate
tools. Incomplete analysis can result in incorrect conclusions.

PLANNING THE EXPERIMENT

The purpose of this section is to:
1. Impress upon the reader the importance of planning the experiment as a
prerequisite to achieving successful results
2. Present some tools to use and points to consider during the planning phase
3. Demonstrate DOE applications via simple examples

BRAINSTORMING

The first steps in planning a DOE are to define the situation to be addressed, identify
the participants, and determine the scope and the goal of the investigation. This
information should be written down in terms that are as specific as possible so that
everyone involved can agree on and share a common understanding and purpose.
The experts involved should pool their understanding of the subject. In a brainstorming session, each participant is encouraged to offer an opinion of which factors cause
the effect. All ideas are recorded without question or discussion at this stage. To aid
in the organization of the proposed factors, a branching (fishbone) format is often
used, where each main branch is a main aspect of the effect under investigation
(e.g., material, methods, machine, people, measurement, environment). The construction of a cause-and-effect (fishbone or Ishikawa) diagram in a brainstorming
session provides a structured, efficient way to ensure that pertinent ideas are collected

and considered and that the discussion stays on track. An example of a partially
completed cause-and-effect diagram is shown in Figure 9.1.
After the participants have expressed their ideas on possible causes, the factors
are discussed and prioritized for investigation. Usually, a three-level (high, moderate,
and low) rating system is used to indicate the group consensus on the level of
suspected contribution. Quite often, the rating will be determined by a simple vote
of the participants. In situations where several different areas of contributing expertise are represented, participants' votes outside of their areas of expertise may not
have the importance of the expert's vote. Handling this situation becomes a management challenge for the group leader and is beyond the scope of this document;
the reader may need to review Volume II of this series.
During the brainstorming and prioritization process, the participants should
consider the following:
1. The situation: What is the present state of affairs and why are we
dissatisfied?
2. The goal: When will we be satisfied (at least in the short term)?
3. The constraints: How much time and resources can we use in the
investigation?
4. The approach: Is DOE appropriate right now or should we do other
research first?
5. The measurement technique and response: What measurement technique will be used and what response will be measured?

CHOICE OF RESPONSE

The choice of measurement technique and response is an important point that is
sometimes not given much thought. The obvious response is not always the best.

FIGURE 9.1 An example of a partially completed fishbone diagram (effect: piston ring scuff/power loss; branches include engine control hardware, EMI, harness and connectors, rings, manufacturing, and engine hardware, with causes such as F/A ratio, spark advance, fuel flow, bore distortion, bolt torque, and supplier finish).

As an example, consider the gap between two vehicle body panels. At first thought,
that gap could be used as the response in a DOE aimed at achieving a target gap.
However, the gap can be a symptom of more basic problems with the:
Width of the panels
Location holes in the panels
Location of the attachment points on the body frame
All of these must be right for the gap to be as intended. If the goal of the
experiment is to identify which of these has the biggest impact on the gap, the choice
of the gap as a response is appropriate. If the purpose is to minimize the deviation
from the target gap, the gap may not be the right response. A more basic investigation
of the factors that contribute to the underlying cause is required. Do not confuse the
symptom with the underlying causes. This thought process is very similar to the
thought process used in SPC and failure mode and effect analysis (FMEA) and
draws heavily upon the experience of experts to frame the right question. In DOE,
the choice of an improper response could result in an inconclusive experiment or in
a solution that might not work as things change due to interactions between the
factors. An interaction occurs when the change in the response due to a change in
the level of a factor is different for the different levels of a second factor. An example
is shown in Figure 9.2.
The choice of the proper response characteristic will usually result in few
interactions being significant. Since there is a limitation as to how much information
can be extracted from a given number of experiments, choosing the right response
will allow the investigation of the maximum number of factors in the minimum
number of tests without interactions between factors blurring the factor main effect.
Interactions will be discussed in more detail in Section 3. The proper setup of an

FIGURE 9.2 An example of interaction: the response plotted against the low and high levels of factor 1, with separate lines for factor 2 = low and factor 2 = high.
experiment is not only a statistical task. Statistics serve to focus the technical
expertise of the participating experts into the most efficient approach.
In summary, the response should:
1. Relate to the underlying causes and not be a symptom
2. Be measurable (if possible, a continuous response should be chosen)
3. Be repeatable for the test procedure
The prioritization process continues until the most critical factors that can be
addressed within the resources of the test program are identified. The next step is
to determine:
1. Are the factors controllable or are some of them noise beyond our
control?
2. Do the factors interact?
3. What levels of each factor should be considered?
4. How do these levels relate to production limits or specs?
5. Who will supply the parts, machines, and testing facilities, and when will
they be available?
6. Does everyone agree on the statement of the problem, goal, approach,
and allocation of roles?
7. What kind of test procedure will be used?
When all of these questions have been answered, the person who is acting as
the statistical resource for the group can translate the answers into a hypothesis and
experimental setup to test the hypothesis. The following example illustrates how the
process can work:

EXAMPLE

A particular bracket has started to fail in the field with a higher than expected
frequency. Timothy, the design engineer, and Christine, the process engineer, are
alerted to the problem and agree to form a problem-solving team to investigate the
situation. Timothy reviews the design FMEA, while Christine reviews the process
FMEA. The information relating to the previously anticipated potential causes of
this failure and SPC charts for the appropriate critical characteristics are brought to
the first meeting. The team consists of Timothy, Christine, Cary (the machine operator), Stephen (the metallurgist), and Eric (another manufacturing engineer who has
taken a DOE course and has agreed to help the group set up the DOE).
In the first meeting, the group discussed the applicable areas from the FMEAs,
reviewed the SPC charts, and began a cause-and-effect listing for the observed failure
mode. At the conclusion of the meeting, Timothy was assigned to determine if the
loads on the bracket had changed due to changes in the components attached to it;
Christine was asked to investigate if there had been any change to the incoming
material; Stephen was asked to consider the testing procedure that should be used
to duplicate field failure modes and the response that should be measured, and all
of the group members were asked to consider additions to the cause-and-effect list.
At the second meeting, the participants reported on their assignments and continued
constructing the cause-and-effect (C & E) diagram. Their cause-and-effect diagram
is shown in Figure 9.3, with the specific causes shown as C1, C2, and so on, rather than
the actual descriptions that would appear on a real C & E diagram.
The group easily reached the consensus that seven of the potential causes were
suspected of contributing to the field problem. Eric agreed to set up the experiment
assuming two levels for each factor, and the others determined what those levels
should be to relate the experiment to the production reality. Eric returned to the group
and announced that he was able to use an L8 orthogonal array to set up the experiment
and that eight tests were all that were needed at this time.
Eric explained that this matrix would allow the group to determine if a difference
in test responses existed for the two levels of each factor and would prioritize the

FIGURE 9.3 Example of cause-and-effect diagram (main branches: machine, material, operator/machine interface, design, and process; suspected causes labeled C1 through C17; effect: bracket breaks).

TABLE 9.6
The Test Matrix for the Seven Factors

Test      Levels for Each Suspected Factor for Each of Eight Tests
Number    C1    C2    C7    C11    C13    C15    C16
1         1     1     1     1      1      1      1
2         1     1     1     2      2      2      2
3         1     2     2     1      1      2      2
4         1     2     2     2      2      1      1
5         2     1     2     1      2      1      2
6         2     1     2     2      1      2      1
7         2     2     1     1      2      2      1
8         2     2     1     2      1      1      2
within-factor differences. Since the two levels of each factor represented an actual
situation that existed in production during the time the failed parts were produced,
this information could be used to correct the problem. By now, Stephen had identified
a test procedure and response that seemed to fit the requirements outlined in this
section.

Two weeks were required to gather all the material and parts for the experiment
and to run the experiment. The test results are shown in Table 9.7. While Eric entered
the data into the computer for analysis, Timothy and Christine plotted the data to
see if anything was readily apparent. The factor level plots are shown in Figure 9.4.

TABLE 9.7
Test Results

Test Number    Result
1              10
2              13
3              15
4              17
5              14
6              16
7              19
8              21

FIGURE 9.4 Plots of averages (higher responses are better): one panel for each factor (C1, C2, C7, C11, C13, C15, C16), showing the average response (on a scale of 13 to 18) at levels 1 and 2 of that factor.

When Eric finished with the computer, he reported that of all the variability observed
in the data, 53.65% was due to the change in factor C2; 33.38% was due to the
change in factor C1; and 11.92% was due to the change in factor C11. The remaining
1.04% was due to the other factors and experimental error. The large percentage
variability contribution, coupled with the fact that the differences between the levels
of the three factors are significant from an engineering standpoint, indicates that these
three factors may indeed be the culprits. The computer analysis indicated that the
best estimate for a test run at C1 = 2, C2 = 2, and C11 = 2 is 21.4. One of the eight
tests in the experiment was run at this condition and the result was 21. Two confirmatory tests were run and the results were 11 and 20. The group then moved into a
second phase of the investigation to identify what the spec limits should be on C1,
C2, and C11. In the second round of testing, eight tests were required to investigate
three levels for each of the three factors. The setup for the second round of testing
involved an advanced procedure (idle column method) that will be presented later
in this chapter, so the example will be concluded for now.
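The chapter does not show how Eric's percentages were computed, but figures of this kind can be reproduced from Tables 9.6 and 9.7. The sketch below is our Python illustration, not the software the team used; in particular, pooling the four smallest effects into the error term and using Taguchi-style "pure" sums of squares are our assumptions, chosen because they reproduce the 53.65%, 33.38%, and 11.92% contributions quoted above.

# Rough sketch: percent contributions from the L8 results of Tables 9.6 and 9.7.
l8 = [  # columns C1, C2, C7, C11, C13, C15, C16 for tests 1-8 (Table 9.6)
    (1, 1, 1, 1, 1, 1, 1), (1, 1, 1, 2, 2, 2, 2),
    (1, 2, 2, 1, 1, 2, 2), (1, 2, 2, 2, 2, 1, 1),
    (2, 1, 2, 1, 2, 1, 2), (2, 1, 2, 2, 1, 2, 1),
    (2, 2, 1, 1, 2, 2, 1), (2, 2, 1, 2, 1, 1, 2),
]
y = [10, 13, 15, 17, 14, 16, 19, 21]            # results (Table 9.7)
names = ["C1", "C2", "C7", "C11", "C13", "C15", "C16"]

grand = sum(y) / len(y)
ss_total = sum((v - grand) ** 2 for v in y)

ss = {}
for i, name in enumerate(names):
    a1 = [v for row, v in zip(l8, y) if row[i] == 1]
    a2 = [v for row, v in zip(l8, y) if row[i] == 2]
    contrast = sum(a2) / len(a2) - sum(a1) / len(a1)
    ss[name] = 2 * contrast ** 2                 # SS for a two-level factor in 8 runs

pooled = ["C7", "C13", "C15", "C16"]             # smallest effects pooled as error
v_error = sum(ss[n] for n in pooled) / len(pooled)   # one df per pooled factor
for name in ["C2", "C1", "C11"]:
    pure = ss[name] - v_error                    # subtract one df worth of error variance
    print(f"{name}: {100 * pure / ss_total:.2f}% contribution")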

In summary, the group in the example took the following actions:
1. Gathered appropriate backup data
2. Called together the right experts
3. Made a list of the possible causes for the problem
4. Prioritized the possible causes
5. Determined the proper test procedure and response to be measured
6. Reached agreement prior to running any tests
7. Approached the investigation in a structured manner
8. Asked and addressed one question at a time
Obviously, there are many ways to approach a particular DOE. In a situation
where testing or material is very expensive, the most efficient experimental layout
must be used. In the following sections, techniques are introduced that help the
experimenter optimize the experimental design. Additional opportunities to optimize
the experiment should be examined. Consider the situation where there is a five-part
process. A brainstorming group has constructed a cause-and-effect diagram for a
particular process problem. The number of suspected factors for each part of the
process is shown in Figure 9.5.
The obvious approach would be to set up the experiment with 21 factors. An
alternative approach would be to consider only seven factors for the first round of
testing. These would be the six factors within part 5 plus one factor for the best and
worst input to part 5. If the difference in input to part 5 is significant, then the
investigation is expanded upstream. The decision to approach a problem in this
manner is dependent upon the beliefs of the experts. If the experts have a strong

FIGURE 9.5 A linear example of a process with several factors: Part 1 (5 factors), Part 2 (2 factors), Part 3 (5 factors), Part 4 (3 factors), Part 5 (6 factors).


prior belief that a factor in part 1, for instance, is signicant, then a different approach
should be used. This approach is also dependent upon the structure of the situation.
The above example is presented to illustrate the point that the experimenter
should be alert for ways to test more efciently and effectively.

MISCELLANEOUS THOUGHTS

An additional useful method of looking at the data is to plot the contrasts on
normal probability paper. For a two-level factor, the contrast is the average of all
the tests run at one level subtracted from the average of the tests run at the other
level. For the example in this section, the contrasts are shown in Table 9.8.
These contrasts are plotted on normal probability paper versus median ranks.
The values for median ranks are available in many statistics and reliability books
and are used in Weibull reliability plotting. For this example, the normal contrast
plot is shown in Figure 9.6.
To plot the contrasts on normal paper, the contrasts are ranked in numerical
order, here from -0.25 (C7) to 4.75 (C2). The contrasts are then plotted against the
median ranks or, in this case, against the rank number shown on the left margin of
the plot. Factors that are significant have contrasts that are relatively far from zero
and do not lie on a line roughly defined by the rest of the factors. These factors can
lie off the line on the right side (level 2 higher) or on the left side (level 1 higher).
In the example, two separate lines seem to be defined by the contrasts. This could
be due to either of these situations:
C1, C2, and C11 are significant and the others are not.
There may be one or more bad data points that occur when C1, C2, and
C11 are at one level and the other factors are set at the other level.
In this example, C1, C2, C11, and C16 were at level 2 and the other factors
were set at level 1 for run number eight. Depending upon the situation, it would be
worthwhile to either rerun that test or to investigate the circumstances that accompanied that test (e.g., was the test hard to run because of the factor settings or

TABLE 9.8
An Example of Contrasts

Factor    Average at Level One    Average at Level Two    Contrast (Level 2 Avg. - Level 1 Avg.)
C1        13.75                   17.50                    3.75
C2        13.25                   18.00                    4.75
C7        15.75                   15.50                   -0.25
C11       14.50                   16.75                    2.25
C13       15.50                   15.75                    0.25
C15       15.50                   15.75                    0.25
C16       15.50                   15.75                    0.25


did something else change that was not in the experiment?). In the example, this
combination of factors represented the best observed outcome, and the confirmation
runs supported the results of the original test.
Plotting contrasts is a way of better understanding the data. It helps the exper-
imenter visualize what is happening with the data. Sometimes, information that
might be lost in a table of data will be crystal clear on a plot.
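As a companion to Figure 9.6, the short sketch below (our illustration in Python; Benard's approximation is used for the median ranks, whereas published tables may give exact values) ranks the Table 9.8 contrasts and pairs each one with its median-rank plotting position, ready to be placed on normal probability paper.

# Sketch: rank the contrasts and compute median-rank plotting positions.
contrasts = {"C1": 3.75, "C2": 4.75, "C7": -0.25, "C11": 2.25,
             "C13": 0.25, "C15": 0.25, "C16": 0.25}     # from Table 9.8

ordered = sorted(contrasts.items(), key=lambda kv: kv[1])
n = len(ordered)
for i, (name, c) in enumerate(ordered, start=1):
    median_rank = (i - 0.3) / (n + 0.4)        # Benard's approximation
    print(f"rank {i}: {name:4s} contrast {c:5.2f}  median rank {median_rank:.3f}")

Factors whose contrasts fall well off the line defined by the small contrasts (here C11, C1, and C2) are the ones that stand out on the plot.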

SETTING UP THE EXPERIMENT

This section discusses:
1. The choice of the number of levels for each factor
2. Fitting a linear graph to the experiment
3. Special applications to reduce the number of tests
4. How to handle noise factors in an experiment

CHOICE OF THE NUMBER OF FACTOR LEVELS

To review:

A factor is a unique component or characteristic about which a decision will be made.

FIGURE 9.6 Contrasts shown in a graphical presentation: each factor's contrast (horizontal axis, roughly -1 to 5) plotted against its numerical rank corresponding to median rank probability (vertical axis), in the order C7, C15, C13, C16, C11, C1, C2.


A factor level is one of the choices of the factor to be evaluated (e.g., if the
screw speed of a machine is the factor to be investigated, two factor levels might
be 1200 and 1400 rpm).
Investigating a larger number of levels for a factor requires more tests than
investigating a smaller number of levels. There is usually a trade-off required concerning the amount of information needed from the experiment to be very confident
of the results and the time and resources available. If testing and material are cheap
and time is available, evaluate many levels for each factor. Usually, this is not the
case, and two or three levels for each factor are recommended. An exception to this
occurs when the factor is non-continuous, and several levels are of interest. Examples
of this type of factor include the evaluation of a number of suppliers, machines, or
types of material. This situation will be discussed later in this section.
The first round of testing is usually designed to screen a large number of factors.
To accomplish this in a small number of tests, two levels per factor are usually tested.
The choice of the levels depends upon the question to be addressed. If the question is
"Have we specified the right spec limits?" or "What happens to the response in the
clear worst possible situation?" then the choice of what the levels should be is clear.
A more complicated question to address is "How will the distribution in production affect the response?" As suppliers become capable of maintaining low
variability about a target value, testing at the spec limits will not give a good answer
to this question. There are at least two approaches that can be used:
1. Test at the production limits, as a worst case.
2. Test at other points that put less emphasis on the tails of the distribution
where few parts are produced and more emphasis on the bulk of the
distribution. It is a difficult choice to pick two points to represent an entire
distribution. If this approach is being used, a rule of thumb is to choose
a level that encompasses approximately 70% of that distribution (mean
± 1 standard deviation).
The main point of this discussion is that the choice of levels is an integral part
of the experimental definition and should be carefully considered by the group setting
up the experiment.
The second and subsequent rounds of testing are usually designed to investigate
particular factors in more detail. Generally, three levels per factor are recommended.
Using two levels allows the estimation of a linear trend between the points tested.
The testing of three levels gives an indication of non-linearity of the response across
the levels tested. This non-linearity can be used in determining specification limits
to optimize the response. Although this concept will be explored in more detail in
a later section on tolerance design, its application can be illustrated as follows:
First round of testing: Level B of factor 1 gives a response that is more
desirable than that given by level A. See Figure 9.7.
Second round of testing: Level B gives a response that is more desirable
than those given by either C or D. However, the differences are not great.
Spec limits are set at C and D with B as the nominal. See Figure 9.8.


In a manner similar to the two-level-per-factor situation, the choice of the specific
three levels to be tested depends upon the question under investigation. Testing at
three levels can be used by the experimenter to focus on a particular area of the
possible factor settings to optimize the response over as large a range as possible.
If three levels of a factor are used to gain understanding for an entire distribution,
a rule of thumb is to choose the levels at the mean and mean ± 1.225 standard
deviations, which encompass approximately 78% of the distribution. These rules of
thumb will be used in tolerance design.
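The coverage figures behind these rules of thumb are easy to confirm with a normal table or, as in the short sketch below, with a statistics package (our illustration, using the standard normal distribution from scipy).

# Quick check of the coverage quoted for the rules of thumb.
from scipy.stats import norm

for k in (1.0, 1.225):
    coverage = norm.cdf(k) - norm.cdf(-k)
    print(f"mean +/- {k} standard deviations covers {coverage:.1%} of a normal distribution")
# Roughly 68% for +/- 1 sigma and 78% for +/- 1.225 sigma, in line with the
# "approximately 70%" and "approximately 78%" figures given above.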

LINEAR GRAPHS

After the number of levels has been determined for each factor, the next step is to
decide which experimental setup to use. Dr. Taguchi uses a tool called linear graphs
to aid the experimenter in this process. Linear graphs are provided in the Appendix
of Volume V for several situations. Typical designs, however, are:
1. All factors at two levels (L4, L8, L12, L16, L32)
2. All factors at three levels (L9, L27)
3. A mix of two- and three-level factors (L18, L36)

FIGURE 9.7 First round testing: response plotted against levels A and B of factor 1.
FIGURE 9.8 Second round testing: response plotted against levels C, B, and D of factor 1.


DEGREES OF FREEDOM

In the orthogonal array designation, the number following the L indicates how many
testing setups are involved. This number is also one more than the degrees of freedom
available in the setup. Degrees of freedom are the number of pair-wise comparisons
that can be made. In comparing the levels of a two-level factor, one comparison is
made and one degree of freedom is expended. For a three-level factor, two compar-
isons are made as follows: first, compare A and B, then compare whichever is best
with C to determine which of the three is best. Two degrees of freedom are
expended in this comparison. Once the number of levels for each factor is deter-
mined, the degrees of freedom required for each factor are summed. This sum plus
one becomes the bottom limit to the orthogonal array choice.
The degrees of freedom for an interaction are determined by multiplying the
degrees of freedom for the factors involved in the interaction.
A two-level factor interacting with a two-level factor requires one degree of
freedom (df) (1 × 1 = 1).
A three-level factor interacting with a three-level factor requires 4 df (2 × 2 = 4).
A three-level factor interacting with a two-level factor requires 2 df (2 × 1 = 2).
Although the test response should be chosen to minimize the occurrence of
interactions, there will be times when the experts know or strongly suspect that
interactions occur. In these cases, linear graphs allow the interaction to be readily
included in the experiment.
If more than one test is run for each test setup, the total df is the total number of
tests run minus one. The dfs used for assigning factors remain the same as without the
repetition. The other dfs are used to estimate the non-repeatability of the experiment.
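The bookkeeping just described can be reduced to a few lines. The sketch below is our illustration (the function name is ours): it sums the degrees of freedom for the factors and for any interactions to be estimated, and returns the minimum number of runs an orthogonal array must provide.

# Minimal df bookkeeping: (levels - 1) per factor, product of dfs per interaction,
# and at least (total df + 1) runs, since an N-run orthogonal array provides N - 1 df.
def required_runs(factor_levels, interactions=()):
    """factor_levels: dict of factor name -> number of levels.
    interactions: pairs of factor names whose interaction is to be estimated."""
    df = sum(levels - 1 for levels in factor_levels.values())
    df += sum((factor_levels[a] - 1) * (factor_levels[b] - 1) for a, b in interactions)
    return df + 1

# Four two-level factors plus two interactions between two-level factors:
print(required_runs({"A": 2, "B": 2, "C": 2, "D": 2},
                    interactions=[("B", "D"), ("C", "D")]))   # prints 7, so an L8 fits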

USING ORTHOGONAL ARRAYS AND LINEAR GRAPHS
In an orthogonal array, the number of rows corresponds to the number of tests to
be run and, in fact, each row describes a test setup. The factors to be investigated
are each assigned to a column of the array. The value that appears in that column
for a particular test (row) tells to what level that factor should be set for that test.
As an example, consider an L4 test setup (Table 9.9). If factor A was assigned to
column 1 and factor B was assigned to column 2, then test number 3 would be set
up with A at level 2 and B at level 1.
The sum of the degrees of freedom required for each column (a two-level column
requires 1 df; a three-level column requires 2 df) equals the sum of the available dfs
in the setup. Another property of the arrays is that orthogonality is maintained among
the columns. Orthogonality, mentioned earlier, is the property that allows each level
of every factor to equally impact the average response at each level of all other
factors. Using the L4 as an example, for the tests where column 1 (factor A) is at
level one, column 2 (factor B) is tested at the low level and at the high level an
equal number of times. This is also true when column 1 is at level 2. In fact, orthogonality is
maintained for all three columns. The reader is invited to study the L4 and verify
this statement.
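Readers who prefer to let a computer do the checking can verify the balance with a few lines (our sketch, using the L4 rows of Table 9.9).

# Verify that the L4 is balanced: at either level of any column, the other
# columns are tested at each of their levels an equal number of times.
L4 = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]   # rows of Table 9.9

balanced = all(
    sum(1 for row in L4 if row[c1] == lvl and row[c2] == 1)
    == sum(1 for row in L4 if row[c1] == lvl and row[c2] == 2)
    for c1 in range(3) for c2 in range(3) if c1 != c2
    for lvl in (1, 2)
)
print("balanced:", balanced)   # True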
Generally, near the orthogonal array are line-and-dot figures that look a little
like "stick drawings." These are linear graphs. The dots represent the factors that
can be assigned to the orthogonal array, and the lines represent the possible interaction of the two dots joined by the line. The numbers next to the dots and lines
correspond to the column numbers in the orthogonal array. For example, the linear
graph for the L4 is shown in Figure 9.9.
The interpretation of this linear graph is that if a factor is assigned to column 1
and a factor is assigned to column 2, column 3 can be used to evaluate their
interaction. If the interaction is not suspected of influencing the response, another
factor can be assigned to column 3. If no other factor remains, column 3 is left
unassigned and becomes an estimator of experimental error or non-repeatability.
This will be explained in more detail later in this chapter. The interrelationships
between the columns are such that there are many ways of writing the linear graphs.
COLUMN INTERACTION (TRIANGULAR) TABLE
Also shown in Volume V near the orthogonal array is the column interaction table
for that particular array. This table shows in which column(s) the interaction would
be located for every combination of two columns. The linear graphs have been
constructed using this information. The L8 column interaction table is shown in
Table 9.10.
The interaction between two factors can be assigned by finding the intersection in
the column interaction table of the orthogonal array columns to which those factors
have been assigned. As an example, suppose that a factor was assigned to column 3
and another factor was assigned to column 5. If the brainstorming group suspects that
the interaction of these two factors is a significant influence and includes that interaction
in the analysis, that interaction must be assigned to column 6 in the orthogonal array.
(Note that the interaction of two two-level factors [one degree of freedom each] can be
assigned to one column, which has one degree of freedom [1 × 1 = 1].)
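As an aside, the triangular table is easy to encode for lookup. The sketch below (our illustration) stores the upper triangle of Table 9.10 and returns the column in which the interaction of any two assigned columns falls.

# Upper triangle of the L8 column interaction table (Table 9.10).
L8_INTERACTION = {
    (1, 2): 3, (1, 3): 2, (1, 4): 5, (1, 5): 4, (1, 6): 7, (1, 7): 6,
    (2, 3): 1, (2, 4): 6, (2, 5): 7, (2, 6): 4, (2, 7): 5,
    (3, 4): 7, (3, 5): 6, (3, 6): 5, (3, 7): 4,
    (4, 5): 1, (4, 6): 2, (4, 7): 3,
    (5, 6): 3, (5, 7): 2,
    (6, 7): 1,
}

def interaction_column(a, b):
    return L8_INTERACTION[tuple(sorted((a, b)))]

print(interaction_column(3, 5))   # 6: the interaction of columns 3 and 5 sits in column 6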
TABLE 9.9
L4 Setup

          Column
Row    1    2    3
1      1    1    1
2      1    2    2
3      2    1    2
4      2    2    1
FIGURE 9.9 Linear graph for the L4: columns 1 and 2 drawn as dots, joined by a line representing column 3.
FACTORS WITH THREE LEVELS
The orthogonal arrays, linear graphs, and column interaction tables for factors with
three levels are similar to the two-level situation. Since a three-level factor requires
two degrees of freedom, the three-level orthogonal array columns use two of the
available dfs. The interaction of two three-level factors requires 4 dfs (2 × 2). In the
linear graphs and column interaction table, an interaction is shown with two column
numbers. If an interaction is being investigated, it must be assigned to two columns.
The L9 orthogonal array, linear graph, and column interaction table are presented
in Figure 9.10.
INTERACTIONS AND HARDWARE TEST SETUP
The orthogonal array specifies the hardware setup for each test. To set up the
hardware for a particular test in the orthogonal array, the experimenter should
disregard the interaction columns and use only the columns assigned to single factors.
If an interaction is included in the experiment, its level will be based solely upon
the levels of the interacting factors. The interaction will come into consideration
during the analysis of the data. An example will demonstrate the use of the linear
graph and the layout of a simple experiment.
EXAMPLE
A brainstorming group has constructed a cause-and-effect diagram and determined
that four factors (A through D) are suspected of being contributors to the problem.
In addition, two interactions are suspected (B × D and C × D). The group has decided
to use two levels for each factor. The experiment is laid out as follows:
1. Determine the df requirement.
Four dfs are required for the main factors (one for each two-level factor).
Two dfs are required for the interactions (one for each interaction of two-level factors).
Six dfs are required in total.
TABLE 9.10
The L8 Interaction Table

          Column
Column    1    2    3    4    5    6    7
1              3    2    5    4    7    6
2                   1    6    7    4    5
3                        7    6    5    4
4                             1    2    3
5                                  3    2
6                                       1
2. Determine a likely orthogonal array.
Since 6 dfs + 1 = 7 tests minimum and all factors have two levels, the L8 array is
a likely place to start.
3. Draw the linear graph required for the experiment.
The linear graph required for the experiment is shown in Figure 9.10A.
4. Compare the linear graph(s) of the orthogonal array to the linear graph required
for the experiment.
One of the linear graphs for the L8 that could fit is shown in Figure 9.10B.
5. Assign factors to the orthogonal array columns.
Make the column assignments shown in Figure 9.10C.
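A quick cross-check of this assignment against the column interaction table can also be scripted. The sketch below is our illustration; it includes only the two Table 9.10 entries needed here.

# Check the Figure 9.10C assignment against the relevant L8 interaction-table entries.
interaction = {(1, 2): 3, (1, 4): 5}            # from Table 9.10
assignment = {"D": 1, "B": 2, "C": 4, "A": 6}   # Figure 9.10C

bxd = interaction[(assignment["D"], assignment["B"])]
cxd = interaction[(assignment["D"], assignment["C"])]
print("B x D goes in column", bxd)   # 3
print("C x D goes in column", cxd)   # 5
# Columns 3 and 5 are therefore reserved for the interactions, and column 7 is
# left unassigned as an estimate of experimental error.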
FIGURE 9.10 The orthogonal array (OA), linear graph (LG), and column interaction table for the L9, together with panels A through I referenced in the text.

L9 Orthogonal Array
          Column
Row    1    2    3    4
1      1    1    1    1
2      1    2    2    2
3      1    3    3    3
4      2    1    2    3
5      2    2    3    1
6      2    3    1    2
7      3    1    3    2
8      3    2    1    3
9      3    3    2    1

L9 Linear Graph: columns 1 and 2 drawn as dots, joined by a line representing columns 3 and 4 (their interaction).

L9 Column Interaction Table
          Column
Column    2       3       4
1         3, 4    2, 4    2, 3
2                 1, 4    1, 3
3                         1, 2

A. Linear graph required for the experiment: factor D joined to factor B by the B x D interaction line and to factor C by the C x D interaction line; factor A stands alone.
B. One of the L8 linear graphs that fits: column 1 at the center, joined to columns 2, 4, and 6 by interaction lines 3, 5, and 7, respectively.
CHOICE OF THE TEST ARRAY
For a particular experiment, the test response should be chosen to minimize interaction, and the smallest orthogonal array that fits the situation should be used. The
emphasis should be on assigning factors to as many columns as possible. This allows
the question posed by the situation to be answered using a minimum number of tests.
FIGURE 9.10 (continued)

C. Column assignments for the example:
Column    1    2    3        4    5        6    7
Factor    D    B    B x D    C    C x D    A    unassigned
where B x D indicates the interaction between B and D.

D. One of the L8 linear graphs, with the column 1, 2, and 3 designators (two dots and the interaction line joining them) enclosed to show that these three columns are combined into a single four-level factor.

E. Determining the level of the four-level factor from columns 1 and 2:
Column 1    Column 2    Four-Level Factor
1           1           1
1           2           2
2           1           3
2           2           4

F. The modified L8 array, with columns 1 and 2 set to zero and the four-level factor carried in column 3:
Test                Columns
Number    1    2    3    4    5    6    7
1         0    0    1    1    1    1    1
2         0    0    1    2    2    2    2
3         0    0    2    1    1    2    2
4         0    0    2    2    2    1    1
5         0    0    3    1    2    1    2
6         0    0    3    2    1    2    1
7         0    0    4    1    2    2    1
8         0    0    4    2    1    1    2
FIGURE 9.10 (continued)

G. The L8 linear graph drawn as a closed triangle: columns 1, 2, and 4 at the corners, joined by interaction lines 3 (columns 1 and 2), 5 (columns 1 and 4), and 6 (columns 2 and 4); column 7 is the interaction of corner column 1 with the opposite base, column 6.

H. Determining the level of the eight-level factor from columns 1, 2, and 4:
Column 1    Column 2    Column 4    Eight-Level Factor
1           1           1           1
1           1           2           2
1           2           1           3
1           2           2           4
2           1           1           5
2           1           2           6
2           2           1           7
2           2           2           8

I. One of the L27 linear graphs: column 1 joined to columns 2, 5, 8, and 11 by the interaction line pairs (3, 4), (6, 7), (9, 10), and (12, 13); the column 1, 2, 3, and 4 designators are enclosed to show that they are combined into a single nine-level factor.
Whether an interaction exists or not is an important issue that must be addressed
in setting up the experiment. If an interaction does exist and provision is not made
for it in the experimental setup, its effect becomes mixed up or "confounded" with
the effect of the factor assigned to the column where the interaction would be
assigned. The analysis will not be able to separate the two. This is an important
reason why confirmatory runs are necessary. Confirmatory runs should be made with
the nonsignificant factors set to their different levels, just to make sure.
Another way to minimize the effect of interactions is to use an L12, L18, or
L36 orthogonal array. These arrays have a special property that some, or all, of the
interactions between columns are spread across all columns more or less equally
instead of being concentrated in a column. This property can be used by the exper-
imenter to rank the contribution of factors without worrying about interactions. There
are times when this can be a valuable tool for the experimenter. The linear graphs
for those arrays tell which interactions can be estimated and which cannot.
FACTORS WITH FOUR LEVELS
A factor with four levels can easily be assigned to a two-level orthogonal array. A
four-level factor requires 3 dfs. Since a two-level column has 1 df, three two-level
columns are used for the four-level factor. The three columns chosen must be
represented in the linear graph by two dots and the connecting interaction line. One
of the L8 linear graphs is shown in Figure 9.10D.
The line enclosing the column 1, 2, 3 designators indicates that these columns
will be used for a four-level factor. The particular level of the four-level factor for
each run can be determined by taking any two of the three columns that are to be
combined and assigning the four combinations to the four levels of the factor. As
an example, consider columns 1 and 2 (see Figure 9.10E).
Although column 3 is not used in determining the level of the four-level factor,
its df is used and no other factor can be assigned to it.
In the orthogonal array, one of the columns used for the four-level factor is set
to the levels of the four-level factor and the other two columns are set to zero for
each test. For the L8 example, the modified array would be as shown in Figure 9.10F.
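The mechanics of Figure 9.10E can be written out in a few lines (our sketch), using columns 1 and 2 of the standard L8.

# Build a four-level column from two two-level columns of the L8 (Figure 9.10E).
L8_COLS_1_2 = [(1, 1), (1, 1), (1, 2), (1, 2), (2, 1), (2, 1), (2, 2), (2, 2)]
FOUR_LEVEL = {(1, 1): 1, (1, 2): 2, (2, 1): 3, (2, 2): 4}

four_level_column = [FOUR_LEVEL[pair] for pair in L8_COLS_1_2]
print(four_level_column)   # [1, 1, 2, 2, 3, 3, 4, 4], as in column 3 of Figure 9.10F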
FACTORS WITH EIGHT LEVELS
In a similar manner, a factor with eight levels requires 7 dfs and takes up seven two-
level columns. The particular columns are chosen by taking a closed triangle in the
linear graph and the interaction column of one of the points of the triangle with
the opposite base. One example is shown in Figure 9.10G.
The column interaction table indicates that the interaction of columns 1 and 6
will be in column 7. The actual factor level for each test is determined by looking
at the combinations of the three columns that make up the corners of the triangle
(see Figure 9.10H).
None of the seven columns which are used for the eight-level factor can be
assigned to another factor. In the orthogonal array, one of the columns used for the
eight-level factor is set to the levels of the eight-level factor and the other six columns
are set to zero for each test.
FACTORS WITH NINE LEVELS
A factor with nine levels is handled in a similar manner to a four-level factor. The
nine-level factor requires 8 dfs, which are available in four three-level columns. Two
three-level columns and their two interaction columns are used. One of the L27
linear graphs is shown in Figure 9.10I.
The line enclosing the column 1, 2, 3, 4 designators indicates that these four
columns will be used for the nine-level factor. The level of the nine-level factor to
be used in a particular test can be determined by taking any two of the four columns
that are to be combined and assigning their nine combinations to the nine levels of
the factor. This is left to the reader to demonstrate.
In the orthogonal array, one of the columns used for the nine-level factor is set
to the levels of the nine-level factor and the other three columns are set to zero.
USING FACTORS WITH TWO LEVELS IN A THREE-LEVEL ARRAY
Dummy Treatment
Often, the situation calls for a mix of factors with two and three levels. A two-level
factor can be assigned to a three-level column by using one of the two levels as the
third level in the test determination. Consider using a two-level factor in an L9
array (see Table 9.11).
In column 1, the second set of 1s (in experiments 7, 8, and 9) is the dummy
treatment. In the analysis, the average at level one of the factor assigned to column
1 is determined with more accuracy than the average at level two since more tests
are run at level one. The level that is of more interest to the experimenter should be
the one used for the dummy treatment.
TABLE 9.11
An L9 with a Two-Level Column

Test      Columns
Number    1   2   3   4
1         1   1   1   1
2         1   2   2   2
3         1   3   3   3
4         2   1   2   3
5         2   2   3   1
6         2   3   1   2
7         1   1   3   2
8         1   2   1   3
9         1   3   2   1

Combination Method

Two two-level factors can be assigned to a single three-level column. This is done by
assigning three of the four combinations of the two two-level factors to the three-level
factor and not testing the fourth combination. As an example, two two-level factors
are assigned to a three-level column as in Table 9.12. Note that the combination A2B2
is not tested. In this approach, information about the AB interaction is not available,
and many ANOVA (analysis of variance) computer programs are not able to break apart
the effect of A and B. A way of doing that manually will be presented later.

TABLE 9.12
Combination Method

Factor A   Factor B   Three-Level Column
1          1          1
1          2          2
2          1          3
USING FACTORS WITH THREE LEVELS IN A TWO-LEVEL ARRAY
A factor with three levels requires 2 dfs. Although it would seem that two two-level
columns combined would give the required dfs, the interaction of those two columns
is confounded with the three-level factor. The approach used to assign one three-
level factor to a two-level array is to construct a four-level column and use the
dummy treatment approach to assign the three-level factor to the four-level column.
Assigning more than one three-level factor to a two-level array uses a variation
of this approach. Recall that in constructing a four-level column, three two-level
columns are used. These three must be shown in the linear graph as two dots
connected by an interaction line. Any two of these columns are used to determine
the level to be tested. The third column's df is used up in assigning a four-level
factor. In assigning a three-level factor, the third column's df is not used for the
three-level factor since it requires only 2 dfs. However, the third column is confounded
with the three-level factor and should not be assigned to another factor. That column
is said to be idle. When two or more three-level factors are assigned to a two-level
array, the three-level factors can share the same idle column. An example of assigning
two three-level factors to an L8 array is shown in Figure 9.11.
Here column 1 would be idle (a factor cannot be assigned to column 1), columns
2 and 3 would be used to determine the levels of a three-level factor, columns 4 and
5 would be used to determine the levels of the second three-level factor, and columns
6 and 7 are available for two-level factors. The modified orthogonal array for this
experiment is shown in Table 9.13 (level 2 is the dummy treatment in both cases).
The idle column approach cannot be used with four-level factors. If it were
attempted, insufficient degrees of freedom would exist and the four-level factors
would be confounded.
OTHER TECHNIQUES
There are other techniques for setting up an experiment that will be mentioned here
but will not be discussed in detail. The user is invited to read the chapter on pseudo-
factor design in Quality Engineering Product and Process Design Optimization,
by Yuin Wu and Dr. Willie Hobbs Moore or to consult with a statistician to use these
techniques.
Nesting of Factors
Occasionally, levels of one factor have meaning only at a particular level of another
factor. Consider the comparison of two types of machine. One is electrically operated
and the other is hydraulically operated. The voltage and frequency of the electrical
power source and the temperature and formulation of the hydraulic fluid are factors
that have meaning for one type of machine but not the other. These factors are nested
within the machine level and require a special setup and analysis which is discussed
in the reference given above.
Setting Up Experiments with Factors with Large Numbers of Levels
Experiments with factors with large numbers of levels can be assigned to an exper-
imental layout using combinations of the techniques that have been covered in this
chapter.
FIGURE 9.11 Three-level factors in an L8 array.
TABLE 9.13
Modified L8 Array

Test      Columns
Number    1   2   3   4   5   6   7
1         1   0   1   0   1   1   1
2         1   0   1   0   2   2   2
3         1   0   2   0   1   2   2
4         1   0   2   0   2   1   1
5         2   0   3   0   2   1   2
6         2   0   3   0   3   2   1
7         2   0   2   0   2   2   1
8         2   0   2   0   3   1   2
INNER ARRAYS AND OUTER ARRAYS
Factors are generally divided into three basic types:
1. Control factors are the factors that are to be optimized to attain the
experimental goal.
2. Noise factors represent the uncontrollable elements of the system. The
optimum choice of control factor levels should be robust over the noise
factor levels.
3. Signal factors represent different inputs into the system for which system
response should be different. For example, if several micrometers were
to be compared, the standard thickness to be measured would be levels
of a signal factor. The optimum micrometer choice would be the one that
operated best at all the standard thicknesses. Signal factors are discussed
in more detail on pages 430–441.
Control and noise factors are usually handled differently from one another in
setting up an experiment. Control factors are entered into an orthogonal array called
an inner array. The noise factors are entered into a separate array called an outer
array. These arrays are related so that every test setup in the inner array is evaluated
across every noise setup in the outer array. As an example, consider an L8 inner
(control) array with an L4 outer (noise) array, as shown in Table 9.14.
The purpose of this relationship is to equally and completely expose the control
factor choices to the uncontrollable environment. This ensures that the optimum
factor will be robust. A signal-to-noise (S/N) ratio can be calculated for each of the
control factor array test situations. This allows the experimenter to identify the
control factor level choices that meet the target response consistently.
TABLE 9.14
An L8 with an L4 Outer Array

                                         L4 (on side)
                                           1    2    2    1
                                           1    2    1    2
Test                                       1    1    2    2
No.    1   2   3   4   5   6   7         Test Results
1      1   1   1   1   1   1   1         x1    x2    x3    x4
2      1   1   1   2   2   2   2         x5    x6    x7    x8
3      1   2   2   1   1   2   2         x9    x10   x11   x12
4      1   2   2   2   2   1   1         x13   x14   x15   x16
5      2   1   2   1   2   1   2         x17   x18   x19   x20
6      2   1   2   2   1   2   1         x21   x22   x23   x24
7      2   2   1   1   2   2   1         x25   x26   x27   x28
8      2   2   1   2   1   1   2         x29   x30   x31   x32

Note: The x values refer to experimental test results.
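For readers who want to see the crossing implied by Table 9.14 spelled out, the sketch below (Python) enumerates the 32 test combinations x1 through x32. The L8 and L4 arrays are the standard ones; holding them as plain lists is simply one convenient representation, not a requirement.

from itertools import product

L8_inner = [
    [1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2], [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2], [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1], [2, 2, 1, 2, 1, 1, 2],
]
L4_outer = [[1, 1, 1], [1, 2, 2], [2, 1, 2], [2, 2, 1]]  # one row per noise setup

# Every inner (control) setup is run at every outer (noise) setup: 8 x 4 = 32 tests.
tests = list(product(L8_inner, L4_outer))
print(len(tests), "test combinations")  # 32, i.e., x1 through x32 in Table 9.14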
RANDOMIZATION OF THE EXPERIMENTAL TESTS
In the orthogonal arrays, each test setup is identied by a test number. Generally,
the tests should not be run in the order of test number. If the tests were run in that
order, all the tests with the factor assigned to column one at level one would be run
before any of the tests with that factor at level two. A quick glance at an orthogonal
array will confirm this relationship. In fact, the columns toward the left of the array
change less often than the columns toward the right of the array. If an uncontrolled
noise factor changes during the testing process, the effect of that noise factor could
be mixed in with one or more of the factor effects. This could result in an erroneous
conclusion. The possibility of this occurring can be minimized by randomizing the
order of the experiment runs. If the order of the tests is randomized, the effect of
the changing uncontrolled noise factor will be more or less spread evenly over all
the levels of the controlled factors and although the experimental error will be
increased, the effects of the controlled factors will still be identiable. Randomiza-
tion can be done as simply as writing the test numbers on slips of paper and drawing
them out of a hat.
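A minimal sketch of randomizing the run order, the software equivalent of drawing test numbers out of a hat, is shown below; the seed is an arbitrary assumption included only so that the example is repeatable.

import random

test_numbers = list(range(1, 9))  # the eight test setups of an L8
random.seed(17)                   # assumed seed, only for a repeatable illustration
random.shuffle(test_numbers)
print("Run the tests in this order:", test_numbers)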
There are two situations where randomization may not be possible or where its
importance is lessened.
1. If it is very expensive, difficult, or time-consuming to change the level of
a factor, all tests at one level of a factor may have to be run before the
level of that factor can be changed. In this case, noise factors should be
chosen for the outer array that represent the possible variation in uncon-
trolled environment as much as possible.
2. If the noise factors in the outer array are properly chosen, the confident
experimenter may elect to dispense with randomization. In most cases,
the purpose of the experiment is to learn more about the situation, and
the experimenter does not have complete confidence. Therefore, the test
order should be randomized whenever the circumstances permit.
MISCELLANEOUS THOUGHTS
Dr. Taguchi stresses evaluating as many main factors as possible and filling up the
available columns. If it turns out that the experimental design will result in unas-
signed columns, some column assignment schemes are better than others in a few
situations. The rationale behind these choices is that they minimize the confounding
of unsuspected two-factor interactions with the main factors. A detailed discussion
is beyond the scope of this chapter. The user is invited to read Chapter 12 of Statistics
for Experiments, by G. Box, W. Hunter, and J.S. Hunter to learn more about this
concept.
Consider an L8 for which there are to be four two-level factors assigned. This
implies that there will be three columns that will not be assigned to a main factor.
There are 35 ways in which the four factors can be assigned to the seven columns.
The recommended assignment is to use columns 1, 2, 4, and 7 for the main factors.
The interactions to be evaluated, the linear graphs, and the column interaction table
determine if the recommended column assignments are usable for a particular exper-
iment. The recommended column assignments are given in Table 9.15.
Some of the linear graphs may be found in the Appendix of Volume V. However,
the user will find that the linear graphs in other books and reference materials may
not make these assignments available. There are many equally valid ways that linear
graphs for the larger arrays can be constructed from the column interaction table. It
is not feasible for any one book to list all the possibilities. An excellent source is
Taguchi and Konishi (1987).
In many cases, the brainstorming group may not have a good feel for whether
interactions exist or not. In these cases, two alternatives are usually considered:
1. Design an experiment that allows all two-factor interactions to be esti-
mated.
2. Design an experiment in which no factor is assigned to a column that also
contains the interaction of two other factors, although pairs of two-factor
interactions may be assigned to the same column. The recommended
factor assignments given in Table 9.15 are examples of this approach.
The second approach is based on the assumption that few of the interactions will
be significant and that later testing can be used to investigate them in more detail. The
reader is urged to seek statistical assistance in approaching this type of experiment.
Sometimes, the response is not related to the input factors in a linear fashion.
Testing each factor at two levels allows only a linear relationship to be defined and,
in this more complex situation, can give misleading results. A detailed statistical
analysis tool called response surface methodology can be used to investigate the
complex relationship of the input factors to the response in these cases.
TABLE 9.15
Recommended Factor Assignment by Column

Number
of Factors   L8 Array     L16 Array                    L32 Array
4            1, 2, 4, 7   1, 2, 4, 8
5 a                       1, 2, 4, 8, 15               1, 2, 4, 8, 16
6 a                       1, 2, 4, 8, 11, 13           1, 2, 4, 8, 16, 31
7 a                       1, 2, 4, 7, 8, 11, 13        1, 2, 4, 8, 15, 16, 23
8                         1, 2, 4, 7, 8, 11, 13, 14    1, 2, 4, 8, 15, 16, 23, 27
9 a                                                    1, 2, 4, 8, 15, 16, 23, 27, 29
10 a                                                   1, 2, 4, 8, 15, 16, 23, 27, 29, 30
11 a                                                   1, 2, 4, 7, 8, 11, 13, 14, 16, 19, 21
12 a                                                   1, 2, 4, 7, 8, 11, 13, 14, 16, 19, 21, 22
13 a                                                   1, 2, 4, 7, 8, 11, 13, 14, 16, 19, 21, 22, 25
14 a                                                   1, 2, 4, 7, 8, 11, 13, 14, 16, 19, 21, 22, 25, 26
15 a                                                   1, 2, 4, 7, 8, 11, 13, 14, 16, 19, 21, 22, 25, 26, 28

a No recommended assignment scheme.
All of this seems to indicate that DOEs must be lengthy and complicated when
interactions or nonlinear relationships are suspected. In most situations, time and
resources are not available to run a large experiment. Sometimes, a transformation
of the measured data or of a quantitative input factor can allow a linear model to fit
within the region covered by the input factors. The linear model requires fewer data
points than a curvilinear model and is easier to interpret. Unfortunately, unless
multiple observations are made at each inner array setup, the choice of transformation
is guided mainly by the experience of the experimenter or by trying several trans-
formations and seeing which one fits best.
The choice of the proper transformation to use is related to the choice of the
proper response. As an example, two common measures of fuel usage are miles
per gallon and liters per kilometer. With the multiplication of a constant, these
two measures are inverses of each other. A model that is linear in mi/g will be
definitely nonlinear in l/km. Which measurement is correct? There is no easy
answer. The experimenter should evaluate several different transformations to deter-
mine the best model. Some transformations that are useful are:

y = Y^(1/2)             useful for count data (Poisson distributed) such as the number of flaws
                        in a painted surface
y = log(Y) or ln(Y)     useful for comparing variances
y = Y^(-1/2)
y = 1/Y
When there are several observations at each inner array test setup, either through
replication or through testing with an outer array, another guide to choosing the right
transformation can be used. For the ANOVA to work correctly, the variances at all test
points should be equal. The observed variances should be compared as follows:

1. Calculate the average (x̄) and the standard deviation (s) for each inner
   array test setup.
2. Take the log or ln of each x̄ and s.
3. Plot log s (y-axis) versus log x̄ (x-axis) and estimate the slope.
4. Use the estimated slope as a rough guide to determine which transforma-
   tion to use:

   Slope   Transformation
   0.0     no transformation
   0.5     y = Y^(1/2)
   1.0     y = log(Y) or ln(Y)
   1.5     y = Y^(-1/2)
   2.0     y = 1/Y

It should be noted that the addition or subtraction of a constant before plotting
will not affect the standard deviation but will affect the relative spacing of the log x̄
values and hence the slope of the line. This approach can be used to improve the fit of
the transformation. With the widespread use of computers, data analysis of this type
should be easy and should be pursued as a means to get the most information out
of the data.
Examples of this approach will be given later in the chapter. The reader is invited
to refer to Statistics for Experiments by G. Box, W. Hunter, and J.S. Hunter to learn
more about the use of transformations in analyzing data.
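A short sketch of the slope guide follows; the replicated responses are invented purely for illustration, and the slope is fit by ordinary least squares on the log values.

import math
import statistics

# Hypothetical replicated responses for four inner array setups.
setups = [
    [12.1, 13.4, 11.8, 12.9],
    [25.0, 27.5, 23.9, 26.1],
    [51.2, 55.8, 48.7, 53.0],
    [98.5, 108.0, 94.2, 103.1],
]

log_means = [math.log10(statistics.mean(s)) for s in setups]
log_sds = [math.log10(statistics.stdev(s)) for s in setups]

# Least-squares slope of log(s) versus log(x-bar).
n = len(setups)
mx, my = sum(log_means) / n, sum(log_sds) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(log_means, log_sds))
         / sum((x - mx) ** 2 for x in log_means))
print(f"estimated slope = {slope:.2f}")

A slope near 0.5 would point toward the square-root transformation, a slope near 1.0 toward the log, and so on, as in the table above.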
LOSS FUNCTION AND SIGNAL-TO-NOISE
This section discusses:
1. The Taguchi loss function and its cost-oriented approach to product design
2. A comparison of the loss function and the traditional approach to calcu-
lating loss
3. The use of the loss function in evaluating alternative actions
4. A comparison of the loss function and Cpk and the appropriate use of each
5. The relationship of the loss function and the signal-to-noise (S/N) calcu-
lation that Dr. Taguchi uses in design of experiments
LOSS FUNCTION AND THE TRADITIONAL APPROACH
In the traditional approach to considering company loss (see Figure 9.12), parts
produced within the spec limits perform equally well, and parts outside of the spec
limits are equally bad. This approach has a fallacy in that it assumes that parts
produced at the target and parts just inside the spec limit perform the same and that
parts just inside and just outside the spec limits perform differently.
Statistical Process Control (SPC) and process capability calculations (Cpk) have
brought to the manufacturing floor an awareness of the importance of reducing
process variability and centering around the target. However, the question still remains:
How can this thought process carry over into product and process decisions?
The loss function provides a way of considering customer satisfaction in a
quantitative manner during the development of a product and its manufacturing
process. The loss function is the cornerstone of the Taguchi philosophy. The basic
premise of the loss function is that there is a particular target value for each critical
FIGURE 9.12 Traditional approach. [Loss in $ versus the measured characteristic: zero between the spec limits, jumping to the rework or scrap cost outside the spec limits.]
characteristic that will best satisfy all customer requirements. Parts or systems that
are produced farther away from the target will not satisfy the customer as well. The
level of satisfaction decreases as the distance from the target increases. The loss
function approximates the total cost to society, including customer dissatisfaction,
of producing a part at a particular characteristic value.
Taken for a whole production run, the total cost to society is based on the
variability of the process and the distance of the distribution mean to the target.
Decisions that affect process variability and centering or the range over which the
customer will be satised can be evaluated using the common measurement of loss
to society.
The loss function can be used when considering the expenditure of resources.
Customer dissatisfaction is very difficult to quantify and is often ignored in the
traditional approach. Its inclusion in the decision process via the loss function
highlights a gold mine in customer-perceived quality and repeat purchases that would
be hidden otherwise. This gold mine is often available at a relatively minor expense
applied to improving the product or process.
Note: Use of the loss function implies a total system that starts with the deter-
mination of targets that reflect the greatest level of customer satisfaction. Calculation
of losses using nominals that were set using other methods may yield erroneous
results.
CALCULATION OF THE LOSS FUNCTION
Dr. Taguchi uses a quadratic equation to describe the loss function. A quadratic form
was chosen because:
1. It is the simplest equation that fulfills the requirement of increasing as it
moves away from the target.
2. Taguchi believes that, historically, costs behave in this fashion.
3. The quadratic form allows direct conversion from signal-to-noise ratios
and decomposition used in analysis of experimental results.
The general form for the loss function is:

L(x) = k(x - m)²

where L(x) is the loss associated with producing a part at x value; k is a unique
constant determined for each situation; x is the measured value of the characteristic;
and m is the target of the characteristic.

When the general form is extended to a production of n items, the average
loss is:

L̄ = k[Σ(x - m)²/n]
This can be simplified to:

L̄ = k[σ² + (μ - m)²]

where σ² is the population piece-to-piece variance, μ is the population mean, and
(μ - m) is the offset of the population mean from the target.

In the Nominal-the-Best (NTB) situation shown in Figure 9.13, A0 is the cost
incurred in the field by the customer or warranty when a part is produced Δ away from
the target. Δ is the point at which 50% of the customers would have the part repaired
or replaced. A0 and Δ define the shape of the loss function and the value of k.

FIGURE 9.13 Nominal the best. [Cost versus the measured characteristic; the loss curve reaches A0 at m - Δ and m + Δ.]

The loss resulting from producing a part at m ± Δ is:

L(m ± Δ) = k(m ± Δ - m)² = kΔ²

Since this loss equals A0,

A0 = kΔ²   and   k = A0/Δ²

In general, the loss per piece is:

L(x) = A0(x - m)²/Δ²

The loss for the population is:

L̄ = A0[σ² + (offset)²]/Δ²

EXAMPLE

A particular component is manufactured at an internal supplier, shipped to an assem-
bly plant, and assembled into a vehicle. If this component deviates from its target
of 300 units by 10 or more, the average customer will complain, and the estimated
warranty cost will be $150.00. In this case,
k = $150.00/(10 units)² = $1.50 per unit²

SPC records indicate that the process average is 295 units and the process standard
deviation is 8 units. The present total loss is:

L = k[σ² + (μ - 300)²] per part = $1.50[(8)² + (295 - 300)²] = $133.50 per part

Fifty thousand parts are produced per year. The total yearly loss (and opportunity
for improvement) is $6.7 million.

Situation 1

It is estimated that a redesign of the system would make the system more robust,
and the average customer would complain if the component deviated by 15 units or
more from 300. In this case:

k = $150.00/(15 units)² = $0.67 per unit²

The total loss would be:

L = $0.67[(8)² + (295 - 300)²] = $59.63 per part

The net yearly improvement due to redesigning the system would be:

Improvement = ($133.50 - $59.63) × 50,000 = $3,693,500

This cost should be balanced against the cost of the redesign.
Situation 2
It is estimated that a new machine at the component manufacturing plant would
improve the mean of the distribution to 297 units and reduce the process standard
deviation to 6 units. In this case, the total loss would be:

L = $1.50[(6)² + (297 - 300)²] = $67.50 per part

The net yearly improvement due to using the new machine would be:

Improvement = ($133.50 - $67.50) × 50,000 = $3,300,000

This cost should be balanced against the cost of the new machine.
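The arithmetic of this example is easy to check with a few lines of code; the sketch below simply restates the numbers already given above (target 300, Δ of 10 or 15 units, A0 = $150, and 50,000 parts per year).

def average_loss(k, sigma, mean, target):
    """Taguchi average loss per piece: k * (sigma**2 + (mean - target)**2)."""
    return k * (sigma ** 2 + (mean - target) ** 2)

target = 300
volume = 50_000

k_current = 150.00 / 10 ** 2                  # $1.50 per unit^2
base = average_loss(k_current, sigma=8, mean=295, target=target)          # $133.50

k_redesign = round(150.00 / 15 ** 2, 2)       # rounded to $0.67, as in the text
redesign = average_loss(k_redesign, sigma=8, mean=295, target=target)     # $59.63

new_machine = average_loss(k_current, sigma=6, mean=297, target=target)   # $67.50

print(f"current loss per part: ${base:.2f}")
print(f"redesign saves about ${(base - redesign) * volume:,.0f} per year")
print(f"new machine saves about ${(base - new_machine) * volume:,.0f} per year")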
From these situations, it is apparent that the quality of decisions using the loss
function is heavily dependent upon the quality of the data that goes into the loss
function. The loss function emphasizes making a decision based on quantitative total
cost data. In the traditional approach, decisions are difcult because of the unknowns
and differing subjective interpretations. The loss function approach requires inves-
tigation to remove some of the unknowns. Subjective interpretations become numeric
assumptions and analyses, which are easier to discuss and can be shown to be based
on facts.
In the smaller-the-better (STB) situation illustrated in Figure 9.14, the loss
function reduces to:

L̄ = k[(1/n) Σ x²]

FIGURE 9.14 Smaller the better.
For the larger-the-better (LTB) situation illustrated in Figure 9.15, the loss
function reduces to:

L̄ = k[(1/n) Σ (1/x²)]

FIGURE 9.15 Larger the better.

COMPARISON OF THE LOSS FUNCTION AND Cpk

The loss function can be used to evaluate process performance. It provides an
emphasis on both reducing variability and centering the process, since those actions
have a net effect of reducing the value of the loss function. Process performance is
normally evaluated using Cpk. Cpk is calculated using the following equation:

Cpk = minimum of [ (upper spec limit - X̄) / (3 × standard deviation),
                   (X̄ - lower spec limit) / (3 × standard deviation) ]

where X̄ = the average of the process.
Both the loss function and Cpk emphasize minimizing the variability and cen-
tering the process on the target. The relative benefits of the two can be summarized
as follows:

Loss function
   Provides more emphasis on the target
   Relates to customer costs
   Can be used to prioritize the effect of different processes

Cpk
   Is easier to understand and use
   Is based only on data from the process and specifications
   Is normalized for all processes

The loss function represents the type of thinking that must go into making
strategic management decisions regarding the product and process for critical char-
acteristics. Cpk is an easily used tool for monitoring actual production processes.
Figure 9.16 shows Cpk and the value of the loss function for five different cases.
In each of these cases, the specification is 20 ± 4 and the value of k in the loss
function is $2 per unit².
Both Cpk and the loss function emphasize reducing the part-to-part variability
and centering the process on target. The use of Cpk is recommended in production
areas to monitor process performance because of its ease of understanding and its
clear relationship to the other SPC tools. Management decisions regarding the
location of distributions with small variability within a large specification tolerance
should be based on a loss function approach. (See cases 2 and 5 in Figure 9.16.)
The loss function approach should be used to determine the target value and to
evaluate the relative merits of two or more courses of action because of the emphasis
on cost and on including customer satisfaction as a factor in making basic product
and process decisions. These questions also lend themselves to the use of design of
experiments. The relationship of the loss function to the signal-to-noise DOE cal-
culations used by Dr. Taguchi will now be discussed.
SIGNAL-TO-NOISE (S/N)
Signal-to-Noise is a calculated value that Dr. Taguchi recommends to analyze DOE
results. It incorporates both the average response and the variability of the data. S/N
is a measure of the strength of the signal relative to the strength of the noise (variability). The goal
is always to maximize the S/N. S/N ratios are so constructed that if the average
response is far from the target, re-centering the response has a greater effect on the
S/N than reducing the variability. When the average response is close to the target,
reducing the variability has a greater effect. There are three basic formulas used for
calculating S/N, as shown in Table 9.16.
S/N for a particular testing condition is calculated by considering all the data
that were run at that particular condition across all noise factors. Actual analysis
techniques will be covered later.
FIGURE 9.16 A comparison of Cpk and loss function.

                   Case 1   Case 2   Case 3   Case 4   Case 5
Average            20       18       17.2     20       20
Sigma              1.33     0.67     0.4      2.82     0.67
Cpk                1        1        1        0.47     2
Loss (assume k=2)  3.56     8.89     16       16       0.89
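The five cases in Figure 9.16 can be recomputed directly from the definitions of Cpk and the loss function; the sketch below does so, and any small differences from the figure values are due only to rounding.

def cpk(mean, sigma, lsl, usl):
    return min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))

def expected_loss(mean, sigma, target, k):
    return k * (sigma ** 2 + (mean - target) ** 2)

cases = [(20, 1.33), (18, 0.67), (17.2, 0.4), (20, 2.82), (20, 0.67)]
for i, (mean, sigma) in enumerate(cases, start=1):
    print(f"Case {i}: Cpk = {cpk(mean, sigma, 16, 24):.2f}, "
          f"loss = ${expected_loss(mean, sigma, target=20, k=2):.2f}")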
The relationships between S/N and loss function are obvious for STB and LTB.
The expressions contained in brackets are the same. When S/N is maximized, the
loss function will be minimized. For the NTB situation, the total analysis procedure
of looking at both the raw data for location effects and S/N data for dispersion effects
parallels the loss function approach. Examples of these analysis techniques are given
in the next section. S/N is used in DOE rather than the loss function because it is
more understandable from an engineering standpoint and because it is not necessary
to compute the value of k when comparing two alternate courses of action.
S/N calculations are also used in DOE to search for robust factor values. These
are values around which production variability has the least effect on the response.
MISCELLANEOUS THOUGHTS
Many statisticians disagree with the use of the previously dened S/N ratios to
analyze DOE data. They do not recognize the need to analyze both location effects
and dispersion (variance) effects but use other measures. Dr. George Box's 1987
report is recommended to the reader who wishes to learn more about this disagree-
ment and some of the other methods that are available.
In brief, Dr. Box disagrees with the STB and LTB S/N calculations and finds
the NTB S/N to be inefficient. The approach that he supports is to calculate the log
(or ln) of the standard deviation of the data, log(s), at each inner array setup in place
of the S/N ratio. The log is used because the standard deviation tends to be log-
normally distributed. The raw data should be analyzed (with appropriate transfor-
mations) to determine which factors control the average of the response, and the
log(s) should be analyzed to determine which factors control the variance of the
response. From these two analyses, the experimenter can choose the combination
of factors that gives the response that best fills the requirements.
TABLE 9.16
Formulas for Calculating S/N

                             Signal-to-Noise (S/N)                 Loss Function (L)
Smaller the better (STB)     S/N = -10 log10[(1/n) Σ x²]           L = k[(1/n) Σ x²]
Larger the better (LTB)      S/N = -10 log10[(1/n) Σ (1/x²)]       L = k[(1/n) Σ (1/x²)]
Nominal the best (NTB)       S/N = 10 log10[(1/n)(Sm - Vo)/Vo]     L = k[σ² + (offset)²]

where Sm = (Σx)²/n and Vo = [Σx² - Sm]/(n - 1).

The data in Table 9.17 illustrate some of the concerns with the NTB S/N ratio. The
first three tests (A through C) have the same standard deviation but very different S/N,
while the last three tests (C through E) have the same S/N but very different standard
deviations. The NTB S/N ratio places emphasis on getting a higher response value.
This approach might lead to difficulties in tuning the response to a specific target.

TABLE 9.17
Concerns with NTB S/N Ratio

Test   Raw Data (4 Reps.)        Standard Deviation   NTB S/N
A      1, 2, 4, 5                1.83                 3.89
B      15, 11, 12, 14            1.83                 17.03
C      18, 21, 19, 22            1.83                 20.78
D      24, 24, 28.12, 28.12      2.38                 20.78
E      42.55, 42.8, 50, 50       4.23                 20.78
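Written as plain functions, the three formulas of Table 9.16 (together with the alternate -20 log(s) form discussed in the next paragraphs) look as follows; the sample data are the four observations of test C from Table 9.17.

import math

def sn_stb(y):  # smaller the better
    return -10 * math.log10(sum(v ** 2 for v in y) / len(y))

def sn_ltb(y):  # larger the better
    return -10 * math.log10(sum(1 / v ** 2 for v in y) / len(y))

def sn_ntb(y):  # nominal the best, as in Table 9.16
    n = len(y)
    sm = sum(y) ** 2 / n
    vo = (sum(v ** 2 for v in y) - sm) / (n - 1)
    return 10 * math.log10((sm - vo) / (n * vo))

def sn_ntb_ii(y):  # alternate form: -20 log(s)
    n = len(y)
    mean = sum(y) / n
    s = math.sqrt(sum((v - mean) ** 2 for v in y) / (n - 1))
    return -20 * math.log10(s)

data = [18, 21, 19, 22]          # test C from Table 9.17
print(round(sn_ntb(data), 2))    # about 20.78, matching the table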
It should be noted that Taguchi does discuss other S/N measures in some of his
works that have not been widely available in English. An alternate NTB S/N ratio
is available in the computer program ANOVA-TM, which is distributed by Advanced
Systems and Designs, Inc. (ASD) of Farmington Hills, Michigan, and is based on
Taguchi's approach. This S/N ratio is:

NTB II S/N = -10 log(s²) = -20 log(s)

Maximizing this S/N is equivalent to minimizing log(s). Examples using this
S/N ratio will be developed later.
ANALYSIS
The purpose of this section is to:
1. Introduce graphical and numerical analysis of experimental data
2. Present a method for estimating a response value and assigning a confidence
interval for it
3. Discuss the use and interpretation of signal-to-noise (S/N) ratio calculations
GRAPHICAL ANALYSIS
In the example in Section 2, Timothy and Christine calculated and plotted the average
response at each factor level. Since the experimental design they used (an L8) is
orthogonal, the average at each level of a factor is equally impacted by the effect
of the levels of the other factors. This allows the graphical approach to have direct
usage. This example from Section 2 is shown in Table 9.18. The factor level plots
are shown in Figure 9.17.
Factors C1, C2, and C11 clearly have a different response for each of their two
levels. The difference between levels is much smaller for the other factors. If the
goal of the experiment was to identify situations that minimize or maximize the
response, C1, C2 and C11 are important while the others are not.
Graphical analysis is a valid, powerful technique that is especially useful in the
following situations:
1. When computer analysis programs are not available
2. When a quick picture of the experimental results is desired
3. As a visual aid in conjunction with computer analysis
TABLE 9.18
L8 with Test Results

Test       Levels for Each Suspected Factor for Each of 8 Tests
Number     C1   C2   C7   C11   C13   C15   C16   Test Result
1          1    1    1    1     1     1     1     10
2          1    1    1    2     2     2     2     13
3          1    2    2    1     1     2     2     15
4          1    2    2    2     2     1     1     17
5          2    1    2    1     2     1     2     14
6          2    1    2    2     1     2     1     16
7          2    2    1    1     2     2     1     19
8          2    2    1    2     1     1     2     21

Note: The C numbers (e.g., C11, C13) are factor names.
FIGURE 9.17 Plots of averages (higher responses are better). [Average response at levels 1 and 2 of each factor C1, C2, C7, C11, C13, C15, and C16.]
Once the experiment has been set up correctly, the graphical analysis can be
easily used and can point the way to improvements.
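For the L8 of Table 9.18, the level averages behind Figure 9.17 can be generated with a few lines; the array and responses below are exactly those of the table.

L8 = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
]
factors = ["C1", "C2", "C7", "C11", "C13", "C15", "C16"]
response = [10, 13, 15, 17, 14, 16, 19, 21]

for col, name in enumerate(factors):
    for level in (1, 2):
        vals = [r for row, r in zip(L8, response) if row[col] == level]
        print(f"{name} level {level}: average = {sum(vals) / len(vals):.2f}")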
ANALYSIS OF VARIANCE (ANOVA)
As was mentioned earlier, mathematical calculations and detailed discussions will
not be included in this chapter. The interested reader should consult Volume V of
this series or references listed in the Bibliography for rigorous mathematical discus-
sions. The approach given here will focus on the interpretation of the ANOVA
analysis.
ANOVA is a matrix analysis procedure that partitions the total variation mea-
sured in a set of data. These partitions are the portions that are due to the difference
in response between the levels of each factor. The number of degrees of freedom
(df) associated with an experimental setup is also the maximum number of partitions
that can be made. Consider the L8 experiment from section 2 that was illustrated
previously in the graphical analysis section. Table 9.19, which is an ANOVA table,
summarizes the analysis.
The column number shows to what column of the orthogonal array the source
(factor) was assigned. Normally, the column number is not shown in an ANOVA
table. The df column shows the df(s) associated with the factor in the source column.
The SS column contains the sums of squares. The SS is a measure of the spread of
the data due to that factor. The total SS is the sum of the SS due to all of the sources.
The MS or mean square column shows the SS/df for each source. The MS is also
known as the variance.
The row with error in the source column is left blank in this experiment. If
one of the columns had not been assigned or if the experiment had been replicated,
then the unassigned dfs would have been used to estimate error. Error is the non-
repeatability of the experiment with everything held as constant as possible. The
ANOVA technique compares the variability contribution of each factor to the variability
due to error.

TABLE 9.19
ANOVA Table

Column   Source                 df    SS       MS       F Ratio   S'       %
1        C1                     1     28.125   28.125   225       28.000   33.38
2        C2                     1     45.125   45.125   361       45.000   53.65
3        C7                     1*    0.125    0.125
4        C11                    1     10.125   10.125   81        10.000   11.92
5        C13                    1*    0.125    0.125
6        C15                    1*    0.125    0.125
7        C16                    1*    0.125    0.125
         Error (pooled error)   4     0.500    0.125              0.875    1.04
         Total                  7     83.875   11.982             83.875

Note: df = degrees of freedom; MS = mean square; SS = sum of squares.

Factors that do not demonstrate much difference in response over the
levels tested have a variability that is not much different from the error estimate.
The df and SS from these factors are pooled into the error term. Pooling is done by
adding the df and SS into the error df and SS. Pooling the insignificant factors into
the error can provide a better estimate of the error.
Initially, no estimate of error was made in the L8 example because no unassigned
columns or repetitions were present. Because of this, a true estimate of the error
could not be made. However, the purpose of the experiment was to identify the
factors that have a usable difference in response between the levels. In this experi-
ment, the factors with relatively small MS were pooled and called error. Pooling
requires that the experimenter judge which differences are significant from an oper-
ational standpoint. This judgment is based on the prior knowledge of the system
being studied. In the example, factors C7, C13, C15, and C16 have much lower MS
than do the other factors and are pooled to construct an error estimate. The * next
to a df indicates that the df and SS for that factor were pooled into the error term.
The F ratio column contains the ratio of the MS for a source to the MS for the
pooled error. This ratio is used to statistically test whether the variance due to that
factor is significantly different from the error variance. As a quick rule of thumb, if
the F ratio is greater than three, the experimenter should suspect that there is a
significant difference. Dr. Taguchi does not emphasize the use of the F ratio statistical
test in his approach to DOE. A detailed description of the use of the F test can be
found in Box, Hunter, and Hunter (1978), and a practical explanation is included in
Volume V of this series.
In the determination of the SS of a factor, the non-repeatability of the experiment
is still present. The number in the S' column is an attempt to totally remove the
SS due to error and leave the pure SS that is due only to the source factor. The
error MS times the df is subtracted from the SS to leave the pure SS, or S' value, for
a factor. The amount that is subtracted from each non-pooled factor is then added
to the pooled error SS, and the total is entered as the error S'. In this way the total
SS remains the same.
The % column contains the S' value divided by the total SS times 100%. This
gives the percent contribution by that factor to the total variation of the data. This
information can be used directly in prioritizing the factors. In the experiment that
has been discussed, C2 makes the greatest contribution, C1 contributes less, and
C11 contributes still less. It can be argued that the graphical analysis can display
those conclusions quite well. In more complicated experiments with many factors
and factors with a large number of levels, however, the ANOVA table can display
the analysis in a more concise form and quickly lead the experimenter to the most
important factors.
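A minimal sketch of how the sums of squares in Table 9.19 are obtained for a two-level orthogonal array, using SS = (sum at level 1 - sum at level 2)²/N for each column, follows. The pooling of the four small factors into error mirrors the judgment described above; which factors to pool is the experimenter's call, not the program's.

L8 = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
]
factors = ["C1", "C2", "C7", "C11", "C13", "C15", "C16"]
response = [10, 13, 15, 17, 14, 16, 19, 21]
N = len(response)

ss = {}
for col, name in enumerate(factors):
    t1 = sum(r for row, r in zip(L8, response) if row[col] == 1)
    t2 = sum(r for row, r in zip(L8, response) if row[col] == 2)
    ss[name] = (t1 - t2) ** 2 / N          # 1 df per two-level column

pooled = ["C7", "C13", "C15", "C16"]       # judged insignificant, pooled into error
error_ms = sum(ss[f] for f in pooled) / len(pooled)
for name in ("C1", "C2", "C11"):
    f_ratio = ss[name] / error_ms
    s_prime = ss[name] - error_ms          # "pure" sum of squares
    print(f"{name}: SS={ss[name]:.3f}  F={f_ratio:.0f}  S'={s_prime:.3f}")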
ESTIMATION AT THE OPTIMUM LEVEL
The ANOVA table is used to identify important factors. The experimenter refers to
the average response at each level of the important factors to choose the best
combination of factor levels. All of the best levels can be combined to estimate the
responses at the best factor combination. Consider the case where the second level
of factor A (A2), the third level of factor B (B3), the first level of factor C (C1),
and the interaction of C1 and D1 are determined to be the best combination of
factors. An estimate of the response at these conditions can be made using the
equation:

μ_opt = T̄ + (Ā2 - T̄) + (B̄3 - T̄) + (C̄1 - T̄) + [(C̄1D̄1 - T̄) - (C̄1 - T̄) - (D̄1 - T̄)]

where T̄ = the average response of all the data; Ā2 = the average of the data run at
A2; B̄3 = the average of the data run at B3; C̄1 = the average of the data run at C1;
and D̄1 = the average of the data run at D1.

Each factor that is a significant contributor appears in a manner similar to A2,
B3, and C1 above. The term in brackets [ ] addresses the optimum level of the C×D
interaction and is an example of the way in which interactions are handled.

CONFIDENCE INTERVAL AROUND THE ESTIMATION

A 90% confidence interval can be calculated for confirmatory tests using the equation:

CI = μ_opt ± sqrt[ F(1, dfe, .05) × MS_e × (1/n_e + 1/n_r) ]

where F(1, dfe, .05) is a value from an F statistical table. The F values are based on two
different degrees of freedom and the desired confidence. In this case, the first degree
of freedom is always 1 and the second is the degree of freedom of the pooled error
(dfe). The value .05 is used since .05 in each direction (±) sums to a 10% risk, which
corresponds to 90% confidence. MS_e is the mean square of the pooled error term; n_r
is the number of confirmatory tests to be run; and n_e is the effective number of
replications, calculated as follows:

n_e = (total number of experiments) / (sum of the dfs of all the factors and
      interactions that are significant and appear in the equation, plus 1 df for the mean)

For the μ_opt that was just considered, n_e is calculated as follows:

Source   df
A        1
B        2
C        1
C×D      1
Mean     1
Total    6
Consider that an L36 was run with no repetitions.
n_e = 36/6 = 6.0
INTERPRETATION AND USE
The confidence interval about the estimated value is used as a check when verification
runs are made. If the average of the verification runs does not fall within the interval,
there is strong reason to believe that a very important factor may have been left out
of the experiment.
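A sketch of the estimate-and-confidence-interval calculation is given below. It uses scipy for the F value, omits the interaction term of the equation above for brevity, and the numerical inputs (grand mean, level averages, MSe, number of confirmatory runs) are hypothetical placeholders rather than values from the text.

from scipy.stats import f

# Hypothetical values for illustration only.
T_bar = 15.6                        # grand average of all data
A2, B3, C1 = 18.0, 16.4, 14.9       # level averages of the significant factors
mu_opt = T_bar + (A2 - T_bar) + (B3 - T_bar) + (C1 - T_bar)

dfe, MSe = 6, 0.125                 # pooled error df and mean square (assumed)
n_e = 36 / 6                        # effective number of replications (L36, 5 dfs + mean)
n_r = 3                             # number of confirmatory runs planned

F_val = f.ppf(0.95, 1, dfe)         # F(1, dfe) critical value at .05
half_width = (F_val * MSe * (1 / n_e + 1 / n_r)) ** 0.5
print(f"predicted response {mu_opt:.2f} +/- {half_width:.2f} (90% CI)")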
ANOVA DECOMPOSITION OF MULTI-LEVEL FACTORS
When a factor is tested at two levels, an estimate of the linear change in response
between the two levels can be made. When a factor is tested at more than two levels,
more complex relationships must be investigated. With a three-level factor, both the
linear and quadratic relationships can be investigated. These relationships are dem-
onstrated in Figure 9.18.
This relationship is important to consider even when the factor levels are not
continuous (e.g., different machines or suppliers). Consider the situation in
Figure 9.19. The dotted line is the linear response and indicates no significant
difference. However, Supplier 2 is different from Suppliers 1 and 3. This difference
can be found only if the quadratic relationship is considered.
The number of higher order relationships that can be investigated is determined
by the degrees of freedom of the source (see Table 9.20).
FIGURE 9.18 ANOVA decomposition of multi-level factors. [Response versus factor level for three cases: linear only, quadratic only, and both linear and quadratic.]

FIGURE 9.19 Factors not linear. [Response versus supplier (1, 2, 3); the dotted linear fit is flat while Supplier 2 stands apart from Suppliers 1 and 3.]
In the ANOVA table, the number of relationships that should be investigated is
the same as the df. The total SS for a factor is decomposed into parts with unit dfs.
These parts are the linear, quadratic, cubic, etc. parts of the relationship. Each part
can then be treated separately, and the parts with small MS are pooled into the error
term. The type of relationship that remains as significant can guide the experimenter
in investigating the level averages.
S/N CALCULATIONS AND INTERPRETATIONS
Control factors and noise factors were introduced in Section 3. Control factors appear
in an orthogonal array called an inner array. Noise factors that represent the uncon-
trolled or uncontrollable environment are entered into a separate array called an
outer array. The following example of an L8 inner (control) array with an L4 outer
(noise) array was first presented in Section 3. Actual responses and factor names
are added here (see Table 9.21) in the development of the example.
This type of experimental setup and analysis evaluates each of the control factor
choices (L8 array factors) over the expected range of the uncontrollable environment
TABLE 9.20
Higher Order Relationships

Levels of a Factor   df   Relationships
2                    1    Linear
3                    2    Linear, quadratic
4                    3    Linear, quadratic, cubic
5                    4    Linear, quadratic, cubic, quartic
etc.
TABLE 9.21
Inner OA (L8) with Outer OA (L4) and Test Results

                                     Z:   1    2    2    1
                                     Y:   1    2    1    2
                                     X:   1    1    2    2
Test   A   B   C   D   E   F   G
No.    1   2   3   4   5   6   7     Test Results
1      1   1   1   1   1   1   1     25   27   30   26
2      1   1   1   2   2   2   2     25   27   21   19
3      1   2   2   1   1   2   2     18   21   19   22
4      1   2   2   2   2   1   1     26   23   27   28
5      2   1   2   1   2   1   2     15   11   12   14
6      2   1   2   2   1   2   1     18   15   17   18
7      2   2   1   1   2   2   1     20   17   21   18
8      2   2   1   2   1   1   2     19   20   20   17
(L4 array factors). This assures that the optimal factor levels from the L8 array will
be robust. An S/N can be calculated for each test situation. These S/N ratios are
then used in an ANOVA to identify the situation that maximizes the S/N.
Smaller-the-Better (STB)
The following S/N ratios are calculated for the STB situation using the equations
given in Section 4 and assuming that the optimum value is zero and that the responses
shown represent deviations from that target:
Test Number   STB S/N
1             -28.65
2             -27.32
3             -26.05
4             -28.32
5             -22.34
6             -24.63
7             -25.61
8             -25.59

The S/N ratios for the testing situations are then analyzed using an ANOVA table.
The STB ANOVA table for the example is shown in Table 9.22.

TABLE 9.22
The STB ANOVA Table

Source                 df    SS       MS       F Ratio   S'       %
A                      1     18.487   18.487   84.803    18.269   61.53
B                      1     0.864    0.864    3.963     0.646    2.18
C                      1     4.232    4.232    19.413    4.014    13.53
D                      1     1.295    1.295    5.940     1.077    3.63
E                      1*    0.223    0.223
F                      1*    0.213    0.213
G                      1     4.362    4.362    20.009    4.144    13.96
Error (pooled error)   2     0.436    0.218              1.526    5.14
Total                  7     29.676   4.239

The ANOVA table indicates that factors A, G, and C are the most significant contributors.
Inspection of the level averages shows that the highest S/N values (least negative), in order
of contribution, occur at A2, G2, C2, D1, and B1. Estimation of the S/N at the optimal
levels can be made from the S/N level averages using the technique discussed
earlier in this section. Likewise, estimation of the raw data average response at
the optimal level can be made from the response level averages at the optimal S/N
factor levels.
Larger-the-Better (LTB)
The same data will be used to demonstrate the LTB notation. In this case, the
optimum value is infinity. Examples of this include strength or fuel economy. The
following S/N ratios are calculated using the LTB equation given in Section 4:

Test Number   LTB S/N
1             28.57
2             26.98
3             25.94
4             28.23
5             22.08
6             24.54
7             25.48
8             25.52

The S/N ratios for the testing situations are then analyzed using an ANOVA table.
The LTB ANOVA table for the example is shown in Table 9.23.

TABLE 9.23
The LTB ANOVA Table

Source                 df    SS       MS       F Ratio   S'       %
A                      1     18.292   18.292   55.442    17.966   58.99
B                      1     1.121    1.121    3.397     0.791    2.60
C                      1     4.160    4.160    12.605    3.830    12.58
D                      1     1.271    1.271    3.852     0.941    3.09
E                      1*    0.396    0.396
F                      1*    0.264    0.264
G                      1     4.947    4.947    14.991    4.617    15.16
Error (pooled error)   2     0.660    0.330              2.310    7.59
Total                  7     30.454   4.351

Inspection of the ANOVA table and the level averages shows that the highest S/N values
occur at A1, G1, C1, D2, and B2. Interpretation of the LTB analysis is similar to that of
the STB analysis.

Nominal the Best (NTB)

Analysis of the NTB experiment is a two-part process. Again, the same data will be
used to illustrate this approach. The target value will be assumed to be 24 in this case.
The S/N values are analyzed. The following S/N ratios are calculated:

Test Number   NTB S/N
1             21.93
2             15.96
3             20.78
4             21.60
5             17.03
6             21.59
7             20.33
8             22.56

The S/N ratios for the testing situations are then analyzed using an ANOVA table.
The NTB ANOVA table for the example is shown in Table 9.24.

TABLE 9.24
The NTB ANOVA Table

Source                 df    SS       MS       F Ratio   S'       %
A                      1*    0.193    0.193
B                      1     9.618    9.618    54.339    9.441    23.10
C                      1*    0.006    0.006
D                      1*    0.333    0.333
E                      1     17.816   17.816   100.655   17.639   43.16
F                      1     2.477    2.477    13.994    2.300    5.63
G                      1     10.424   10.424   58.893    10.247   25.07
Error (pooled error)   3     0.532    0.177              1.240    3.03
Total                  7     40.867   5.838

The ANOVA table and the level averages indicate that E1, G1, B2, and F1 are the
optimal choices from an S/N standpoint. These are the factor choices that
should result in the minimum variance of the response.

The ANOVA analysis and level averages of the raw data are then investigated
to determine if there are other factors that have significantly different
responses at their different levels but are not significant in the S/N analysis.
These factors can be used to tune the average response to the desired value
but do not appreciably affect the variability of the response. The ANOVA
table of the raw data is shown in Table 9.25. From this ANOVA table, it
can be seen that the significant contributors to the observed variability of
the data averages are the factors A, G, C, D, and F. This can be combined
with the S/N analysis and interpreted as follows:

a. Factors that influence variability only: B, E
b. Factors that influence both variability and average response: G
c. Factors that influence the average only: A, C
d. Factors that have little or no influence on either variability or average response: D, F
The results from this experiment indicate that factors B, E, and G should be set
to the levels with the highest S/N. Factor G should be set to the level with the highest
S/N rather than using it to tune the average, since its relative contribution to S/N
variability is greater than its contribution to the variability of raw data. This decision
might change based on cost implications and the ability to use factors A and C to
tune the average response. Factors A and C should be investigated to determine if
they can be set to levels that will allow the target value of 24 to be attained. This
may be possible with factors that have continuous values. Factors with discrete
choices such as supplier or machine number cannot be interpolated. Factors D and
F should be set to the levels that are least expensive. A series of confirmation runs
should be made when the optimum levels have been determined. The average
response and S/N should be compared to the predicted values.

TABLE 9.25
Raw Data ANOVA Table

Source           df    SS        MS        F Ratio   S'        %
A                1     392.000   392.000   84.940    387.385   53.95
B                1*    8.000     8.000
C                1     72.000    72.000    15.601    67.385    9.39
D                1     18.000    18.000    3.900     13.385    1.86
E                1*    2.000     2.000
F                1     18.000    18.000    3.900     13.385    1.86
G                1     98.000    98.000    21.235    93.385    13.01
X                1*    0.125     0.125
Y                1*    3.125     3.125
Z                1*    0.000     0.000
Error            21    106.750   5.083
(pooled error)   26    120.000   4.615               143.075   19.93
Total            31    718.000   23.161

COMBINATION DESIGN

Combination design was mentioned in Section 3 as a way of assigning two two-
level factors to a single three-level column. This is done by assigning three of the
four combinations of the two two-level factors to the three-level factor and not testing
the fourth combination. As an example, two two-level factors are assigned to a three-
level column as in Table 9.26.

TABLE 9.26
Combination Design

Factor A   Factor B   Three-Level Column: Combined Factor (A.B)
1          1          1
2          1          2
2          2          3

Note that the combination A1B2 is not tested. In this approach, information about
the A.B interaction is not available, and many ANOVA computer programs are not
able to break apart the effect of A and B.

The sum of squares (SS) in the ANOVA table that is due to factor A.B contains
both the SS due to factor A and the SS due to factor B. These two SSs are not
additive, since the factors A and B are not orthogonal. This means:

SS_A.B ≠ SS_A + SS_B

The SS of A and B can be calculated separately as follows:

SS_A = (T_AB1 - T_AB2)² / (2r)
SS_B = (T_AB2 - T_AB3)² / (2r)

where T_AB1 = the sum of all responses run at the first level of A.B; T_AB2 = the sum
of all responses run at the second level of A.B; T_AB3 = the sum of all responses run
at the third level of A.B; and r = the number of data points run at each level of A.B.

The MS of A and B then can be separately compared to the error MS to determine
if either or both factors are significant. The df for both A and B is 1. If one of the
factors is significant and the other is not, the ANOVA should be rerun with the
significant factor shown with a dummy treatment and the other factor excluded from
the analysis.

EXAMPLE

The following factors will be evaluated using an L9 orthogonal array:

Factor   Number of Levels
A        2
B        2
C        3
D        3
E        3

A and B will be combined into a single three-level column. The test array and results
are shown in Table 9.27.
TABLE 9.27
L9 OA with Test Results

A   B   A.B   C   D   E   Test Results   Sum of the Test Results
1   1   1     1   1   1    7   10        17
1   1   1     2   2   2    3    6         9
1   1   1     3   3   3    5    3         8
2   1   2     1   2   3   22   18        40
2   1   2     2   3   1   13   15        28
2   1   2     3   1   2    9    8        17
2   2   3     1   3   2   12   16        28
2   2   3     2   1   3   12   10        22
2   2   3     3   2   1   15   12        27

The sum of the data at each level of A.B is: for A.B = 1, the sum is 17 + 9 + 8 = 34;
for A.B = 2, the sum is 40 + 28 + 17 = 85; for A.B = 3, the sum is 28 + 22 + 27 = 77.
With r = 6 data points at each level of A.B:

SS_A = (34 - 85)² / (2 × 6) = 216.75
SS_B = (85 - 77)² / (2 × 6) = 5.33

The ANOVA table for the data is shown in Table 9.28. The decomposed SS for
A and B are shown in parentheses and are not added into the total SS.

TABLE 9.28
ANOVA Table

Source                 df     SS          MS        F Ratio   S'        %
A.B                    2      250.778     125.389   31.347    242.778   53.50
(A)                    (1)    (216.750)   216.750   54.188
(B)                    (1)    (5.333)     5.333     1.333
C                      2      100.778     50.389    12.597    92.778    20.45
D                      2      33.778      16.889    4.222     25.778    5.68
E                      2      32.444      16.222    4.056     24.444    5.39
Error (pooled error)   9      36.000      4.000               68.000    14.99
Total                  17     453.778     26.693
The F ratio for factor B indicates that the effect of the change in factor B on the
response is insignificant. Factor B is excluded from the analysis and factor A is
analyzed with a dummy treatment. The ANOVA table for this analysis is shown in
Table 9.29. The analysis continues using the techniques described in this section.
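The decomposition of the combined column's SS can be verified directly from the level sums given above (34, 85, and 77, with r = 6); a minimal sketch:

def decompose_combined_column(t_ab1, t_ab2, t_ab3, r):
    """Separate SS for factors A and B hidden in a combined three-level column."""
    ss_a = (t_ab1 - t_ab2) ** 2 / (2 * r)
    ss_b = (t_ab2 - t_ab3) ** 2 / (2 * r)
    return ss_a, ss_b

ss_a, ss_b = decompose_combined_column(34, 85, 77, r=6)
print(f"SS_A = {ss_a:.2f}, SS_B = {ss_b:.2f}")   # 216.75 and 5.33, as in Table 9.28

The two values match the (A) and (B) rows of Table 9.28.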
MISCELLANEOUS THOUGHTS
The purpose of most DOEs is to predict what the response will be at the optimum
condition. Confirmatory tests should be run to assure the experimenter that the
projected results are valid. Sometimes, the results of the confirmatory tests are
significantly different from the projected results. This can be due to one or more of
the following:
There was an error in the basic assumptions made in setting up the
experiment.
Not all of the important factors were controlled in the experiment.
The factors interacted in a manner that was not accounted for.
The response that was measured was not the proper response or was only
a symptom of something more basic (see Section 2).
An important noise factor was not included in the experiment (e.g., the
experimental tests were run on sunny days while the confirmatory tests
were run on a rainy day).
The experimental test equipment is not capable of providing consistent,
repeatable test results.
A mistake was made in setting up one or more of the experimental tests.
The experimenter who is faced with data that does not support the prediction is
forced to ask which of these problems affected the results. It is important that all
of these problems be considered and investigated, if appropriate. If two or more of
these problems coexisted, correcting only one problem may not improve the exper-
imental results.
Even though it may seem that the experiment was a failure, that is not necessarily
true. Experimentation should be considered an organized approach to uncovering a
TABLE 9.29
Second Run of ANOVA

Source                 df    SS        MS        F Ratio   S'        %
A                      1     245.444   245.444   59.386    241.311   53.18
C                      2     100.778   50.389    12.192    92.512    20.39
D                      2     33.778    16.889    4.086     25.512    5.62
E                      2     32.444    16.222    3.925     24.178    5.33
Error (pooled error)   10    41.334    4.133               70.265    15.48
Total                  17    453.778   26.693
working knowledge about a situation. The failed experiment does provide new
knowledge about the situation that should be used in setting up the next iteration of
experimental testing.
The prior statement may sound too idealistic for the real world where deadlines
are very important. A failed experiment may cause some people to doubt the use-
fulness of the DOE approach and extol the virtues of traditional one-factor-at-a-time
testing. However, all of the problems listed above that could cause a DOE to fail
will also cause a one-factor-at-a-time experiment to fail. In DOE, the problem will
be found fairly early since relatively few tests are run. In one-factor-at-a-time testing,
the problem may not surface until many tests have been run, or the problem may
not even be identified in the testing program. In this case, the problem may not show
up until production or field use.
The importance of meeting real-world deadlines makes the planning stage of
the experiment critical. Proper planning, including consideration of the experience
and knowledge of experts, will enable the experimenter to avoid many of the possible
problems. Deadlines are never a good excuse for not taking the time to adequately
plan an experiment.
AN EXAMPLE
The data used to demonstrate the S/N calculations in this section will be analyzed
here using the NTB II approach, S/N = −10 log(s²) = −20 log(s). This approach was
discussed earlier in this chapter. The data set is repeated in Table 9.30.
The S/N ratios for the testing situations are then analyzed using an ANOVA table.
The NTB II ANOVA table for the example is shown in Table 9.31. To help interpret
the ANOVA table, the level standard deviation averages and the level S/N averages
are shown for the significant factors in Table 9.32.
To give a visual impact of the spread of the data and what the above table really
means, it would be wise to plot the data for each factor level. The plots of the average
standard deviation by factor level are shown in Figure 9.20.
TABLE 9.30
L8 with Test Results and S/N Values

Outer array:  Z: 1 2 2 1   Y: 1 2 1 2   X: 1 1 2 2

Test   A  B  C  D  E  F  G     Test Results         s      −20 log(s)
No.    1  2  3  4  5  6  7
1      1  1  1  1  1  1  1     25  27  30  26       2.16    −6.690
2      1  1  1  2  2  2  2     25  27  21  19       3.65   −11.249
3      1  2  2  1  1  2  2     18  21  19  22       1.83    −5.229
4      1  2  2  2  2  1  1     26  23  27  28       2.16    −6.690
5      2  1  2  1  2  1  2     15  11  12  14       1.83    −5.229
6      2  1  2  2  1  2  1     18  15  17  18       1.41    −3.010
7      2  2  1  1  2  2  1     20  17  21  18       1.83    −5.229
8      2  2  1  2  1  1  2     19  20  20  17       1.41    −3.010
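For readers who want to verify the s and S/N columns of Table 9.30 by machine, the short Python sketch below reproduces the calculation under the definition S/N = −20 log(s). The run data are copied from the table; the function and variable names are illustrative only and are not part of the original text.

    # Sketch: NTB-II S/N for each run of Table 9.30 (assumed helper names)
    import math

    runs = [
        [25, 27, 30, 26],   # test 1
        [25, 27, 21, 19],   # test 2
        [18, 21, 19, 22],   # test 3
        [26, 23, 27, 28],   # test 4
        [15, 11, 12, 14],   # test 5
        [18, 15, 17, 18],   # test 6
        [20, 17, 21, 18],   # test 7
        [19, 20, 20, 17],   # test 8
    ]

    def sample_std(values):
        # sample standard deviation with an n - 1 denominator, as in the table
        n = len(values)
        mean = sum(values) / n
        return math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))

    for i, data in enumerate(runs, start=1):
        s = sample_std(data)
        sn = -20 * math.log10(s)      # NTB-II S/N in decibels
        print(f"test {i}: s = {s:.2f}, S/N = {sn:.3f}")

For test 1 this should print approximately s = 2.16 and S/N = −6.690, agreeing with the table.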
The ANOVA table and the level average standard deviations indicate that
A2 B2 C2 E1 are the optimal choices from an NTB II S/N standpoint. The analysis of
the raw data remains the same as shown in the chapter. The average level of the
response should be targeted using the results of the raw data analysis. This is true
regardless of whether the goal is as small as possible, as large as possible, or to meet
a specific value. The variance should be minimized by maximizing the NTB II S/N.
The experimenter must make the trade-off between the choice of factor levels that
adjust the response average and the choice of factor levels that minimize the variance
of the response.
A comparison of the results of the two methods shows clear differences. As an
example, for the situation where a specific value is targeted (NTB), the factor level
choices are: for NTB, B2 E1 G1 to minimize variability, with A and C set to achieve
the target; for NTB II, B2 E1 to minimize variability, with G set to achieve the target.
If the target is attainable using factor G, use A2 C2 to minimize variability; otherwise,
set C and/or A to achieve the target.
TABLE 9.31
ANOVA Table for Data from Table 9.30

Source            df    SS       MS       F Ratio   S'       %
A                  1    22.379   22.379   24.746    21.474   44.90
B                  1     4.531    4.531    5.010     3.627    7.58
C                  1     4.531    4.531    5.010     3.627    7.58
D                  1*    0.313    0.313
E                  1    13.670   13.670   15.117    12.766   26.69
F                  1*    1.200    1.200
G                  1*    1.200    1.200
Error (pooled)     3     2.713    0.904              6.330   13.24
Total              7    47.823    6.832
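The sums of squares in Table 9.31 can be spot-checked with the usual two-level formula SS = (sum at level 1 − sum at level 2)² / N. The sketch below, with assumed variable names, applies it to the S/N values of Table 9.30 for factors A and E.

    # Sketch: sum of squares for a two-level factor in an L8 (assumed names)
    sn = [-6.690, -11.249, -5.229, -6.690, -5.229, -3.010, -5.229, -3.010]

    factor_A = [1, 1, 1, 1, 2, 2, 2, 2]   # L8 column 1 levels
    factor_E = [1, 2, 1, 2, 2, 1, 2, 1]   # L8 column 5 levels

    def ss_two_level(levels, response):
        n = len(response)
        s1 = sum(r for lv, r in zip(levels, response) if lv == 1)
        s2 = sum(r for lv, r in zip(levels, response) if lv == 2)
        return (s1 - s2) ** 2 / n

    print(f"SS_A = {ss_two_level(factor_A, sn):.3f}")   # about 22.379
    print(f"SS_E = {ss_two_level(factor_E, sn):.3f}")   # about 13.670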
TABLE 9.32
Significant Figures from Table 9.31

Factor   Level   Average Standard Deviation   NTB II S/N
A        1       2.36                         −7.465
         2       1.61                         −4.120
B        1       2.12                         −6.545
         2       1.79                         −5.039
C        1       2.12                         −6.545
         2       1.79                         −5.039
E        1       1.67                         −4.485
         2       2.26                         −7.099
There is no complete agreement among statisticians and DOE practitioners as
to which approach gives better results. As a general rule, the reader is encouraged to:
1. Plot the data including raw and/or transformed values, level averages and
standard deviations, and any other information that seems appropriate.
One picture is worth a thousand words.
2. Analyze the data using the appropriate analysis techniques.
3. Compare the results to the data plots in order to determine which set of
results makes the most sense. Perform this comparison fairly and resist
the temptation to choose the results solely on whether they support con-
venient conclusions.
4. Run confirmation tests.
DOE is a powerful tool that can help the experimenter get the most out of scarce
testing resources. However, as with any powerful tool, care must be taken to under-
stand how to use the tool and how to interpret the results.
ANALYSIS OF CLASSIFIED DATA
The purpose of this section is to:
1. Discuss the classified attribute analysis and classified variable analysis
approaches to analyzing classified responses.
2. Present examples of how these techniques are used.
FIGURE 9.20 Plots of the average standard deviation by factor level (standard deviation at levels 1 and 2 for factors A, B, C, and E).
CLASSIFIED RESPONSES
Some experimental responses cannot be measured on a continuous scale although
they can be divided into sequential classes. Examples include appearance and performance
ratings. In these situations, three to five rating classes are generally the
optimum number because this number allows major differences in the responses to
be identified and yet does not require the rater to identify differences that are too
subtle. Two related techniques are used to analyze classified responses:
1. Classified attribute analysis is used when the total number of items rated
is the same for every test matrix setup.
2. Classified variable analysis is used when the total number of items rated
is not the same for every test matrix setup.
Three to five responses at each experimental setup are recommended to give a
good evaluation of the class distribution of responses at that setup. As with continuous
measurements, more responses at each setup allow smaller differences to be
identified.
CLASSIFIED ATTRIBUTE ANALYSIS
This technique converts the observed frequency in each class into a cumulative
frequency for the classes. As an example, if there are three classes, the observed
and cumulative frequencies might be as shown in Table 9.33.
It is assumed that the user will use a computer program to analyze the classified
data. The specific input format will depend on the computer program used. The
mathematical derivations and philosophies of this approach will not be presented
here. For more information see Volume V of this series as well as Taguchi (1987)
and Wu and Moore (1985).
EXAMPLE
Three grades are used to evaluate paint appearance of a product. They are "Bad,"
"OK," and "Good." Seven factors (A through G), each at two levels, are evaluated
to determine the combination of factor levels that optimizes paint appearance. Five
products are evaluated at each testing situation in an L8 orthogonal array. Test results
are shown in Table 9.34.
TABLE 9.33
Observed Versus Cumulative Frequency

            Observed Frequency   Cumulative Frequency
Class I     2                    2
Class II    1                    3
Class III   1                    4
The ANOVA analysis for this set of data is shown in Table 9.35. Note that the
degrees of freedom are calculated differently from the non-classified situation. The
df of each source is:
df = (the number of levels of that factor − 1) * (the number of classes − 1)
In this example, the number of levels of each factor is two and the number of classes
is three. For each factor, df = (2 − 1) * (3 − 1) = 2.
The total df = (the total number of rated items − 1) * (the number of classes − 1).
Thus, the total df for this example is df = (40 − 1) * (3 − 1) = 78.
TABLE 9.34
Attribute Test Setup and Results

                              Frequency in Each Grade
A   B   C   D   E   F   G     Bad   OK   Good
1   1   1   1   1   1   1     2     3    0
1   1   1   2   2   2   2     3     2    0
1   2   2   1   1   2   2     4     1    0
1   2   2   2   2   1   1     0     2    3
2   1   2   1   2   1   2     0     4    1
2   1   2   2   1   2   1     1     3    1
2   2   1   1   2   2   1     0     3    2
2   2   1   2   1   1   2     0     1    4
TABLE 9.35
ANOVA Table (for Cumulative Frequency)

Source            df    SS       MS      F Ratio   S'       %
A                  2    11.668   5.834   7.820     10.179   12.72
B                  2     6.678   3.339   4.476      5.186    6.48
C                  2*    0.125   0.063
D                  2*    3.668   1.834
E                  2*    2.259   1.130
F                  2     7.935   3.968   5.319      6.443    8.05
G                  2*    2.259   1.130
Error             64    45.409   0.710
(pooled error)    72    53.720   0.746             58.196   72.75
Total             78    80.000   1.026
The error df is the total df minus the df of each of the factors.
From the ANOVA table, factors A, B, and F are identified as significant. The
effects of these factors are shown in Table 9.36 and Figure 9.21.
Although interpretation and use of the ANOVA table in classified attribute
analysis is the same as for the non-classified situation, a significant difference does
TABLE 9.36
The Effect of the Significant Factors

        Observed Frequency   % Rate of Occurrence (R.O.)   Cumulative Frequency   Cumulative % R.O.
        Bad   OK   Good      Bad   OK   Good               Bad   OK   Good        Bad   OK   Good
A1      9     8    3         45    40   15                 9     17   20          45    85   100
A2      1     11   8         5     55   40                 1     12   20          5     60   100
B1      6     12   2         30    60   10                 6     18   20          30    90   100
B2      4     7    9         20    35   45                 4     11   20          20    55   100
F1      2     10   8         10    50   40                 2     12   20          10    60   100
F2      8     9    3         40    45   15                 8     17   20          40    85   100
Total   10    19   11                                                             25    73   100
FIGURE 9.21 Factor effects: cumulative rate of occurrence (%) for the Bad, OK, and Good grades, plotted by factor level (A-1, A-2, B-1, B-2, F-1, F-2).
exist in estimating the cumulative rate of occurrence for each class under the optimum
condition.
Percentages near 0% or 100% are not additive. The cumulative rates of occurrence
can be transformed using the omega method to obtain values that are additive. In
the omega method, the cumulative percentage (p) is transformed to a new value (Ω)
as follows:
Ω = −10 log10(1/p − 1)
[The units of Ω are decibels (db).]
Using this transformation, the estimated cumulative rate of occurrence for each
class at the optimum condition (A2 B2 F1) is calculated as
db of estimate = (db of A2 − db of T) + (db of B2 − db of T) + (db of F1 − db of T) + db of T
where T denotes the overall (total) rate. The estimated cumulative rate of occurrence
for each class for the optimum condition is:
Class 1
FIGURE 9.22 Factor effects: cumulative rate of occurrence (%) for Classes 1, 2, and 3, plotted by factor level (A-1 through A-3, B-1 through B-3, C-1 through C-3) for the door closing effort example.
db of estimate = (db of 5% − db of 25%) + (db of 20% − db of 25%) + (db of 10% − db of 25%) + db of 25%
= (−12.79 − (−4.77)) + (−6.02 − (−4.77)) + (−9.54 − (−4.77)) + (−4.77)
= −18.81 db, which converts back to a cumulative rate of occurrence of approximately 1%.
Class 2
db of estimate = (db of 60% − db of 73%) + (db of 55% − db of 73%) + (db of 60% − db of 73%) + db of 73%
= (1.76 − 4.32) + (0.87 − 4.32) + (1.76 − 4.32) + 4.32
= −4.25 db, which converts back to a cumulative rate of occurrence of approximately 27%.
These results are summarized in Table 9.37.
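A hedged Python sketch of the omega-method arithmetic is given below; the db() and inverse_db() helpers are assumptions introduced only for illustration and are not part of the original text. The percentages are the cumulative rates of Table 9.36 for the Bad class.

    # Sketch: omega-method prediction for the cumulative Bad rate at A2 B2 F1
    import math

    def db(p):
        # omega transform of a cumulative proportion p (0 < p < 1), in decibels
        return -10 * math.log10(1 / p - 1)

    def inverse_db(omega):
        # convert a db value back to a proportion
        return 1 / (1 + 10 ** (-omega / 10))

    overall = 0.25              # cumulative Bad rate over all runs
    a2, b2, f1 = 0.05, 0.20, 0.10

    estimate_db = db(a2) + db(b2) + db(f1) - 2 * db(overall)
    print(f"estimate = {estimate_db:.2f} db")            # about -18.8 db
    print(f"rate     = {inverse_db(estimate_db):.3f}")   # about 0.01, i.e., 1%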
CLASSIFIED VARIABLE ANALYSIS
Classified variable analysis is used when the number of items evaluated is not the
same for all test matrix setups. As with classified attribute analysis, the computer
analyzes the cumulative frequencies.
EXAMPLE
Four factors (A, B, C and D) are suspected of influencing door closing efforts for a
particular car model. An experiment was set up that evaluated each of these factors
at three levels. An L9 orthogonal array was used to evaluate the factor levels. Door
closing effort ratings were made by a group of typical customers. Each customer
was asked to evaluate the doors on a scale of one to three as follows:
The experimental setup and test results are shown in Table 9.38 and Figure 9.22.
The ANOVA analysis for this set of data is shown in Table 9.39.
From the ANOVA table, factors A, B and C are identified as significant. The
effects of these factors are shown in Table 9.40.
TABLE 9.37
Rate of Occurrence at the Optimum Settings

Class   Cumulative Rate of Occurrence   Rate of Occurrence
Bad     1%                              1%
OK      27%                             26%
Good    100%                            73%

Class   Description of Effort
1       Unacceptable
2       Barely acceptable
3       Very good feel
The choice of the optimum levels is clear for factors A and B. A2 and B1 are the
best choices. Two different choices are possible for factor C, depending on the overall
goal of the design. If the goal is to minimize the occurrence of unacceptable efforts,
C1 is the best choice. If the goal is to maximize the number of customer ratings of
very good, then C2 is the best choice. For this example, C1 will be chosen as the
preferred factor setting. The estimated rate of occurrence for each class for the optimum
setting, A2 B1 C1, can be calculated using the omega method. The estimated rates are
shown in Table 9.41. The df for the factors are calculated in the same way as with
classified attribute analysis, i.e., df = (the number of levels of that factor − 1) * (the
number of classes − 1).
In classified variable analysis, the total number of items evaluated at each condition
is not equal. To normalize these sample sizes, percentages are analyzed and the
sample size for each test setup becomes 100 (for 100%). The total df is (the total of
the sample sizes − 1) * (the number of classes − 1). For this example, the total df is
df = (900 − 1) * (3 − 1) = 1798.
TABLE 9.38
Door Closing Effort: Test Setup and Results

                                Number of Ratings   Class % Rate      Class Cumulative
                                by Class            of Occurrence     Frequency (%)
A   B   C   D     Ratings       1    2    3         1    2    3       1     2     3
1   1   1   1     5             1    3    1         20   60   20      20    80    100
1   2   2   2     4             2    1    1         50   25   25      50    75    100
1   3   3   3     5             2    3    0         40   60   0       40    100   100
2   1   2   3     4             0    0    4         0    0    100     0     0     100
2   2   3   1     4             0    1    3         0    25   75      0     25    100
2   3   1   2     4             0    1    3         0    25   75      0     25    100
3   1   3   2     5             3    2    0         60   40   0       60    100   100
3   2   1   3     5             4    1    0         80   20   0       80    100   100
3   3   2   1     4             3    1    0         75   25   0       75    100   100
TABLE 9.39
ANOVA Table for Door Closing Effort

Source            df      SS         MS        F Ratio    S'        %
A                  4      871.296    217.824   447.277    869.348   48.30
B                  4       34.404      8.601    17.661     32.456    1.80
C                  4       25.125      6.296    12.928     23.234    1.29
D                  4*       4.827      1.207
Error           1782      864.291      0.485
(pooled error)  1786      869.118      0.487              874.962   48.61
Total           1798     1800.000      1.001
The error df is the total df minus the df of each of the factors.
DISCUSSION OF THE DEGREES OF FREEDOM
In both classified attribute analysis and classified variable analysis, the total degrees
of freedom are much greater than the number of items evaluated. The interpretation
of the F ratios and the calculation of a confidence interval are complicated by the
large number of degrees of freedom and will not be addressed here. The analysis
techniques for classified responses are not as completely developed as are the
techniques for the analysis of continuous data. In Dr. Taguchi's approach, the emphasis
is on using the percent contribution to prioritize alternative choices. Although
better statistical techniques may be developed to handle classified data, classified
attribute and classified variable analyses can be used to identify the large contributors
to variation in classified responses.
TABLE 9.40
The Effects of the Door Closing Effort

Factor       % Rate of Occurrence            Cumulative % Rate of Occurrence
& Level      Class 1   Class 2   Class 3     Class 1   Class 2   Class 3
A1           36.7      48.3      15.0        36.7       85.0     100
A2            0        16.7      83.3         0         16.7     100
A3           71.7      28.3       0          71.7      100.0     100
B1           26.7      33.3      40.0        26.7       60.0     100
B2           43.3      23.3      33.3        43.3       66.6     100
B3           38.3      36.7      25.0        38.3       75.0     100
C1           33.3      35.0      31.7        33.3       68.3     100
C2           41.7      16.7      41.7        41.7       58.4     100
C3           33.3      41.7      25.0        33.3       75.0     100
Total        36.1      31.1      32.8        36.1       67.2     100
TABLE 9.41
Rate of Occurrence at the Optimum Settings

Class                   Cumulative Rate of Occurrence   Rate of Occurrence
1 (unacceptable)        0%                              0%
2 (barely acceptable)   13.4%                           13.4%
3 (very good feel)      100%                            86.6%
MISCELLANEOUS THOUGHTS
As we just mentioned in the discussion of the degrees of freedom, there is no
consensus among statisticians regarding the best method to use to analyze classified
data. A method that is an alternative to the ones described in this section is to transform
the classified data into variable data and analyze the data as described in Section 5.
A drawback to this approach is that the relative difference in the transformed values
should reflect the relative difference in the classifications, and this is sometimes
difficult to achieve. A simple example from the medical field will illustrate this. Four
different groups of patients suffering from the same disease are each given a different
medicine. The purpose is to determine which medicine is best. The response classes
are shown below:
If Class A is given a value of 1 and Class B is given a value of 2, what should
Class C be given? Is the difference between Classes B and C the same as the
difference between Classes A and B? Twice the difference? Three times?
Dr. George Box is of the opinion that this difficulty can be overcome by analyzing
the variable data using several different transformations from the classifications. In
most instances, the choice of the best response will not be affected by the different
relative values placed on the classifications and, in every case, the data will be much
easier to analyze and interpret. The example given earlier dealing with classified
attribute data will be worked as an example.
EXAMPLE
Three grades are used to evaluate paint appearance of a product. They are "Bad,"
"OK," and "Good." The classified data are transformed into variable data as follows:
Bad = 1; OK = 3; Good = 4. This puts emphasis on avoiding situations that result
in bad responses. Seven factors (A through G), each at two levels, are evaluated
to determine the combination of factor levels that optimizes paint appearance. Five
products are evaluated at each testing situation in an L8 orthogonal array. Test results
are shown in Table 9.42. The ANOVA analysis for the raw data is shown in
Table 9.43. Plotting of the data and inspection of the level averages reveal that the
best factor choices are A2 B2 F1. The ANOVA analysis for the NTB S/N ratios is
shown in Table 9.44.
Plotting of the S/N data and inspection of the level averages reveals that the best
factor choices are A2 B2 E2 F1. The best choices overall are A2 B2 E2 F1. This compares
with the best choice of A2 B2 F1 from the accumulation analysis on page 425.
Each of the methods has one further disadvantage. Using the transformation
approach, it is not possible to make a projection of what the distribution of classes would
look like at the optimum settings. The accumulation analysis was not able to identify
the effect on the standard deviation of the ratings due to factor E. Each approach tells
a different part of the story and both should be used to get the full picture.
Class   Description of Effect
A       Patient improves
B       No change in patient
C       Patient dies
DYNAMIC SITUATIONS
This section discusses:
1. What dynamic test situations are
2. How a test plan should be set up in a dynamic situation
3. The analysis of test data
DEFINITION
In many instances, the experimenter knows that the optimum response for a system
changes with levels of an input signal. Using the signal-to-noise techniques described
TABLE 9.42
OA and Test Setup and Results

                              Frequency in Each Grade
A   B   C   D   E   F   G     Bad   OK   Good      Transformed Data
1   1   1   1   1   1   1     2     3    0         1  1  3  3  3
1   1   1   2   2   2   2     3     2    0         1  1  1  3  3
1   2   2   1   1   2   2     4     1    0         1  1  1  1  3
1   2   2   2   2   1   1     0     2    3         3  3  4  4  4
2   1   2   1   2   1   2     0     4    1         3  3  3  3  4
2   1   2   2   1   2   1     1     3    1         1  3  3  3  4
2   2   1   1   2   2   1     0     3    2         3  3  3  4  4
2   2   1   2   1   1   2     0     1    4         3  4  4  4  4
TABLE 9.43
ANOVA for the Raw Data

Source            df    SS      MS      F Ratio   S'      %
A                  1    11.03   11.03   14.33     10.26   20.94
B                  1     3.03    3.03    3.93      2.26    4.61
C                  1*    0.03    0.03
D                  1*    2.03    2.03
E                  1*    2.03    2.03
F                  1     7.23    7.23    9.39      6.46   13.18
G                  1*    2.03    2.03
Error             32    21.60    0.68
(pooled error)    36    27.70    0.77             30.01   61.27
Total             39    48.98    1.26
in the previous sections would yield incorrect results. These techniques emphasize
repeatability across all levels of the noise factors. In a dynamic situation, the exper-
imenter wants different responses depending upon the level of an input signal factor.
Two examples are:
1. If two or more length measurement devices are compared, the standard
lengths to be measured become signal factor levels for comparison. The
experimenter would want a measurement device that gives a reading that
is relative to the different standard lengths measured and is repeatable at
each standard measured.
2. Several factors are to be included in an experiment to determine the combi-
nation that optimizes vehicle braking distance. The tests are run at two
different vehicle speeds. The vehicle speed would be treated as a signal factor.
The experimenter would want the braking distance to be repeatable at each
vehicle speed and reflect the customer's needs and desires for braking distance
at each vehicle speed. These needs and desires would not be the same
for all vehicle speeds. (Note: It might seem that the goal should be to
minimize the braking distance at each vehicle speed; however, if the braking
were too abrupt, the driver might lose control of the vehicle.)
DISCUSSION
The analysis of dynamic test data can be complicated. The following conditions exist
in the examples that follow. If these conditions are not present in a dynamic experiment,
help from a statistician should be sought before setting up the experiment.
Conditions
1. Signal factors will be assigned to an outer array in the experimental setup.
2. The signal factor(s) will have either two or three levels.
TABLE 9.44
ANOVA Table for the NTB S/N Ratios

Source            df    SS      MS      F Ratio   S'      %
A                  1    23.81   23.81   16.76     22.39   25.18
B                  1    23.81   23.81   16.76     22.39   25.18
C                  1*    0.39    0.39
D                  1*    0.39    0.39
E                  1    13.21   13.21    9.29     11.79   13.26
F                  1    23.81   23.81   16.76     22.39   25.18
G                  1*    3.49    3.49
Error (pooled)     3     4.26    1.42              9.95   11.19
Total              7    88.91   12.70
3. If there are three levels for a signal factor, the intervals between the
adjacent levels will be equal.
4. The experimental test includes either noise factors in an outer array or
repetitions so that for each inner array control factor setup, two or more
runs are made for each signal factor.
Analysis
The general approach that is used to analyze the data is:
1. The test results for each inner array setup (test number) are separately
analyzed using analysis of variance (ANOVA). The ANOVA table for
these analyses will be shown in a typical format as Table 9.45.
2. A nominal-the-best signal-to-noise ratio is calculated for each inner array
setup from these ANOVA tables as follows:
where r = the number of data in each level of the signal factor for this
inner array setup; s = 0.5 if the signal factor has two levels or s = 2.0 if
the signal factor has three levels; and h = the interval between the adjacent
levels of the signal factor.
4. The calculated S/N ratio for each inner array setup is then used in a
nominal-the-best (NTB) S/N analysis of variance to determine which
control factor settings should be used to reduce variability and which
should be used to tune the response to the desired output.
The application of these steps will be developed more fully through the following
examples.
EXAMPLE 1
Two different types of automatic optical measurement devices are to be compared.
Two orientations of the devices are possible, horizontal or vertical. These are assigned
to an L4 inner array as follows:
TABLE 9.45
Typical ANOVA Table Setup

Source    df      SS      V
Signal    df_s    SS_s    V_s
error     df_e    SS_e    V_e
Total     df_T    SS_T

S/N = 10 log10[(SS_s − V_e) / (V_e * r * s * h²)]
Items with two different surface finishes will be measured by the devices. Surface
finish (F) will be a noise factor. Two standard lengths of 10 and 20 mm will be
evaluated. These will be the two levels of the signal factor (S). The test matrix and
test results for the experiment are shown in Table 9.46.
For test number 1, the S/N ratio is calculated from the ANOVA table for just the
runs in test number 1.
The ANOVA table for these data is shown in Table 9.47. The S/N ratio is
calculated as:
TABLE 9.46
L4 OA with Test Results

                Test Matrix              S1              S2
Test Number     T    O    T × O          F1     F2       F1     F2       NTB S/N
1               1    1    1              9.8    9.7      20.4   20.2     19.33
2               1    2    2              10.2   9.9      20.3   20.1     14.94
3               2    1    2              9.6    9.9      19.6   20.0     12.05
4               2    2    1              10.2   9.8      19.7   19.5     12.65
Factor                  Column
Type (T)                1
Orientation (O)         2
T × O Interaction       3

S    F    Test Result
1    1    9.8
1    2    9.7
2    1    20.4
2    2    20.2

S/N = 10 log10[(SS_s − V_e) / (V_e * r * s * h²)]
    = 10 log10[(111.303 − 0.013) / (0.013 * 2 * 0.5 * 10²)]
    = 19.33
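The same calculation can be sketched in Python as shown below. The constants r = 2 (data per signal level), s = 0.5 (two-level signal factor), and h = 10 mm (interval between signal levels) follow the definitions given earlier in this section; the function and variable names are illustrative assumptions, not part of the original text.

    # Sketch: dynamic NTB S/N for test setup 1 of Example 1 (assumed names)
    import math

    signal_levels = {10: [9.8, 9.7], 20: [20.4, 20.2]}   # readings at S1 and S2
    r, s_const, h = 2, 0.5, 10

    all_data = [y for ys in signal_levels.values() for y in ys]
    grand_mean = sum(all_data) / len(all_data)

    ss_signal = sum(len(ys) * (sum(ys) / len(ys) - grand_mean) ** 2
                    for ys in signal_levels.values())
    ss_error = sum((y - sum(ys) / len(ys)) ** 2
                   for ys in signal_levels.values() for y in ys)
    df_error = len(all_data) - len(signal_levels)
    v_error = ss_error / df_error

    # Note: the text rounds V_e to 0.013 before taking the log, giving 19.33;
    # the unrounded V_e of 0.0125 gives about 19.5.
    sn = 10 * math.log10((ss_signal - v_error) / (v_error * r * s_const * h ** 2))
    print(f"SS_signal = {ss_signal:.3f}, V_e = {v_error:.4f}, S/N = {sn:.2f}")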
An S/N ratio for each of the other test setups is calculated in a similar manner.
These S/N ratios are then analyzed using the S/N ratio as a single response for each
test setup (see Table 9.48). The level averages for the data are:
Inspection of the data shows that the setting of T that gives the highest S/N is level
1. Although there are not enough test setups to allow the statistical identification of
level 1 of factor O as the optimum, the data suggest that orientation 1 might work
the best with device 1 and should be further investigated.
The level averages of the raw data are shown in Table 9.49.
The predicted averages are calculated using the techniques given in Section 5
using the interaction of T1 and O1 (assumed) as the optimum setting.
TABLE 9.47
ANOVA Table (Raw Data)

Source    df    SS        V
Signal     1    111.303   111.303
error      2      0.026     0.013
Total      3    111.328

TABLE 9.48
ANOVA Table (S/N Ratio Used as Raw Data)

Source            df    SS       MS       F Ratio   S'       %
T                  1    20.794   20.794   4.311     15.970   52.46
O                  1*    4.494    4.494
T × O              1*    5.153    5.153
Error (pooled)     2     9.647    4.824             14.471   47.54
Total              3    30.647   10.147

Level Averages (NTB S/N Data)

          O1      O2      Overall
T1        19.33   14.98   17.16
T2        12.05   12.65   12.35
Overall   15.69   13.82   14.76
Note that the readings at the optimum do not average out to the standard exactly.
This assumes that the output reading can be calibrated to reect the standard mea-
sured. The emphasis in the approach is to provide readings with low variability at
each standard level output.
This example was very simple and it may seem that the ANOVA was not really
necessary. In many cases, the inner array will be more complicated than an L4, and the
technique shown in this example will help the experimenter make an informed choice.
EXAMPLE 2
The effect of several factors on vehicle braking distance is to be investigated. The
control factors to be investigated are assigned to an L8 orthogonal inner array as
follows:
TABLE 9.49
Level Averages (Raw Data)

                          S1      S2      Overall
T1                        9.90    20.25   15.08
T2                        9.88    19.70   14.79
O1                        9.75    20.05   14.90
O2                        10.03   19.90   14.97
Average of S              9.89    19.98   14.93
Overall Average at T1 O1                  15.03

Column   Factor
1        A: Content of material Z in the brake pads
2        B: Content of material Y in the brake rotors
3        A × B interaction
4        C: Hydraulic fluid type
5        Unassigned
6        Unassigned
7        D: Brake pad design

for S = 10 mm: predicted average = 9.87 mm
for S = 20 mm: predicted average = 19.96 mm
(computed from the level averages in Table 9.49 using the techniques given in Section 5)
Noise and signal factors are assigned to an L8 outer array as follows:
In this example, vehicle speed is a signal factor. It is not possible that the braking
distance would be the same when starting from 30 mi/h vs. 60 mi/h and therefore,
different responses are expected. The experimenter has determined through market
research that for this type of vehicle, the customer would prefer that the braking
distance be 35 feet from 30 mi/h and 130 feet from 60 mi/h.
The test setup and results are shown in Table 9.50.
The unassigned columns are not shown to conserve space and to make the table
more presentable. The outer array is also shown somewhat differently, with column
1 (factor S) at the top, column 2 (factor T) in the middle, and column 4 (factor P)
at the bottom. Although this arrangement can be used to present the data, the
unassigned columns should be added back to the arrays to aid the experimenter's
understanding of the analysis and the application of the inner and outer L8 orthogonal
arrays.
For the first test setup, the S/N ratio is calculated from the ANOVA table for the
data in the first row. The ANOVA table for these data is:
TABLE 9.50
OA Setup and Test Results for Example 2

Inner array                   S1                        S2
                              T1          T2            T1             T2
A   B   A×B  C   D            P1    P2    P1    P2      P1     P2      P1     P2       NTB S/N
1   1   1    1   1            39.9  40.6  40.4  40.6    140.9  141.2   140.7  139.6    15.73
1   1   1    2   2            42.5  42.8  42.4  42.7    143.0  142.4   141.1  142.8    14.62
1   2   2    1   1            45.0  41.3  41.0  44.8    141.2  143.1   143.4  142.7     5.90
1   2   2    2   2            40.7  39.7  40.5  40.7    141.3  140.7   140.9  139.7    15.25
2   1   2    1   2            39.4  40.1  39.7  38.1    139.9  139.7   141.1  139.7    12.74
2   1   2    2   1            37.5  37.3  37.6  37.3    137.6  138.0   137.1  137.2    20.64
2   2   1    1   2            36.0  38.4  36.9  37.9    135.1  139.3   138.4  136.1     6.58
2   2   1    2   1            39.6  40.4  40.5  40.3    139.7  140.1   142.0  138.5     9.89
Column   Factor
1        S: Vehicle speed (30 mi/h or 60 mi/h)
2        T: Tire size
3        Unassigned
4        P: Pavement type (asphalt or concrete)
5        Unassigned
6        Unassigned
7        Unassigned

ANOVA Table (Raw Data)

Source    df    SS          V
Signal     1    20090.101   20090.101
Error      6        1.786       0.298
Total      7    20091.888
The S/N ratio is calculated as follows:
An S/N ratio for each of the other test setups is calculated in a similar manner. These
S/N ratios are then analyzed using the S/N ratio as a single response for each test
setup (see Table 9.51).
The ANOVA table indicates that factors B, C, D, and the interaction of factors A
and B are significant. The level averages for these factors are:
The ANOVA table and the level averages indicate that B1, C2 and D1 are the optimal
choices from an S/N standpoint. These are the factor choices that should result in
the minimum variance of the response.
An analysis of the raw data would identify the signal factor as the most significant
contributor to the variation of the data. However, this information is not useful. To
increase the ability of the analysis to clearly show the significant control factors, the
target braking distance for each of the signal factor levels is subtracted from all of
the data collected at that signal factor level. This reduces the percent level of
TABLE 9.51
ANOVA Table (S/N Ratio Used as Raw Data)

Source            df    SS        MS       F Ratio   S'       %
A                  1*    0.340     0.340
B                  1    85.217    85.217   44.453    83.300   47.87
A × B              1     7.431     7.431    3.876     5.514    3.17
C                  1    47.288    47.288   24.668    45.371   26.08
Unassigned         1*    1.103     1.103
Unassigned         1*    4.307     4.307
D                  1    28.313    28.313   14.769    26.396   15.17
Error (pooled)     3     5.750     1.917             13.418    7.71
Total              7   173.998    24.857
Level Averages

       B1              B2
       A1      A2      A1      A2      C1      C2      D1      D2
       15.18   16.69   10.58   8.24    10.24   15.10   14.55   10.79
S/N = 10 log10[(SS_s − V_e) / (V_e * r * s * h²)]
    = 10 log10[(20090.101 − 0.298) / (0.298 * 4 * 0.5 * 30²)]
    = 15.73
contribution of the signal factor and increases the percent level of contribution of
the control factors while maintaining their relative order of contribution. This trans-
formation makes the effects of the control factors more visible but does not affect
their significance. The transformed data are shown in Table 9.52.
The ANOVA table for these data is shown in Table 9.53.
TABLE 9.52
Transformed Data

                              S1                      S2
                              T1         T2           T1          T2
A   B   A×B  C   D            P1   P2    P1   P2      P1    P2    P1    P2
1   1   1    1   1            4.9  5.6   5.4  5.6     10.9  11.2  10.7   9.6
1   1   1    2   2            7.5  7.8   7.4  7.7     13.0  12.4  11.1  12.8
1   2   2    1   1           10.0  6.3   6.0  9.8     11.2  13.1  13.4  12.7
1   2   2    2   2            5.7  4.7   5.5  5.7     11.3  10.7  10.9   9.7
2   1   2    1   2            4.4  5.0   4.7  3.1      9.9   9.7  11.1   9.7
2   1   2    2   1            2.5  2.3   2.6  2.3      7.6   8.0   7.1   7.2
2   2   1    1   2            1.1  3.4   1.9  2.9      5.1   9.3   8.4   6.1
2   2   1    2   1            4.6  5.4   5.5  5.3      9.7  10.1  12.0   8.5
TABLE 9.53
ANOVA Table for the Transformed Data

Source            df     SS         MS         F Ratio    S'        %
A                  1     150.369    150.369    155.340    149.401   21.56
B                  1*      1.56E-4    1.56E-4
A × B              1*      0.473      0.473
C                  1*      0.833      0.833
Unassigned         1*      2.600      2.600
Unassigned         1*      2.213      2.213
D                  1     101.758    101.758    105.122    100.790   14.55
S                  1     382.691    382.691    395.342    381.723   55.09
T                  1*      0.083      0.083
Unassigned         1*      0.170      0.170
P                  1*      0.375      0.375
Unassigned         1*      1.658      1.658
Unassigned         1*      0.508      0.508
Unassigned         1*      2.520      2.520
A × S              1*      0.083      0.083
D × S              1*      0.098      0.098
Error             47      46.441      0.988
(pooled error)    60      58.055      0.968                60.959    8.80
Total             63     692.874     10.998
The interactions between all the columns of the inner array and all the columns
of the outer array are available for investigation. For this example, only the A × S
and D × S interactions are investigated to give an indication of whether factors A
and D behave consistently at the two levels of the signal factor. The analysis
indicates that control factors A and D are significant contributors to the variation of
the data. The difference in responses between the two levels of these factors is
independent of the signal level. The analysis also identified the signal factor, S, as
an important contributor to the data variation. This, of course, was already known.
The level averages are:
The predicted averages are calculated using the techniques given in Section 5 using
A2 and D1 as the optimum settings and adding the values that were subtracted prior
to the ANOVA.
Factor B should be set to level 1 and factor C should be set to level 2 to maximize
the S/N ratio.
Since the target values were not obtained at the optimum settings, the experi-
menter must either continue to investigate other ways of reducing the stopping
distance or accept the consequences of failing to fully satisfy the customer's
requirements.
MISCELLANEOUS THOUGHTS
Let us close this section with a discussion of the two examples dealing with the
NTB II S/N approach. The NTB S/N ratio for a dynamic situation is:
                  S1     S2      Overall
A1                6.58   11.54   9.06
A2                3.59    8.41   6.00
D1                3.86    8.68   6.27
D2                6.30   11.28   8.79
Average of S      5.08    9.98   7.53

for S = 1 (30 mi/h): 7.53 + (6.00 − 7.53) + (6.27 − 7.53) + (5.08 − 7.53) + 35 = 37.29 feet
for S = 2 (60 mi/h): 7.53 + (6.00 − 7.53) + (6.27 − 7.53) + (9.98 − 7.53) + 130 = 137.19 feet

NTB S/N = 10 log10[(SS_s − V_e) / (V_e * r * s * h²)]
This equation was explained earlier. Using the same terminology, the NTB II S/N
= −10 log(V_e), which equals −20 log(error standard deviation).
For Example 1
The calculations for the NTB S/Ns were discussed earlier. The same steps are
followed for the NTB II approach until the final S/N calculation. The two sets of S/N
ratios are contrasted below:
When the NTB II S/N ratios were analyzed, the ANOVA table and the interpretation
of the level averages were essentially the same as those for the NTB S/N.
For Example 2
The calculations for the NTB S/N were discussed earlier. The NTB II analysis had
suggested that the standard deviation of the data might be related to the average of
the data. In other words, the spread of the stopping distances might be greater at
standard one (30 mi/h) than at standard two (60 mi/h). Using the procedure given
on pages 395-397, the averages and standard deviations were compared as follows:
1. For each vehicle speed, the average stopping distance and the standard
deviation were calculated (16 averages and 16 standard deviations total).
2. The log (standard deviation) was plotted versus the log (average).
3. The slope was estimated to be in the range of 0.2 to 0.3 with large scatter
in the data. By comparing this value to Item 4 on page 396, it was
determined that there was not a strong need to transform the data.
The NTB II S/N ratios were calculated for the untransformed data. The two sets
of S/N ratios are compared below:
Test Number   NTB S/N   NTB II S/N
1             19.33     19.03
2             14.94     14.88
3             12.05     12.04
4             12.65     13.01

Test Number   NTB S/N   NTB II S/N
1 15.73 5.26
2 14.62 4.19
3 5.90 4.51
4 15.25 4.62
5 12.74 2.21
6 20.64 10.18
7 6.58 3.87
8 9.89 0.56
When the NTB II S/N ratios were analyzed, the ANOVA table and the interpretation
of the level averages were essentially the same as those for the NTB S/N. The
reader is encouraged to run the analysis to confirm this. The analysis of the raw data
did not change. The conclusions also remained the same as before.
For these two examples, the NTB and NTB II methods give equivalent results.
However, this does not prove equivalency of the methods. On other sets of data,
differences in the results obtained have been demonstrated. Of the two methods, the
NTB II approach is easier to understand since maximizing the NTB II S/N is the same
as minimizing the error variance for the chosen combination of factor levels. (The
experimenter should always analyze the data completely, plot the data, compare the
results to the data plots, and run confirmation tests.)
PARAMETER DESIGN
This section provides an example of how the DOE technique is used to determine
design factor target levels. This approach is an upstream attempt to develop a robust
product that will avoid problems later in production. The emphasis at this stage is
on using wide tolerance levels to provide a product that is easy to manufacture and
still meets all requirements.
DISCUSSION
After the basic design of a product is determined, the next step is to determine to
what levels the components of that product should be set to ensure that the target
will be met. The experience of the designer or design team is useful in establishing
the starting values for the investigation. This investigation begins by determining
what the component target values should be using wide component tolerances. This
is called parameter design. If the resultant variability around the product response
target is too great, the next step is to determine which tolerances should be tightened.
This approach, Tolerance Design, will be discussed in Section 9.
Example
A particular product has been designed with five components (A through E). The
target response for the product is 59.0 units. Field experience has indicated that
when the response differs from the target by five units, the average customer would
complain and the repair cost would be $150. From this information, the k value in
the loss function can be calculated.
A brainstorming group that consisted of the designer and other experts in this
area determined that the response is linear over the component range of interest and
that the components should be evaluated at the levels shown in Table 9.54.
k = $150 / 5² = $6.00 per unit²
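As a minimal sketch (assuming the quadratic loss function described in Volume V), the constant k and the loss for an arbitrary deviation can be computed as follows; the variable names and the two-unit deviation used at the end are illustrative only, not from the original text.

    # Sketch: quality-loss constant and per-piece loss (assumed names)
    repair_cost = 150.0        # dollars
    complaint_deviation = 5.0  # units off target at which the average customer complains

    k = repair_cost / complaint_deviation ** 2
    print(f"k = ${k:.2f} per unit^2")           # $6.00

    # expected loss for a single piece that is, say, 2 units off target
    deviation = 2.0
    print(f"loss = ${k * deviation ** 2:.2f}")  # $24.00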
Note: Factor A is more expensive to manufacture at the higher level than at the
lower level.
These factors will be assigned to an L8 inner array. The two unassigned columns
will be used for an estimate of the experimental error.
An L8 outer array contains the low cost tolerances as follows:
The tolerance amounts are added to/subtracted from the control level as indicated
by the outer array. The brainstorming group suspects that two other noise factors
are significant, namely, the temperature (T) and humidity (H) of the assembly
environment. The noise and tolerance factors are combined into an L8 outer array.
The testing setup and test results are shown in Table 9.55.
An understanding of the way the testing matrix is interpreted can be reached by
considering the factor A. When the inner array column associated with factor A has
a value of 1, the value of A is 1000. When there is a 2 in that column, the value of
A is 1500. The actual test values of A are also determined by the tolerance value of
A in the outer array. If the outer array value of A is 1, then 50 is subtracted from
the value of A determined in the inner array. If the outer array value is 2, then 50
is added to it. This can be summarized as follows:
TABLE 9.54
Components and Their Levels

                 Levels (units are those appropriate to each component)
Component        Low      High
A                1000     1500
B                 400      700
C                  50       70
D                1300     2200
E                1200     1600

                 Low Cost Tolerance
(Factor)         Low      High
A                −50      +50
B                −15      +15
C                 −5       +5
D               −200     +200
E               −100     +100

Actual Test Values of A

Inner Array      Outer Array Tolerance Value of A
Value of A       1        2
1                950      1050
2                1450     1550
The ANOVA table and level averages for the most significant factors for the S/N
and raw data are shown in Table 9.56.
From the S/N level averages, D2 B1 E2 is clearly the best setting for S/N. The
estimated S/N at that setting is:
S/N = 24.14 + (25.48 − 24.14) + (25.46 − 24.14) + (25.30 − 24.14) = 27.96
Since A1 is preferred from a cost standpoint and D2 is preferred from the S/N
analysis, the next step is to determine if the value of C can be adjusted to attain the
target of 59. The average response at A1 D2 is:
Average Response = 50.6 + (56.23 − 50.6) + (53.18 − 50.6) = 58.8
To reach a target of 59, the value of C that is included in the average response
calculation must have a level average of 50.8 since:
Target = Average Response at A1 D2 + Effect due to C
59.0 = 58.8 + (50.8 − 50.6)
TABLE 9.55
L8 Inner OA with L8 Outer OA and Test Results

Outer array (columns):
T   1  2  2  1  2  1  1  2
H   1  2  2  1  1  2  2  1
E   1  2  1  2  2  1  2  1
D   1  2  1  2  1  2  1  2
C   1  1  2  2  2  2  1  1
B   1  1  2  2  1  1  2  2
A   1  1  1  1  2  2  2  2

L8 inner array                 Test results                                      NTB S/N
A   B   C   D   E   X   Y
1   1   1   1   1   1   1      51.4  49.5  48.9  56.6  52.7  50.8  46.0  51.9    24.34
1   1   1   2   2   2   2      58.6  56.7  56.8  60.3  58.2  56.3  52.8  56.7    28.36
1   2   2   1   1   2   2      52.1  59.1  58.9  62.7  61.5  51.1  47.0  50.9    19.58
1   2   2   2   2   1   1      62.6  60.0  61.6  67.2  62.5  59.4  56.0  62.7    25.59
2   1   2   1   2   1   2      47.2  45.3  45.2  51.3  47.3  45.4  41.3  47.4    24.28
2   1   2   2   1   2   1      50.1  48.8  48.4  54.6  51.0  49.1  44.5  50.4    24.86
2   2   1   1   2   2   1      40.1  38.6  37.7  44.6  40.7  38.7  34.8  39.7    22.99
2   2   1   2   1   1   2      46.6  43.2  42.6  49.4  46.6  39.2  39.2  45.4    23.12

Note: X and Y are the unassigned columns that will be used to estimate error.
TABLE 9.56
ANOVA Table (NTB) and Level Averages for the Most Significant Factors

Source            df    SS       MS       F Ratio   S'       %
A                  1*    0.860    0.860
B                  1    13.965   13.965   21.992    13.330   30.50
C                  1     2.533    2.533    3.989     1.898    4.34
D                  1    14.455   14.455   22.764    13.820   31.62
E                  1    10.845   10.845   17.079    10.210   23.36
X                  1*    2.477    2.477
Y                  1*   10.424   10.424
Error (pooled)     3     1.906    0.635              4.446   10.17
Total              7    43.704    6.243

S/N Level Averages

Factor   Level 1   Level 2
D        22.80     25.48
B        25.46     22.82
E        22.98     25.30
Average of all data = 24.14

Raw Data ANOVA Table

Source            df    SS         MS         F Ratio   S'         %
A                  1    2032.883   2032.883   680.577   2029.896   56.85
B                  1       9.533      9.533     3.192      6.546    0.18
C                  1     435.244    435.244   145.713    432.257   12.11
D                  1     427.973    427.973   143.279    424.986   11.90
E                  1      13.231     13.231     4.430     10.244    0.29
X                  1*      3.658      3.658
Y                  1*      3.563      3.563
A-Tol.             1      88.125     88.125    29.503     85.138    2.38
B-Tol.             1*      1.995      1.995
C-Tol.             1     113.156    113.156    37.883    110.169    3.09
D-Tol.             1      49.879     49.879    16.699     46.892    1.31
E-Tol.             1*      3.754      3.754
H                  1     239.089    239.089    80.043    236.102    6.61
T                  1*     30.754     30.754
Error             49     140.969
(pooled error)    54     161.297      2.987              188.180    5.27
Total             63    3570.410     56.673

Level Averages

Factor   Level 1   Level 2
A        56.23     44.96
C        47.99     53.21
D        48.01     53.18
Average of all data = 50.60
The target value for C can be interpolated from the tested levels and the level
averages as follows:
Note:
Value of C    Response
50            47.99
Target        50.8
70            53.21
This value, 60.77, is at the same percentile between 50 and 70 as 50.8 is
between 47.99 and 53.21.
In summary, the recommended target values are:
The estimated average is 59.0 and the estimated S/N is 27.96.
The 90% confidence interval on the average is:
A set of verification runs is now made using the recommended factor target values
given previously. The tolerance levels from the outer array are used to define an L8
verification run experiment as shown in Table 9.57.
The average response is 59.5 and the S/N is 27.3. Since the average response
and the S/N are close to the predicted values, the verification runs confirm the
prediction. If the average response and S/N did not confirm the predictions, the data
could be analyzed to determine which factors have response characteristics different
from those predicted.
The information from the verification runs cannot be used directly in the loss
function, since the observed variability may be affected by testing only at the
tolerance limits. The center portions of the factor distributions are not represented
in these tests. For the situation where the change in response is assumed to be a
linear increase or decrease across the tolerance levels, the loss function can be easily
calculated as follows:
Factor   Target Value
A        1000
B        400
C        60.77
D        2200
E        1600

Target = 50 + [(50.8 − 47.99) / (53.21 − 47.99)] * (70 − 50) = 60.77

90% confidence interval on the average: 59.0 ± 1.50
1. If it can be assumed that the Cpk in production will be 1.0 or greater for
all specified tolerances, then the difference between the tolerance limits
will be equal to or greater than six times the production standard deviation
for each component parameter.
2. The difference between the response level averages for the two tolerance
limits will equal six times the production response standard deviation
since the product response is linearly related to the component parameter
level.
3. The response variance due to each tolerance is additive since the response
effect of each component tolerance is additive. (Variance = Std. Dev.²)
4. The effect of noise factors can be treated in a similar manner.
In this example, the levels of humidity were set at the average humidity ± 2
times the humidity standard deviation. The change in response is assumed to be
linear across the change in humidity. The difference in response between the two
levels represents four times the response standard deviation. The response variance
can be calculated as shown in Table 9.58.
The response variance will be 3.9970. The loss function can be calculated from
the equation:
For a production run of 50,000 pieces, the total loss would be $1,274,100.
In the situation where the change in response is non-linear across the tolerance
levels or noise factor levels, a computer simulation can be used to determine the
distribution of product response for each component taken singly and for the total
TABLE 9.57
Verification Runs Using Recommended Factor Target Values

A-Tol   B-Tol   C-Tol   D-Tol   E-Tol   H   T   Test Result
1       1       1       1       1       1   1   60.2
1       1       1       2       2       2   2   57.9
1       2       2       1       1       2   2   59.5
1       2       2       2       2       1   1   64.8
2       1       2       1       2       1   2   59.4
2       1       2       2       1       2   1   58.6
2       2       1       1       2       2   1   55.7
2       2       1       2       1       1   2   59.9
Loss = k [σ² + (offset)²]
     = $6.00 [3.9970 + (59.5 − 59.0)²]
     = $25.482 per piece
assembly. This situation occurs when the highest (lowest) response occurs at the
component nominal and the response decreases (increases) as the distance from the
component nominal increases. The purpose of these calculations is to estimate the
response variance for the total assembly population so that the loss function can be
calculated. Once the value of the loss function has been calculated, it can be
compared to the cost of tightening the tolerances so as to determine the optimal
tolerance limits. This technique will be discussed in the following section.
TOLERANCE DESIGN
This section illustrates:
1. How tolerance limits can be set so that the product will meet customer
requirements repeatedly with the widest possible tolerances. The goal is
to choose the most cost-efficient tolerance levels.
2. How prior knowledge about the response characteristics of the component
levels of a product can be efficiently used.
DISCUSSION
After the target level for each component has been determined using parameter
design, the loss function value of the product design is compared to design guidelines
and to the cost of improving the production processes to meet tightened tolerances.
If it costs less to tighten the tolerance than the resulting reduction in the loss function,
then for the long run it is better to tighten the tolerance. The evaluation of the
tolerance limits and the selective tightening of the limits is called tolerance design.
As in parameter design, an example will illustrate this approach. The reader may
need to review Volume V of this series.
TABLE 9.58
Calculated Response Variance

Tolerance        Response Difference     Response Production
for Factor       Between Tol. Limits     Std. Dev.               Variance
A                2.2                     0.37                    0.1344
B                1.0                     0.16                    0.0278
C                2.2                     0.37                    0.1344
D                1.6                     0.27                    0.0711
E                0.1                     0.02                    0.0003
                                                                 0.3680
Humidity (H)     3.2                     0.80                    0.6400
Error variance                                                   2.9890
Total                                                            3.9970
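A hedged Python sketch of the Table 9.58 roll-up is shown below. It assumes, as the text does, that each component tolerance spread covers six production standard deviations (Cpk = 1), that the humidity levels span four standard deviations, and that the variances add; the names are illustrative only.

    # Sketch: response variance roll-up and loss function (assumed names)
    response_diff = {"A": 2.2, "B": 1.0, "C": 2.2, "D": 1.6, "E": 0.1}
    humidity_diff = 3.2
    error_variance = 2.9890
    k, offset = 6.00, 0.5       # loss constant and (verification mean - target)

    component_var = sum((d / 6) ** 2 for d in response_diff.values())
    humidity_var = (humidity_diff / 4) ** 2
    total_var = component_var + humidity_var + error_variance

    loss_per_piece = k * (total_var + offset ** 2)
    print(f"total variance = {total_var:.4f}")        # about 3.997
    print(f"loss per piece = ${loss_per_piece:.3f}")  # about $25.48
    print(f"loss for 50,000 pieces = ${loss_per_piece * 50_000:,.0f}")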
Example
Continuing the example from the previous section, the loss function with low cost
tolerance was calculated to be $25.482 per piece or $1,274,100 for the production
run of 50,000 pieces. This calculation was based on the assumptions that:
1. The tolerance spread is equal to six times the production standard deviation
(Cpk = 1).
2. The response changes linearly across the tolerance limits.
3. The sum of the variance contributions for the components is the total
assembly response variance.
4. Humidity was set at levels that are the average humidity ± 2 times the
standard deviation, and the response changes linearly across these levels.
If any of these assumptions cannot be made, a computer simulation using the
appropriate assumptions can be used to determine the total assembly response
variance.
The calculation of the response variance was shown in Table 9.58, and we are
going to repeat it again. From the response variance contribution table and the
calculations, it can be seen that the tolerances for factors A, C, and, to a lesser
degree, D are the largest component tolerance contributors to the total response
variance. The cost of reducing those tolerances is shown in Table 9.59.
Since the response is linearly related to the component levels, a reduction of
20% in the tolerance spread will result in a reduction of 20% in the response spread.
In the situation where the response is not linear, it would be necessary to run a
computer simulation, as was mentioned previously.
The impact of tightening the tolerance of each of the three components is
summarized in Table 9.60. The variance % reduction per $1000 cost indicates that
the reduction of the tolerance of component A should be investigated first since the
% reduction per $1000 cost is the greatest.
The situation for a reduction of 20% in the tolerance limits of component A is
summarized in Table 9.61. The response variance will be 3.9484. If the same 0.5
offset is assumed, the loss function can be calculated as:
TABLE 9.59
Cost of Reducing Tolerances

             Low Cost          High Cost
             Tolerance         Tolerance         %             Cost to Change the Tolerance
Component    Low      High     Low      High     Reduction     for 50,000 Pieces (dollars)
A            −50      +50      −40      +40      20            5,000
C             −5       +5       −4       +4      20            15,000
D           −200     +200     −150     +150      25            9,000
TABLE 9.60
The Impact of Tightening the Tolerance

             Response Difference     Response       Tightened     Cost of Tightened
             Between Tightened       Production     Response      Tolerance
Component    Tolerance Limits        Std. Dev.      Variance      (dollars)
A            1.76                    0.293          0.0858        5,000
C            1.76                    0.293          0.0858        15,000
D            1.20                    0.200          0.0400        9,000

             Original     Tightened     Variance %     Variance % Reduction
Component    Variance     Variance      Reduction      per $1000 Cost
A            0.1344       0.0858        36.16          7.23
C            0.1344       0.0858        36.16          2.41
D            0.0711       0.0400        43.74          4.86
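The tolerance-design decision rule can be sketched as follows: tightening a tolerance pays off when the reduction in total loss over the production run exceeds the cost of holding the tighter tolerance. The figures below are those of Tables 9.58 and 9.60 for component A; the function and variable names are assumptions made for illustration, not part of the original text.

    # Sketch: is tightening component A's tolerance worth its cost? (assumed names)
    k, offset, pieces = 6.00, 0.5, 50_000

    def run_loss(total_variance):
        # total quality loss over the production run
        return k * (total_variance + offset ** 2) * pieces

    variance_before = 3.9970                              # Table 9.58 total
    variance_after_a = variance_before - 0.1344 + 0.0858  # A tightened: 3.9484
    cost_of_tightening_a = 5_000

    saving = run_loss(variance_before) - run_loss(variance_after_a)
    print(f"loss reduction = ${saving:,.0f}")             # about $14,580
    print("tighten A" if saving > cost_of_tightening_a else "keep low-cost tolerance")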
TABLE 9.61
Reduction of 20% in the Tolerance Limits of Component A

Tolerance for    Response Difference     Response Production
Component        Between Tol. Limits     Std. Dev.               Response Variance
A                1.76                    0.293                   0.0858
B                1.00                    0.160                   0.0278
C                2.20                    0.370                   0.1344
D                1.60                    0.270                   0.0711
E                0.10                    0.020                   0.0003
                                                                 0.3194
Humidity (H)     3.2                     0.80                    0.6400
Error variance                                                   2.9890
Total                                                            3.9484
Loss = k [σ² + (offset)²]
     = $6.00 [3.9484 + (59.5 − 59.0)²]
     = $25.1904 per piece
For a production run of 50,000 pieces, the total loss would be $1,259,520. This
is a $14,580 decrease in the loss function from the low cost tolerance situation.
Since the decrease in the loss function is more than the $5000 cost of tightening the
tolerance on A, it is advantageous in the long run to tighten that tolerance.
The 0.50 offset is assumed to be a constant to provide a basis to compare
improvement in only the variance part of the equation. In some situations, the actions
taken to reduce the response variance may also result in a better-centered response
distribution.
The next step is to evaluate the loss function with the tolerance limits reduced
for component D (see Table 9.62). The response variance will be 3.9173. If the
same 0.5 offset is assumed, the loss function can be calculated as:
For a production run of 50,000 pieces, the total loss would be $1,250,190. This
is a $9330 decrease in the loss function from the situation with only the tolerance
of A tightened. Since the decrease in the loss function is more than the $9000 cost
of tightening the tolerance on D, it is advantageous in the long run to tighten that
tolerance.
We can do the same for component C (see Table 9.63).
The response variance will be 3.8687. If the same 0.5 offset is assumed, the loss
function can be calculated as:
TABLE 9.62
Reduction of Tolerance Limits for Component D

Tolerance for    Response Difference     Response Production
Component        Between Tol. Limits     Std. Dev.               Response Variance
A                1.76                    0.293                   0.0858
B                1.00                    0.160                   0.0278
C                2.20                    0.370                   0.1344
D                1.20                    0.200                   0.0400
E                0.10                    0.020                   0.0003
                                                                 0.2883
Humidity (H)     3.2                     0.80                    0.6400
Error variance                                                   2.9890
Total                                                            3.9173
Loss = k [σ² + (offset)²]
     = $6.00 [3.9173 + (59.5 − 59.0)²]
     = $25.0038 per piece
For a production run of 50,000 pieces, the total loss would be $1,235,610. This
is a $14,580 decrease in the loss function from the situation with only the tolerances
of A and D tightened. Since the cost of tightening the tolerance on component C is
$15,000, it would not be advantageous to tighten that tolerance.
So far, the tolerance design has been entirely a paper exercise based on the tests
run during the parameter design and the assumptions about the relationships between
the component levels and the response. A set of confirmation runs should be made
with the tolerance limits for components A and D tightened. An L8 orthogonal array
is used for the confirmation runs with the levels set, test setup, ANOVA table, and
level averages as shown in Table 9.64.
The test setup and results:
TABLE 9.63
Reduction of Tolerance Limits for Component C

Tolerance for    Response Difference     Response Production
Component        Between Tol. Limits     Std. Dev.               Response Variance
A                1.76                    0.293                   0.0858
B                1.00                    0.160                   0.0278
C                1.76                    0.293                   0.0858
D                1.20                    0.200                   0.0400
E                0.10                    0.020                   0.0003
                                                                 0.2397
Humidity (H)     3.2                     0.80                    0.6400
Error variance                                                   2.9890
Total                                                            3.8687
A-Tol   B-Tol   C-Tol   D-Tol   E-Tol   H   T   Test Result
1       1       1       1       1       1   1   59.7
1       1       1       2       2       2   2   57.5
1       2       2       1       1       2   2   59.5
1       2       2       2       2       1   1   63.6
2       1       2       1       2       1   2   59.3
2       1       2       2       1       2   1   58.6
2       2       1       1       2       2   1   55.8
2       2       1       2       1       1   2   59.3
Loss = k [σ² + (offset)²]
     = $6.00 [3.8687 + (59.5 − 59.0)²]
     = $24.7122 per piece
The average response is 59.2 and the S/N is 28.5. The ANOVA table and level
averages for all of the factors from the verification runs are:
As mentioned in the last section, this information cannot be used directly in the
loss function since the observed variability may be affected by testing only at the
TABLE 9.64
L8 OA Used for the Confirmation Runs with the Levels Set, Test Setup, ANOVA Table, and Level Averages

Tolerance Levels

                  Column   Low Level (1)   High Level (2)   Nominal
A-Tol.            1        −45             +45              1000
B-Tol.            2        −15             +15              400
C-Tol.            3        −5              +5               60.77
D-Tol.            4        −150            +150             2200
E-Tol.            5        −100            +100             1600
H (Humidity)      6        Low             High
Unassigned        7

ANOVA Table

Source            df    SS       MS       F Ratio   S'       %
A-Tol.             1     6.661    6.661   20.433     6.335   18.35
B-Tol.             1     1.201    1.201    3.684     0.875    2.53
C-Tol.             1     9.461    9.461   29.022     9.135   26.46
D-Tol.             1     2.761    2.761    8.469     2.435    7.05
E-Tol.             1*    0.101    0.101
H                  1    13.781   13.781   42.273    13.455   38.98
T                  1*    0.551    0.551
Error (pooled)     2     0.652    0.326              2.282    6.61
Total              7    34.519    4.931

Level Averages

Factor    Level 1   Level 2
A-Tol     60.1      58.3
B-Tol     58.8      59.6
C-Tol     58.1      60.3
D-Tol     58.6      59.8
E-Tol     59.3      59.1
H         60.5      57.9
tolerance limits. The center portions of the distributions are not represented in these
tests.
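To make the arithmetic behind these confirmation-run figures easy to check, the short Python sketch below recomputes the average response, the run-to-run variance, and the factor level averages from the eight results. The nominal-the-best signal-to-noise form 10*log10(mean^2/variance) is assumed here because it reproduces the 28.5 value quoted above; it is a sketch of the calculations, not the author's own program.

```python
import math

# Confirmation-run levels for columns 1-6 (A-Tol, B-Tol, C-Tol, D-Tol, E-Tol, H)
levels = [
    (1, 1, 1, 1, 1, 1), (1, 1, 1, 2, 2, 2),
    (1, 2, 2, 1, 1, 2), (1, 2, 2, 2, 2, 1),
    (2, 1, 2, 1, 2, 1), (2, 1, 2, 2, 1, 2),
    (2, 2, 1, 1, 2, 2), (2, 2, 1, 2, 1, 1),
]
results = [59.7, 57.5, 59.5, 63.6, 59.3, 58.6, 55.8, 59.3]
factors = ["A-Tol", "B-Tol", "C-Tol", "D-Tol", "E-Tol", "H"]

mean = sum(results) / len(results)
variance = sum((y - mean) ** 2 for y in results) / (len(results) - 1)
sn = 10 * math.log10(mean ** 2 / variance)  # nominal-the-best S/N (assumed form)
print(f"average = {mean:.1f}, variance = {variance:.3f}, S/N = {sn:.1f}")

# Level averages: mean of the four runs held at level 1 and at level 2
for j, name in enumerate(factors):
    lvl1 = [y for row, y in zip(levels, results) if row[j] == 1]
    lvl2 = [y for row, y in zip(levels, results) if row[j] == 2]
    print(f"{name}: level 1 = {sum(lvl1) / 4:.1f}, level 2 = {sum(lvl2) / 4:.1f}")
```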
Since the change in response is assumed to be a linear increase or decrease
across tolerance levels, the loss functions can be easily calculated. The Cpk in
production is assumed to be 1.0 or greater for all specified tolerances. The difference
between the tolerance limits will be equal to six times the production standard
deviation for each component parameter for a Cpk of 1. Since the product response
is linearly related to the component parameter level, the difference between the
response level averages for the two tolerance limits will equal six times the produc-
tion response standard deviation. Since the response effect of each component
tolerance is additive, the response variance due to each tolerance is additive. In a
similar manner, the difference in response between the two humidity levels represents
four times the response standard deviation. For this example, the response variance
can be calculated as shown in Table 9.65.
The response variance will be 1.0318. The loss function can be calculated from
the equation:
For a production run of 50,000 pieces, the total loss is estimated to be $384,540
compared to $1,274,100 before the tolerance design. This is an $889,560 reduction
from the original estimate of the value of the loss function.
TABLE 9.65
Response Variance

Tolerance        Response Difference     Response Production    Response
for Factor       Between Tol. Limits     Std. Dev.              Variance
A                1.8                     0.30                   0.0900
B                0.8                     0.13                   0.0178
C                2.2                     0.37                   0.1344
D                1.2                     0.20                   0.0400
E                0.2                     0.03                   0.001
Subtotal                                                        0.2833
Humidity (H)     2.6                     0.65                   0.4225
Error variance                                                  0.3260
Total                                                           1.0318
Loss = k [(variance) + (offset)²]

Loss per piece = $6.00 [1.0318 + (59.5 − 59.0)²] = $7.6908
The reduction is due to three different elements:
1. The mean was relocated from 59.5 to 59.2.
2. The error variance was reduced from 2.989 to 0.326.
3. The tolerances for components A and D were tightened.
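As a rough cross-check of the tolerance design arithmetic, the sketch below rebuilds the response variance of Table 9.65 (tolerance spans divided by 6 and the humidity span divided by 4, per the Cpk = 1 assumption), adds the pooled error variance, and applies the quadratic loss function with the $6.00 coefficient and 0.5 offset used in the example. Small differences from the printed totals come only from rounding of the standard deviations; the code itself is illustrative.

```python
# Response differences between tolerance limits (Table 9.65)
component_diffs = {"A": 1.8, "B": 0.8, "C": 2.2, "D": 1.2, "E": 0.2}
humidity_diff = 2.6
error_variance = 0.326  # pooled error variance from the confirmation runs

# Tolerance limits span 6 sigma (Cpk of 1); the two humidity levels span 4 sigma
variance = sum((d / 6.0) ** 2 for d in component_diffs.values())
variance += (humidity_diff / 4.0) ** 2
variance += error_variance
print(f"response variance = {variance:.4f}")  # approximately 1.03

# Quadratic loss: L = k * (variance + offset^2), with k = $6.00 and offset = 0.5
k, offset = 6.00, 59.5 - 59.0
loss_per_piece = k * (variance + offset ** 2)
print(f"loss per piece  = ${loss_per_piece:.4f}")
print(f"loss per 50,000 = ${50_000 * loss_per_piece:,.0f}")
```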
Humidity
Note that humidity was identified as an important contributor throughout this exam-
ple. The experimenter should investigate the possibility of controlling humidity to
further reduce the loss function. If either the effect of humidity on the design can
be minimized or the humidity can be controlled, the loss function could be greatly
reduced.
Testing
Eighty tests were used in the example for the last and present sections. These tests
were used as follows:
Determine the target levels: 64 tests.
Confirm the choice of targets: 8 tests.
Determine the tolerances to tighten: 0 tests (based on prior knowledge
and simulation).
Confirm the performance with tightened tolerances: 8 tests.
DOE CHECKLIST
Action Complete
Describe in measurable terms how the present situation deviates from what is desired.
Identify the proper people to be involved in the investigation and the leader of the
investigation.
Obtain agreement from those involved on:
Scope of the investigation
Other constraints, such as time or resources
Obtain agreement on the goal of the investigation.
Determine if DOE is appropriate or if other research should be done first.
Use brainstorming to determine what factors may be important and which of them could
interact.
Choose a response and measurement technique that:
Relates to the underlying cause and is not a symptom
Is measurable
Is repeatable
Determine the test procedure to be used.
Determine which of the factors are controllable and which are noise.
Determine the levels to be tested for each factor.
Choose the appropriate experimental design for the control and noise factors.
Obtain final agreement from all involved parties on the:
Goal
Test procedure
Approach
Timing of the work plan
Allocation of roles
Arrange to obtain appropriate parts, machines and testing facilities.
Monitor the testing to assure proper procedures are followed.
Use the appropriate techniques to analyze the data.
Run confirmatory experiments.
Prepare a summary report of the experiment with conclusions and recommendations.

SELECTED BIBLIOGRAPHY
Bowker, A.H. and Lieberman, G.J., Engineering Statistics, Prentice-Hall, Inc., Englewood Cliffs, NJ, 1972.
Box, G.E.P., Report No. 26, Studies in Quality Improvement: Signal-to-Noise Ratios, Performance Criteria and Transformation, The College of Engineering, University of Wisconsin–Madison, 1987.
Box, G.E.P. and Draper, N.R., Empirical Model Building and Response Surfaces, John Wiley & Sons, New York, 1987.
Box, G.E.P., Hunter, W.G., and Hunter, J.S., Statistics for Experimenters, John Wiley & Sons, New York, 1978.
Brown, R.M. and Burke, M.I., Framing of Design of Experiments (DOE), Proceedings from the American Society for Quality Control 42nd Annual Quality Congress, May 1988.
Fleiss, J.L., Statistical Methods for Rates and Proportions, John Wiley & Sons, New York, 1981.
Hicks, C.R., Fundamental Concepts in the Design of Experiments, Holt, Rinehart and Winston, New York, 1982.
Ishikawa, K., Guide to Quality Control, Asian Productivity Organization, Tokyo, 1983.
Kapur, K.C. and Lamberson, L.R., Reliability in Engineering Design, John Wiley & Sons, New York, 1977.
Taguchi, G., System of Experimental Design, Volumes 1 and 2, UNIPUB: Kraus International, White Plains, NY, and American Supplier Institute, Dearborn, MI, 1987.
Taguchi, G. and Konishi, S., Orthogonal Arrays and Linear Graphs: Tools for Quality Engineering, American Supplier Institute, Dearborn, MI, 1987.
Taguchi, G. and Wu, Y., Introduction to Off-Line Quality Control, Central Japan Quality Control Association, Nagoya, Japan, 1979.
Wu, Y. and Moore, W.H., Quality Engineering Product and Process Design Optimization, American Supplier Institute, Dearborn, MI, 1985.
Japanese Industrial Standard, General Tolerancing Rules for Plastics Dimensions, JIS K 7109-1986, Japanese Standards Association, Tokyo, 1986.

10  Miscellaneous Topics: Methodologies

THEORY OF CONSTRAINTS (TOC)
Every organization that wishes to achieve significant improvement with modest
capital investment must address five critical questions:
1. What are the key areas within the organization for competitive improve-
ment?
2. What are the key technologies and techniques that will improve these key
competitive areas at least cost to the organization?
3. How do these improvement and investment opportunities relate (i.e., how
can they be applied in an integrated, supportive and logical manner)?
4. In what sequence should these opportunities be addressed?
5. What are the real financial benefits going to be?
Certainly, many other questions must be dealt with, but these are the five issues
that frequently cause the most difficulty in industrial and business planning today.
So, how can the theory of constraints (TOC) help? Before we answer this, let us
examine the fundamental concepts of TOC.

THE GOAL
The fundamental goal of any for-profit organization is to make money. In fact,
practical experience tells us that the owners/shareholders of such organizations
demand this end result performance. However, is this definition of the goal complete?
The notion of continuous or ongoing improvement has proven to be extremely
powerful in all aspects of life. Therefore, does the application of this notion to our
definition of the goal have any important impact? Most organizations strive to
improve their money-making performance year after year. So, the goal is really to
make more money now and in the future. In this manner, it is impossible to make
short-term decisions that bolster short-term profitability while compromising longer-
term profitability without violating our goal definition.
If the owners are responsible for determining the goal of the organization, what
is the role of management? Clearly, management must develop strategies and tactics
that are appropriate for achieving the goal. Unlike the goal, these strategies and
tactics must be flexible and responsive to changing conditions. The goal of for-profit
organizations has been the same for over 1000 years and shows all indications that
it will continue in good health.

Now, before an organization can develop its own customized strategies and tactics,
it must first address at least one prerequisite. How would the management team of an
organization know whether a particular strategy or tactic was effective (i.e., contributed
to making more money)? Some set of measures would have to be used to gauge the
degree of success. As a matter of fact, the implementation of a few strategic and/or
tactical candidates may not be measurable. These candidates would likely not be
selected for actual implementation. So, what should be the high-level measures that
lead us to understand the impact of our strategic and tactical efforts on our goal?

STRATEGIC MEASURES
For TOC, three measures have become the pillars of the methodology. They are:
1. Throughput (T): The net rate at which the organization generates and
contributes new money, primarily through sales.
2. Investment (I): The money the organization spends on stuff, short-
and long-term assets, which can ultimately be converted into T.
3. Operating Expense (OE): The money the organization spends convert-
ing I into T.
What do we really mean by these strategic measures, starting with T? T implies
that no one within the organization can get a gold star or rest easy until the product
has been sold. Simply designing or producing the product is not enough. Instead,
for anyone's efforts to count toward the generation of new money, products must
not be just designed and produced but sold as well. Therefore, we are not going to
allow elements of the organization to play output performance games. No longer
will we allow elements of the organization to hide behind inventory profits.
With regard to I, TOC combines all the materials (e.g., short-term asset invest-
ments in raw materials, work-in-process materials, and unsold finished goods mate-
rials) captured within the organization, together with the traditional capital assets
(e.g., plant, equipment, land) of the organization. It is easy to visualize how materials
are converted into T, but how can these traditional capital assets of the organization
be converted into T? The 1980s were known as the decade of acquisitions and
takeovers. During this period, many owners sold portions of companies (e.g., plants,
divisions) to someone else in return for money. In their eyes, they converted specic
capital assets into T.

OE seems to possess a similar composition. In OE, we have traditionally thrown
direct labor expenses, indirect labor expenses, overhead expenses, sales expenses,
and general and administrative expenses into one big pot. Why? We have noticed
that over the past 20 or 30 years, the direct correlation between the level of total
operating expense of a typical organization versus the level of business it enjoys has
gradually eroded. In fact, today it is not uncommon to find organizations whose
level of business can fluctuate greatly upwards or downwards, while their true out-
of-pocket operating expense spending hardly budges. Thus, many expenses that we
traditionally view as variable (i.e., proportional to level of business activity) are no
longer proportional to level of business activity.

NET PROFIT, RETURN ON INVESTMENT, AND PRODUCTIVITY
At this point, it is not unreasonable to ask the question: How do T, I, and OE
provide clear indications of the impact of our actions on the goal? Before we can
directly answer that question, we should examine T a little more closely. What did
we mean in our definition of T, new money generated and contributed primarily
through sales? Is T simply equal to sales revenue generated for each time period?
If you were the proprietor of a small business, say, a dry cleaner, you would not
pay yourself a monthly salary. Rather, you would pay yourself in accordance with
what money was left at the end of the month after all your expenses were paid (e.g.,
labor, insurance, raw materials suppliers).
As a small business proprietor, you may encounter periods of time where busi-
ness is so bad that you not only do not make a profit (i.e., no money left over at the
end of the month to pay yourself), but you do not have enough money to pay all
your expenses. What do you do? Well, you probably carefully ration the money you
did generate that month. Which expenses will you pay first? Normally, you are not
excited about the prospect of asking your suppliers to wait for their payments. If
presented with this situation, they may elect to no longer service you. So, you will
probably pay them first.
It is interesting to notice that the types of expenses that you would pay first
are directly proportional to the level of your business. In other words, the quantity of
raw materials and component parts you purchase from your suppliers, together with
the level of subcontracting services you buy from your subcontractors, go up when
business levels go up and vice versa. Expenses that clearly demonstrate variability
proportional to level of business activity can be thought of as the true variable
expenses.
Once you have paid these true expenses from the money your business generated
through sales, you can use the money left to cover your own internal recurring and
fixed operating expenses. Therefore, the money left after all these expenses are
covered is contributed back to the company to cover its OE. Anything left over at
this point is pre-tax net profit. In summary:

T = sales - true variable expenses (TVE)

Net profit (NP) = T - OE
The net profit generated per period in relation to the total investment base of
the company employed is an important relative measure. This relative measure is
frequently thought of as return on investment and can be summarized as follows:

Return on Investment (ROI) = NP/I = (T - OE)/I
Finally, how could we strategically measure the productivity of the organization?
The traditional definition of productivity compares the value of the output generated
to the money spent in the generation process. From the T, I, and OE perspective,
this would lead us to the following conclusion:
Productivity = T/OE

In other words, the productivity of the organization can be described as simply
the T dollars generated for each OE dollar spent.
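For readers who like to see the relationships in one place, here is a minimal Python sketch of the strategic-measure arithmetic just defined (T = sales minus true variable expenses, NP = T minus OE, ROI = NP/I, productivity = T/OE). The dollar figures are invented purely for illustration.

```python
def toc_measures(sales, true_variable_expenses, operating_expense, investment):
    """Return T, NP, ROI, and productivity per the TOC definitions above."""
    t = sales - true_variable_expenses    # Throughput (T)
    net_profit = t - operating_expense    # NP = T - OE
    roi = net_profit / investment         # ROI = NP / I
    productivity = t / operating_expense  # Productivity = T / OE
    return t, net_profit, roi, productivity

# Hypothetical figures, for illustration only
t, np_, roi, prod = toc_measures(sales=1_000_000, true_variable_expenses=400_000,
                                 operating_expense=450_000, investment=2_000_000)
print(f"T = {t:,}  NP = {np_:,}  ROI = {roi:.1%}  T/OE = {prod:.2f}")
```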

MEASUREMENT FOCUS
It has often been said that if you focus on everything, you will end up focusing on
nothing. So in this context, which of the three strategic measures (T, I, or OE) should
an organization focus its improvement efforts on? Before we answer this question,
it may be helpful to determine which measure typical organizations traditionally
have focused on. It is our experience that traditional focus is on OE. Why? Here are
some popular reasons:


OE is perceived to be the easiest to control.
It is thought that OE changes can be made quickly.
OE is where all of our people costs reside.
Productivity improvement justifications usually focus on OE reduction.
Traditional product costs are partially derived by allocating OE to the per
product level.
Now that we have a perspective as to why conventional improvement efforts are
commonly focused on OE, is this the proper focal point for the modern, lean, quick-
response enterprise? In order to answer this question, we should first examine the
desired improvement trends for T, I, and OE from the perspective of long-term
continual improvement:
From this perspective, reducing I and OE while increasing T would have positive
impact on NP and ROI. However, how far can we go in our efforts to reduce I and
OE? From a pure academic perspective, our I and OE improvement efforts are bound
by zero, and in reality, we cannot go below some non-zero threshold without driving
the enterprise out of business. Also, each step we take closer to our I and OE practical
reduction limits, the more energy, effort, resources, and time it usually takes to
generate real financial benefits. In other words, long-term I- and OE-focused reduc-
tion efforts are prone to the laws of diminishing returns.
On the other hand, T appears to have no obstacle in the path of its long-term
continual improvement. Thus, it would appear that over the long term, our improve-
ment efforts should be focused primarily on T. Now, does this imply that I and OE
are unimportant? Of course not. First, the net profit and ROI relationships rely heavily
on I and OE. Therefore, we cannot implement any T improvement opportunity
without first understanding its impact on I, OE, and therefore, net profit and ROI.
Second, obvious opportunities for real and meaningful I and OE reduction should
always be pursued. However, this also implies that our efforts to identify improve-
ment opportunities will not be preoccupied by I and OE. Rather, we will marshal
and focus our efforts on identifying and implementing T improvement opportunities.
Now, what is the implication of shifting continual improvement focus from the
traditional OE perspective to the more modern perspective of T? Another analogy

may aid our investigation. Assume you can go to your local optician and purchase
a pair of OE glasses. When you get back to your company and put these glasses
on, you are able to clearly, easily, and rapidly identify and prioritize opportunities
for OE reduction, and as a result, you amass a list of OE reduction projects (A, B,
C, D, and E). A few days later, you return to that same optician and purchase a pair
of T prescription glasses. Upon returning to your company, you put on your new
T glasses and are able to clearly, easily, and rapidly identify and prioritize opportu-
nities for increasing T. Using your instincts, do you think the list of T projects would
be the same or different from the OE reduction project list? Over the past five years
and thousands of respondents, the answer has unanimously been different. In other
words, our instincts tell us that perspective is critical in guiding continual improve-
ment efforts. Therefore, viewing the organization through the eyes of T may be
the most effective method of driving continual improvement within the modern,
lean, quick-response enterprise.

THROUGHPUT VERSUS COST WORLD
A great deal of attention has been paid to the comparison between cost world and
throughput world decision making. Let us take time to simply examine this rela-
tionship from the perspective of strategic or organization-wide productivity (T/OE).
Generally, the traditional focus on OE reduction generates a consistent degra-
dation in T. However, the rate of OE reduction is greater than the resulting rate of
T decrease, thus leading to a misleading improvement in productivity. Obviously,
this false sense of security and improvement cannot be sustained for a prolonged
period of time. In addition, most organizations are capable of maintaining produc-
tivity levels even in the midst of significant T losses. These are classic symptoms
of what has become known as the death spiral.
On the other hand, the throughput world creates a condition in which T will
typically increase steadily while OE increases at a slower rate or is held constant;
the result is increased organization-wide productivity performance. The attention is
clearly being paid to increasing T, even if OE must be
increased as well. The best financial arbitrator of such improvement options is
incremental ROI.

OBSTACLES TO MOVING INTO THE THROUGHPUT WORLD
When information is passed from the bottom of the organization to the top, it rarely
gets there without significant interpretation and summarization along the way. Some
may even say that information rarely gets to the top of the organization without
distortion. Likewise, as policies flow down the organization from the top to each
lower level, rarely are these policies not interpreted prior to their implementation.
This distortion, some may say, also occurs to policies as they cascade down the
organization. Why does this distortion occur? Usually, individuals and departments
at each level of the organization's pyramid will interpret information flowing up or
policies cascading down to their own best advantage. So, what perspective or frame-
work do they use in determining how to perform this interpretive process? There is

an old saying that goes, Tell me how you will measure me, and I will tell you how
I will behave. Therefore, the manner in which each group within the organization
is measured directly impacts that group's actions.
To see how this phenomenon occurs, let us examine a hypothetical organization
with sales, finance, manufacturing, product development (PD), and quality depart-
ments within that organization:
Typical measurable characteristics are:
Finance
Cash flow management
Return on assets
Return on investment
Sales
Volumes
Product development
Development budgets
Development schedules
Quality
Defect rates
Returns
Scrap rates
Warranty claims
Manufacturing
On-time delivery
What do you notice about these individual department or local measures? First,
many of them exist. Second, they are all different. But there is something else. Let
us say that I manage manufacturing and my on-time delivery is getting worse. Such
local measures as on-time delivery are important to me; they are like my professional
report card. So I conclude that in order for me to improve my local measure, I need
to add production capacity by purchasing and installing a new piece of equipment.
However, when I take my recommendation to the finance people, they veto the
proposal not because they are inherently nasty people, but because the proposal will
make their return on assets local measures worse. So I rethink my strategy and
conclude that I could improve on-time delivery by simply speeding up my production
equipment. However, when I do that, the quality manager reacts negatively, to say
the least, because the defect rate local measure gets worse. It appears that frequently
my efforts to improve those measures for which I am held accountable hurt other
local measures for which others in the organization are held accountable. In other
words, our traditional local measures are frequently in conflict with one another.
In the quick response, lean, modern organization, can such a fundamental conflict
be allowed to continue? Most believe it cannot. Can we overcome this situation?
The simplest solution would be to measure all departments and functions with the
same measures. What measures come to mind first? How about T, I, and OE?
Of these fundamental measures, which should take precedence within the quick-
response, lean, modern enterprise? We concluded earlier that T provided the best

perspective regarding continual improvement. Therefore, we are not suggesting that
we treat each department within the enterprise as an enterprise unto itself focusing
on its own T. Instead, we are suggesting that each local area be measured on its
effort and how those efforts contribute directly to improving the global T of the
entire enterprise.

THE FOUNDATION ELEMENTS OF TOC
When we view organizational opportunities for improvement through the eyes of T,
we are really asking ourselves a simple question, What limits our ability to improve
the T of our organization? This simple question leads quickly to the conclusion
that the typical organization is nothing more than a system composed of tightly
interlinked or interdependent subsystem components (e.g., departments and func-
tions). A common analogy describing a system is that of a chain. Each link in the
chain represents individual organization departments and/or functions. Each depart-
ment is dependent upon the succeeding and preceding departments or links. The
overall performance of the chain is usually described in terms of tensile strength.
Therefore, when the question, What limits our ability to improve the overall per-
formance of the chain? is asked, the answer becomes obvious: its weakest link.
The concept of an organization as a collection of interdependent subsystem
components whose improved performance is based upon its single weakest link is
fundamental to systems thinking and is critical to our effort to view the organization
through the eyes of T. TOC calls the organization's weakest link its constraint. The
organization's constraint is that element of the organization that limits its ability to
improve performance relative to T.
Therefore, the two fundamental concepts of TOC are:
1. View the organization through the eyes of T.
2. Develop a common performance measurement system derived from T.
These two elements are related to the design for six sigma (DFSS) methodology
because:
1. Design engineering knows the critical features of the products that are
processed through the specific manufacturing process A.
2. Manufacturing engineering knows the critical manufacturing process steps
performed by the specific manufacturing process A.
3. Integrating the above two insights (the essence of DFSS) frequently allows
the joint design/manufacturing engineering team to offload a few non-
critical process steps now performed by process A to some other non-
constraint manufacturing process such as process B.

THE THEORY OF NON-CONSTRAINTS
Most managers in their everyday life quantify their daily projects in the following
prioritization scheme:

Tier I: most important projects to a particular department
Tier II: moderately important projects taking priority after Tier I projects
Tier III: low-priority projects that are scheduled but rarely addressed
First, let us examine how these lists would be constructed from the traditional
management perspective. What type of projects would make it into Tier I for man-
ufacturing process (MP) #1 department? Obviously, process improvement projects
focused primarily, if not exclusively, at MP #1 department. How about MP #3
department? The same. And MP #5, #6, and so on? The same. By looking at the
organization's various lists of Tier I process improvement projects, can we determine
where the leverage point of the organization is located? Can we determine what the
organization's key improvement thrust is? No! In fact many, many different issues
are key to the organization. It appears that the organization is focusing on everything.
Now, from the T perspective, what type of projects would make it into Tier I
for MP #1 department? Obviously, process improvement projects focused primarily,
if not exclusively, at MP #4 department. How about MP #3 department? The empha-
sis is also focused primarily on process improvement projects at MP #4 department.
And for MP #5, #6, and so on? The same. By looking at the organization's various
lists of Tier I process improvement projects, can we determine where the leverage
point of the organization is located? Can we determine what the organization's key
improvement thrust is? Yes! In fact, it appears that the organization has been able
to synchronize the process improvement efforts of all of its non-constraint resources
from the T perspective.
If any logistical system's T performance is limited by its constraint resource and
there can only be one constraint resource within a system at a single point in time,
then the rest of the organization's resources must be non-constraints. Therefore, in
terms of sheer numbers, non-constraints dominate the organization. From this per-
spective coupled with the Tier I process improvement example above, we have
discovered that we may have made a mistake naming this methodology the theory
of constraints. In reality TOC is not primarily about the poor overworked people in
the constraint department. TOC is not primarily about whether the constraint people
can invent a 25th hour in the day or an eighth day in the week. Rather, one of the
most powerful elements of TOC is the synchronization of the organization's non-
constraints so as to improve the T performance of the entire system.

THE FIVE-STEP FRAMEWORK OF TOC
A great deal of ground has been covered in this section. Let us summarize the lessons
derived in this section by listing the five-step implementation process known as The
Five-Step Framework of the Theory of Constraints (TOC):
1. Identify the organization's constraint.
2. Develop plans to exploit the organization's constraint (e.g., squeeze out
as much T performance improvement as possible from the existing con-
straint resource).
3. Subordinate the actions of non-constraint resources to the implementation
of step #2 (e.g., ensure that all non-constraint functions support and are
synchronized in the implementation efforts of #2).
4. Elevate the organization's constraint (e.g., augment the constraint).
5. Once the constraint has been broken, go back to Step #1, but beware of
organizational inertia (e.g., be sure to clearly communicate to the entire
organization that it is time to search for the next constraint).
This common-sense process of improvement is neither complex to understand
nor difficult to implement. In fact, practical implementation experience tells us that
TOC logically integrates many of the traditional tools in the improvement toolbox
in an effort to improve the T performance of the entire organization. Such tools as
8D, design of experiments, and work teams now become more effective in their
application when used through the eyes of T.
Finally, the power of TOC in the DFSS process is the understanding of constraint
and the action that must be taken to remove the constraint. In essence, TOC is a
viable methodology to eliminate the hidden factory from a design perspective.
SELECTED BIBLIOGRAPHY
Goldratt, E., Theory of Constraints, North River Press, Inc., Great Barrington, MA, 1990.
Goldratt, E. and Cox, J., The Goal, 2nd ed., North River Press, Inc., Great Barrington, MA,
1992.
Goldratt, E., Satellite Program: Facilitators Handbook. North River Press, Inc., Great Bar-
rington, MA, 1999.
Goldratt, E. Late Night Discussions on the Theory of Constraints, North River Press, Inc.,
Great Barrington, MA., 1998.
DESIGN REVIEW
A typical design review is a process, and it must be (a) multi-phased and (b) involved
in the different design phases. In fact, the reviews should also extend to the operations
and support phases. It is of paramount importance in any given design review to
consider the feedback of customer information because quite often it reveals factors
of concern that may have been forgotten or considered too lightly. If design reviews
are not taken seriously in the sequential design phases, warranty costs can well
exceed any early budgetary considerations. A very typical sequential design review
is given in the SAE R&M Guideline (1999, p. 16) shown in Table 10.1.
Even though Table 10.1 identifies the core objectives of design review, there is
more to it than just a cursory outline of requirements (Stamatis, 2002). For example:
System Requirements Review: This is the first review with the customer
where the customer specifies the level of cost-effectiveness that the manu-
facturer is expected to meet. It is at this meeting(s) that customer and
manufacturer come to some agreement not only on the reliability and
maintainability (R&M) characteristics but also the adjunct attributes of
availability, dependability, and capability.
System Initial Design Review: This meeting(s) provides the final definition
of the system functional requirements, firming up what was discussed
earlier. The allocation of R&M values accompanied by the attributes that
support these is usually accomplished at this review.
Preliminary Design Review: This is where the configuration items are
reviewed and the complexities and technologies are discussed. The objects
here are to (1) establish design adequacy and determine risks involved with
the proposed design methods and techniques, (2) harmonize the proposed
design with the specifications, and (3) ensure the compatibility of the
physical and functional characteristics with each other and with the oper-
ating and maintenance environments. Resolution of the entire system could
be accomplished at this review where there is some finalization on at least
first intentions. This would include initial sketches and drawings, mock-
ups, simulation models, and prototypes.
Interim Design Review: The formality of meetings continues, with the manufacturer
reviewing progress with the customer. Data feedback on all testing and
analysis starts to get more intense at this stage. The customer and supplier
review the status of the milestones to ensure being on target.
Critical Design Review: By this stage the design should be firm enough to
lock in the design parameters and make preparations for the qualification
testing. The customer by all means should be present at this design review
to assure that all particulars of the contract are agreed upon.
Formal Qualification Review: This is the final review to precede full-scale
production. The qualification models were fabricated and assembled with
production tooling, but at this stage customer and manufacturer should be
ready for the production line. At this meeting the review team confirms that
the total package is as agreed upon and that it meets all the terms of the
contract.
TABLE 10.1
Design Review Objectives

Design Phase                  Review Objectives
1. Concept                    Concept review: Focus on feasibility of proposed design approach
2. Development and design     Preliminary design review: Validate the capability of the evolving
                              design to meet all technical requirements
3. Build and install          Build: Address issues resulting from machine build and runoff testing
                              In-plant installation: Conduct failure investigation of problem areas
                              for continuous improvement
In-house Reviews: In the event problems occur after production starts, addi-
tional reviews may be necessary. Most R&M texts do not discuss this type
of review because theoretically the design is frozen and the only prob-
lems, if there are problems, are production types. But in the real world
anomalies do occur, and they have to be addressed. The design review
concept should continue until the customer is totally convinced that
the product is of high quality.
Many questions arise in the course of design review meetings. Some organiza-
tions have a checklist they use to ensure that nothing falls through the cracks and
that they cover all the potential problems that may occur. A generic checklist, which
can be used at design review meetings or by the designer to ensure the integrity of
the design, is shown as Table 10.2. For obvious reasons, this list is not an exhaustive
list to cover all situations. Rather, it is a list that may be modified and act as a
catalyst for further discussion.
Here, we must give the reader a very strong caution. Concurrent or simultaneous
engineering is not a design review function, but it is closely related to it. Concurrent
engineering is the process by which all disciplines that design, manufacture, inspect,
sell, use, and maintain the product work together to develop and produce it. Mile-
stones are established where the various disciplines accomplish specific tasks simul-
taneously before proceeding on to the next task. Traditionally, each step in the design
process occurred one step at a time and extended over a relatively long duration. In
concurrent engineering, several functions, such as manufacturing engineering, work with
the quality engineer and design to set up their responsibilities while the designer is
still working with the design. A comparison between traditional and concurrent
engineering is shown in Table 10.3.
FAILURE MODE AND EFFECT ANALYSIS (FMEA)
Even though the FMEA is discussed in Chapter 6, it is important to discuss here
the relationship between FMEA and design review. FMEA, as we already have
said, is a methodology that helps in identifying potential and known failure modes
and then arriving at a probability of occurrence and detection. In addition, a good
FMEA should recognize interfacing failures in components, subsystems, and sys-
tems themselves. Typical interfacing problems may occur in the form of proximity,
information, material, and energy transfer. In all cases, the ultimate goal of any
FMEA (Concept/System, Design, Process, or Machinery) is to reduce as many
failure modes as possible, or to reduce the probability of the failure modes as
much as possible. The process of doing an FMEA is a down-up approach. For a
very detailed explanation, see Stamatis (1995).
An FMEA in design review should not be confused with a Fault Tree Analysis
(FTA). Whereas the FMEA is a down-up approach, the FTA is a top-down approach
to failure analysis starting with the undesirable top event and progressively deter-
mining all the ways the failure may happen. The analysis starts early in the concept
phase and is continually monitored through the subsequent stages. A comparison
between FMEA and FTA is given below:
TABLE 10.2
Design Review Checklist
Do specified components meet their reliability requirements?
Can off-the-shelf items be used for particular functions?
Does the design meet functional requirements?
Were standard components and assemblies considered and used where possible?
Were all environmental impacts considered?
Did all components pass the environmental testing? Were corrections made where necessary?
Have critical characteristics been considered?
Have failure histories been investigated?
Did each component and material meet its requirements under environmental extremes of
the specification?
Were there enough data for reliability calculations?
Was the complete unit tested?
Were the weak links in the design corrected?
Does demonstrated reliability meet required specification or is redesign indicated?
Are predicted and allocated reliabilities compatible? Are trade-offs necessary?
Were manufacturing and quality assurance (QA) considered in the design?
Is redundancy necessary to meet reliability requirements?
Can the environment be changed or be protected? Is heating, cooling, shock mounting,
shielding, or better insulation required?
Were all failure modes corrected to prevent recurrence? Has storage capability been studied?
Should specifications be written to ensure 100% test and inspection?
Are there suitable manufacturing and QA procedures to ensure good quality?
Is the item designed as simply as possible?
Have all human factors been considered?
Have sharp corners been considered? Have all potential stress risers been eliminated?
Can other cognizant disciplines help in writing specifications or offer improvements in the
design?
Can suppliers provide reliability values for their components? If so, are the values compatible
with the overall system?
Has enough testing been performed to validate the required reliability for designated com-
ponents?
Will early testing or screening help eliminate infant mortality type failures?
Have maintainability requirements been considered?
TABLE 10.3
Comparison Between Traditional and Concurrent Engineering

Function                        Traditional Engineering                     Concurrent Engineering
Organization                    Engineering is unique and separate          Engineering is part of multifunctional
                                from other departments                      team with team objectives
Timing of outputs and inputs    Each department waits for output from       Tasks progress simultaneously working
                                previous department                         with engineering
The steps of the FMEA are:
1. Define the team.
2. Define the system, design, process, or machinery (block diagram, process
flow diagram).
3. Construct the P-diagram.
4. Define the function.
5. Identify the failure(s).
6. Complete the tabular form.
7. Analyze (evaluate) the completed FMEA.
8. Recommend any corrections for design changes.
9. Document the analysis and its results.
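As an illustration of the tabular form mentioned in step 6, the sketch below represents a single FMEA line item and computes a risk priority number as the product of severity, occurrence, and detection ratings. The 1 to 10 scales, the RPN product, and the sample entry reflect conventional FMEA practice rather than anything taken from this example, so treat the sketch as illustrative only.

```python
from dataclasses import dataclass

@dataclass
class FmeaEntry:
    """One hypothetical row of an FMEA tabular form."""
    function: str
    failure_mode: str
    effect: str
    cause: str
    severity: int    # 1-10 rating
    occurrence: int  # 1-10 rating
    detection: int   # 1-10 rating

    def rpn(self) -> int:
        # Conventional risk priority number: severity x occurrence x detection
        return self.severity * self.occurrence * self.detection

entry = FmeaEntry(function="Seal the housing", failure_mode="Seal leaks",
                  effect="Fluid loss", cause="Improper seating",
                  severity=7, occurrence=4, detection=5)
print(entry.rpn())  # 140; higher values are addressed first
```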
To help in the generation of FTA, the Rome Air Development Center (RADC)
has formulated a seven-step approach. These steps obviously are very generic, and
each organization that is using them must modify them to fit their purpose. The
seven steps are:
1. Define the system, ground rules, and any assumptions to be used in the
analysis.
2. Develop a simple block diagram of the system showing inputs, outputs,
and interfaces.
3. Define the top event (ultimate failure effect or undesirable event) of
interest.
4. Construct the fault tree for the top event using the rules of formal logic.
5. Analyze the completed fault tree.
6. Recommend any corrections for design changes.
7. Document the analysis and its results.
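To give a feel for what constructing and analyzing a fault tree "using the rules of formal logic" can look like, here is a minimal sketch that evaluates a two-level tree with OR and AND gates under an independence assumption. The events, probabilities, and gate layout are invented for illustration and are not part of the RADC procedure.

```python
# Basic events with assumed (purely illustrative) probabilities
p_pump_fails = 0.01
p_valve_stuck = 0.02
p_sensor_fails = 0.05
p_operator_misses_alarm = 0.10

# OR gate: either mechanical fault causes loss of flow
p_loss_of_flow = 1 - (1 - p_pump_fails) * (1 - p_valve_stuck)

# AND gate: the alarm path fails only if both the sensor and the operator fail
p_alarm_path_fails = p_sensor_fails * p_operator_misses_alarm

# Top event: loss of flow AND failure of the alarm path (independence assumed)
p_top_event = p_loss_of_flow * p_alarm_path_fails
print(f"P(top event) = {p_top_event:.6f}")
```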
FTA is one of the few tools that can depict the interaction of many factors and
manage to consider the event that would trigger the failure or undesirable event.
An FMEA starts with the origin of a failure mode and then proceeds to find the
causes, probabilities, and corrective actions. An FTA starts with an accident or
undesirable failure event, determines the causes, then the origin of the causes, then
what can be accomplished to avoid the failure.

An FMEA considers all potential failure modes that can be produced through mental
generation. An FTA studies only negative outcomes that warrant further analysis.

An FMEA uses an inductive approach. The FTA uses a deductive approach.

Less engineering is needed for an FMEA. Skilled personnel are required for an FTA.

There is limited safety assessment in an FMEA. The FTA manages risk assessment
and safety concerns.

The failure paths are not delineated in an FMEA. The FTA provides a good assessment
of the failure paths, and the control points are well enhanced.

An FMEA looks at each failure mode separately. An FTA demonstrates a more
selective method of showing the relationship among events that interact with one
another.
Designers should consider the safety aspects of their designs early in the concept
stage so that necessary changes can be made before failures have a chance of
occurring.
REFERENCES
Anon. Reliability and maintainability guideline for manufacturing machinery and equipment.
M-110.2. 2nd ed. Society of Automotive Engineers, Warrendale, PA. and National
Center for Manufacturing Sciences, Ann Arbor, MI, 1999.
Stamatis, D.H. Guidelines for Six Sigma design review, Part 1, Quality Digest, pp. 27–31;
Part 2, pp. 26–30, April and May 2002.
Stamatis, D.H. Failure Mode and Effect Analysis: From Theory to Execution. Quality Press,
Milwaukee, WI, 1995.
SELECTED BIBLIOGRAPHY
Hu, J. et al., Role of Failure Mechanism Identification in Accelerated Testing, Journal of the
IES, July/Aug. 1993, 39–45.
Kececioglu, D., Reliability and Life Testing Handbook, vols. 1 and 2, Prentice-Hall, Upper
Saddle River, NJ, 1993.
King, J., New Directions for Reliability, Quality Engineering, 1, 79–89, 1988.
O'Connor, P., Practical Reliability Engineering, John Wiley & Sons, New York, 1991.
Phadke, M.S., Quality Engineering Using Robust Design, Prentice-Hall, Upper Saddle River,
NJ, 1989.
TRADE-OFF STUDIES
Trade-off studies are designed for balancing both business and technical issues and
optimizing the product for the customer, whether internal or external to the organi-
zation. Trade-off studies:
Are a structured, analytical method for objectively identifying, defining,
and evaluating alternatives
Are designed for analytically presenting, evaluating, and weighting deci-
sion information based upon program targets, objectives, goals, and tech-
nical requirements
Ensure that the selected alternative is the best at meeting the program
objectives, goals and technical requirements
We conduct them to:
Promote an objective evaluation and minimize subjective selection
Force requirements to drive the evaluation of the alternative
Ensure that sufficient information for making a decision is provided
Demonstrate that the alternatives satisfy the requirements (as they are
understood at the time of the evaluation)
Document the evaluation
Of course, the question quite often is, How do I know if I need to conduct a
trade-off study? The answer depends on the following questions:
Does the decision require input and concurrence from several organizations?
Does the decision require balancing inputs that may conflict and/or are
inversely related?
Is there a choice between several viable/acceptable alternatives?
Is a quick, comprehensive, and defensible decision needed?
If the answer is yes to one or more of these questions, a trade-off study may be
the best approach for selecting the optimum solution. To conduct the trade-off study
appropriately, there are some preliminary requirements. They are:
Prior to declaring or encountering a critical design freeze
When balancing major systems or their components' functional performance
When considering several design alternatives at any level (e.g., systems,
subsystems, and components and so on).
When conflict exists among targets, objectives, and requirements (e.g.,
maximizing one has negative effects on the others)
When establishing dominant attributes or prioritizing customer requirements
As with any other methodology and tool, with a trade-off study we expect to
have some kind of deliverables at the end of the analysis. Typical deliverables are:
An alternative selected largely on the basis of fact, which is acceptable
to and defensible by all the stakeholders
Complete documentation (Evidence Book) outlining how and why the
decision was made
A risk list identifying areas of concern for all the alternatives investigated
Sensitivity analysis showing the stability of the selected alternative
HOW TO CONDUCT A TRADE-OFF STUDY: THE PROCESS
Several steps are required when conducting a complete trade-off study. Each step
aids in ensuring that the end decision best meets the stated customer requirements.
Each step is discussed in detail below. A checklist is also provided to assist you in
conducting your trade-off study. The steps are:
1. Construct the Preliminary Matrix
The trade-off study matrix consists of two major components, the alternative list
and the category list:
The alternative list: This list is simply a listing of each alternative being
considered. The alternatives are listed across the top of the trade-off study
matrix, with one alternative per column.
The category list: The category list consists of musts and wants arranged by
assessment items. Each assessment item is broken down into various mea-
surable discriminators.
An example:
Attribute category: safety
Assessment item: frontal impact
Discriminator: 5 mph bumpers; bumper material; crumple space
The first step in the trade-off study process is creating a preliminary matrix. You
must identify both the alternatives being examined and the list of assessment items and
discriminators. Draw assessment items and discriminators from program requirements,
corporate data, QFD studies, CAE analysis, etc. The preliminary matrix acts as a
discussion catalyst at the first team meeting. After assembling the assessment list, sort
it into those that are imperatives (or musts) and those that are desirables (or wants).
2. Select and Assemble the Cross-Functional Team
The goal of this step is to ensure that affected parties are adequately represented. It
is better for a group to decline participation than to be overlooked in the team
assembly process. The team is composed of representatives from each group
impacted by the decision being made (it must be cross-functional and multidisci-
plinary). Team size varies depending on the subject and scope of the project (initial
meetings should include no more than nine to twelve people). Team membership is
based on contribution potential not approval needs. Approval takes place during the
presentation of results at the end of the process.
3. Assign Team Members' Roles and Responsibilities
Although all team members will play a role in the trade-off study process, three key
positions must be filled to ensure process success, as follows:
Team champion
Is usually a program manager, project manager, or someone empowered
by those individuals to carry out the selection of an alternative
Is the individual who must design, build, or approve the selected alternative
Supports and participates in the process, and accepts (and backs) the
team's consensus decision
Provides the resources to accomplish the task at hand
Lead facilitator
Guards against duplicating efforts, provides information to individuals be-
tween meetings, generally coordinates the entire process
Is an empowered member of the team whose role is to coordinate the work
of the ranking process
Resolves any overlaps in evaluation that occur during the ranking process
Coordinates the open issues identified by the team
Compiles the ranking process data/results into the evidence book
Process coordinator
Ensures that all affected parties are represented and that each team mem-
ber understands the process
Is responsible for aiding the team leader in assembling the team
Schedules and runs all core team meetings
Provides and explains the trade-off study methodology and supporting tools
4. Assign Ranking Teams To Evaluate the Alternatives
Ranking teams are designed to evaluate each alternative within a particular category
or assessment item. Individuals are assigned to these teams based on their particular
specialty. For example, a transmission design engineer would be assigned to the task
of assessing (or ranking) an alternative's ability to handle a particular level of input
torque; an individual from marketing would determine an alternative's potential
volumes; a financial expert may be assigned to develop marginal costs, and so on.
After the core team modifies and approves the preliminary matrix, determine
the personnel necessary for conducting the ranking within each category. Assign
team members to ranking teams based on their specialty or to the category that
affects their area/product. Due to the critical aspect of the ranking teams, team
members require specific direction on:
1. Ranking/evaluation methodologies
2. Documentation format and content
3. Reporting their findings, conclusions, and issues
Identification of Ranking Methods
The ranking team's primary function is ranking each alternative's ability to achieve
the discriminator (for example, the alternative providing the best crash test gives the
highest rank). Measurable discriminators are a must, and the ranking team must
devise a method for determining each alternative's ability to meet that discriminator
(for example, is mileage, bumper material, or crumple space the discriminator with
the best measurability and effectiveness for our expected result). This ranking is
accomplished as follows:
As each ranking team delves into the ranking process, the team members may
expand or contract the discriminator list to better evaluate the alternatives'
performance for a given assessment item.
Each team selects the ranking method it feels is appropriate and is defensible
to the core team.
Whenever possible, the rankings should be based on actual test results, CAE
analysis, or numerical analysis (i.e., facts or directly observable data).
Expert opinion and subjective rankings should be used as a last resort when
time, cost, knowledge, or value limit the reliance on tests, rigorous analysis,
or computer simulation.
An alternative that fails in achieving any assessment item defined as a project
must is eliminated from the evaluation. The ranking team informs the lead
facilitator who stops further evaluation of that alternative by all other
ranking teams (this is done to limit resource expenditure).
When ranking the alternatives, the one (or more) alternatives best satisfying
a particular discriminator get the highest numerical rank for that discrimi-
nator. This is usually done by counting the number of alternatives and using
that value as the highest rank (i.e. with four alternatives, the best would
receive a rank of four, the next a three, and so on). Alternatives do not have
to be forced ranked, nor does one have to receive the top score. If all or
some of the alternatives have equal ability to satisfy the discriminator, they
would receive equal rank. If that ability is high, they would all receive a
top rank; if that ability is poor, they all may receive a low rank.
Development of Standardized Documentation
A secondary result of a trade-off study is the evidence book documenting the entire
decision-making process. Although secondary, this activity needs to be taken very
seriously in order to defend the core team's decision to those both inside and outside
the group. Documentation being produced by the ranking teams must be consistent
in format and content to ensure ease in assembling the evidence book. To ensure this:
Provide each ranking team a standard format for reporting their findings.
Make sure that the format includes, at a minimum:
The ranking results
The ranking method
Advantages and disadvantages of each alternative (as defined by the rank-
ing teams)
Any risks associated with the selection of each alternative
Any issues identified during the ranking process
Include all supporting documentation generated during the ranking process in
an appendix, attachment, or separately defined section.
Timing for Report out of Selection Process
Once the members of a ranking team complete the ranking process, they forward
their completed documentation to the lead facilitator to place in the evidence book.
Each team is then responsible for preparing a presentation of its findings to the entire
core team. The presentation includes a summary of the rankings, methods, issues,
and risks associated with the selected alternative. This presentation is then made to
the entire core team at the final core meeting.
5. Weight the Various Categories
While the ranking teams proceed with the ranking process, the process coordinator
and team leader pull together the necessary or key personnel to assign weightings
to the various assessment items.
The weighting rule: Assign weightings according to the assessment item's impor-
tance or impact on satisfying the customer and company needs/requirements and
ensuring the optimum decision (for this point in the program).
With key personnel developing the weightings in parallel to the ranking process,
we are able to:
Assign the weightings at a higher corporate level assuring better alignment
with corporate vision.
Work more quickly and efficiently toward balancing the weightings.
6. Compile the Evidence Book
Now that the weightings have been assigned, the next step is for the lead facilitator
to compile the evidence book. There are several steps to this process:
Organize the ranking teams' documentation in category sequence as it appears
on the trade-off study matrix.
Calculate each alternative's score within the assessment item by determining
an assessment item average. Simply sum an alternative's rank for the various
discriminators and divide by the number of discriminators (a short sketch of
this scoring arithmetic follows step 6 below).
Continue calculating alternatives' scores as additional ranking teams report out.
Once each ranking team has reported out, the lead facilitator develops a summary
of the evidence book for distribution during the nal presentation. This summary
contains:
The completed trade-off study matrix, including tallied alternative scores
A section outlining the identified advantages and disadvantages of each alternative
Identified risks associated with each alternative
7. Present the Results
When each ranking team has reported its results to the lead facilitator, the process
coordinator reassembles the core team for a presentation of the results. Copies of
the summary document are distributed to each core team member three to five days
prior to the meeting. Each ranking team then presents its findings, methods, and
issues to the entire team.
Sensitivity Analysis: The purpose of the sensitivity analysis is to determine the
robustness of the selected alternative. The process allows the group to ask various
"what if" questions regarding a particular ranking or weighting and receive an
immediate answer, such as:
What if the ranking of that assessment category were inverted, would the
alternative still be chosen?
What if the weighting of that assessment item were lowered, would it change
the selection?
It is recommended that a laptop computer, loaded with the trade-off study matrix,
be brought to the presentation by the lead facilitator. Modifications can be made to
the rankings within an assessment item or weightings on a category to see how that
would affect the overall decision. This will identify how sensitive the decision is to
certain changes and give the group an immediate feel for the selected alternative's
robustness.
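A minimal Python sketch of such a what-if check follows; the assessment items, weights, and ranks are hypothetical, and the point is only to show how changing one weighting can flip the selected alternative:

    # Hypothetical trade-off study matrix: item weightings and average ranks
    weights = {"cost": 0.40, "durability": 0.35, "serviceability": 0.25}
    ranks = {
        "Design A": {"cost": 4, "durability": 2, "serviceability": 3},
        "Design B": {"cost": 2, "durability": 4, "serviceability": 3},
    }

    def total_score(weights, alt_ranks):
        return sum(weights[item] * alt_ranks[item] for item in weights)

    def winner(weights, ranks):
        return max(ranks, key=lambda alt: total_score(weights, ranks[alt]))

    print("baseline winner:", winner(weights, ranks))

    # "What if" the weighting of cost were lowered and rebalanced?
    what_if = {"cost": 0.20, "durability": 0.45, "serviceability": 0.35}
    print("winner with cost de-emphasized:", winner(what_if, ranks))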
Here is a typical trade-off study process checklist:
1. Constructing the preliminary matrix
Consideration has been given to all attribute categories.
All discriminators are measurable now.
Assessment items considered musts are truly musts.
Assessment items considered wants are only wants.
2. Selection and assembly of the cross-functional team
All affected activities have been invited to participate.
All participants are empowered by their management.
3. Assigning team members' roles and responsibilities
Team champion will design, build, or approve the selection.
Lead facilitator is in a position to coordinate the ranking teams.
Process coordinator is willing to schedule and run meetings.
4. Assigning team members to ranking teams
Ranking teams have the right specialists to accomplish their task.
Standardized report format has been established and agreed upon.
Acceptable ranking methodologies have been agreed upon.
Freedom in expanding/contracting the discriminator list has been con-
veyed.
Timing and report out procedures are understood by each team.
5. Weightings of the various categories
Identification of key personnel is complete.
Weightings are being conducted in parallel to the ranking teams' evalua-
tion.
Assigned weightings align with customer and corporate wants.
6. Compilation of evidence book
Ranking team documentation organized to match the trade-off study matrix
Alternative scores calculated (weight × rank = score)
Trade-off study summary completed for final presentation
7. Presentation of results
Entire core team reassembled
Laptop with trade-off study matrix available for sensitivity analysis
Consensus decision reached as to which alternative to pursue
GLOSSARY OF TERMS
Alternative rank: Defines how well an alternative compares to other alter-
natives in achieving a particular assessment item or discriminator.
Assessment item: A particular attribute of a given category.
Categories: Major classes of assessment items.
Discriminator: A specific portion of an attribute.
Evidence book: The book containing the complete set of documentation
created by the core team, ranking teams, and weighting teams during their
evaluation of the alternatives.
Musts and wants: Each assessment item is grouped according to whether
it is a must or a want.
Musts are defined as those items that an alternative has to meet in order for
it to garner further consideration. When an alternative does not meet a
must, it is dropped from the study, unless it can be brought in line with
the must or the must is modified.
Wants are those items that are needed to reach maximum customer and
corporate satisfaction, but an alternative would not be discarded for
failing to meet them. These wants are weighted and determine which al-
ternative gets selected from those that meet all of the musts.
Ranking team: The sub-core group/team that is assembled to evaluate (and
rank) the various alternatives within a particular assessment item. This team
also identifies risks, advantages, disadvantages, and issues encountered
during the alternative evaluation.
Trade-off study matrix: A tabular chart used to list alternatives being
evaluated (across the top) and assessment items being used to differentiate
the alternatives (down the left-hand side). Alternative rankings, assessment
item weightings, and alternatives' scores are also tallied on the matrix. (A
matrix can easily be created using any spreadsheet program.)
Weightings: A value given to an assessment item, categorized as a want,
to show its relative importance to the other assessment items. Techniques
used to develop weightings include pair-wise comparison, 100% weightings
(the sum of the weightings adds to 100), and many others.
COST OF QUALITY
The purpose of quality costing is to establish the method for collecting, main-
taining, and using quality cost data so that they become the conscience (the driving
force) of the organization for continual improvement. Once this conscience has been
realized, then a real effort is put in place in the area of quality improvement
opportunity (QIO) covering the quality audit, product, procedures, process, and the overall
system.
This is based on the notion that quality is defined as satisfying the customer's
needs. How does the cost of quality/quality improvement opportunities satisfy the
customer's needs through the manufacturing organization? See Figures 10.1 and 10.2.
COST MONITORING SYSTEM
An organization is expected to use methods that accurately monitor all cost elements.
The cost of doing business (labor, materials, overhead) cannot be effectively con-
trolled without a systematic method that effectively monitors how costs are incurred.
Specifically, the organization should develop costs relative to quality that will serve
as a guide for measuring plant efficiency.
Standard Cost
The organization should develop a method that will allow for efficiency in labor
(direct and indirect); identification of material content (parts and components);
appropriate measures of overhead; and documentation/development, review, and
revision of these standards.
Actual Costs
A supplier should maintain an accurate system to record, monitor, and control labor,
material and overhead costs. For example:
Compute labor efficiency reports for a period.
Establish a tolerance limit for efficiency.
Ensure that reports are received by management.
FIGURE 10.1 Quality cost: The quality control system.
[Diagram elements: suppliers, receiving QC, production process, process QC, final product QC, and output; accept/reject decisions leading to scrap or rework; quality management and customer reports; inputs include contracts, standards, drawings, and technology costs.]
Generate monthly summary reports.
Ensure that raw materials are ordered in economical quantities to reduce
and/or control the cost of material.
Develop a system that will track materials on hand.
Budget for overtime.
Charge overtime premiums to the applicable departments as overhead.
Charge straight time portions as direct labor.
FIGURE 10.2 Costs.
[Diagram elements: cost expectations comprising a cost monitoring system (standard cost, actual cost, variance, cost estimates) and cost reduction efforts (competitive product development, continual cost improvement efforts), all feeding continual improvement.]
Perform surveys to accurately analyze and distribute indirect inventory
overhead and service department costs.
Provide adequate records for support.
Variance
A supplier should be able to identify and control cost variances. For example:
Cost comparison report: Reviewed by the responsible departments in pre-
defined cycles (weekly, monthly, etc.).
Tolerance limit: A system to address the variance and provide for appropriate
action.
Plant report: A regular reporting of costs and variance published and reviewed
by management.
Level of comparison: A specific level appropriate for the commodity being
produced (part number, department, cost center, etc.).
Cost Reduction Efforts
The organization and the supplier should cooperate fully with each other in an effort
to reduce costs. Continuous efforts to reduce costs and therefore selling price are
essential for the organization and the supplier, if both are to remain competitive in
the marketplace. For a supplier to reduce costs, the efforts should be directed in
the following areas:
1. Continual improvement program
2. Competitive product development and target cost achievement
3. Cost estimates
CONCEPTS OF QUALITY COSTS
All the gurus of quality have identified cost as an essential part of overall quality
improvement. In summary form, let us look at some of them:
J. Juran
Among other concerns, Juran emphasizes that
1. Quality is an issue of cost.
2. To control cost, management must be equipped with experience and training.
3. Quality cost must become a part of the strategic business plan of the
organization.
W.E. Deming
Perhaps one of the most prolific gurus in quality issues of the 20th century, Deming
spent a lifetime explaining the issue of cost as one of the driving forces in a dynamic
organization, emphasizing the need to end the practice of awarding business
on the basis of price tag, to eliminate numerical goals, and to eliminate work standards.
P. Crosby
Crosby was by far the best salesperson of quality. He was the first to associate the
bottom line with the effect of costs. He made a point of differentiating the price of
conformance from the price of nonconformance, quantifying the waste of poor quality as a
percent of sales, pushing for the concept of zero defects, and stressing the attributes of
prevention as opposed to appraisal.
G. Taguchi
Taguchi's contribution to the cost of quality lies in tolerance design, cost account-
ability in specification setting, and the loss function.
DEFINITION OF QUALITY COMPONENTS
Depending on whom you listen to, approximately 6 to 15% of all quality problems
are related to special causes (labor). The other 85 to 94% arise from faults in the
company's system. This larger percentage will continue until management changes
the system. Both special and system causes are contributors to the cost of quality (CQ).
Two questions follow:
What is really meant by quality cost?
What are the steps of quantifying the costs?
1. There are two major categories of quality costs.
a. Inputs
b. Outputs
The inputs are made up of the appraisal costs, which are the costs incurred
(first time through) to discover the condition of the product. These
include:
Incoming material inspection
Inspection and test
Maintaining accuracy of test equipment
Materials and services consumed
Evaluation of stocks
Product quality audits
Another component of the inputs to quality costs is the prevention cost.
These are costs incurred to keep the output (failure) and appraisal costs to a mini-
mum. They include:
Quality planning
New products review
Training
Process control
Quality data acquisition and analysis
Quality reporting
Improvement projects
Other prevention costs (general office expenses)
The outputs are made up of the internal failure costs. These result when
quality issues are discovered within the organization before the product
reaches the customer. They include
Scrap
Rework
Retest
Downtime
Yield losses
Disposition
Failure analysis
Fault of suppliers
Another component of the outputs is the external failure costs, which arise when quality issues reach the customer. They include
Complaint adjustment
Returned material
Warranty charges
Allowances
Repair
Errors
Liability
2. The use of cost of quality can be quantified to give attention, set priorities,
provide justification and recognition, and drive decision making deeper into the organi-
zation. Screening the costs throughout the organization occurs through:
a. Analyzing the ingredients of established accounts
b. Resorting to basic accounting documents
c. Creating records for documentation
d. Estimating costs using statistical tools
The analysis of these four categories can be performed by various means,
e.g., descriptive statistics, graphical techniques, or advanced
statistical analyses.
Here we must emphasize that cost of quality is not a system that encour-
ages, fortifies, or perpetuates the adversarial position of one department
against another or one company against another. Rather, it is a system
that allows management to look at a specific situation compared against
itself over time. The ultimate in this thinking is planning for growth.
The relationship among CQ, planning, growth, and quality is heavily
dependent upon management's attitude and the employee involvement and
improvement opportunity of a given organization. The underlying as-
sumption of this concept is that as one controls quality, one reduces cost
and this increases profit. This assumption of CQ is important because it
becomes the catalyst that causes management to address the issue of
quality. Profit is the universal language of all management. The ques-
tion becomes: What does CQ provide to management that serves as a
significant indicator to them? It provides:
A systematic method of assessing the overall effectiveness of the
quality program
A means of establishing programs to meet overall needs
A method of determining problem areas and action profiles
A technique to determine the optimum amount of effort for each of
the various quality activities
METHODS OF MEASURING QUALITY
The operating quality costs of prevention and appraisal are considered to be con-
trollable quality costs, while the internal and external failure costs are uncontrollable.
Juran has demonstrated the relationship between the controllable and uncontrollable
QC curves and the direct quality cost curve over time. As the controllable costs of
prevention and appraisal increase, the uncontrollable costs of internal and external
failure decrease. The point where the incremental cost of preventing and appraising equals
the incremental reduction in the cost of correcting product failure is the optimum operating quality cost. Math-
ematically, the optimization is:
Let f(q) = total (internal and external) failure costs
p(q) = total (appraisal and prevention) controllable costs
T(q) = total quality costs = f(q) + p(q)
q = quality level (0 to 100% good product)
T'(q) = dT/dq = 0, or dp/dq = -df/dq, which gives the minimum
This means that an additional dollar invested in prevention will produce exactly one
dollar's worth of reduced failure costs. Below the optimum it provides more than
one dollar and above the optimum the opposite is true. Therefore:
1. Optimum quality depends on incremental not total elementary costs.
2. There is nothing that demands the optimum be at q = 100%. There might
be a minimum rather than an optimum, and it could very well be at q = 100%.
The optimum (minimum) quality cost could lie at zero defects, q = 100%, if the
incremental cost of approaching zero defects is less than the incremental return from
the resulting improvement. Juran asserts that prevention costs rise asymptotically,
becoming infinite at 100% conformance. This implies that the incremental cost is
also infinite. Since the incremental return is not, it follows from his assertion and
the above mathematics that the optimum lies below 100%. The question is: Does it
really take infinite investment to reach zero defects?
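To make the trade-off tangible, here is a minimal numerical sketch in Python. The two cost curves are hypothetical illustrations (chosen only so that the controllable cost rises without bound as q approaches 100% while the failure cost falls); they are not data from any real program:

    # Illustrative cost curves (thousands of dollars) versus quality level q, 0 < q < 1
    def p(q):                 # controllable (prevention + appraisal) cost, rises as q -> 1
        return 10.0 / (1.0 - q)

    def f(q):                 # failure (internal + external) cost, falls as q -> 1
        return 400.0 * (1.0 - q)

    def T(q):                 # total quality cost
        return p(q) + f(q)

    # Grid search for the quality level that minimizes T(q)
    grid = [i / 10000 for i in range(1, 10000)]
    q_star = min(grid, key=T)
    print(f"optimum quality level is about {q_star:.3f} with total cost about {T(q_star):.1f}")
    # With these curves the optimum lies below 100% because p(q) grows without bound as q -> 1.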
For Crosby, on the other hand, the cost of quality bases are:
1. Total contract sales
2. Total cost
3. Manpower
4. Manpower by skill
5. Budgeted costs
6. Income after taxes
7. Operating profit
8. Equity earnings
9. Strategic managed cost
10. Constant dollars
COMPLAINT INDICES
The user costs associated with failures can be grouped into five categories.
R = repair cost
E = effectiveness loss (idle labor)
C = extra capacity required because of product downtime
D = damage caused by failure
L = lost income (profit)
If these costs are measured each year over the life of the product, then the failure
cost (Cf) is

Cf = Σ (from j = 1 to n) (Rj + Ej + Cj + Dj + Lj) / (1 + i)^j

where n = life of the product and i = the yearly interest rate.
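A minimal Python sketch of that computation follows; the yearly cost figures, the three-year life, and the 6% interest rate are hypothetical placeholders:

    # Hypothetical user failure costs per year of product life (dollars):
    # R = repair, E = effectiveness loss, C = extra capacity, D = damage, L = lost income
    yearly_costs = [
        {"R": 120, "E": 40, "C": 25, "D": 0,  "L": 60},   # year 1
        {"R": 150, "E": 55, "C": 25, "D": 10, "L": 80},   # year 2
        {"R": 200, "E": 70, "C": 30, "D": 15, "L": 95},   # year 3
    ]
    i = 0.06   # yearly interest rate

    # Cf = sum over j of (Rj + Ej + Cj + Dj + Lj) / (1 + i)**j
    Cf = sum(sum(year.values()) / (1 + i) ** j
             for j, year in enumerate(yearly_costs, start=1))
    print(f"discounted failure cost Cf = {Cf:.2f}")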
PROCESSING AND RESOLUTION OF CUSTOMER COMPLAINTS
1. Satisfying the complaint
2. Preventing a recurrence of isolated complaints
3. Pareto analysis
4. An in-depth analysis of the vital few
5. Further statistical analysis
TECHNIQUES FOR ANALYZING DATA
1. The seven tools of total quality costs
a. Pareto chart
b. Cause and effect diagram
c. Stratication chart
d. Check sheet
e. Histogram
f. Scatter plot
g. Graphs and control charts
2. Defect matrices
3. Cost analysis
4. Spare parts use growth curves
5. Probability paper
6. Simulation studies
7. Statistical modeling
a. Abnormality control chart. The abnormality chart is a chart that
addresses the following questions/concerns:
i. How did it happen?
Date: Place: Lot number:
Item name: Found By: Description:
ii. How was it found?
iii. Emergency measure taken
iv. Investigation of the causes
v. Cause(s)
vi. Measures taken to prevent recurrence
vii. How will these measures affect similar processes?
viii. How to proceed next?
FORMAT FOR PRESENTATION OF COSTS
This format must serve as a catalyst to management to provide attention, prior-
itization, justification, recognition, and corrective action.
LAWS OF COST OF QUALITY
1. We cannot reduce cost without affecting quality.
2. We can improve quality without increasing cost.
3. We can reduce cost by improving quality.
4. Cost of quality drives the system.
5. If quality costs money, do not do it.
Types of standards, what each is based on, and the question managers use the report to answer:
Engineered: based on studies made by engineers (e.g., material usage, labor hours). Are we attaining the results that the engineering studies showed were obtainable?
Historical: based on statistical computation of past performance. Are we getting better or worse?
Market: based on market studies to discover the performance of competitors. Where do we stand compared to our competitors?
Planned: based on a broad program of final results needed and allocation to subprograms (e.g., reliability goals). Are we going to be able to attain the overall planned goal?
TABLE 10.4
Typical Monthly Quality Cost Report (Values in Thousands of Dollars)

Category                                     October             Year to Date
                                          Actual  Variance     Actual  Variance
A. Prevention Cost
  1. Quality engineering                    18.3      3.2       190.1      10.1
  2. Design and development                  4.6     -0.6        61.8      -7.5
  3. Quality planning by others              2.6      0.9        20.7      -7.3
  4. Quality training                        7.3      2.1        46.8      20.3
  5. Other                                   2.4      3.4        31.2     -25.0
  Total prevention cost                     35.2     10.2       350.6     -55.2
  % of total quality cost                    7.7%                 9.4%
B. Appraisal Cost
  1. Inspect and test incoming materials     9.6      1.8        87.3       7.1
  2. Inspection and test                    32.5     15.4       323.0     105.0
  3. Product quality audits                 14.1    -27.4       140.9    -269.7
  4. Materials and services consumed         1.4      1.1        16.5       8.8
  5. Equipment calibration and maintenance   4.1      1.6        23.4       0.0
  Total appraisal cost                      61.7     -9.7       591.1    -166.4
  % of total quality cost                   13.5%                15.9%
C. Internal Failure Cost
  1. Scrap                                  14.6    124.3        50.0       8.0
  2. Rework                                197.2     -8.1      1305.6     557.6
  3. Failure analysis                       25.2      2.3       185.1      -0.4
  4. Reinspection                            6.8     -6.6        88.0      -3.0
  5. Fault of supplier                      14.1      0.2       152.1      77.2
  6. Downgrading                             0.8                  8.1      -1.9
  Total internal cost                      258.7    129.9      1788.9     621.5
  % of total quality cost                   56.4%                48.1%
D. External Failure Cost
  1. Complaints                              8.6      1.6        75.3       5.3
  2. Rejected and returned                  41.8      1.2       403.6      26.4
  3. Repair                                 25.6     -0.3       256.5      -3.5
  4. Warranty charges                       21.9     27.0       226.6     263.4
  5. Errors                                  4.9     -4.0        28.5      10.2
  6. Liability                               0.0      0.0         0.0       0.0
  Total external cost                      102.8     30.3       990.5     291.2
  % of total quality cost                   22.4%                26.6%
Total operating cost                       458.4    -79.7      3721.1     108.7
Measurement Bases
  1. Direct labor ($/man-hour)               6.5                  5.3
  2. Sales (%)                               8.8                  9.0
  3. Manufacturing costs (%)                16.7                 16.3
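As a check on how the percent-of-total rows in a report like Table 10.4 are derived, here is a minimal Python sketch using the October category totals from the table:

    # October category totals from Table 10.4 (thousands of dollars)
    october = {
        "Prevention": 35.2,
        "Appraisal": 61.7,
        "Internal failure": 258.7,
        "External failure": 102.8,
    }

    total = sum(october.values())            # 458.4, the total operating quality cost
    for category, cost in october.items():
        print(f"{category:20s} {cost:7.1f}  {100 * cost / total:5.1f}% of total quality cost")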
DATA SOURCES
Typical sources for cost of quality (CQ) are:
Field performance data: customer service department
Trend of sales by model, customer, etc.: internal sales analysis
Trends of competitor activities, dealer reactions, and other field intelligence: reports of the field sales force
Extent of field replacements due to failures in service: sales of spare parts
Competitive quality ratings: customers who buy from multiple sources
Independent quality ratings: independent laboratories
Results of research on quality: government departments; institutions
Cost summary: monthly quality cost report (see Table 10.4)
INSPECTION DECISIONS
What to inspect: raw materials; processes; products
When to inspect: prior to supplier shipment; upon receipt from suppliers; before start of processes; during processes; prior to costly processes; prior to irreversible processes; prior to covering processes (painting); after processes; before shipping to customers
How much to inspect: 100% inspection; sampling inspection
Type of measurements: variable measurement (continuous); attribute measurement (discrete)
Who inspects: external suppliers; workers themselves; quality inspectors
Where to inspect: work stations; inspection stations; laboratories
PREVENTION COSTS (SEE TABLE 10.5)
APPRAISAL COSTS (SEE TABLE 10.6)
INTERNAL FAILURE COSTS (SEE TABLE 10.7)
EXTERNAL FAILURE COSTS (SEE TABLE 10.8)
TABLE 10.5
Prevention Costs
Cost Element Description/Definition
Where to Obtain/
How to Calculate Cost
1. Quality planning
By quality department
Quality engineers
SQA engineers
Reliability engineers
Statisticians
Other planning
By other departments
Manufacturing engineering
Controller's office
Systems
Administrative
Purchasing/other
All costs (salary and
administrative) related to the
planning of an effective quality
system that translates customer
requirements into the
manufacturing process; test
and inspection planning costs
are reported separately (see #2)
Allocated costs for time spent in
quality planning by personnel
not reporting to the quality
department
Salary budget reports
Expense budget reports
Estimates
Department budget reports
(allocated)
Time sheets
Purchase orders
Estimates
2. Test and inspection planning Costs of planning, procuring, and
developing test and inspection
equipment (excluding actual
equipment costs, which are part
of appraisal costs)
Development costs for test and
inspection processes
Department budget reports
(allocated)
Purchase orders
Estimates
3. Qualification of new
products/
processes/equipment
Costs for qualifying new
products, processes, and
equipment (including test
and inspection) to meet
customer requirements
Department budgets (allocated)
Launch budget (allocated)
Purchase orders
Estimates
4. Quality training All costs for developing,
implementing, operating, and
maintaining formal quality
training (including statistical
training)
Training budget
Purchase orders
Estimates
5. Other prevention expenses All other costs associated with
planning, implementing, and
maintaining a quality system
not specifically included
elsewhere
Estimates
Adjustments (including negative
costs)
DIAGNOSTIC GUIDELINES TO IDENTIFY MANUFACTURING PROCESS
IMPROVEMENT OPPORTUNITIES
1. Identify the process to be evaluated.
2. Become acquainted with the process by reviewing process sheets and
through discussion with line supervision.
3. Visit each operation to review for type of cost incurred, appraisal, internal
failure, etc.
4. Talk to individual operators to define further what goes wrong at each
operation; mis-assembly, wrong tools, poor setup, etc.; note machine
numbers, part numbers, and shift.
5. Identify and quantify failures at each operation; scrap, damage, rework,
etc., by shift.
6. Use the existing financial system to assign the cost of direct/indirect labor,
benefits, material, etc., to each operation within the process.
7. For each operation calculate the cost of scrap, rework, testing, inspection,
production checks, sorting, and audits. Also calculate the costs associated
with return sales, warranty, and customer loyalty.
TABLE 10.6
Appraisal Costs
Cost Element Description/Denition
Where to Obtain/
How to Calculate Cost
1. Incoming and
receiving
inspection and
test
All costs of inspectors, supervision, lab,
and clerical personnel working on
incoming material; includes costs to visit
or station personnel at supplier locations
Department budgets (allocated)
Process sheet standards
Inspection sheets standards
Estimates
2. In-process
inspection and
test
Salaries and associated costs of all staff
performing in-process inspection and
testing either 100% or sampling;
includes materials consumed during tests
Same as # 1
3. Test and
inspection
equipment
Costs of tests, inspection, and lab
equipment; equipment maintenance and
purchased services also included
Department budgets (allocated)
Purchase orders/maintenance
contracts
Estimates
4. Product quality
reviews
Personnel expenses for performing quality
reviews on in-process or finished
products
Department budgets (allocated)
Estimates
5. Field
performance
evaluations
Costs incurred in field testing for product
acceptance at a customer's site, prior to
releasing the product
Field inspection reports
Department budgets (allocated)
Estimates
6. Other appraisal
costs
All other appraisal costs not specifically
covered elsewhere
Estimates
Adjustments
(including negative costs)
8. Sum these costs to obtain the total cost of quality within the process.
9. State this cost as a fraction of the total cost of the process or as a dollar
amount that represents the opportunity for improvement in the process (a small sketch follows this list).
10. Ensure continuous improvement through ongoing process analysis (plan,
do, check, act).
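Here is a minimal Python sketch of steps 6 through 9; the operations and their cost figures are hypothetical, meant only to show how quality-related costs are summed per operation and stated as a fraction of the total process cost:

    # Hypothetical per-operation costs (dollars per week) for one process
    operations = {
        "stamping": {"total_cost": 12000, "scrap": 600, "rework": 250, "inspection": 300},
        "welding":  {"total_cost": 18000, "scrap": 900, "rework": 700, "inspection": 450},
        "assembly": {"total_cost": 25000, "scrap": 300, "rework": 1200, "inspection": 800},
    }

    quality_cost = sum(op["scrap"] + op["rework"] + op["inspection"]
                       for op in operations.values())
    process_cost = sum(op["total_cost"] for op in operations.values())

    print(f"cost of quality within the process: ${quality_cost}")
    print(f"as a fraction of total process cost: {quality_cost / process_cost:.1%}")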
DIAGNOSTIC GUIDELINES TO IDENTIFY ADMINISTRATIVE PROCESS
IMPROVEMENT OPPORTUNITIES
1. Identify the process or procedure to be evaluated.
2. Become acquainted with the process or procedure by reviewing instruction
sheets and procedure manuals and by generating a unique process flow
diagram; discuss with local supervision.
TABLE 10.7
Internal Failure Costs
Cost Element Description/Definition
Where to Obtain/How
to Calculate Cost
1. Rework and repair:
internal fault,
supplier fault
Costs of reworking defective product;
includes costs associated with the review and
dispositioning of non-conforming purchased
products
Cost accounting reports
Defective material reports
Department budgets
(allocated)
Estimates
2. Scrap All scrap losses incurred resulting from defective
purchased materials/products and incorrectly
performed manufacturing operations; costs
charged to suppliers are not included; scrap
value, less handling charges, may be
included as an offset
Salvage reports
Defective material
reports
Estimates
3. Troubleshooting and
failure analysis
Costs incurred in analyzing non-conforming
product to determine causes
Department budgets
(allocated)
Problem reports
4. Reinspect and retest Costs to reinspect or retest products that
previously failed
Department budgets
(allocated)
Estimates
5. Excess inventory Inventory costs resulting from producing
defective products; includes storage of
defective product and added inventory of
good product to cover production shortfalls
Cost accounting reports
Department budgets
(allocated)
Estimates
6. Design and process
changes
Costs to revise a product or process due to
production of defective product
Estimates
7. Other internal failure
costs/offsets
All other costs related to the production of
defective product not specifically included
elsewhere
Estimates
Adjustments
3. Review each operation for the type of cost incurred; appraisal (checks,
reviews, etc.), internal failure (blueprint errors, incomplete forms, etc.).
4. Talk to individual employees to dene further what goes wrong at each
operation; redundant operations, misfiling, improper direction, delays, etc.
5. Identify and quantify failures at each operation and their effect on subse-
quent operations.
6. Use the existing financial system to assign the cost of labor and material
to each operation.
TABLE 10.8
External Failure Costs
Cost Element Description/Definition
Where to Obtain/
How to Calculate Cost
1. Warranty All warranty costs that can be
allocated to a manufacturing
location due to the production of
defective product or incoming
material; includes internal
processing and investigation of
warranty
Warranty reports (allocated)
Department budgets (allocated)
Estimates
2. Recalls and product
liability claims
All costs associated with
manufacturing location fault for
recall campaigns or liability claims
Recall reports
Corporate liability settlement reports
Department budgets (allocated)
3. Products returned
or rejected
Costs of handling and accounting for
defective product returned or
rejected by the consuming plant or
customer
Department budgets (allocated)
Returned material reports
Sales and service reports
4. Reinspection and
retest
Costs to reinspect or retest defective
product at the customer's site
Department budgets (allocated)
Estimates
5. Customer and field
contacts
Salary and administrative costs to
handle meetings, visits, etc. with
customer personnel resulting from
the receipt of defective product
Department budgets (allocated)
Estimates
6. Design and process
changes
Costs to revise the product or process
to satisfy the customer who received
defective product
Department budgets (allocated)
Estimates
7. Customer goodwill Extraordinary costs that result from
attempting to satisfy a customer
whose expectations were not met
with previously received defective
product
Travel and expense reports
Department budgets (allocated)
Estimates
8. Other external
failure costs
All other costs related to defective
product reaching the customer not
specifically covered elsewhere
Estimates
Adjustments
(including negative costs)
7. Calculate the cost of losses associated with the items identified in steps
4 and 5.
8. Sum these costs to obtain the total cost of quality within the process.
9. State this cost as a fraction of the total cost of the process or as a dollar
amount that represents the opportunity for improvement in the process.
10. Ensure continuous improvement through ongoing process analysis (plan,
do, check, act).
STEPS FOR QUALITY IMPROVEMENT USING COST OF QUALITY
Procedure
1. Organize the team.
2. Describe the problem.
Estimate the magnitude of quality costs.
Identify the key business processes that have the greatest impact on the
costs.
3. Define root causes.
Identify and prioritize the root causes of process problems.
4. Implement interim corrective action.
Establish control of the business process.
5. Implement permanent corrective action.
Improve the capability of the business process.
6. Verify effectiveness of actions.
Measure effect of actions identified in (4) and (5).
7. Prevent recurrence.
Modify management and operating systems, practices, procedures, and
processes.
8. Congratulate team.
Examples
Non-manufacturing measurements, which are sometimes difficult to establish, might
include the following:
1. Accounting
Percent of late reports
Computer input incorrect
Errors in specific reports as audited
Percentage of significant errors in reports; total number of reports
Percentage of late reports; total number of reports; average reduction
in time spans associated with important reports
Pinpointing high-cost manufacturing elements for correction
Pinpointing jobs yielding low or no profit for correction
Providing various departments with the specic cost tools they need
to manage their operations for lowest cost
2. Administrative
Success in maximizing discount opportunities through consolidated
ordering
Success in eliminating security violations
Success in effecting pricing actions so as to preclude subsequent
upward revisions
Success in estimating inventory requirements
Success in responses to customer inquiries so as to maximize customer
satisfaction
Decimal points correctly placed
Correct calculations in bills, purchase orders, journal entries, payrolls,
bills of lading, etc.
Time spent in locating filed material
Percentage of correct punches in paper used during a given period
versus actual output in finished pages
3. Clerical
Accurate typing, spelling, hyphenation
Decimal points correctly placed
Correct calculations in bills, purchase orders, journal entries, payrolls,
bills of lading, etc.
Time spent in locating filed material
Percentage of correct punches
Paper used during a given period versus actual output in finished pages
4. Data processing
Keypunch (KP) cards thrown out for error
Computer downtime due to error
Rerun time
Promptness in output delivery
Effectiveness of scheduling
Depth of investigations by programmers
Program debugging time
KP (data entering) efficiency
5. Engineering: design
Adequacy of systems specifications
Accuracy of system block diagrams
Thoroughness of system concepts
Simulation results compared to original design or prediction
Success in creating engineering designs that do not require change in
order to make them perform as intended
Success in developing engineering cost estimates versus actual accruals
Success in meeting self-imposed schedules
Success in reducing drafting errors
Success in maximizing capture rates on RFPs for which the company
was a contender
Success in meeting engineering test objectives
Number of error-free designs
Correct readings of gages and test devices
Accurate specifications and standards
Proper reporting and control of time schedules
Reduction of engineering design changes
Changes in tests or in illustrations of reports
Rework resulting from errors in computer program input
Advance material list accuracy
Design compliance to specifications
Customer acceptance of proposals
Meeting schedules
Thoroughness of systems concepts
Accuracy and thoroughness of reports
Adequacy of design reviews
Compliance to specifications
Adequacy of design reviews
Accuracy of computations
Accuracy of drawings
Reduction in number of engineering non-conformances to correct
errors
6. Engineering: manufacturing
Accuracy of manufacturing processes
Timely delivery of manufacturing processes to the shop
Accuracy of time study data
Accuracy of time estimates
Timely response to bid requests
Schedule compliance
Asset utilization
Accuracy and thoroughness of test processes
Adequacy and promptness of program facilitation
Application of work simplication criteria
Minimum tool and fixture authorization
Labor utilization index
Methods improvement (in hours or dollars)
Contract cost
Lost business due to price
Process change notices due to error
Tool rework to correct design
Methods improvement
7. Engineering: plant
Effectiveness of preventive maintenance program
Accuracy of estimates (dollars and details)
Accuracy of layouts
Cost of building services
Completeness of plant engineering drawings
Adequacy of scheduling
Fixed versus variable portions of overhead
Maintenance cost versus floor space, manpower, etc.
Lost time due to equipment failures
Janitorial service
Success in meeting or beating budgets
Instrument calibration error
Fire equipment found defective
Lost time due to equipment failures
Purchase requisition errors
Schedule compliance
Timely response to bid requests
Adherence to contract specifications
Effectiveness of customer liaison
Effectiveness of cost negotiations
Status: shipped, not billed
Change orders due to errors
Drafting errors found by checkers
Late releases
Time lost due to equipment failure
Callbacks on repairs
8. Finance
Billing errors (check accounts receivable overdues)
Accounts payable deductions missed
Vouchers prepared with no defects
Clock card or payroll transcription errors
Data entering errors
Computer downtime
Timeliness of financial reports
Effectiveness of scheduling program debugging time
Rerun time
Accuracy of predicted budgets
Clerical errors on entries
Inventory objectives met
Payroll errors
Discount missed
Amounts payable records
Billing error
9. Forecasting
Can departments function with maximum effectiveness with budgets
set for them?
Can the company buy needed capital equipment, keep inventories sup-
plied, pay its bills?
Do projects meet time schedules?
Assistance to line organizations (scheduling, planning, and control
functions)
Methods for finance and cost control
Timeliness of financial reports
Assets control
Minimizing capital expenditures
Realistic budgets
Clear and concise operating policies; timely submission of realistic
cost proposals
Completeness of financial reports
Effectiveness of disposition of government property
Effectiveness of cost negotiations
10. Legal
Amount of paper used versus finished pages produced
Misdelivered mail
Misfiled documents
Delays in execution of documents
Teletype errors
Patent claims omitted
Response time on request for legal opinion
11. Management
Output of staff elements, overall defects rates, budgets and schedule
controls, and other factors that reect on managerial effectiveness (In
other words, the accomplishments of a manager are the sum totals of
those working under him or her)
Success in developing estimates of costs versus actual accruals
Success in meeting schedules
Performance record of employees under the manager's supervision
Success in developing realistic estimates on a PERT or PERT/cost chart
Success in minimizing use of overtime operations
All nonproduction departments can be measured
Each department should be measured against itself, using time com-
parisons, and preferably by itself.
The best primary goals are those that measure cost performance, deliv-
ery performance, and quality performance of the department.
Secondary goals can be derived from these primary goals.
There should be a base against which quality, cost or delivery perfor-
mance can be measured as a percentage improvement. Examples of
such a base would be direct labor, the sales dollar, the material dollars,
or the budget dollar. A dollar base is more meaningful to management
than a physical quantity of output.
Success in effecting pricing actions so as to preclude subsequent revi-
sions.
Pages of data compiled with no defects
Clarity and conciseness of operating procedures
Evaluations of capital investment
Errors in applying standards on process sheets
Accuracy of estimates; actual costs versus estimated costs
Effectiveness of work measurement programs
12. Marketing
Success in reduction of defects through suggestion submittal
Success in capturing new business versus quotations
Responsiveness to customer inquiries
Accuracy of marketing forecasts
Response from news releases and advertisements
Effectiveness of cost and price negotiations
Success in response to customer inquiries (customer identification)
Customer liaison
Effectiveness of market intelligence
Attainment of new order targets
Operation within budgets
Effectiveness of proposals
Exercise of selectivity
Control of cost of sales
Meeting proposal submittal dates
Timely preparation of priced spare parts list
Aggressiveness
Utilization of field marketing services
Dissemination of customer information
Bookings budget met
Accuracy of predictions, planning and selections
Accurate and well-managed contracts
Exploitation of business potential
Effectiveness of proposals
Control of printing costs
Application of standard proposal material
Standardization of proposals
Reduction of reproduction expense
Contract errors
Order description error
Sales order errors
13. Material
Saving made
Late deliveries
Purchase order (PO) errors
Material received against no PO
Status of unplaced requisitions
Orders open to government agency for approval
Delays in processing material received
Damaged or lost items received
Claims for products damaged after shipment from our plant
Delays in outbound shipments
Complaints about improper packing in our shipments
Errors in travel arrangements
Accuracy of route and rate information on shipments
Success in meeting schedules; material shortages in production
Success in estimating inventory requirements
Clock card errors by employees
Damaged shipments
Stock shelf life exceeded
Items in surplus
Purchase requisition errors
Effectiveness of material order follow up
Adequacy and effectiveness of planning and scheduling
Application of residual inventories to current needs
Inventory turnover
Manufacturing jobs without schedules
Timeliness of incorporating ECNs
Timely replacement of rejected parts
Adequacy of reject control plan
Effectiveness of packing operation
Application of residual inventories to current needs
Floor shortages
Labor utilization index
Data processing rerun time on material programs
Bad requisitions
Value of termination stores and residual inventory
Manpower fluctuations around mean
Percent supplier material ($) rejected and returned; total material ($)
purchased
Number of defective suppliers (repetitive); total number of suppliers
Number of single source suppliers; total number of suppliers
Percent of supplier material ($) holding up production: total material $
Number of late lots received (actually holding up production); total
lots received
Percent of purchased material (actual); total material bid or budgeted
Percent of reductions in B/M effected through purchasing effort; total
material bid or budgeted.
Correct quotations or rates
Customers call back as promised
Installation of exact equipment requested by customer
Appointments kept at the time promised customers
Prompt handling of complaints
Accurate meter readings
Courteous treatment of customers
Right packages of goods ordered shipped
Number of telephone numbers correctly dialed
PMI rejects
Savings made
Material handling budget met
Travel expense against open shop orders
Orders to government for approval: disapproved, resubmitted, and
open, not approved
14. Personnel
Success in eliminating security violations
Hiring effectiveness
Thoroughness and speed of responding to suggestions
Employee participation in company sponsored activities
Administration of insurance programs
Accident prevention record
Processing insurance claims
Provision of adequate food services
Personnel security clearance errors
External classified visit authorization errors
Speedy processing of visitors through lobbies
Records accuracy
Adequacy of training programs
Thoroughness and speed of investigating suggestions
Grievances
Employment requisitions filled
Administration of insurance program
Acceptance of organization development recommendations
Effectiveness of administration of merit increases
Overhead budget performance
15. Product assurance
Participation in design reviews
Customer liaison
Technical society participation
Accuracy of proposals and contracts
Application of program policies
Prevention of field complaints
Effectiveness of reporting and recording
Customer rejects
Rejected material on the floor
Adequacy of vendor ratings
Effectiveness of field quality control
Rejects
Screening efficiency
Inspection documentation
Quality assurance audits
16. Product control
Success in developing realistic schedules
Success in developing realistic estimates
Success in identifying defective specifications
Process sheets written with no error
Transportation hours without damage to product
Parts shortages in production
Downtime due to shortages
17. Production
Success in reducing the scrap, rework, and use-as-is categories
Success in maintaining perfect attendance records
Success in identifying defective manufacturing specifications
Success in meeting production schedules
Success in cost reduction through suggestion submittal
Success in improving first article acceptance
Performance against standard
Success in reducing required MRB action
Utilities improperly left running at close of shift
Application of higher learning curves
Floor parts shortages
Delays due to rework, material shortage, etc.
Control of overtime (nonscheduled)
Prevention of damage to work in process
Cleanliness of assigned areas
Conformance to estimates
Suggestions submitted
Labor utilization index
Defects
Asset utilization
Scrap
Utilization of correct materials, drawings, and procedures
Prevention of damage
Safety records
Inches of weld with no defects
Log book entries with no defects
Security violations
Compliance to schedules
Accuracy of estimates
18. Program Management
Liaison with customer
Financial quality of proposals (technical approach, cost, time)
Soundness of project plans
Coordination of support activities
Satisfactory field sell off
Backlog
New business volume versus budgeted
19. Publications
Compliance to specifications
Errors corrected
Thoroughness of material
Quality of production
20. Quality control
Inspection errors
Sampling program errors
Timeliness of inspection reports
Adequacy of vendor quality ratings
Returned goods and field rework due to inspection oversight; customer
rejects
Quality assurance audits
Inspection documentation
Customer liaison
21. Research and development
Can it be applied?
Can it be developed?
Can it be manufactured?
Can it be marketed?
22. Security
Personnel security clearance errors
Timely and accurate processing
External classified visit authorization errors
Accurate processing of visitor identification
Effectiveness of security program
Guards, security checks, badges, passes
Records accuracy
Fire watch
23. Services: general
Promptness in reply to requests
Quality of service rendered
Blueprint and drawing control, reproduction, distribution
Test equipment maintenance and calibration
TRW communication
Reproduction facilities
24. Purchasing
Purchase order changed due to error
Late receipt of materials
Rejections due to incomplete description
25. Supervision
A supervisors performance is measured by the overall effectiveness of the
department; in other words, the supervisor is judged by the sum total of
accomplishments of the people working for him or her. The worth of
individual or group achievements should be evaluated against the fol-
lowing criteria:
Impact of potential error (abort of mission, cost effect on schedules,
etc.)
Contributions of the individual or group to the prevention of error
Difficulty of the job and level of skill required
Work schedules and load impact on error potential
Ability of individual to correct his/her own errors
Attitude of the individual toward work, project, or command mission
GUIDELINE COST OF QUALITY ELEMENTS BY DISCIPLINE
Note: Nonconformance elements listed reflect those that are the responsibility of
that department.
Engineering: price of conformance
a. Design specification reviews
b. Product qualification, evaluation, characterization
c. Drawing checking
d. Supplier evaluation
e. Preventive maintenance
f. Process capability studies
g. Fabrication of special test fixtures
h. Verification of workmanship standards
i. Review of test specifications
j. Failure effects/mode analysis
k. Pilot production runs
l. Packaging qualification
m. Customer interface
n. Safety review: operator safety
o. Technical manuals
p. Preproduction reviews
q. Defect prevention program
r. Schedule reviews
s. Process reviews
t. Early approval of production specifications
u. Computer-aided design (CAD)
v. First piece approval
w. Agency approval
x. Supplier qualification
y. Special test fixture design review
z. Education
aa. Prototype inspection and test
ab. Testing
ac. Receiving sample testing
ad. In-process sample testing
ae. Final sample testing
af. Laboratory analysis and test
ag. Fault insertion test
ah. Engineering audits
ai. Training for special testing
aj. Personnel appraisal
Engineering: price of nonconformance
a. Warranty expense
b. Engineering travel and time on problems
c. Redesign
d. Premium freight costs
e. Material review activities
f. Failure analysis (return evaluation)
g. Corrective action
h. Failure reports
i. Return goods analysis
j. Product liability (design related)
Comptroller: price of conformance
a. Forecasting performance
b. Training and procedures
c. Ledger review of P & L and balance sheet
d. Budget generation
e. Long-range planning
f. Job description
g. Cost of quality budget
h. Timecard review
i. Capital expenditure reviews
j. Total expenditure reviews and delegation of authority
k. Order entry review
l. Product cost standards
m. Cost reduction
n. Cost of quality reviews
o. Data processing report/nancial report reviews
p. Ledger reviews
q. Invoicing review
Comptroller: price of nonconformance
a. Billing errors
b. Inventory out of control
c. Improper A/P vendor payments
d. Incorrect accounting entries
e. Bad debts turnovers, overdue A/R
f. Payroll errors
Software: price of conformance
a. Software planning
b. Software reliability projection/prediction
c. Systems analyst interrogating activities
d. Documentation review
e. Learn/understand customer requirement/business
f. Preparation and review of system specifications
g. Flow chart review
h. Correlation analysis
i. Data entering operator training
j. Tape duplication and verification
k. Program testing
l. Function test
m. Performance test
n. Code verification
o. Depreciation of software (outdated)
p. System test
q. Inspect programs
Software: price of nonconformance
a. Keeping track of system failures
b. Going back to customer to re-evaluate
c. Customer change requirement
d. Recode; debug; retest
e. Documentation changes
Plant administration: price of conformance
a. Consultants
b. Preventive maintenance program
c. Modeling
d. Controlled/critical storage
e. Environmental control
f. Labor training
g. Review of labor production rates
h. Security
i. Surveillance
j. Machine maintenance: P.M.
k. Machine maintenance: training
l. Timely machine replacement
m. Periodic equipment depreciation review
n. Equipment depreciation reappraisal
o. Facility planning: audits
p. Facility inspection and test
q. Data on labor productivity
r. Pilot run to check standard
s. Labor surveillance
t. Time card control: test
u. Time card audit
v. Machine maintenance: test
w. Machine maintenance: inspection
x. Equipment depreciation: inventory
y. Equipment depreciation: audit
z. Equipment depreciation: tracking
Plant administration: price of nonconformance
a. Facility planning: redesign
b. Facility equipment: replacement
c. Missed schedule
d. Incorrect labor level
e. Increased failure
f. Incorrect time
g. Machine: scrap
h. Machine: rework
i. Machine: downtime
j. Product liability
k. Equipment depreciation: obsolete
l. Equipment depreciation: premature
Purchasing: price of conformance
a. Supplier review and approval
b. Send proper specs to vendor; make it clear what is required
c. Periodic seminars
d. Forecasting cost of carrying hard-to-get materials
e. Material cost resulting from multiple sourcing
f. Strike build-up costs
g. Evaluation of supplier's equipment that will be used to do the job
h. Review supplier incoming quality practices
i. Recertification of suppliers
j. Incoming inspection cost
k. Information systems cost associated with vendor rating
Purchasing: price of nonconformance
a. Scrap
b. Sorting
c. Reinspection due to rejects
d. Rework
e. Excess inventory due to lack of confidence in vendor delivery
f. Loss incurred as a result of vendor delinquencies
g. Corrective action cost
h. Shipping cost on returns to vendors
i. Purchase order changes resulting from error
j. Incoming inspection cost
k. Premium freight
l. Trips to suppliers to resolve problems
m. Expediting cost to ensure proper deliveries (i.e., phone bill)
Marketing: price of conformance
a. Procedures
b. Training
c. Forms design
d. Sales support material
e. Design specs
f. P&L
g. Competitive data
h. Market forecast
i. Legal and product safety review
j. User market research
k. Sales support cost
l. Customer survey
m. Sales dollars
n. Service cost by area/advertising
o. Loss leaders
p. Launch: U.S. availability
q. Pilot and field test
r. Incentive programs
s. Market survey
Marketing: price of nonconformance
a. Labor of redos: administration
b. Incorrect order entry
c. A/R receivables
d. Special instruction
e. Field service: excessive
f. Warranty
g. Literature reprint
h. Contingent liability
i. Unit productivity
j. Product recall
k. Loss of market share
Human resources: price of conformance
a. Pre-screen applications
b. Interviewing
c. Personnel testing
d. Reference checking
e. Security clearance, if necessary
f. Orientation
g. Training
h. Job description and work plans
i. Safety program
j. Quality improvement program
k. Exit interviews
l. Performance appraisal
m. Attendance tracking
n. Productivity rates
o. Personnel records audits
p. Tracking of injuries
Human resources: price of nonconformance
a. Turnover rates
b. Grievance tracking
c. Non-timely filling of position
d. Cost of injuries
Manufacturing: price of conformance
a. Training: supervisor, hourly
b. Special review
c. Tool/equipment control
d. Preventive maintenance
e. ZD program
f. Identify incorrect (zero defect) specifications/drawings
g. Housekeeping
h. Controlled overtime
i. Checking labor
j. Trend charting
k. Customer source inspection
l. First piece inspection
m. Stock audits
n. Certification
Manufacturing: price of nonconformance
a. Rework
b. Scrap
c. Repair and return expenses
d. Obsolescence
e. Equipment/facility damage
f. Repair equipment/material
g. Expense of controllable absence
h. Supervision of manufacturing failure element
i. Discipline costs
j. Lost time accidents
k. Product liability
Quality control: price of conformance
a. Quality training
b. Test planning
c. Inspection planning
d. Audit planning
e. Product design review
f. Supplier qualication
g. Producibility/quality analysis review
h. Process capability studies
i. Machine capability studies
j. Calibration of quality equipment
k. Operator certification
l. Incoming inspection
m. In-process inspection
n. Final product inspection
o. Product test
p. Product audit
q. Test equipment
r. Checking gauges, fixtures, etc.
s. Prototype inspection
t. Quality systems audits
u. Customer/agency audits
v. Outside lab evaluations
w. Life testing
x. Product audits
Quality control: price of nonconformance
a. Scrap analysis
b. Rework analysis
c. Warranty cost analysis
d. Concessions analysis
e. Factory returns analysis
f. Material review board action
Industrial engineering: price of conformance
a. Operator training
b. Design review
c. Inventory control
d. Job description
e. Methods description
f. Test equipment description verification
g. Material utilization
h. Line rebalance
i. Process verification
j. Product control card system
k. Material usage verification
Industrial engineering: price of nonconformance
a. Tool repair
b. Tool modification
c. Corrective action costs
d. Engineering change order
e. Purchasing change order
f. Turnover
g. Obsolete job description
Information systems: price of conformance
a. Job descriptions (written)
b. Hiring and testing
c. Schools
d. Program documentation and testing
e. Cost benefit analysis
f. Risk analysis of project
g. Proper communication of user requirements between user and infor-
mation systems
h. Verication of input data
i. Test techniques
j. Pilots
k. Parallel runs
l. Post implementation audit
Information systems: price of nonconformance
a. Systems do not meet user requirements: redo
b. Corrective maintenance
c. Reruns
d. Input cycles edit and update
e. Hardware downtime
f. Scheduling failures
Law department: price of conformance
a. Maintenance of law library
b. Seminar on prevention of product liability claims
c. Label copy evaluation
d. Advertisement copy review
e. Safety program audit
f. Equal opportunity program audit
g. Compliance audit of SEC
h. Contract review
i. Checking paperwork for errors
j. Compliance audit on environmental protection agency (EPA)
k. Review of federal/state submissions (new products, patents, etc.)
Law department: price of nonconformance
a. Product liability matters (travel, litigation, outside firms, time)
b. Warranty reviews
c. Penalties for late filing
d. Product complaint review (internal and with regulatory agency)
e. Product recalls
f. Defense of patent infringement suit
g. Representing grievances
h. Internal department rework (rewrite, retype, etc.)
i. Seminar on defending product liability suits
j. Settlements
COST OF QUALITY AND DFSS RELATIONSHIP
We made the statement earlier that perhaps one of the key contributions of COQ in
DFSS is to identify and eliminate the hidden factory cost. To do that, let us visit
some of the more demanding calculations.
First, we begin with a review of the Poisson distribution:

Y = \frac{(np)^r e^{-np}}{r!}

This, of course, is equivalent to

Y = \frac{(d/u)^r e^{-(d/u)}}{r!}

where n is the total number of independent trials; p is the probability of occurrence;
r is the number of occurrences; u is the number of units produced; and d is the
number of non-conformities (defects). This value (d) is also known as the np.
Under special conditions, such as normalizing per unit, d/u = np/u. Therefore,
if we substitute terms for the normalizing case where u = 1, for the special case
where r = 0 (remember 0! = 1), we are able to reduce the above formulas into

Y = e^{-(d/u)}

The reader will notice that this equation really reflects the first time yield (Y_FT)
for a specific d/u. Of course, if we know the first time yield we can solve for d/u
with the following formula:

d/u = -\ln(Y_{FT})

where ln is the natural log.
If the assumptions for the Poisson model are not reasonable then we may use
the binomial model

Y = \frac{m!}{r!(m-r)!} \, p^r q^{m-r}

This, of course, becomes Y = (1 - p)^m for the special case of r = 0, where p is the
constant probability of an event and q = 1 - p.
In dealing with COQ issues, as Grant and Leavenworth (1980) have pointed out,
the Poisson approximation may be applied when the number of opportunities for
non-conformance (n) is large and the probability (p) of an event (r) is small. In fact,
as n increases and p decreases, the approximation by the Poisson model improves.
Furthermore, we can use COQ issues and information to generate or double-
check the validity of the critical to quality characteristic (CTQ) as well as the critical
to process characteristic (CTP). Above all, we are capable of measuring the classical
perspective of yield. The traditional formula, which is based on process capability, is

Y_{final} = \frac{S}{U}

where Y_final = final yield; S = number of units that pass; and U = number of units
tested.
Another way to view yield is to calculate the rolled throughput, which is

Y_{rt} = e^{-dpu}

or

Y_{rt} = Y_{tp;1} \times Y_{tp;2} \times \cdots \times Y_{tp;m}

or the normalized variation of

Y_{norm} = (Y_{rt})^{1/m}

where Y_norm is the normalized yield; Y_rt is the rolled throughput yield; m is the
number of categories; tp is the throughput yield of any given category; and dpu is
defects per unit.
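To make these relationships concrete, the short sketch below evaluates the first time yield, rolled throughput yield, and normalized yield formulas just given. It is a minimal illustration; the function names and the three step yields are assumptions for the example, not values from the text.

import math

def first_time_yield(dpu: float) -> float:
    """Poisson special case (r = 0): Y_FT = e^(-d/u)."""
    return math.exp(-dpu)

def dpu_from_yield(y_ft: float) -> float:
    """Inverse relation: d/u = -ln(Y_FT)."""
    return -math.log(y_ft)

def rolled_throughput_yield(step_yields):
    """Y_rt is the product of the throughput yields of each category."""
    y_rt = 1.0
    for y in step_yields:
        y_rt *= y
    return y_rt

def normalized_yield(y_rt: float, m: int) -> float:
    """Y_norm = (Y_rt)^(1/m), the average per-category yield implied by Y_rt."""
    return y_rt ** (1.0 / m)

# Illustrative three-step process
steps = [0.95, 0.90, 0.98]
y_rt = rolled_throughput_yield(steps)            # about 0.838
print(y_rt, normalized_yield(y_rt, len(steps)))  # about 0.838 and 0.943
print(dpu_from_yield(y_rt))                      # total dpu of about 0.177
print(first_time_yield(0.25))                    # dpu = 0.25 gives Y_FT of about 0.779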
REFERENCES
Grant, E.L. and Leavenworth, R.S., Statistical Quality Control, McGraw-Hill, New York, 1980.
SELECTED BIBLIOGRAPHY
Aaron, M.B., Measure for Measure Alternative to Goodness, Motherhood & Morality, paper
presented in Automach, Australia for SME, 1986.
Besterfield, D.H., Quality Control, Prentice Hall, Englewood Cliffs, NJ, 1979.
Carlsen, R.D., Gerber, J., and McHugh, J.F., Manual of Quality Assurance Procedures and
Forms, Prentice Hall, Englewood Cliffs, NJ, 1981.
Ford Motor Co., Team Oriented Problem Solving, Electrical & Electronics Div., Rawson-
ville, MI, 1987.
Juran, J.M. and Gryna, F.M., Quality Planning and Analysis, 3rd ed., McGraw-Hill, New
York, 1993.
Schneiderman, A.M., Optimum Quality Costs and Zero Defects: Are They Contradictory
Concepts? Quality Program, Nov. 1986, pp. 23-27.
REENGINEERING
Reengineering by definition is a drastic change of the process. However, if the process
changes, then the job/task regarding that process must be changed as well. This section
addresses the approach and method for reengineering only from the process perspective.
For more details see Stamatis (1997) and Selected Bibliography. The discussion will
focus on drastic changes as well as developmental changes for the process. Evolutionary
changes in process are addressed by statistical process control charting and other
monitoring methods and are beyond the scope of this volume (Volume IV of this series
covers this topic). Both approaches to redesign merge the viewpoints of management
and labor, resulting in more job satisfaction and productivity.
Drastic changes are taking place across the corporate world in the areas of com-
munication practices, corporation cultures, and productivity. These changes are the
result of increased employee awareness, an advanced level of technology, competition,
mergers, greater demand of quality, and in general, increasing business costs.
These changes have forced management to respond in several ways, including
asking employees (union and non-union) for their help. This initiative by management
has resulted in participative programs such as teamwork and as of late redesigning the
actual job or process. As a result of these changes, trust and open communication are
cultivated and encouraged. Information sharing, as well as moving responsibility and
accountability to employees themselves, is a common occurrence.
This employee participation has generated a need for both job and process
redesign so that an organization may be more competitive in the world markets as
well as more efficient in producing its product or service.
PROCESS REDESIGN
A process may change in an evolutionary form and/or in a very drastic way (Chang,
1994). When a process changes in an evolutionary form, it may be because of
statistical process control monitoring or some other kind of monitoring method.
Under this condition, the redesign process takes the form of a problem-solving
approach. A typical approach is the following:
Step 1. Reason for improvement why is there a need for change?
Select appropriate and applicable measures and targets.
Determine any gaps.
Step 2. Define problem: whenever possible, stratify the improvement area.
Look for root cause rather than symptoms.
Step 3. Analysis Verify root cause.
Step 4. Solution(s) Determine alternatives.
Select best solution.
Step 5. Result(s) Verify and evaluate the elimination of the root cause by
asking:
Are we better off?
Are we worse off?
Are we the same as before?
Step 6. Implementation Review the control plan. Change it if appropriate
and applicable.
Standardize the process.
Replicate.
THE RESTRUCTURING APPROACH
When a process changes drastically, it may be because of a newly introduced
technology or a process reengineering effort. If a process changes because of newly
introduced technology, then this process follows a restructuring plan. Process restruc-
turing seeks to break down complexity into manageable pieces. Rather than focusing
on a single problem, the process redesigner seeks to understand how whole sets of
activities and problems are interrelated. When complexity itself is a problem, restruc-
turing is probably the best path to pursue (Rupp and Russell, 1994). The heart of
the restructuring approach consists of six steps. They are:
Step 1. Need for change Reevaluate the process and the technology under
consideration.
Why is the technology necessary?
Are there other alternatives?
Step 2. Analysis of new technology What is the cost?
What is the time for implementation?
What does the value analysis indicate?
Step 3. Evaluate the new paradigm(s) Look at the alternatives from a
wider perspective.
Look at as many alternatives as possible.
Stretch the alternatives for results.
Step 4. Design the new process: define, measure, and evaluate all alter-
natives with the following in mind:
Flow
Structure
Tasks
Step 5. Build the new process Verify the process performance and effec-
tiveness.
Step 6. Implementation Develop new control plan.
Standardize the new process
Replicate
If, on the other hand, a process changes because of reengineering efforts, then
this process is developed in four stages. The four stages are:
Stage 1. Recognition for change. One of the rst things that the team has to
recognize is that the status quo is about to change. Once the realization
sets in, then a formal analysis of what has to change must be performed
and the intentions of that change must be communicated throughout the
organization to those who are or will be involved. As part of the commu-
nication effort, the context of the change and the operating principles will
also be communicated.
Stage 2. Change content definition (formulation). In this stage, a process map
is developed, so that the process targets and objectives may be declared.
(In some cases, two process maps are developed as needed. One represents
the old process and the second represents the new process. This is done for
comparison purposes.) This stage begins the process analysis (tasks and
jobs required) and determines the process changes and the new owners.
Perhaps one of the most important aspects of this stage is the formulation
of the baseline production requirements such as capacity, cycle time, pro-
ductivity, efficiency, and quality requirements. In some cases, this is the
stage where a pilot study will be designed.
Stage 3. Change implementation. This is the stage where most of the tedious
work has to occur. Specically, the controls are designed for the new
process, and a systematic analysis is performed to identify potential mod-
ification points and to eliminate non-value-adding steps. A formal value
analysis and FMEA may be performed in this stage to identify areas of
opportunity and possible restructuring.
Stage 4. System maintenance. This is the final stage of the reengineering
process. This is where the old system officially is declared obsolete and
the new system is installed with all the new structures, modifications, waste
reductions, and new targets of production.
This four-stage model of the process redesign identifies the general elements of the
change. To complete the discussion, however, we must also address the specific tasks
that the team leader (project manager) must perform and how the team members will
respond. Table 10.9 summarizes these seven steps to the implementation process.
THE CONFERENCE METHOD
Another way to redesign a process is the conference method. This method is based
on the notion of a cross-functional design team, which has been chartered by a steering
committee to design a more effective organization (Weisbord, 1987; Wilgus, 1995).
Within the charter of the team it is imperative that a four-item analysis be addressed.
That analysis should cover the following:
1. External influences and how the organization must change in response to
these influences
2. Customer analysis
3. Vision of what the company aspires to become
4. Principles or values that will guide behavior
In this method, there are three basic ways of analysis:
1. The vision conference, which is made up of two elements:
a. The past and present. Here the team acknowledges practices that
should be brought into the future and those that should be dropped.
b. The future. Here the team reviews the vision statement and identifies
the organizational values, which include customer focus, trust, and
shared responsibility.
2. The technical conference, which is made up of two elements:
TABLE 10.9
Seven-Step Process Redesign Model

1. Prime action of team leader: Introduce the process change; full disclosure about
   the project to all concerned.
   Support action: Action kick-off by the management team, who will be responsible
   to the team and team leader for follow-up.
2. Prime action of team leader: Develop strategy for implementation.
   Support action: Strategy is incorporated into the business plan, and the team leader
   acts as both changing agent and support member.
3. Prime action of team leader: Perform appropriate and applicable training to the
   team members.
   Support action: Managers and team leader support the team for the change; they
   provide encouragement, coaching, and resources as needed.
4. Prime action of team leader: Follow up with both managers and employees to
   develop team-level information, meetings, reports, problem resolution structure,
   and whatever else is necessary.
   Support action: Managers and team leader define the system of the change; they
   provide the appropriate support as needed.
5. Prime action of team leader: Follow up on the reports and measurement element
   of this stage; evaluation of the results is also important.
   Support action: Team leader conducts meetings and improves quality figures;
   interpretation is an important part of moving information upward through a
   systematic flow.
6. Prime action of team leader: Help managers with problem resolution.
   Support action: Team leader and managers set the first set of performance targets.
7. Prime action of team leader: Full integration of process redesign.
   Support action: Team leader reports progress; audit(s) may be conducted in order
   to verify targets and/or modify the process or the targets.
a. Analysis of current state. Here the core process is studied and evaluated
rather than what people do.
b. Creation of the ideal state. Here the group activity is focused on
generating a conducive environment for breakthrough thinking.
3. The social conference. In this stage of the conference the social system
of the organization is evaluated. The elements upon which the evaluation
is based are structure, skills, style, symbols, and human systems. Each
element is influenced by changes in the environment and must be aligned
with the organization's vision, values, and technical system. The goal of
this stage is to generate a design that is the best.
THE OOAD METHOD
Yet another way of redesigning the process is through Object Oriented Analysis and
Design (OOAD). Basically, this method is a framework for understanding, develop-
ing, organizing, and managing projects. OOAD is the practice of examining any
collection of activities (processes, automation situation, and information flow) as a
series of interacting objects. As such, this method is applicable across many indus-
tries. Historically, OOAD is the result of the interaction of project management tools
(PERT, CPM), system design tools (CASE), and analysis tools. As such OOAD has
a valued use throughout project and process life cycle activities.
OOAD does not require special computing, CASE tools, or computer programs.
The four basic tools or templates basic to doing enterprise and plant OOAD are the
opportunity framework, the activity object diagram, the integrated object template,
and the system requirements template. From a reengineering perspective of a par-
ticular process, the OOAD has the following minimum requirements:
1. Motivate people and keep them involved.
2. Get people to recognize that quality is their responsibility.
3. Recognize other initiatives and coordinate with their activities.
4. Get all interacting groups involved in the project (operations, automation,
safety, logistics, nance, and so on).
5. Get the information to the place and people where it is needed in a timely
manner.
6. Translate the engineering, operations, and support views into the overall
organizations view.
7. Include the effects of real world practices in the design.
8. Ensure that the technology is acquired by the client.
Once the minimum requirements are established, then the reengineering team
is ready to implement the method for change. The success of the OOAD will depend
upon the implementation of the following ten steps:
1. Establish the program scope and focus.
Define and publish OOAD.
Agree on primary focus the core process to be reengineered.
2. Set high level goals and objectives.
Derive from current business strategies.
Identify critical processes, activities, and metrics.
3. Establish the appropriate time horizon for planning.
Incorporate technology forecasts.
Balance tactical with strategic intent.
4. Adopt the specic methodology for executing the plans.
Provide a plan for planning.
Separate work plans from techniques.
5. Organize teams and participants.
Involve the right people.
Allocate adequate time and training.
6. Provide education and training.
Internal
External missions
7. Define the procedure to approve plans.
Process priority
High level (top management) commitment
8. Promote technology transfer.
Project organization
Education/training
9. Communicate plans and progress.
Conferences and presentations
Recognition and rewards
10. Screen outside assistance.
Timetable and expertise
REENGINEERING AND DFSS
Reengineering may play a major role in the DFSS because it will focus the process
of improvement by
Positioning six sigma in a process management framework
Achieving breakthrough improvements in quality, cost, and cycle time
Leveraging six sigma with reengineering and vice versa
Using the right tool for the right problem
Avoiding the traps of six sigma implementation
Turning process management and improvement into a way of life
Many executives who so enthusiastically embrace six sigma do not really know
what they are getting into, and that is a guarantee of trouble downstream. For many,
six sigma methodology has become a corporate panacea, a silver bullet of sorts, and
that spells unrealistic expectations and eventual disillusionment. More importantly,
many companies are discovering that there is a large gap between learning the
techniques of six sigma and realizing its benets. Some are falling prey to six sigma-
itis, symptomized by a vast number of uncoordinated projects that do not support
critical corporate goals or customer functionality. If these problems are not addressed
quickly, six sigma will become just another corporate fad, companies will not benefit
from its power, and institutional cynicism will get yet another boost. We have been
down this road before, and we do not need to go down it again.
Executives of these companies need a better understanding of what six sigma
is and what it is not, what it can do and what it cannot. With six sigma's focus on
problem identication and resolution, can it create breakthrough process designs?
Does it have the power to address so-called big P (big picture) processes that cross
functional boundaries? Is its popularity at least in part a reflection of the fact that
six sigma does not shake up an organization, which might make it easier to swallow
but limits its impact? Might six sigma actually reinforce, rather than knock down,
the silos that impede improved performance? We need realistic answers to these
essential questions.
We suggest that process management and reengineering are necessary comple-
ments to six sigma, especially in the DFSS phase. Six sigma is part of the answer,
but it is not the whole answer. Six sigma veterans have learned they need to leverage
their six sigma efforts with other improvement techniques: process management,
reengineering, and lean thinking. Six sigma is a methodology that uses many tools;
it is not a religion, and business results are more important than ideological purity.
The real winners at DFSS do not limit their arsenal to just one weapon but employ
all appropriate techniques, combining them in an integrated program of process
redesign and improvement. After all, we already know that over 90% of our problems
are systems (process) problems. It would be ludicrous to look the other way when
we have a chance to fix them in the design stage.
In addition, most of the six sigma projects, discussions, and experts emphasize
the manufacturing end of potential improvements. But let us remember that in
services, the cost of quality is in the 60 to 70% range of sales. That is an incredible
potential of improvement, and reengineering is a perfect tool not only to evaluate
but also to help the six sigma initiatives reformulate the processes in such a way
that improvements will occur as a matter of course and on an ongoing basis. In this
respect, one may even say that the goal of DFSS and reengineering is to create a
process in which all its components are managed, measured, and improved from
two distinct, yet the same perspectives:
1. Organizational profitability
2. Satisfaction of customer functionality
REFERENCES
Chang, R.Y., Improve Processes, Reengineer Them, or Both, Training & Development, Mar.
1994, pp. 54-61.
Rupp, R.O. and Russell, J.R., The Golden Rules of Process Redesign, Quality Progress, Dec.
1994, pp. 85-90.
Stamatis, D.H., The Nuts and Bolts of Reengineering, Paton Press, Red Bluff, CA, 1997.
Weisbord, M., Productive Workplaces, Jossey-Bass, San Francisco, 1987.
Wilgus, A.L., The Conference Method of Redesign, Quality Progress, May 1995, pp. 89-95.
SELECTED BIBLIOGRAPHY
Baer, W., Employee-Managed Work Redesign: New Quality of Work Life Developments,
Supervision, Mar. 1986, pp. 6-9.
Cooley, M., Architect or Bee, South End Press, Boston, 1981.
Crosby, B., Employee Involvement and Why It Fails, What It Takes to Succeed, Personnel
Administration, Feb. 1986, pp. 95-96.
Meyer, L., How the Right Measures Help Teams Excel, Harvard Business Review, May/June
1994, pp. 95-104.
McElroy, J., Back to Basics at NUMMI: Quality Through Teamwork, Automotive Industries,
Oct. 1985, pp. 63-64.
Pyzdek, T., Considering Constraints, Quality Digest, June 2000, p. 22.
Schumann, A.L., Fed Up with Furniture Failure? Office System 85, May 1986, pp. 60-67.
Solomon, B.A., A Plan That Proves Team Management Works, Personnel, June 1985, pp. 6-8.
Swineheart, D.P. and Sherr, M.A., A System Model for Labor-Management Cooperation,
Personnel Administration, Apr. 1986, pp. 87-90.
Wall, T., What Is New in Job Design, Personnel Management, 1984, pp. 27-29.
GEOMETRIC DIMENSIONING
AND TOLERANCING (GD&T)
GD&T is an engineering product denition standard that geometrically describes design
intent. It also provides the documentation base for the design of quality and production
systems. Used for communication between product engineers and manufacturing engi-
neers, it promotes a uniform interpretation of a component's production requirements.
This interpretation and communication are of interest to those who are about to
undertake the DFSS baton. Without the appropriate and applicable interpretation of
the design and without the appropriate communication of that design to manufac-
turing, problems will definitely occur.
Therefore, in this section we will address some of the key aspects of GD&T in
a cursory manner. We will touch on some of the denitions and principles of general
tolerancing as applied to conventional dimensioning practices. The term conventional
dimensioning as used here implies dimensioning without the use of geometric
tolerancing. Conventional tolerancing applies a degree of form and location control
by increasing or decreasing the tolerance.
Conventional dimensioning methods provide the necessary basic background to
begin a study of geometric tolerancing. It is important that you completely under-
stand conventional tolerancing before you begin the study of geometric tolerancing.
When mass production methods began, interchangeability of parts was impor-
tant. However, many times parts had to be hand selected for tting. Today, industry
has faced the reality that in a technological environment, there is no time to do
unnecessary individual tting of parts. Geometric tolerancing helps ensure inter-
changeability of parts. The function and relationship of a particular feature on a part
dictates the use of geometric tolerancing.
Geometric tolerancing does not take the place of conventional tolerancing.
However, geometric tolerancing specifies requirements more precisely than conven-
tional tolerancing does, leaving no doubts as to the intended definition. This precision
may not be the case when conventional tolerancing is used, and notes on the drawing
may become ambiguous.
When dealing with technology, a drafter needs to know how to properly represent
conventional dimensioning and geometric tolerancing. Also, a technician must be able
to accurately read dimensioning and geometric tolerancing. Generally, the drafter con-
verts engineering sketches or instructions into formal drawings using proper standards
TABLE 10.10
GD&T Characteristics and Symbols
(The ASME Y14.5M graphic symbols cannot be reproduced in this text rendering; the
characteristics and their groupings are listed below.)

Form tolerances (individual features): Flatness; Straightness; Circularity (Roundness);
Cylindricity
Profile tolerances (individual or related features): Profile of a Surface; Profile of a Line
Orientation tolerances (related features): Parallelism; Perpendicularity (Squareness);
Angularity
Runout tolerances (related features): Circular Runout; Total Runout
Location tolerances (related features): Position; Concentricity

Modifying symbols and terms: Maximum Material Condition (M); Regardless of Feature
Size (S); Least Material Condition (L); Projected Tolerance Zone (P); Diameter; Basic
dimension; Datum Feature Symbol; Datum Target
and techniques. After acquiring adequate experience, a design drafter, designer, or
engineer begins implementing geometric dimensioning and tolerancing on the research
and development of new products or the revision of existing products.
Most dimensions in this text are in metric. Therefore, a 0 precedes decimal
dimensions less than one millimeter, as in 0.25. When inch dimensions are used, a
0 will not precede a decimal dimension that is less than one inch. For review of
decimals and their operations, refer to Volume II of this series.
Most dimensions in this text are in the metric International System of Units (SI).
The common SI unit of measure used on engineering drawings is the millimeter.
The common U.S. unit used on engineering drawings is the inch. (The reader may
want to review the discussion and conversions of the SI system in Chapter 20 of
Volume II of this series.) The actual units used on your engineering drawings will
be determined by the policy of your company. The general note UNLESS OTH-
ERWISE SPECIFIED, ALL DIMENSIONS ARE IN MILLIMETERS (or
INCHES) should be placed on the drawing when all dimensions are in either
millimeters or inches. When some inch dimensions are placed on a metric drawing,
the abbreviation IN. should follow the inch dimensions. The abbreviation mm
should follow any millimeter dimensions on a predominantly inch-dimensioned
drawing. Angular dimensions are established in degrees (°) and decimal degrees
(X.X°), or in degrees (°), minutes (′), and seconds (″).
The following are some rules for metric and inch dimension units (for a more
detailed discussion see Volume II of this series):
Millimeters
The decimal point and zero are omitted when the metric dimension is
a whole number. For example, the metric dimension 12 has no
decimal point followed by a zero.
When the metric dimension is greater than a whole number by a
fraction of a millimeter, the last digit to the right of the decimal point
is not followed by a zero. For example, the metric dimension 12.5
has no zero to the right of the ve. This rule is true unless tolerance
values are displayed.
Both the plus and minus values of a metric tolerance have the same
number of decimal places. Zeros are added to fill in where needed.
A zero precedes a decimal millimeter that is less than one. For example,
the metric dimension 0.5 has a zero before the decimal point.
Examples in ASME Y14.5M show no zeros after the specified dimen-
sion to match the tolerance. For example, 24 ± 0.25 or 24.5 ± 0.25 are
correct. However, some companies prefer to add zeros after the spec-
ified dimension to match the tolerance, as in 24.00 ± 0.25 or 24.50
± 0.25.
Inches
A zero does not precede a decimal inch that is less than one. For
example, the inch dimension .5 has no zero before the decimal point.
A specified inch dimension is expressed to the same number of decimal
places as its tolerance. Zeros are added to the right of the decimal point
if needed. For example, the inch dimension .250 ± .005 has an
additional zero added to .25 to match the three-decimal tolerance.
Fractional inches may be used but generally indicate a larger tolerance.
Fractions may be used to give nominal sizes such as in a thread callout.
Both the plus and minus values of an inch tolerance have the same
number of decimal places. Zeros are added to fill in where needed.
Zeros are added where needed after the specified dimension to match
the tolerance. For example, 2.000 ± .005 and 2.500 ± .005 both have
zeros added to match the tolerance.
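As a compact illustration of the millimeter and inch display rules above, the following sketch formats a dimension according to those conventions. It is only a sketch of the rules as stated here; the function names are illustrative and not part of any standard.

def format_mm(value: float) -> str:
    """Millimeter rules: whole numbers drop the decimal point; values under one
    millimeter keep a leading zero (0.5); no trailing zero after a fractional
    value (12.5, not 12.50) unless a tolerance must be matched."""
    if value == int(value):
        return str(int(value))
    return f"{value:g}"          # 0.5 -> '0.5', 12.5 -> '12.5'

def format_inch(value: float, places: int) -> str:
    """Inch rules: no leading zero for values under one inch; express the
    dimension to the same number of decimal places as its tolerance."""
    text = f"{value:.{places}f}"
    return text[1:] if text.startswith("0.") else text

print(format_mm(12.0), format_mm(12.5), format_mm(0.5))   # 12 12.5 0.5
print(format_inch(0.25, 3), format_inch(2.5, 3))          # .250 2.500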
The following rules are summarized from ASME Y14.5M. These rules are
intended to give you an understanding of the purpose for standardized dimensioning
practices. Short denitions are given in some cases:
Each dimension has a tolerance except for dimensions specifically iden-
tified as reference, maximum, minimum, or stock. The tolerance may be
applied directly to the dimension, indicated by a general note, or located
in the title block of the drawing.
Dimensioning and tolerancing must be complete to the extent that there
is full understanding of the characteristics of each feature. Neither mea-
suring the drawing nor assumption of a dimension is permitted. Exceptions
include drawings such as loft, printed wiring, templates, and master lay-
outs prepared on stable material. However, in these cases the necessary
control dimensions must be given.
Each necessary dimension of an end product must be shown. Only dimen-
sions needed for complete denition should be given. Reference dimen-
sions should be kept to a minimum.
Dimensions must be selected and arranged to suit the function and mating
relationship of a part. Dimensions must not be subject to more than one
interpretation.
The drawing should define the part without specifying the manufacturing
processes. For example, give only the diameter of a hole without a man-
ufacturing process such as DRILL or REAM. However, there should
be specifications given on the drawing, or related documents, in cases
where manufacturing, processing, quality assurance, or environmental
information is essential to the definition of engineering requirements.
It is allowed to identify (as non-mandatory) certain processing dimensions
that provide for finish allowance, shrink allowance, and other require-
ments, provided the final dimensions are given on the drawing. Non-
mandatory processing dimensions should be identified by an appropriate
note, such as NON-MANDATORY (MFG DATA).
Dimensions should be arranged to provide the required information for
optimum readability. Dimensions should be shown in true profile views
and should refer to visible outlines.
Wires, cables, sheets, rods, and other materials manufactured to gage or
code numbers should be specied by dimensions indicating the diameter
or thickness. Gage or code numbers may be shown in parentheses follow-
ing the dimension.
A 90° angle is implied where centerlines, and lines displaying features,
are shown on a drawing at right angles and no angle is specified. The
tolerance for these 90° angles is the same as the general angular tolerance
specified in the title block or in a general note.
A 90° basic angle applies where centerlines of features are located by
basic dimensions and no angle is specified. Basic dimensions are consid-
ered theoretically perfect in size, profile, orientation, or location. Basic
dimensions are the basis for variations that are established by other tol-
erances.
Unless otherwise specified, all dimensions are measured at 20°C (68°F).
Compensation may be made for measurements taken at other tempera-
tures.
All dimensions and tolerances apply in a free state condition except for
non-rigid parts. Free state condition describes distortion of the part after
removal of forces applied during manufacturing. Non-rigid parts are those
that may have dimensional change due to thin wall characteristics.
Unless otherwise specified, all geometric tolerances apply for full depth,
length, and width of the feature.
Dimensions apply on the drawing where specied.
To appreciate GD&T, the following definitions must be understood if you are to be
successful in your dimensioning practices:
Actual size The measured size of a feature or part after manufacturing.
Diameter The distance across a circle measured through the center. Rep-
resented on a drawing with the symbol Ø. Circles on a drawing are
dimensioned with a diameter.
Dimension A numerical value indicated on a drawing and in documents
to dene the size, shape, location, geometric characteristics, or surface
texture of a feature. Dimensions are expressed in appropriate units of
measure.
Feature The general term applied to a physical portion of a part or object.
A surface, slot, tab, keyseat, or hole are all examples of features.
Feature of size One cylindrical or spherical surface, or a set of two parallel
plane surfaces, each feature being associated with a size dimension.
Nominal size A dimension used for general identication such as stock
size or thread diameter.
Radius The distance from the center of a circle to the outside. Arcs are
dimensioned on a drawing with a radius. A radius dimension is preceded
by an R. The symbol CR refers to a controlled radius. A controlled
radius means that the limits of the radius tolerance zone must be tangent
to the adjacent surfaces, and there can be no reversal in the contour. The
use of CR is more restrictive than R (where reversals are permitted). The
symbol SR refers to a spherical radius.
Reference dimension A dimension, usually without a tolerance, used for
information purposes only. This dimension is often a repeat of a given
dimension or established from other values shown on the drawing. A ref-
erence dimension does not govern production or inspection. A reference
dimension is shown on a drawing with parentheses. For example, (60)
would indicate a reference dimension.
Stock size A commercial or premanufactured size, such as a particular size
of square, round, or hex steel bar.
A tolerance is the total amount that a specific dimension is permitted to vary.
A tolerance (not to be confused with TOLERANCING) is not given to values that
are identified as reference, maximum, minimum, or stock sizes. The tolerance may
be applied directly to the dimension, indicated by a general note, or identified in the
drawing title block.
The limits of a dimension are the largest and smallest numerical values that the
feature can be. For example, a dimension is stated as 12.50 ± 0.25. This is referred
to as plus-minus dimensioning. The tolerance of this dimension is the difference
between the maximum and minimum limits. The upper limit is 12.50 + 0.25 = 12.75
and the lower limit is 12.50 - 0.25 = 12.25. So, if you take the upper limit and
subtract the lower limit you have the tolerance: 12.75 - 12.25 = 0.50.
The specified dimension is the part of the dimension from which the limits are
calculated. The specified dimension of the example above is 12.5. A dimension on a
drawing may be displayed with plus-minus dimensioning, or the limits may be specified
as 12.75 and 12.25. Many companies prefer this second method because the limits are
shown and calculations are not required. This is called limits dimensioning.
A bilateral tolerance is permitted to vary in both the + and the - directions from
the specified dimension. An equal bilateral tolerance is where the variation from
the specified dimension is the same in both directions. An unequal bilateral tolerance
is where the variation from the specified dimension is not the same in both directions.
A unilateral tolerance is permitted to increase or decrease in only one direction
from the specified dimension.
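The limit arithmetic described above is easy to mechanize; the short sketch below reproduces the 12.50 ± 0.25 example and a unilateral case. The function name is illustrative only.

def limits(specified: float, plus: float, minus: float):
    """Return (upper limit, lower limit, total tolerance) for a plus-minus
    dimension: upper = specified + plus, lower = specified - minus."""
    upper = specified + plus
    lower = specified - minus
    return upper, lower, upper - lower

# Equal bilateral tolerance from the text: 12.50 ± 0.25
print(limits(12.50, 0.25, 0.25))   # (12.75, 12.25, 0.5)

# Unilateral tolerance: variation permitted in one direction only
print(limits(12.50, 0.50, 0.0))    # (13.0, 12.5, 0.5)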
REFERENCES
American Society of Mechanical Engineers, Dimensioning and Tolerancing: ASME Y14.5M-
1994, American Society of Mechanical Engineers, New York, 1994.
SELECTED BIBLIOGRAPHY
Anon., GDT Gets Everyone Working on Design Quality, Quality Assurance Bulletin, number
1315, undated.
Karl, D.P., Morisette, J., and Taam, W., Some applications of a multivariate capability index
in geometric dimensioning and tolerancing, Quality Engineering, 6, 649-665, 1994.
Krulikowski, A., Geometric Dimensioning and Tolerancing: A Self Study Workbook. Quality
Press, Milwaukee, 1994.
Madsen, D.A., Geometric Dimensioning and Tolerancing The Goodheart-Wilcox Company,
Inc., South Holland, IL, 1995.
Wearing, C. and Karl, D.P., The importance of following GD&T specifications, Quality
Progress, Feb. 1995, pp. 95-98.
METROLOGY
The two pillars of the six sigma breakthrough methodology are measurement and
project selection. In this section we are going to discuss the issue of measurement
from a metrology perspective, and in Chapter 13 we will address project selection
under the umbrella of project management.
We have known for a long time that we are only as good as our measurement
will allow us to be. Therefore, improving measurement systems and understanding
the protocol of measurement will benefit the DFSS team immensely. It is very
frustrating when you know that what you measure with is not sensitive enough, not
accurate enough, or not precise enough. As though that is not enough, we compound
the problem with variability (incompatibility) between metrology systems.
When a project has been selected for a DFSS investigation/improvement, it is
imperative to understand the metrology system in place (the advantages as well
as the disadvantages/limitations of that system) and plan accordingly.
UNDERSTANDING THE PROBLEM
Incompatibility issues are more than just a nuisance. When system A doesn't work
with system B, users have to stock more than one set of spare parts, train operators
on multiple software packages, and produce different types and formats of inspection
reports. While these problems are significant, the real difficulty that system incom-
patibility creates is the inability to exchange a common inspection model between
measuring devices. The time it takes to reprogram each device to inspect the same
part, refixturing time and cost, and the resulting loss of accuracy in the measurement
and inspection process create serious inefficiencies in a system that is, theoretically,
designed to provide a high level of process control and improved inspection through-
put. (In a design format, think for a moment about the ramifications of a design control/test
if you are not sure of its capability, accuracy, precision and so on.)
These data exchange roadblocks also create time-to-market problems. During
the product development and introduction cycle, unnecessary time spent reprogram-
ming control/inspection routines and reevaluating data contributes to lengthy product
development cycles with a resulting loss of competitiveness.
The effects of system incompatibility are growing rapidly. Genest (2001) has
estimated that the problem has cost the automotive industry alone about $1 billion.
Cost penalties also exist for metrology system suppliers and individual users who
absorb the costs of repair and training necessary when incompatible systems must
work together.
It is imperative that the leadership (Champion, Master Black Belt, and Black
Belt) of a designated project under DFSS understands metrology and uses it effectively.
Otherwise, the results do not mean very much. Let us then examine in a cursory
fashion metrology.
Metrology is a Hellenic word made up of metron = measure of length and logos
= reason; logic. In its combined form it means the science of or systems of weights
and measures. (It was Eli Whitney who between 1800 and 1811 proved interchange-
ability would be a vital part of manufacturing, thus developing the first use of the
metrology system in the United States.)
METROLOGY'S ROLE IN INDUSTRY AND QUALITY
In order for metrology to exist we must recognize that we need measurements to
1. Make things:
Length, width and height
Eliminate one of a kind
2. Control the way others make things:
Designing
Building
3. Describe scientific:
Worldwide exchange of ideas
Dollar
Furthermore, we must understand that metrology is based on standards. Without
standards and the ability to trace the standards, measurement would not be possible.
Therefore, metrology is based on a hierarchical system of standards with a level of
accuracy reflecting the level of the standard under question. The relationship of the
hierarchy and accuracy is shown as follows:

Hierarchy of Standards              Levels of Accuracy
National standards                  Highest level
National reference
Primary reference standards
Transfer standard
Working standards
Gages, fixtures, and instruments    Lowest level

In any metrology system there is a control. The control system is one or more
of the following items:
1. Calendar elapsed time: Fixed time, e.g., every 3 months.
2. Amount of actual usage: Number of products checked by equipment.
3. Actual operating hours: Meter actual time equipment draws power.
Depending on the control used, there is also a calibration requirement that will
ensure that the measurement is what it is supposed to be. Considerations for any
calibration system include the following:
Establishment and maintenance of a system:
Written description of system
Flow of system calibration, repair, and maintenance
Intervals identied
Established standards and correlation:
Traceable (National Institute of Standards and Technology, NIST)
Levels
Identication:
Equipment identied
Part, tool or equipment number
Calibration label
Calibration date
Next due date
Responsible party
Recall system:
Scheduled or unscheduled
Data recording and analysis:
Computerized
Manual
Characteristics
Environmental controls:
Temperature
Humidity
Just like any other system, a measuring system may develop inaccurate mea-
surements. Some of the sources of inaccuracy are:
Poor contact Gages with wide areas of contact should not be used on
parts with irregular or curved surfaces.
Distortion Gages that are spring-loaded could cause distortion of thin-
walled, elastic parts.
Impression Gages with heavy stylus could indent the surface of contact.
Expansion Gages and parts should obtain thermal equilibrium before
measuring.
Geometry Measurements are sometimes made under false assump-
tions. For example, part being flat when not flat, or concentric when not
concentric, or round when lobed.
These inaccuracies may be the result of accuracy and/or precision problems
due to:
Operator error The same operator using the measuring instrument on
the same product will come up with a dispersion of readings.
Operator to operator error Two operators using the same measuring
instrument on the same product will exhibit differences traceable to oper-
ator technique.
Equipment error Each piece of equipment has built within it its own
sources of error.
Material error In many cases material cannot be retested, such as in
cases of destruct test.
Test procedure error In cases where two procedures may exist which
expect the same outcome.
Laboratory error In cases of two laboratories performing same test.
(Here the reader may want to review Gage R&R and the applicability of accuracy,
repeatability, reproducibility, stability, and linearity in Volume V of this series
especially Chapters 15 and 16.)
MEASUREMENT TECHNIQUES AND EQUIPMENT
There are many measurement techniques. However, the most typical at least for
the DFSS are:
1. Linear
2. Angular
3. Force and torque
4. Surface and volume
5. Mass and weight
6. Temperature
7. Pressure and vacuum
8. Mechanical
9. Electrical
10. Optical
11. Chemical
To make these measurements, many types of equipment may be used, some tradi-
tional and common and some very specic and unique for individual situations.
Typical types of equipment include:
1. Scale
2. Radius gages
3. Plug gage
4. Thread gage
5. Spline gage
6. Parallels
7. Sine plate
8. Surface plate
9. Caliper
10. Hardness tester
11. Indicator
12. Micrometer
13. Comparator
14. Profilometer
15. Coordinate measuring machine
16. Pneumatic gaging
17. Optical
PURPOSE OF INSPECTION
As surprising as it may sound, inspection sometimes is used as part of the metrology
scheme. It is used with the intention of:
Distinguishing good product from bad
Distinguishing good lots from bad
Checking for process change
Comparing process to specication
Measuring process capability
Ensuring product design intent
Determining the accuracy of the inspector
Determining the precision of measuring equipment
Inspection is used primarily in three areas, called inspection points. They are:
1. Incoming material
Verification of purchase order
Checking for conformance to specification
Verification of quantity received
Acceptance of certification
Identification
2. In-process:
First piece setup
Verification of process change
Monitoring of process capability
Verification of process conformance
3. Finished product
Last piece release
Verification of product process
Preparation of certification of process
It is also important to know that there are three kinds of inspection. They are:
1. 100% inspection
Safety product
Lot size too small for sampling
Seventy-nine percent effective manually
2. Sampling
Large lot size
Decision making (see the acceptance-probability sketch following this list)
3. Visual mechanical, sensory
Fit and function
Lack of standard
Senses feel, smell, taste, touch.
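For the sampling option noted above, the probability that a lot is accepted can be estimated with the binomial distribution. The sketch below is illustrative only; the plan parameters (n = 50, accept on zero nonconforming) are assumptions, not values from the text.

from math import comb

def prob_accept(n: int, c: int, p: float) -> float:
    """Probability a lot is accepted under a single sampling plan:
    inspect n pieces, accept if c or fewer nonconforming are found."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Illustrative plan: sample 50 pieces, accept on zero nonconforming
for p in (0.001, 0.01, 0.05):
    print(f"lot fraction nonconforming {p:.3f} -> P(accept) = {prob_accept(50, 0, p):.3f}")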
HOW DO WE USE INSPECTION AND WHY?
Even though we know human inspection is very ineffective, sometimes it is the only
option we may have. In any case, we use inspection to classify characteristics of
importance and to study and evaluate testing. As for the characteristics we are
interested in, they can be summarized as follows:
Critical: Will affect safety
Personal injury
Product not usable
Affects consumer confidence
Loss to company
Major A: Possible injury
Possible product not usable
Affects consumer
Loss to company
Major B: Inconvenient to consumer
Visual
Possible loss to company
Minor: Possible visual
Inconvenient to company
As for the testing, we are interested in
Acceptance
Reliability
Qualification
Verification
of the item tested. Perhaps the most important issue in testing for DFSS is verifica-
tion. We must be sure that the test is reflective of the real world usage and that it
addresses customer functionality.
METHODS OF TESTING
1. Destructive testing
Can only be conducted once
Detects flaws in materials and components
Measures physical properties
Is not cost effective
2. Nondestructive testing (N.D.T.)
Is a repeatable test
Detects flaws in material and components
Measures physical properties
Is cost effective
Obviously, whenever possible methods of N.D.T. should be used. Typical meth-
ods are:
Eddy current
X-ray
Gamma ray
Magnetic particle
Penetrant dye
Ultrasonic
Pulse echo
Capacitive
Fiber optical
INTERPRETING RESULTS OF INSPECTION AND TESTING
All inspections and all tests generate reports by their nature. That means someone
must evaluate them and take the appropriate action. Typical issues that the DFSS
team should be looking at are:
1. Accuracy and precision
2. Sampling errors
3. Relation to standards
4. Recording documents
5. Tabulation and calculation
6. Reporting of results
7. Analysis and interpretation
8. Corrective action
Another issue in reporting is the level of reporting as well as the level of
responsibility. As a team working on a DFSS project, you should be aware of that
relationship. That relationship is shown as follows:

Levels of reporting        Levels of responsibility
Operator                   Manager
Auditor                    Engineer
Technician                 Technician
Engineer                   Auditor
Manager                    Operator
TECHNIQUE FOR WRINGING GAGE BLOCKS
One of the most common ways to calibrate certain machinery is through gage blocks.
It is very important for the DFSS team to be familiar not only with the blocks
themselves but how to use them as well. In this section we are going to address both
of these issues.
The following points are of special importance when working with gage blocks:
1. Be sure gaging surfaces are clean.
2. Overlap gaging surfaces about 1/8 inch.
3. While pressing blocks lightly together, slip one over the other.
4. Blocks will now adhere.
5. Slip blocks smoothly until gaging surfaces are fully mated.
By wringing gage blocks together, you can obtain accuracy within millionths of
an inch. Caution is usually given not to use a circular action because this might
cause serious wear or even damage from abrasive dust trapped between surfaces.
Gage blocks are calibrated at the international standard measuring temperature
of 68°F (20°C). (This is very important to keep in mind, otherwise see below.)
When measurements are conducted at this temperature between blocks and parts of
dissimilar materials, no correction for different coefficients of expansion is necessary
providing the components have had sufficient time to adjust to the environment. If
blocks and parts are made of the same material and are at the same temperature,
accurate results are possible regardless of whether the temperature is high or low.
To determine the correction requirement when blocks and parts are dissimilar
and at temperatures other than 68°F, use the following formula:

E = (L)(K)(T)

where E = the measurement error in microinches; L = nominal dimension in inches;
K = difference of coefficients in microinches; and T = deviation of temperature
from 68°F.
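A quick numerical check of this formula is sketched below. The part material, block material, and temperature are illustrative assumptions; the expansion coefficients are taken from the table that follows.

def thermal_error_microinch(length_in: float, coeff_diff: float, temp_f: float) -> float:
    """E = L * K * T: measurement error in microinches when blocks and part are
    of dissimilar materials and not at 68 deg F.
    length_in  -- nominal dimension in inches (L)
    coeff_diff -- difference of expansion coefficients, microinch/inch/deg F (K)
    temp_f     -- measuring temperature in deg F (T is its deviation from 68)
    """
    return length_in * coeff_diff * (temp_f - 68.0)

# Illustrative case: a 4 in. aluminum part (12.8) measured with steel blocks (6.4)
# at 78 deg F, so K = 12.8 - 6.4 = 6.4 and T = 10 deg F
print(thermal_error_microinch(4.0, 12.8 - 6.4, 78.0))   # 256.0 microinches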
Typical coefficients of expansion in microinches per inch of length per degree F are:

Hardened tool steel      6.4
Stainless steel (410)    5.5
Chrome carbide           4.5
Tungsten carbide         3.0
Aluminum                12.8
Copper                   9.4

The gage blocks are typically in a set of 81 pieces and they are arranged in the
following order:

9 Blocks, .0001 Series:
.1001 .1002 .1003 .1004 .1005 .1006 .1007 .1008 .1009
49 Blocks, .001 Series:
.101 .102 .103 .104 .105 .106 .107 .108 .109 .110
.111 .112 .113 .114 .115 .116 .117 .118 .119 .120
.121 .122 .123 .124 .125 .126 .127 .128 .129 .130
.131 .132 .133 .134 .135 .136 .137 .138 .139 .140
.141 .142 .143 .144 .145 .146 .147 .148 .149
19 Blocks, .050 Series:
.050 .100 .150 .200 .250 .300 .350 .400 .450 .500
.550 .600 .650 .700 .750 .800 .850 .900 .950
4 Blocks, 1.000 Series:
1.000 2.000 3.000 4.000

LENGTH COMBINATIONS
Do not trust trial and error methods when assembling gage blocks into a gaging
dimension. The basic rule is to select the fewest blocks that will suit the requirement.
To construct a length of 1.3275 using a typical 81-piece set, the following procedure
may be used:
1. Write the desired dimension on a piece of paper: 1.3275.
2. Begin the selection at the top of the gage block set.
3. Reduce the last digit of the dimension to zero by selecting a block with a 5
   in the fourth decimal place. In this case, the .1005 block is selected and its
   length is subtracted from 1.3275 to determine a remainder still to be selected:
   1.3275 - .1005 = 1.2270. The value .1005 may be written again in an adjacent
   column for subsequent proof of the selection.
4. Select a block to reduce the third decimal place to zero: .107, leaving
   1.2270 - .107 = 1.1200.
5. Select a block to reduce the second and first decimals to zero: .120, leaving
   1.1200 - .120 = 1.0000. Wherever possible, such double reductions are
   desirable to cut down the total number of blocks selected.
6. Complete the selection with the 1.000 block, leaving 1.0000 - 1.000 = 0.0000.
   Proof: .1005 + .107 + .120 + 1.000 = 1.3275.
There are times when the same gaging dimension must be assembled more than
once from a single set of blocks. This may unavoidably increase the number of blocks
required for the specific length. Assume that a second length of 1.3275 is required
from the 81-piece set:
1. Write the requirement: 1.3275.
2. Select two blocks to reduce the fourth decimal to zero. In this case, .1002
   and .1003, leaving 1.3275 - .2005 = 1.1270.
3. One block may now be selected to reduce three digits: .127, leaving
   1.1270 - .127 = 1.0000.
4. Select two blocks to form the remaining 1.000. In this case, .400 and .600.
   Proof: .1002 + .1003 + .127 + .400 + .600 = 1.3275.
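The "proof of the selection" arithmetic used in both examples can be mechanized as a simple subtraction loop, as sketched below. The function name is illustrative; the two block stacks are exactly those worked out above.

from decimal import Decimal

def check_stack(target, blocks):
    """Proof of the selection: subtract each chosen block from the desired
    dimension and confirm the remainder reaches zero."""
    remainder = Decimal(target)
    for b in blocks:
        remainder -= Decimal(b)
        print(f"- {b:>7} -> {remainder}")
    return remainder

# First selection from the text: 1.3275 = .1005 + .107 + .120 + 1.000
assert check_stack("1.3275", ["0.1005", "0.107", "0.120", "1.000"]) == 0
# Second selection: 1.3275 = .1002 + .1003 + .127 + .400 + .600
assert check_stack("1.3275", ["0.1002", "0.1003", "0.127", "0.400", "0.600"]) == 0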
For some general rules and guidelines on shapes and basic calculations the reader
is referred to Volume II, Part II. Also, for an explanation and examples of the SI
system see Volume II, Part II.
Volume V of this series covers the issue of GR&R and its terminology and
therefore here we give only brief definitions of the key terms:
Gage accuracy Difference between the observed average of measurements
and the master value. The master value can be determined by averaging
several measurements with the most accurate measuring equipment avail-
able.
Gage repeatability Variation in measurements obtained with one gage
when used several times by one operator while measuring the identical
characteristics on the same parts.
Gage reproducibility Variation in the average of the measurements made
by different operators using the same gage when measuring identical char-
acteristics on the same parts.
Gage stability Total variation in the measurements obtained with a gage
on the same master or master parts when measuring a single characteristic
over an extended time period.
Gage linearity Difference in the accuracy values through the expected
operating range of the gage.
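A rough numerical illustration of these definitions is sketched below using a tiny hypothetical data set. It computes a simple bias, a within-operator spread, and a between-operator spread; it is not the formal average-and-range or ANOVA gage R&R study covered in Volume V, and all the readings and the master value are assumptions for the example.

from statistics import mean, pstdev

# Hypothetical readings: two operators measure the same part three times each.
readings = {
    "operator_1": [10.02, 10.01, 10.03],
    "operator_2": [10.06, 10.05, 10.07],
}
master_value = 10.00

all_readings = [x for trials in readings.values() for x in trials]
bias = mean(all_readings) - master_value                        # gage accuracy (bias)
repeatability = mean(pstdev(t) for t in readings.values())      # within-operator spread
reproducibility = pstdev([mean(t) for t in readings.values()])  # between-operator spread

print(f"bias={bias:.3f}  repeatability={repeatability:.4f}  reproducibility={reproducibility:.4f}")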
REFERENCES
Genest, D.H., Improving Measurement System Compatibility, Quality Digest, Apr. 2001, pp.
35-40.

11 Innovation Techniques Used in Design for Six Sigma (DFSS)
MODELING DESIGN ITERATION USING SIGNAL
FLOW GRAPHS AS INTRODUCED BY EPPINGER,
NUKALA AND WHITNEY (1997)

The signal flow graph represents a diagram of relationships among a number of
variables. When these relationships are linear, the graph represents a system of
simultaneous linear algebraic equations. The signal flow graph, as shown in
Figure 11.1, is composed of a network of directed branches, which connect at the
nodes. A branch jk, beginning at node j and terminating at node k, indicates its
direction from j to k by an arrowhead on the branch. Each branch jk has associated
with it a quantity known as the branch transmission P_jk.
For our modeling processes, the branches represent the tasks being worked (an
activity-on-arc representation). The branch transmissions include the probability and
time to execute the task represented by the branch:
P

jk

= p

jk
z

t

jk

where



p

jk

is the probability associated with the branch; t

jk

is the time taken to traverse
the branch; and z is the transform variable used to connect the physical system (time
domain) to the quantities used in the analysis (transform domain).
The z transform simplifies the algebra, as it enables us to incorporate the quantities to be multiplied (probabilities) in the coefficient of the expression, and to include the quantities to be added (task times) in the exponent. The resulting system is then analogous to a discrete sampled data system, and the body of literature on this subject can be applied for the analysis thereof.
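Because probabilities multiply while task times add, a branch transmission p*z^t can be handled in code as the pair (p, t). The sketch below is our own illustration of this bookkeeping, not code from the cited paper; the once-through path of the numerical example later in this section appears as the 0.4z^5 term.

```python
def path_transmission(branches):
    """Combine branch transmissions p*z^t traversed in series: probabilities multiply, times add."""
    prob, time = 1.0, 0
    for p, t in branches:
        prob *= p
        time += t
    return prob, time

# Task A (3 time units), then task B (2 units), then finish with probability 0.4:
print(path_transmission([(1.0, 3), (1.0, 2), (0.4, 0)]))   # (0.4, 5), i.e., the 0.4 z^5 term
```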
The path transmission is defined as the product of all branch transmissions along a single path. The graph transmission is the sum of the path transmissions of all the possible paths between two given nodes. The graph transmission is also the resulting expression on an arc connecting the two given nodes when all of the other nodes have been absorbed. In particular, we are interested in computing the graph transmission from the start to the finish nodes. Henceforth, graph transmission shall refer to the graph transmission between the start and the finish nodes and is denoted as T_sf.
The coefficient of each term in the graph transmission is the probability associated with the path(s) it represents, and the exponent of z is the duration associated
with the path(s). The graph transmission can be derived using the standard operations
for the signal flow graphs (discussed below). The impulse response of the graph
transmission is then a function representing the probability distribution of the lead
time of the process. It can be shown that the expected value of the lead time of the
process is:

E(L) = dT_sf(z)/dz evaluated at z = 1
FIGURE 11.1 A typical branching using signal flow graph.
FIGURE 11.2 A simple example with signal flow graph.
FIGURE 11.3 A hypothetical design process (concept, product design A, and tooling design B, from start to finish).

NUMERICAL EXAMPLE

A simple example is shown in Figure 11.2.
The hypothetical design process is represented by the graph shown in Figure 11.3.
The two tasks A and B (product design and tooling design) take 3 and 2 units
of time respectively. Once task B is attempted, task A is reworked with probability
0.6, and once A is attempted, B is reworked with probability 0.3. Iterative repetitions of A are represented by task A′.
The graph transmission is found using the node elimination technique (discussed below). This graph transmission is given by the equation shown in Figure 11.4.
The expected value of the project lead time E(L) is 7.6 units of time.
The first few terms of the probability distribution function are represented graphically in Figure 11.5.

FIGURE 11.4 The graph transmission: T_sf = z^5 (0.4 + 0.42z^3) / (1 - 0.18z^5).
FIGURE 11.5 First few terms of the probability distribution of lead time (probability plotted against time, with terms at 5, 8, 10, and 13 time units).


The distribution can be found for this simple example by performing synthetic division on T_sf to obtain the first few terms of the infinite series. The nominal (once-through) time for A and B in series is 5 units of time, which occurs with probability 0.4. It is more likely (probability 0.42) that the lead time L of the process will be 8 units of time.
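The synthetic division can also be done numerically. The sketch below (our own illustration) expands the Figure 11.4 transmission as a power series in z; the coefficient of z^t is the probability that the lead time equals t, and summing t times that coefficient recovers E(L).

```python
# Expand T_sf(z) = z^5 (0.4 + 0.42 z^3) / (1 - 0.18 z^5) as a power series in z.
# The coefficient of z^t is the probability that the process lead time equals t units.

numerator = {5: 0.40, 8: 0.42}   # z^5 (0.4 + 0.42 z^3), stored as {exponent: coefficient}
MAX_TIME = 60                    # truncation point for the infinite series

dist = dict(numerator)           # running distribution of lead time
term = dict(numerator)           # current term of the geometric series in 0.18 z^5
while term:
    # Multiplying by 0.18 z^5 adds 5 to every exponent and scales every coefficient.
    term = {t + 5: 0.18 * p for t, p in term.items() if t + 5 <= MAX_TIME}
    for t, p in term.items():
        dist[t] = dist.get(t, 0.0) + p

mean = sum(t * p for t, p in dist.items())
for t in sorted(dist)[:4]:
    print(f"P(L = {t:2d}) = {dist[t]:.4f}")   # 0.4000, 0.4200, 0.0720, 0.0756
print(f"E(L) = {mean:.2f}")                   # about 7.6, matching the text
```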

RULES AND DEFINITIONS OF SIGNAL FLOW GRAPHS AS INTRODUCED BY HOWARD (1971) AND TRUXAL (1955)

To effectively use signal flow graphs, follow four rules:
1. Signals travel along branches only in the direction of the arrows.
2. A signal traveling along any branch is multiplied by the transmission of
that branch.
3. The value of any node variable is the sum of all signals entering the node.
4. The value of any node variable is transmitted on all branches leaving that
node.
A path is a continuous succession of branches, traversed in the indicated branch directions. The path transmission is defined as the product of branch transmissions along the path. A loop is a simple closed path, along which no node is encountered more than once per cycle. The loop transmission is defined as the product of the branch transmissions in the loop.
The transmission T of a flow graph is defined as the signal appearing at some designated dependent node per unit of signal originating at a specified source node. Specifically, T_jk is defined as the signal appearing at node k per unit of external signal injected at node j. There are a number of ways of computing transmissions.

BASIC OPERATIONS ON SIGNAL FLOW GRAPHS

Solution of signal flow graphs requires knowledge of certain of their topological properties. The basic operations of addition, multiplication, distribution, and factoring can be used to reduce the number of branches and nodes in the system. At first glance, it might appear that by successive application of such transformations a graph could be reduced to a single branch connecting any two given nodes. However, if the graph contains a closed loop of dependencies, as when modeling iterations, one or more self loops will eventually appear.

THE EFFECT OF A SELF LOOP

The effect of a self loop at some node on the transmission through that node is analyzed in Figure 11.6.
The node signal at the first node is y, and the signal returning around the self loop is yt. Since the node signal is the algebraic sum of the signals entering that node, the external signal x arriving from the left must equal y(1 - t). Hence, the effect of a self loop t is to divide an external signal by the factor (1 - t) as the signal passes through the node. This holds for all t.
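A two-line numerical check of the 1/(1 - t) factor, with made-up values: iterating the node equation y = x + t·y converges to x/(1 - t) when |t| < 1, which is the geometric series 1 + t + t^2 + ... acting on the external signal.

```python
x, t = 2.0, 0.25       # external signal and self-loop transmission (illustrative values)
y = 0.0
for _ in range(200):   # iterate the node equation y = x + t*y
    y = x + t * y
print(y, x / (1 - t))  # both approximately 2.6667: the self loop scales the signal by 1/(1 - t)
```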


S

OLUTION



BY

N

ODE

A

BSORPTION

Node absorption corresponds to the elimination of a variable by substitution in the
associated algebraic equations. With the aid of the basic transformations and the self
loop replacement, any node in a graph can be absorbed and the equivalent expressions
for the transmission between two other nodes calculated. Although the branch is no
longer shown, its effects are included in the new branch transmission values, as
shown in Figure 11.7.
To compute the overall graph transmission, all the intermediate nodes are
absorbed in turn, yielding the transmission between the start and nish nodes.
Reduction of graphs is computationally intensive, and manual solution of graphs of even moderate size can be difficult.
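To make node absorption concrete, the sketch below (our own illustration; the node names and the routine are not from the cited sources) reduces the example graph numerically at a fixed value of z, applying the rule that absorbing node y adds t_xy * t_yz / (1 - t_yy) to the branch from x to z. The result matches the closed-form transmission reconstructed for Figure 11.4.

```python
def absorb(graph: dict, node: str) -> dict:
    """Remove `node`, rerouting every path through it: t_xz += t_xy * t_yz / (1 - t_yy)."""
    self_loop = graph.get(node, {}).get(node, 0.0)
    new = {x: dict(out) for x, out in graph.items() if x != node}
    for x, out in new.items():
        t_in = out.pop(node, 0.0)             # branch x -> node
        if t_in == 0.0:
            continue
        for y, t_out in graph[node].items():  # branches node -> y
            if y == node:
                continue
            new[x][y] = new[x].get(y, 0.0) + t_in * t_out / (1.0 - self_loop)
    return new

z = 0.9  # evaluate branch transmissions at an arbitrary |z| < 1

# Figure 11.2 example: do A (z^3) and B (z^2); after B, finish with prob 0.4 or rework A (0.6);
# after reworked A, finish with prob 0.7 or rework B (0.3).
graph = {
    "start":  {"n1": z**3},
    "n1":     {"n2": z**2},
    "n2":     {"finish": 0.4, "n3": 0.6 * z**3},
    "n3":     {"finish": 0.7, "n2": 0.3 * z**2},
    "finish": {},
}
for node in ("n1", "n3", "n2"):
    graph = absorb(graph, node)

closed_form = z**5 * (0.4 + 0.42 * z**3) / (1 - 0.18 * z**5)
print(graph["start"]["finish"], closed_form)   # the two values agree (up to rounding)
```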

REFERENCES
Eppinger, S.D., Nukala, M., and Whitney, D.E., Generalized models of design iteration using signal flow graphs, Research in Engineering Design, 9(2), 112-123, 1997.
Howard, R., Dynamic Probabilistic Systems, Wiley, New York, 1971.
Truxal, J.G., Automatic Feedback Control System Synthesis, McGraw-Hill, New York, 1955.

FIGURE 11.6 The effect of a self loop: the node signal equals the external signal multiplied by 1 + t + t^2 + t^3 + ... = 1/(1 - t).
FIGURE 11.7 Node absorption.


SELECTED BIBLIOGRAPHY
Altus, S.S., Kroo, I.M., and G., P.J., A generic algorithm for scheduling and decomposition of multidisciplinary design problems, Journal of Mechanical Design, 118(4), 486-489, 1996.
Anderson, J., Pohl, J., and Eppinger, S.D., A Design Process Modeling Approach Incorporating Nonlinear Elements, Proceedings of 1998 ASME Design Engineering Technical Conferences, Atlanta, Sept. 1998.
Austin, S.A., Baldwin, A.N., and Newton, A., Manipulating the flow of design information to improve the programming of building design, Construction Management & Economics, 12(5), 445-455, 1994.
Austin, S.A., Baldwin, A.N., and Newton, A., A data flow model to plan and manage the building design process, Journal of Engineering Design, 7(1), 3-25, 1996.
Austin, S.A. et al., Analytical design planning technique: a model of the detailed building design process, Design Studies, 20, 279-296, 1999.
Baldwin, A.N. et al., Modelling information flow during the conceptual and schematic stages of building design, Construction Management & Economics, 17(2), 155-167, 1999.
Browning, T.R., Exploring Integrative Mechanisms with a View Towards Design for Integration, Proceedings of the Fourth ISPE International Conference on Concurrent Engineering: Research and Applications, Rochester, MI, Aug. 20-22, 1997, pp. 83-90.
Browning, T.R., Applying the design structure matrix to system decomposition and integration problems: a review and new directions, IEEE Transactions on Engineering Management, 48(3), 292-306, 2001.
Cho, S.-H. and Eppinger, S., Product Development Process Modeling Using Advanced Simulation, Proceedings of the 13th International Conference on Design Theory and Methodology (DTM 2001), Pittsburgh, Sept. 9-12, 2001.
Eppinger, S.D., Model-based approaches to managing concurrent engineering, Journal of Engineering Design, 2, 283-290, 1991.
Eppinger, S.D. et al., A model-based method for organizing tasks in product development, Research in Engineering Design, 6(1), 1-13, 1994.
Eppinger, S.D. and Salminen, V., Patterns of Product Development Interactions, International Conference on Engineering Design, Glasgow, Scotland, Aug. 2001.
Gebala, D.A. and Eppinger, S.D., Methods for Analyzing Design Procedures, Proceedings of the ASME Third International Conference on Design Theory and Methodology, 1991, pp. 227-233.
Grose, D.L., Reengineering the Aircraft Design Process, Proceedings of the Fifth AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Panama City Beach, FL, Sept. 7-9, 1994.
Johnson, E.W. and Brockman, J.B., Measurement and analysis of sequential design processes, ACM Transactions on Design Automation of Electronic Systems, 3(1), 1-20, 1998.
Kusiak, A. and Park, K., Concurrent engineering: decomposition and scheduling of design activities, International Journal of Production Research, 28(10), 1883-1900, 1990.
Kusiak, A. and Szcerbicki, E., Transformation from conceptual design to embodiment design, IIE Transactions, 25(4), 6-12, 1993.
Kusiak, A. and Wang, J., Decomposition of the design process, Journal of Mechanical Design, 115, 687-695, 1993.
Kusiak, A. and Wang, J., Efficient organizing of design activities, International Journal of Production Research, 31, 753-769, 1993.
Kusiak, A., Larson, N., and Wang, J., Reengineering of design and manufacturing processes, Computers and Industrial Engineering, 26(3), 521-536, 1994.


Kusiak, A. and Larson, N., Decomposition and representation methods in mechanical design, ASME Transactions: Journal of Mechanical Design, 117(3), 17-24, 1995.
Kusiak, A., Engineering Design: Products, Processes and Systems, Academic Press, San Diego, 1999.
McCulley, C. and Bloebaum, C., A genetic tool for optimal design sequencing in complex engineering systems, Structural Optimization, 12(2-3), 186-201, 1996.
Osborne, S.M., Product Development Cycle Time Characterization Through Modeling of Process Iteration, master's thesis (mgmt./eng.), M.I.T., Cambridge, MA, 1993.
Rogers, J.L., A Knowledge-Based Tool for Multilevel Decomposition of a Complex Design Problem, NASA, Hampton, VA, Technical Paper TP-2903, 1989.
Smith, R.P. and Eppinger, S.D., Identifying controlling features of engineering design iteration, Management Science, 43, 276-293, 1997.
Smith, R.P. and Eppinger, S.D., A predictive model of sequential iteration in engineering design, Management Science, 43, 1104-1120, 1997.
Smith, R.P. and Eppinger, S.D., Deciding between sequential and parallel tasks in engineering design, Concurrent Engineering: Research and Applications, 6, 15-25, 1998.
Smith, R.P. and Morrow, J., Product development process modeling, Design Studies, 20, 237-261, 1999.
Steward, D.V., Systems Analysis and Management: Structure, Strategy and Design, Petrocelli Books, New York, 1981.
Ulrich, K.T. and Eppinger, S.D., Product Design and Development, 2nd ed., McGraw-Hill, New York, 2000.
Yassine, A.A., Falkenburg, D.R., and Chelst, K., Engineering design management: an information structure approach, International Journal of Production Research, 37(13), 2957-2975, 1999.
Yassine, A.A. and Falkenburg, D.R., A framework for design process specifications management, Journal of Engineering Design, 10(3), Sept. 1999.
Yassine, A.A. et al., DO-IT-RIGHT-FIRST-TIME (DRFT) Approach to Design Structure Matrix (DSM) Restructuring, Proceedings of the 12th International Conference on Design Theory and Methodology (DTM 2000), Baltimore, Sept. 10-13, 2000.
Yassine, A.A., Whitney, D., and Zambito, T., Assessment of Rework Probabilities for Design Structure Matrix (DSM) Simulation in Product Development Management, Proceedings of the 13th International Conference on Design Theory and Methodology (DTM 2001), Pittsburgh, Sept. 9-12, 2001.

AXIOMATIC DESIGNS

Bad design is, well, bad design. Six sigma, tightening tolerances, substituting one
material for another and so on only treat the symptoms, not the problem. Also, they
may create expensive bad designs.
Axiomatic design, a theory and methodology developed at Massachusetts Insti-
tute of Technology (MIT; Cambridge, Mass.) 20 years ago, helps designers focus
on the problems in bad designs. As Suh (1990) points out, "The goal of axiomatic design is to make human designers more creative, reduce the random search process, minimize the iterative trial-and-error process, and determine the best design among those proposed." This, of course, applies to designing all sorts of things: software, business processes, manufacturing systems, work flows, etc. The technique can also be used for diagnosing and improving existing designs.


SO, WHAT IS AN AXIOMATIC DESIGN?
While MIT and "axiomatic" might suggest some lofty academic theory, axiomatic design is well grounded in reality. By definition, an axiom is a universally recognized principle. One of the earliest uses of axioms was by Euclid, who developed Euclidian geometry from a fundamental set of postulates or axioms. Sir Isaac Newton's laws of mechanics are another example of axioms. Other fields based on axioms include thermodynamics and information theory. So when we talk about axiomatic we are talking about a systematic, scientific approach to design. It guides designers through the process of first breaking up customer needs into functional requirements (FRs), then breaking up these requirements into design parameters (DPs), and then finally figuring out a process to produce those design parameters. [Does this sound familiar? Y = f(x1, x2, ..., xn)]. In MIT language, axiomatic design is a decomposition process going from customer needs to FRs, to DPs, and then to process variables (PVs), thereby crossing the four domains of the design world: customer, functional, physical, and process.
The fun begins in decomposing the design. A designer first explodes higher-level FRs into lower-level FRs, proceeding through a hierarchy of levels until a design can be implemented. At the same time, the designer zigzags between pairs of design domains, such as between the functional and physical domains. Ultimately, zigzagging between the "what" and "how" domains reduces the design to a set of FR, DP, and PV hierarchies.
Along the way, there are these two axioms: the independence axiom and the information axiom. From these two axioms come a bunch of theorems that tell designers some very simple things, which, if followed, can make enormous progress in the quality of their product design. The first axiom says that the functional requirements within a good design are independent of each other. This is the goal of the whole exercise: identifying DPs so that each FR can be satisfied without affecting the other FRs.

The second axiom says that when two or more alternative designs satisfy the first axiom, the best design is the one with the least information. That is, when a design is good, information content is zero. That is information as in the measure of one's freedom of choice, the measure of uncertainty, which is the basis of information theory. Designs that satisfy the independence axiom are called uncoupled or decoupled. The difference is that in an uncoupled design, the DPs are totally independent, while in a decoupled design, at least one DP affects two or more FRs. As a result, the order of adjusting the DPs in a decoupled design is important.
This order is shown in a design matrix (Figure 11.8) that shows functional coupling between FRs and DPs at a given level of the design hierarchy. Ideally, these FRs and DPs are to be decoupled.
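The uncoupled/decoupled/coupled distinction can be checked mechanically from the design matrix. In the sketch below (our own illustration, not from Suh's text), a nonzero entry A[i][j] means DP j affects FR i; a diagonal matrix is uncoupled, a triangular one (possibly after reordering rows and columns) is decoupled, and anything else is coupled.

```python
def classify(A):
    """Classify a square FR-DP design matrix as uncoupled, decoupled, or coupled."""
    n = len(A)
    off = [(i, j) for i in range(n) for j in range(n) if i != j and A[i][j] != 0]
    if not off:
        return "uncoupled"      # each FR depends on exactly one DP
    if all(i > j for i, j in off) or all(i < j for i, j in off):
        return "decoupled"      # triangular: adjust the DPs in sequence
    return "coupled"            # FRs cannot be satisfied independently

print(classify([[1, 0], [0, 1]]))   # uncoupled
print(classify([[1, 0], [1, 1]]))   # decoupled
print(classify([[1, 1], [1, 1]]))   # coupled
```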

AXIOMATIC AND OTHER DESIGN METHODOLOGIES

In the axiomatic design world, zigzagging between adjacent domains, that is, between the "what" domain on the left and the "how" domain on the right, will lead to independent, uncoupled (or at least decoupled) design parameters, namely, good designs.


Axiomatic design is not quite the Taguchi method, which is a specific application
of robust design. It is not quite quality function deployment (QFD). Nor, like many
other quality methodologies, is it an after-the-fact approach that looks at results and
then traces back to the source of those results.
Robust design (Taguchi) and axiomatic design are the only methods that address
the design itself, ensuring that the designs are good to start with. Unfortunately,
while Taguchi focuses on making a part immune to the error in variation, it focuses
on only one requirement at a time. A problem might arise when a design has to
satisfy two requirements simultaneously, such as designing a car door to seal com-
pletely and close easily. In short, a coupling exists between these two functional
requirements.
The Taguchi method alone sometimes may trap designers into optimizing the wrong
function, optimizing a function they do not have ownership of, or optimizing a design
parameter that is linked to many functions. Worse, by optimizing one function,
designers run the probable risk of degrading other functions. Axiomatic design, on
the other hand, avoids all that by breaking the coupling between functional require-
ments so that they no longer interact with one another.
QFD is similar to axiomatic design in that customer requirements are listed along
the left side of a matrix and engineering requirements are lined up along the top. From
this matrix, designer teams can see conflicts that need to be resolved. However, QFD
is very subjective. Nor does QFD show a mathematical relationship between a functional
requirement and a design parameter, which axiomatic design does.

APPLYING AXIOMATIC DESIGN TO CARS
FIGURE 11.8 Order of design matrix showing functional coupling between FRs and DPs, across the four domains: customer attributes (CAs), the benefits customers seek, in the customer domain; functional requirements (FRs), a minimum set of independent requirements that completely characterize the functional needs of the design solution, in the functional domain; design parameters (DPs), the elements of the design solution chosen to satisfy the specified FRs, in the physical domain; and process variables (PVs), the elements of the process that produce the product specified in terms of the DPs, in the process domain.
The automotive industry is fraught with couplings between design parameters, such as in styling versus aerodynamic/cooling requirements, in styling versus crashworthiness,
and in highly complex automatic transmission designs. At one carmaker, the axiomatic methodology helps design teams optimize elements of a conceptual design before engineering creates the detailed designs. The described benefit is that it helps avoid unintended consequences in design, with the axiomatic method indicating where interactions exist between the various elements and what the optimum sequence is. The point is that axiomatic design is said to be a step beyond Taguchi and worthwhile in a DFSS endeavor.
As we already mentioned, an axiomatic design helps designers with both new
and existing designs. In both cases, designers are more creative and develop better
designs in less time.
New Designs
By following the process, the designer designs in a systematic way, completing
prerequisite tasks before continuing to the next stage. Accordingly, the designer is
more creative by:
Understanding a clearly defined problem before design begins
Identifying innovative ways to fulfill the functional requirements
Saving time by:
Avoiding frustrating dead ends
Drastically reducing random searches for solutions
Minimizing or eliminating design iterations
The designer uses current design tools more effectively, producing better designs
by:
Selecting the best design among good alternatives
Optimizing the design properly
Verifying the design against explicit requirements
Having a fully documented design for troubleshooting and extensions
Diagnosis of Existing Design
For diagnosing an existing design, the use of axiomatic design highlights problems
such as coupling and makes clear the relationships between the symptoms of the
problem (one or more FRs not being achieved) and their causes (the specic DPs
affecting those FRs). While improving the solution, the designer also enjoys the
new-design benets above.
Extensions and Engineering Changes to Existing Designs
When an existing version needs an engineering change or an upgrade, axiomatic
design identifies all of the areas affected by the contemplated changes. As a result,
unintended problems are avoided.
To summarize, for both new and existing designs, the designer is more creative,
turning out better designs quicker.
Efficient Project Work-Flow
Axiomatic design helps to identify tasks, set a task sequence from the system
architecture, and assign resources effectively. This process also allows you to check
progress against explicit FRs.
Effective Change Management
When creating change, axiomatic design uses explicit criteria and allows you to
select the best option, identify effects throughout the system, and document changes.
Efficient Design Function
Axiomatic design enables use of a common language and shared information
between design team members, which preserves institutional learning.
The designer's benefits translate into three categories of benefits for the organization:
1. Competitive advantage: The organization gains a competitive advantage when it satisfies its customers' needs best. With axiomatic design, those needs map to explicit functional requirements and constraints, which the designers strive to meet. (If, for some reason, no design meets the initial set of FRs and Cs, the firm can explain the tradeoffs of specific alternatives to the customer.) Constraints such as cost and weight can be allocated and verified as the design progresses to ensure they are met. Time to market, another source of competitive advantage, is shortened since designers avoid time-consuming iterations and dead ends.
2. Higher profit: The organization can earn more profit by selling more units, commanding a higher price, or reducing cost. Axiomatic design helps in all three areas. With products that meet customers' needs better than competitive products, the firm gains market share, resulting in higher unit sales. In addition, meeting those needs better means more value to the customer, who is then willing to pay a higher price. Three types of cost can be lowered: research and development (R&D), cost of goods sold (COGS), and support, for the following reasons:
a. The R&D cost is less because designers spend less time designing the
product initially and making engineering changes after the product is
released.
b. COGS drops when products are not coupled and therefore are easier
to assemble and test.
c. Support costs are lower because products that are not coupled install
and set up faster and typically require fewer warranty repairs.
3. Less risk: Axiomatic design reduces both technical risk and business risk. Axiom 2, the information axiom, ensures that the chosen design has minimum information content, which is defined as the most technically probable to succeed. Business risk is also reduced because:
Products satisfy customers' needs since FRs are derived from those needs.
Development schedules are shorter and more predictable.
Upgrades can be done quickly and effectively.
In sum, axiomatic design provides the designer with the benefit of designing better products faster, and provides the firm with a competitive advantage, higher profit, and less risk.
When you follow the axiomatic design process, you continue to use all of your
current software design tools. You will find that you will be more creative, turning out
better designs faster, since you will minimize iterations and trial and error. You will
have complete documentation of all the design decisions and supporting analysis.
To facilitate the analysis of axiomatic designs, to our knowledge the Acclaro
Software allows you to link to all of your tools and to a common database for the
entire design team. It is available commercially and may be purchased by contacting
Axiomatic Design Software, Inc.
There are a number of techniques used today in design such as QFD, TRIZ, and
robust design. The use of these techniques and others is completely consistent with
axiomatic design. In fact, axiomatic design can help the designer apply these tech-
niques better. Figure 11.9 shows how they all fit together. Some examples of what
these techniques can do are:
1. With QFD (Quality Function Deployment, the voice of the customer),
designers gather information from customers about their requirements and
the relative importance of each. This information helps the designer to
choose which FRs must be present and which may be safely ignored.
2. When a designer has selected an FR and wants to identify alternative DPs
to achieve it, TRIZ (the theory of inventive problem solving) can be helpful
in generating alternatives.
FIGURE 11.9 Relationship of the axiomatic design framework and other tools. QFD spans the customer, functional, physical, and process domains; TRIZ, systems engineering, VA/VE, robust design, DFA, DoE, FMEA, SPC, and find-and-fix inspection map to particular domains.
3. After choosing a DP to satisfy an FR, the designer uses robust design to
optimize the design of this particular DP, which helps to reduce the
information content of the design.
4. The designer follows the axiomatic design process and uses the various
techniques when appropriate. Axiomatic design helps the designer avoid
mistakes such as unknowingly attempting to optimize a coupled design.
Users of axiomatic designs and applicable software, such as the Acclaro, have
found that the process of implementing axiomatic designs is enhanced, in the sense
that the designers have more freedom to document every decision and to specify the
relationships between FRs and DPs to any level of detail. It also does matrix
manipulations, checks for design problems such as coupling, and communicates
relevant information to members of the design team. Specifically, the Acclaro soft-
ware runs on standard PCs and workstations and in addition it links to your software
design tools and to your existing database through SQL. No other software is required
except for the Java environment, which is available at no charge from Sun Micro-
systems or from Axiomatic Design Software, Inc.
REFERENCES
Suh, N.P., The Principles of Design, Oxford University Press, New York, 1990.
SELECTED BIBLIOGRAPHY
Black, J.T. and Shroer, B.J., Decouplers in integrated cellular manufacturing systems, Journal of Engineering for Industry, Transactions of A.S.M.E., 110, 77-85, 1988.
Creveling, C.M., Tolerance Design: A Handbook for Developing Optimal Specifications, Addison-Wesley, Reading, MA, 1997.
Hill, P.H., The Science of Engineering Design, Holt, Rinehart and Winston, New York, 1970.
French, M.J., Engineering Design: The Conceptual Stage, Heinemann Educational Books, London, 1971.
Kar, K.A., Linking Axiomatic Design and Taguchi Methods via Information Content in Design, First International Conference on Axiomatic Design, 2000.
Kramer, B.M., An Analytical Approach to Tool Wear Prediction, thesis, MIT, 1979.
Kramer, B.M. and Suh, N.P., Tool wear by solution: a quantitative understanding, Journal of Engineering for Industry, Transactions of A.S.M.E., 102, 303-339, 1980.
Mohsen, H., Thoughts on the Use of Axiomatic Design Within the Product Development Process, First International Conference on Axiomatic Design, 2000.
Otto, K. and Wood, K., Product Design: Techniques in Reverse Engineering and New Product Development, Prentice Hall, Upper Saddle River, NJ, 2001.
Rinderle, J.R. and Suh, N.P., Measures of functional coupling in design, Journal of Engineering for Industry, Transactions of A.S.M.E., 104, 383-388, 1982.
Stoll, H.W., Design for manufacture: an overview, Applied Mechanics Review, 39, 1356-1364, 1986.
Suh, N.P., Development of the science base for the manufacturing field through the axiomatic approach, Robotics and Computer Integrated Manufacturing, 1, 399-455, 1984.
Suh, N.P., Tribophysics, Prentice Hall, Englewood Cliffs, NJ, 1986.
Suh, N.P. and Rinderle, J.R., Qualitative and quantitative use of design and manufacturing axiom, CIRP Annals, 31, 333-338, 1982.
Suh, N.P., Bell, A.C., and Gopssard, D.C., On an axiomatic approach to manufacturing systems, Journal of Engineering for Industry, Transactions of A.S.M.E., 100, 127-130, 1978.
Tice, W., The Application of Axiomatic Design Rules to an Engine Lathe Case Study, thesis, MIT, 1980.
TRIZ: THE THEORY OF INVENTIVE PROBLEM SOLVING
It has been said that TRIZ is one of the components of customer-driven robust innovation. The other two are QFD and Taguchi. TRIZ is a methodology that was developed in Russia, beginning in 1946, by Genrich Altshuller and has been growing all over the world since then. (Because TRIZ is the pronunciation of the Russian acronym, many names have been given to this methodology. Some of the most common are: TIPS, Theory of Inventive Problem Solving; TSIP, Theory of the Solution of Inventive Problems; and SI, Systematic Innovation.)
The foundation of the theory is the realization that contradictions can be method-
ically resolved through the application of innovative solutions. Terninko, Zusman,
and Zlotin (1996) have identified three premises that support the theory. They are:
1. The ideal design is a goal.
2. Contradictions help solve problems.
3. The innovative process can be structured systematically.
To be sure, TRIZ focuses on innovation, but what is innovation? According to the founder, Altshuller, there are five levels of innovation. They are:
Level 1: Refers to a simple improvement of a technical system. It presupposes some knowledge about the system. It is really not an innovation, since it does not solve the technical problem.
Level 2: An invention that includes the resolution of a technical contradiction. It presupposes knowledge from different areas within the relevancy of the system at hand. By definition, it is innovative since it solves contradictions.
Level 3: An invention containing a resolution of a physical contradiction. It presupposes knowledge from other industries. By definition, it is innovative since it solves contradictions.
Level 4: A new technology containing a breakthrough solution that requires knowledge from different fields of science. It is somewhat of an innovation since it improves a technical system but does not solve the technical problem.
Level 5: Discovery of new phenomena. The discovery pushes the existing technology to a higher level.
For Altshuller (1997, pp. 18-19), the benefit of using TRIZ is to help inventors elevate their innovative solutions to levels 3 and 4. To optimize these levels he suggests the following tools:
1. Segmentation: finding a way to separate one element into smaller elements
2. Periodic action: replacing a continuous system with a periodic system
3. Standards: structured rules for the synthesis and reconstruction of technical systems
4. ARIZ: an algorithm to solve an inventive problem and the core tool of the TRIZ methodology. It provides nine steps, and they are:
Analysis of the problem: Identify the problem in concise, clear, and simple language. No jargon.
Analysis of the problem's model: Identify the conflict in relation to the overall problem. A boundary diagram may facilitate this. The idea of this step is to focus on the conflict.
Formulation of the ideal final result (IFR): Here you identify the physical contradiction. The process is to identify the vague problem and transform it into a specific physical problem. [Another clue for the Y = f(x1, x2, ..., xn).]
Utilization of outside sources and field resources: If the problem remains, imaginatively interject outside influences to understand the problem better.
Utilization of informational data bank: The utilization of standards and databases with appropriate information is recommended here to solve the problem.
Change or reformulate the problem: If at this stage the problem has not been solved, it is recommended to go back to the starting point and reformulate the problem with respect to the supersystem.
Analysis of the method that removed the physical contradiction: Check whether or not the quality of the solution provides satisfaction. A key question here is: Has the physical contradiction been removed most ideally?
Utilization of found solution: Here the focus is on interfacing analysis of adjacent systems. It is also a source for identifying other technical problems.
Analysis of steps that lead to the solution: This is the ultimate scorecard. This is where the former process is compared to the current one. The analysis has to do with the new gap. Deviations, obviously, are recorded for future use.
To actually use TRIZ in a design situation, the reader must be aware not only
of the nine steps just mentioned but also of the 40 principles that are associated with
the methodology. Here we are going to list them without any further discussion. The
reader is encouraged to see Altshuller (1997), Terninko et al. (1996), and other
sources in the bibliography for more details.
1. Segmentation
2. Extraction
3. Local quality
4. Asymmetry
5. Consolidation
6. Universality
7. Nesting
8. Counterweight
9. Prior counteraction
10. Prior action
11. Cushion in advance
12. Equipotentiality
13. Do it in reverse
14. Spheroidality
15. Dynamicity
16. Partial or excessive action
17. Transition into a new dimension
18. Mechanical vibration
19. Periodic action
20. Continuity of useful action
21. Rushing through
22. Convert harm into benet
23. Feedback
24. Mediator
25. Self service
26. Copying
27. Dispose
28. Replacement of mechanical system
29. Pneumatic or hydraulic constructions
30. Flexible membranes or thin films
31. Porous material
32. Changing the color
33. Homogeneity
34. Rejecting and regenerating parts
35. Transformation of properties
36. Phase transition
37. Thermal expansion
38. Accelerated oxidation
39. Inert environment
40. Composite materials
REFERENCES
Altshuller, G., 40 Principles: TRIZ Keys to Technical Innovation, Technical Innovation Center,
Worcester, MA, 1997.
Terninko, J., Zusman, A., and Zlotin, B., Step-by-Step TRIZ: Creating Innovative Solution
Concepts, Responsible Management Inc., 3rd ed., Nottingham, NH, 1996.
SELECTED BIBLIOGRAPHY
Altshuller, G.S., Creativity as an Exact Science, Gordon and Breach, New York, 1988.
Bar-El, Z., TRIZ methodology, The Entrepreneur Network Newsletter, May 1996.
Braham, J., Inventive Ideas Grow on TRIZ, Machine Design, Oct. 12, 1995, 58.
Kaplan, S., An Introduction to TRIZ: The Russian Theory of Inventive Problem Solving,
Ideation International Inc., Southfield, MI, 1996.
Osborne, A., Applied Imagination, Scribner, New York, 1953.
Pugh, S., Total Design: Integrated Methods for Successful Product Engineering, Addison-Wesley, Reading, MA, 1991.
Taguchi, G., Introduction to Quality Engineering, Asian Productivity Organization, Tokyo,
1983.
Terninko, J., Systematic Innovation: Theory of Inventive Problem Solving (TRIZ/TIPS),
Responsible Management Inc., Nottingham, NH, 1996.
Terninko, J., Step by Step QFD: Customer-Driven Product Design, Responsible Management
Inc., Nottingham, NH, 1995.
Terninko, J., Introduction to TRIZ: A Work Book, Responsible Management Inc., Nottingham,
NH, 1996.
Terninko, J., Robust Design: Key Points for World Class Quality, Responsible Management
Inc., Nottingham, NH, 1989.
von Oech, R., A Whack On The Side Of The Head, Warner Books, New York, 1983.
Zusman, A. and Terninko, J., TRIZ/Ideation Methodology for Customer-Driven Innovation,
8th Symposium on Quality Function Deployment, The QFD Institute, Novi, MI, June
1996.

12
Value Analysis/Engineering

INTRODUCTION TO VALUE CONTROL: THE ENVIRONMENT

Among the major problems faced by industry today, two are the cost-profit squeeze and ineffective communications. Rising wages and materials costs are squeezing profit against a price ceiling, and our communications systems do not seem to be able to help effect a solution to the problem. We cannot buy labor or material for less than the market cost, nor can we sell for more than the consumer is willing to pay. What then is the solution?
It is necessary to apply every known effective technique to learn how to thoroughly analyze the elements of a product or service so that we can identify and isolate the unknown, unnecessary costs. In short, it is necessary to make a direct attack on the high cost of business.
Value control has been proven to be an effective management tool to seek out and eliminate this hidden cost wherever it may be. It can aid in solving both profit and communications problems, and it can have an effect on operations that will be limited only by your understanding of the techniques and management's willingness to apply them.
Many people are highly skilled at cost analysis and problem solving and think that value control is something we do all of the time. There are many who think that value control is part of every engineer's job. Some also think that it is something we have done for 20 or 30 years, but we did not call it value control. A primary objective of this chapter is to demonstrate that value control is not only different but is a more powerful technique than any used in the past.
Value control is not new, in that it has been around for about 25 years, but it has only been within the past five to ten years that it has been widely accepted. It is a broad-scope management tool that considers all of the factors involved in a decision. It goes to the heart of the problem, determines the function to be performed, and applies creative problem solving and business operations such as time and motion study, work simplification, feasibility reviews, systems analysis, etc. But it follows a systematic, organized approach that, in addition, applies unique techniques that identify value control as a special approach to profit improvement.
Why, after all these years of scientific and specialized management techniques, is it necessary to develop another technique that does what many of the others were supposed to do? Why is value control necessary?
It is still possible for one person to know all that is required to operate a small
company or design a simple product. However, our increasingly complex society


and increasingly complex technology have tended to make most of our managers
and technical people specialists in a limited area of activity. They have tended to
compartmentalize our operations and, to a large degree, our thinking. The more
complex the organization, the more the operation becomes fragmented into auton-
omous units that deal in a small part of the operation and have an effect on only a
small part of the profit.
In 1927, one of the greatest technological milestones in the history of human
development occurred as the result of the knowledge of two men. Charles A. Lind-
bergh sat on the Coronado Beach in San Diego with Donald Hall, chief engineer of
Ryan Airlines, and established the basic criteria for the Spirit of St. Louis. Two
people knew all that was needed to develop an advanced product that even today
clearly shows their creative thinking.
Lindbergh established the requirements, Hall provided the technical knowledge,
and the 13 Ryan supervisors and employees provided the understanding, know-how,
and enthusiasm to develop a product that was designed, was built, was tested, and
won everlasting glory for Mr. Lindbergh, all within 13 weeks.
The product was designed to perform a specific function for a specific cost target.
There was no communication problem, there was no cost problem since $15,000
was all they had for the entire project, and there was no timing problem; any delay was unthinkable.
Consider the design of an advanced aircraft today. The cost in people and
materials is almost beyond comprehension. Hundreds of thousands of people in
dozens of industries in several states work in vast industrial complexes for years
before the product takes to the air.
A product such as the automobile has created a similar situation to the degree
that it is a basic national industry and affects people in every corner of the country
and in many cases abroad.
Is it any wonder that value control is developing on the management scene? It is the only technique that is specifically designed to consider all of the factors involved in decision making: product performance, project schedules, and total cost.
Value control is a program of involvement. It makes use of experience from engineering, manufacturing, purchasing, marketing, finance; any area and every area that contributes to the development of a product. It can be used to keep cost out of a product and it can be used to get cost out of a product. It can do this because cost is everywhere and everyone in the organization contributes to it.
Cost is the result of marketing concepts, management philosophies, standards and specifications, outdated practices or equipment, lack of time, and incomplete, unobtainable, or inaccurate information, along with dozens of other contributing factors. Every company has at least some of these problems, and it often requires completely new ideas to change them.
To prevent and eliminate unnecessary cost we must know how to identify cost. We must be able to identify a problem and be willing to improve the situation. This means change: change in habits, change in ideas, change in philosophies. We know we must change to keep up with the world. Value control enables us to take


a good look at all of the factors that must go into making a successful change that
will be for our benefit.
Value control requires special skills. It is not cost analysis, design reviews, or
something we do as part of our job. It is different because the basic philosophy is
different. It is not concerned with trying to reduce the cost of an item or service; it is
concerned with function and methods to provide the function at the lowest overall cost.
Value control concerns people and their habits and attitudes. It is to a large
degree a state of mind. It accepts changes as a way of life and makes every effort
to determine how change can be made to provide the most benefit.
It is a function-oriented system that makes use of creative problem solving and
team action. The team is designed to provide an experienced, balanced, and broad
scope look at a subject without being constrained by past experience. It requires
trained people who understand the system and its application.

HISTORY OF VALUE CONTROL

Value control was originated at the General Electric Company. In 1947, Harry
Erlicher, vice president of purchases, noted that during the war years it had frequently
been necessary to make substitutions for critical materials that quite adequately satisfied the required function and often resulted in an improved product. He reasoned that if it was possible to do this in wartime it might be possible to develop a system that could be applied as a standard procedure to normal operations to increase the company's efficiency and profit.
L.D. Miles was assigned to study the possibility, and the result was a systematic
approach to problem solving based on function that he called value analysis.
The program was so successful that shortly thereafter the U.S. Navy started to
use the system to help get more hardware in the face of a rapidly shrinking budget.
The Navy called the program value engineering.
Value analysis, value engineering, value management, value assurance, and value
control are all the same in that they make use of the same set of techniques developed
by Miles in 1947. In many cases, however, the title tends to describe how the system
is being applied. Value analysis is generally considered to apply to removing cost
from a product. Value engineering and value assurance are applied during the
program development phase to keep cost out of a product (our focus in design for
six sigma [DFSS]). Value management and value control are overall programs that
recognize that value techniques can be applied at any stage of a program. They strive
to apply value techniques to control value in all areas of operations.
The Navy program developed so successfully that it was picked up by the
Department of Defense and is now considered to be the key element in the government's cost reduction program. In addition, value techniques are now being used in
industry and government throughout the free world. They are being applied to
aircraft, engines, automobiles, washing machines, dryers, TV sets, and all sorts of
consumer and industrial products as well as construction projects and management
planning. In addition, several states are applying value techniques to increase efficiency of operations.


VALUE CONCEPT

The value process is a function-oriented system. It makes use of team action and creative problem solving to achieve results and is specifically designed to simultaneously consider all of the factors involved in decision making: performance, schedule, and cost.
In order to obtain an experienced, balanced, broad-scope look at all facets of the project, a carefully selected team is organized to satisfy the specific requirements needed.
Selection of the team must consider personality as well as technical competence of the candidates. The team must not only have the technical know-how required, but the members must be compatible and know how to work together. The value manager acts as a coach to guide the team members through the system to obtain maximum benefit from their activities.

DEFINITION
The Society of American Value Engineers defines the term value engineering as follows:

Value engineering is the systematic application of recognized techniques which
identify the functions of a product or service, establish a monetary value for that
function and provide the necessary function at the lowest overall cost.

The program described by this denition is not a cost reduction program it
is a prot improvement program in that it recognizes all of the factors contributing
to product cost. It recognizes that the lowest cost product may induce high warranty
problems that may adversely affect prot. In order to increase prot it may, therefore,
be necessary to increase product cost in some cases.

Overall cost is the prime
concern.

A value program is implemented by applying all of the known techniques of
problem solving and cost reduction plus a large body of special skills. The primary
objective is to identify and remove unnecessary cost.
Unnecessary cost is cost that can be removed

without

affecting product function.
It has been estimated that the average consumer product may include over 30 percent
unnecessary cost. This unintentional cost is the result of habits, attitudes, and all
other human factors, and everyone in an organization contributes to it.

P

LANNED

A

PPROACH

Value control achieves results by following a well-organized planned approach. It
identies unnecessary cost and applies creative problem-solving techniques to
remove it. The three basic steps brought to bear are:
1. Identify the function (what does it do?)
2. Evaluate the function (what is it worth?)
3. Develop alternatives (what else will do the job?)


F

UNCTION

Function is the very foundation of value control. The concern is not with the part
or act itself but with what it does; what is its function? It may be said that a function
is the objective of the action being performed by the hardware or system. It is the
result to be accomplished, and it can be dened in some unit term of weight, quantity,
time, money, space, etc.

Function is the property that makes something work or sell.

We pay for a
function, not hardware. Hardware has no value; only function has value. For example,
a drill is purchased for the hole it can produce, not for the hardware. We pay to
retrieve information, not to le papers.
Dening functions is not always easy. It takes practice and experience to properly
dene a function. It must be dened in the broadest possible manner so that the
greatest number of potential alternatives can be developed to satisfy the function. A
function must also be dened in two words, a verb and a noun. If the function has
not been dened in this way, the problem has probably not been clearly dened.
Function denition is a forcing technique that tends to break down barriers to
visualization by concentrating on what must be accomplished rather than the present
way a task is being done. Concentrating on function opens the way to new innovative
approaches through creativity.
Some examples of simple functions are as follows: produce torque, convert
energy, conduct current, create design, evaluate information, determine needs,
restrict ow, enclose space, etc.
There are two types of functions, basic and secondary. The basic function
describes the most important action performed. A secondary function supports the
basic function and almost always adds cost.

VALUE
After the functions have been defined and identified as basic or secondary, we must evaluate them to determine if they are worth their cost. This step is usually done by comparison with something that is known to be a good value. This means the term value must be understood.
Aristotle described seven classes of value: economic, political, social, aesthetic, ethical, religious, and judicial. In value control, we are primarily concerned with economic value. Webster defines value in terms of worth as follows:
Value: (1) A fair return or equivalent in goods, services, or money for something exchanged; (2) the monetary worth of something; marketable price; monetary value.
Webster in turn defines worth in terms of value:
Worth: The value of something measured by its qualities or by the esteem in which it is held.
If we set a value, we determine its worth. If we determine the worth of something, we set a value on it. The two terms can be used interchangeably, and value is defined


for our purpose as follows: Value is the lowest overall cost to reliably provide a function. By overall we mean all costs that affect the function, such as design and development expenses as well as manufacturing, warranty, service, and other costs.
Value (VE) is broken down into three kinds, each with a specific meaning:
1. Use value (Vu): A measure of the properties that make something satisfy a use or service.
2. Esteem value (Ve): A measure of the properties that make something desirable.
3. Exchange value (Vx): A measure of the properties of an item that make it possible for us to exchange it for something else.
These measures may be in dollars, time, or any other measurable quantity.
However, on occasion it is necessary to rank a series of functions by their relative
value, one to another.
Value is not constant; it changes to satisfy circumstances at a given time. As
circumstances change, values may change. This is usually true of monetary, moral,
and social values. Value can therefore be expressed as the relationship of three kinds
of value:
VE = Vu + Ve + Vx
Do not confuse cost with value. Cost is a fact; it is a measure of the time, labor,
and material that go into producing a product. We can increase cost by adding
material or labor to a product, but this will not necessarily increase its value. Value
is an opinion based on the desirability or necessity of the required qualities or
functions at a specific time.
The relationship of cost (C) to value provides an index of performance (P).
P = (VE)/C = (Vu + Ve + Vx)/C
It can be seen that maximum performance of our resources can be obtained
when cost is low and value is high or when P is greater than one. However, it is
usually found that P is less than one, and this is an indication that we are not getting
good value for our expenditure of funds. It is a direct indication of unnecessary cost.
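A small numerical illustration of the performance index, with made-up values: when the summed value falls short of the overall cost, P drops below one and flags unnecessary cost.

```python
Vu, Ve, Vx = 14.0, 3.0, 5.0   # use, esteem, and exchange value (illustrative dollar figures)
C = 30.0                      # overall cost to provide the function

P = (Vu + Ve + Vx) / C
print(f"performance index P = {P:.2f}")   # 0.73 < 1, a direct indication of unnecessary cost
```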

DEVELOP ALTERNATIVES
Function has been defined as the property that makes something work or sell and value as the lowest cost to reliably provide a function. The performance index (P) has identified the problem. Now, what else will do the job? We need to develop alternative ways to perform the function.
In order to develop alternatives, we make maximum use of imagination and creativity. This is where team action makes a major contribution. The basic tool is brainstorming. In brainstorming, we follow a rigid procedure in which alternatives are developed and tabulated with no attempt to evaluate them. Evaluation comes


later. At this stage, the important thing is to develop the revolutionary solution to
the problem.
Free use of imagination means being free from the constraints of past habits
and attitudes. A seemingly wild idea may trigger the best solution to the problem
in someone else. Without a free exchange of ideas, the best solution may never be
developed. A skilled leader can produce outstanding results by brainstorming and
by providing simple thought stimulations at the proper time.

E

VALUATION

, P

LANNING

, R

EPORTING

,

AND

I

MPLEMENTATION

The creative phase does not usually result in concrete ideas that can be directly
developed into outstanding products. The creative phase is an attempt to develop
the maximum number of possible alternatives to satisfy a function. These ideas or
concepts must be screened, evaluated, combined, and developed to nally produce
a practical recommendation. This requires exibility, tenacity, and visualization and
frequently the application of special methods designed to aid in the selection process.
The process is carried out during the evaluation and planning phases of the job plan
and is covered in detail in those sections.
The recommendation must be accepted as part of a design or plan to be suc-
cessful. In short, it must be sold. It must show the benefits to be gained, how these
benefits will be obtained, and finally proof that the ideas will work. This takes time,
persistence, and enthusiasm. Details of a recommended procedure are covered in
later sections of the text.

THE JOB PLAN

These are the basic features that make value control an effective tool. All are applied
in a stepwise approach to a value study. The approach, called the job plan, demands
specific answers to the following questions:
What is it?
What does it do?
What does it cost?
What is it worth?
Where is the problem?
What can we do?
What else will do the job?
How much does that cost?
The plan is broken down into six steps:
1. Information phase
2. Creative phase
3. Evaluation phase
4. Planning phase
5. Reporting phase
6. Follow-up phase


Each step is designed to lead to a systematic solution to the problem after
consideration of all of the factors involved.

APPLICATION

Although indoctrination workshops are usually conducted with existing hardware
or systems, the greatest opportunity for savings is in the prevention of unnecessary
cost. The function techniques apply, but modifications are required that depend on
the user's understanding and ingenuity in applying them to conceptual ideas. Sys-
tems, procedures, manufacturing methods, and tool design are some of the areas
other than hardware where functional techniques have been used successfully.
Figure 12.1 shows how the overall savings varies with time in the application of
function analysis techniques from concept to hardware, system, plan, or any other
type of project implementation.
The figure also indicates that the cost to change increases and the net savings
decreases as a project develops. Once a product or service is in production or use,
the cost of tools, hardware, forms, and time necessary to achieve the product stated
at any given time cannot be recovered.
In order to be effective, value control needs trained people working as a team.
A team needs a coach who, in this case, is the value manager. The team provides
the technical expertise necessary, and the value manager provides the know-how
to apply this knowledge for effective results. Value control also requires manage-
ment to provide the necessary support and a creative environment. In short, success
is up to everyone in an organization. People must be trained; they must understand
the system; they must understand the application; they must be aware of cost and
how to handle it, and management must support their activity by active participa-
tion.
Success means change: change in methods, change in procedures, change in
attitudes. With this approach, value control will become an effective profit maker.

FIGURE 12.1 Relationship of savings potential to time. [Plot of savings potential in dollars against time from concept through design, development, and production; the potential savings curve falls as the cost-to-change curve rises.]


VALUE CONTROL: THE JOB PLAN

The six steps in the job plan involve the following questions and activities:
1. Information phase
What is it?
Collect all data, drawings, blueprints, costs, parts, flow sheets, process
sheets.
Talk with people, ask questions, listen, develop.
Become familiar with the project.
Discuss, probe, analyze.
What does it do?
Define functions.
Determine basic function(s).
Construct FAST diagram.
What does it cost?
Conduct function/cost analysis.
What is it worth?
Establish a value for each function.
Determine overall value for the product or source.
Where is the problem?
Analyze the diagram.
Locate poor value functions.
Pinpoint the areas for creativity.
What can we do?
Set goal for achievement.
2. Creative phase
What else will do the job?
Brainstorm the poor value target functions: use imagination, create
alternatives, develop unique solutions, combine or eliminate
functions.
Look for revolutionary ideas.
Do not overlook discoveries obtained by serendipity.
3. Evaluation phase
Select best ideas.
Screen all creative ideas.
Evaluate carefully for useful solutions.
Combine best ideas.
Categorize into basic groups.
Screen for best ideas.
How much does it cost?
Generate relative costs.
Analyze potential.
Anticipate roadblocks.


4. Planning phase
Develop best ideas.
Develop practical solutions.
Obtain accurate costs.
Review engineering and manufacturing requirements.
Check quality, reliability.
Talk with people.
Resolve anticipated roadblocks.
Develop alternative solution.
Plan your program to sell.
Show the benefits.
5. Reporting phase
Present ideas to management.
Show before and after costs, advantages and disadvantages, non-recur-
ring costs of development and implementation, scrap, warranty, and
other forecasts and net benefit.
Plan your recommendation to sell.
Make recommendation for action.
6. Implementation
Ensure proper implementation.
Be certain that the change has been made in accordance with the origi-
nal intent.
Audit actual costs.

VALUE CONTROL TECHNIQUES VERSUS JOB PLAN
TECHNIQUES

1. Define functions.
2. Identify/overcome roadblocks.
3. Use specialty products/processes.
4. Bring new information.
5. Construct FAST diagram.
6. Cost/evaluate FAST diagram.
7. Use accurate costs.
8. Establish goals.
9. All info from best source.
10. Use good human relations.
11. Get all the facts.
12. Blast-create-refine.
13. Get $ on key tolerance.
14. Put $ on main idea.
15. Use your own judgment.
16. Spend company $ as if own.
17. Use company's services.
18. Specifics, not generalities.
19. Use standards.
20. Use imagination.
21. Challenge requirements.
22. Use supplier services.
Techniques 2, 10, 15, 16, and 18 apply at every phase of the job plan as well as
in most other activities. The techniques that apply at each phase are as follows:
Information phase (What is it? What does it do? What does it cost? What is it
worth? Where is the problem? What can we do?): techniques 1, 2, 4, 5, 6, 7, 8, 9,
10, 11, 13, 14, 15, 16, 17, 18, 19, 21, 22.
Creative phase (What else will do the job? Brainstorm; create alternatives):
techniques 10, 12, 15, 20, 21.
Evaluation phase (Review suggestions; refine results; evaluate carefully):
techniques 2, 3, 7, 8, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22.
Planning phase (Develop best ideas; develop alternative solutions; plan program
to sell): techniques 2, 3, 5, 7, 8, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22.
Reporting phase (Present ideas to management; make recommendation for
action): techniques 2, 10, 15, 18.
Implementation phase (Ensure proper implementation): techniques 2, 10, 15,
16, 17, 18, 19, 20, 22.

INFORMATION PHASE
DEFINE THE PROBLEM

The first phase of the job plan is the information phase. It is broken down into three
distinctly separate parts:
1. Information development
2. Function determination
3. Function analysis and evaluation
They are all part of the information phase because in reality, they are part of the
problem resolution. The work done in the information phase is the basis for the
development of alternative methods to perform the required functions. If the func-
tions have not been properly defined and evaluated, the correct questions will not
be generated, and the most satisfactory problem solution is not likely to be developed.

Information Development

Information Collection

The first part of the information phase is the development of all available information
concerning the project. This includes drawings, process sheets, flow diagrams, pro-
cedures, parts samples, costs, and any other available material. Discuss the project
with people who are in a position to provide reliable information. Check to be certain
that honest wrong impressions are not being collected, that is, information that may
have been fact at one time but is no longer valid.
It is very important that good human relations be used during this data and
information collecting phase. Get the person responsible for the project or develop-
ment in the first place to help by showing that individual how he or she will be able
to profit from successful results of the completed study.
The project identification pre-workshop checklist (Table 12.1) details all of
the information required for study. If the data or information are not on hand, it will
be necessary to obtain them. A basic information data sheet that should be filled out
as a first step to identify the project is shown in Figure 12.2. A brief description of
the project should be written under "Operation and performance" to be certain all
of the team members are in at least basic agreement as to the product or process
operation. If available, a schematic or a picture should also be drawn in this area.
FIGURE 12.2 Project identification sheet. [Form fields: date; drawing or part number; part name; number required per assembly; major assembly used on (name and number); team number; task force and date; team members, department, and phone; present cost and cost estimate; total cost broken into material, labor, and overhead; operation and performance; additional comments.]

Cost Visibility

The next step towards a problem solution is to complete the cost visibility section
(Figure 12.3) of the cost-function worksheet as detailed in the cost visibility
sheet (Figure 12.4).
P.F. costs are estimated as follows:
Manufacturing cost = Material + Labor + Burden
P.F. cost or Total cost = Manufacturing cost + Other

TABLE 12.1 Project Identification Checklist

1. Assembly and part drawings
2. Quantity requirements per assembly and annual usage
3. Sample of assembly (project)
4. Sample of each part in assembly (raw cost of stamping blanks if practical)
5. Cost data (material, labor, overhead)
6. Tooling cost (special instructions)
7. Planning sheets (sequence of manufacturing operations including detailed cost breakdown)
8. Specifications (materials, manufacturing, engineering)
9. Required features (special instructions)
10. Name of project engineer


Review these cost data in accordance with the preset goals of your project, and
make a preliminary judgment of the potential profit improvement. Consider the
factors involved and set a goal for achievement that will provide a profitable position.
The target should indicate a 30 to 100% cost reduction to be practical. It may seem
improbable that this can be achieved; however, it is a target to work toward. A check
against this target will be made at the completion of the information phase.
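A brief sketch of this cost build-up and target setting, using made-up figures for a single part, might look like the following Python example:

# Hypothetical cost elements for one part, in dollars per unit.
material = 2.40
labor = 0.85
burden = 1.30
other = 0.60   # packaging, scrap allowance, freight, and similar allowances

manufacturing_cost = material + labor + burden    # Material + Labor + Burden
total_cost = manufacturing_cost + other           # Manufacturing cost + Other

# Preliminary goal: a 30% reduction, the low end of the 30 to 100% range above.
target_reduction = 0.30
target_cost = total_cost * (1 - target_reduction)

print(f"Manufacturing cost: ${manufacturing_cost:.2f}")
print(f"Total (P.F.) cost:  ${total_cost:.2f}")
print(f"Target cost at a {target_reduction:.0%} reduction: ${target_cost:.2f}")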

Project Scope

It is now possible to make a preliminary determination of the project scope. Consider
the new project as outlined on the project identification sheet, the present cost and
target for improvement, and the time available for the study. After evaluating these
factors, define the scope. Limiting or expanding the scope of a study depends on
the objective and the time allowed for the study. In project work, the analysis of
function should first be performed upon the total assembly or process. If the objec-
tives of the value control study are not achieved at that level, the next lower level
should be studied and so on down to the lowest level of indenture. The lower the
level of indenture, the more detailed and complex the study will become. This may
require additional time in the present study or future studies to consider segments
identified by function analysis.

FIGURE 12.3 Cost visibility sheet. [Worksheet headed by team number, sheet date, and assembly part name and number; sections for determining manufacturing cost and cost elements (material, labor, and burden dollars); columns for item number, quantity required, part name, raw material, purchased component, labor, burden, other, and total cost per unit, with a total cost line.]
FIGURE 12.4 Cost function worksheet. [Worksheet that lists all functions as verb-noun pairs, separates basic from secondary functions and constraints, and provides a remarks column.]

For existing projects seeking to improve performance or utility in general, there
are usually existing designs that have cost data available. This will mean that thinking
can be focused on a chosen part of a complex assembly. Everything up to and beyond
the basic part should be accepted as necessary. Therefore, the initial scope will have
been defined. However, as the study progresses, it may be necessary to redefine the
scope to ensure completion of the project within the available time and conditions
of the workshop.

Function Determination

The information on hand together with an analysis of costs will tend to define the initial
scope of the project. The product or process has been defined and its cost evaluated by
use of the cost visibility techniques and a target set. It is now possible to start to define
the functions to be performed or that are being performed by the system.
Start with the function or functions of the assembly or total system, then break
the system down into each part and define the functions of each. Remember to strive
to define the functions in two words, and also keep in mind that the definition must
not constrain thinking. It is the function definition that will help to visualize new
ways to satisfy the function. If it is too constraining, it will tend to restrain thinking.
Figure 12.4 should be used for this effort. Most simple products will have at
least 20 to 25 functions. Detailed information on defining functions is covered further
on in this section.

Function Analysis and Evaluation

After the functions have been determined, identify the basic function or functions.
The basic function is the function that cannot be eliminated unless the product can
be eliminated. There may be more than one, but an effort should be made to
determine the one most likely basic function. Determining the basic function is the
first step in the construction of a FAST diagram. A detailed discussion on the
construction of FAST diagrams is to be found further on in this section.
The FAST diagram makes it possible to complete the cost function worksheet.
A typical cost function sheet lists all functions versus all parts of a product or actions
of a system, procedure, or administrative activity. The objective is to convert product
cost to function cost.
The cost of each piece of hardware or action is redistributed to the function
performed. This proportional redistribution of cost to function requires information,
experience, and judgment, and all team members must contribute their expertise.
After the cost of each part or action has been redistributed to the functions
performed, the cost columns are totaled to obtain the function cost. This cost is then
placed on the FAST diagram. The FAST diagram then becomes a very valuable tool.
It tells what is happening, why, how, when, and what it costs to perform the function.
It is now possible to evaluate the functions to determine if they are worth what is
being paid for them. In other words, a value must be set on each function.
Determining the value of each function is a subjective process. However, it is a
key element in the value process. Comparing the function cost to function value
provides an immediate indication of the benefit being obtained for expended funds.
The ratio of value cost to function cost is the performance index. The sum of all
values is the value of the system or the lowest cost to reliably provide the basic
function. It should be compared to the preliminary goal set earlier.
It may be that the new goal is considerably higher than the original. If this is
the case, an evaluation of the diagram will indicate what must be done to achieve
the original goal. It may indicate that an entirely new concept is required, or it may
be that it will be acceptable to settle for less. It is often the case that the original
goal and the new value are close. An analysis of the function costs will again indicate
necessary action.
This analysis clearly defines the task for product improvement. It breaks the
problem down to functions that must be improved, revised, or eliminated to achieve
the goal. It is now possible to proceed to the second phase of the job plan: the
creative phase.

COST VISIBILITY

Experience has shown, for example, that the automotive industry is a price leadership
industry. Experience has also shown that in spite of the tremendous leverage of the
industry, it cannot control the prices it must pay for the basic materials required for
production. It is then quite clear that we cannot buy materials for less and we cannot
sell our products for more. Consequently, only one avenue remains open to increase
profit, and this is to identify the areas of high and unnecessary costs and to find
ways to reduce or eliminate these costs.
In the past, tremendous effort has been made to keep our products at a compet-
itive level. The intent is to add value control as another tool to aid in achieving the
desired function of a product at the best cost.
Cost visibility techniques are the first to be applied in the value control job plan.
Cost visibility techniques are well ordered and range from very simple to highly
complex. These techniques do not tell us where unnecessary costs are; they tell us
where high costs are. This is important because they identify a starting point.

Definitions

Since the techniques of cost visibility are concerned with all types of costs, each
type will be defined so there is no misunderstanding:

Actual cost: Costs actually incurred during the performance of a manufacturing
process. They include labor, material, and burden applied in accordance with
local ground rules.
Allowance: Costs other than material, labor, and burden that must be included in
the total cost of a product, such as packaging materials, scrap, inventory losses,
inventory costs, etc.
Burden: Includes all cost incurred by the company that cannot be traced directly
to specific products. The accounting department determines burden rates. These
are assigned to individual operations on a formula basis. Burden consists of
both fixed and variable categories, and separate rates are often established for
each. The method of assigning burden differs from industry to industry and
even from one company to another within an industry. Any quantifiable product
factor may serve as a basis for assignment of burden as long as consistent use
of the factor across the entire product line results in full and equitable burden
distribution.
Fixed burden: Includes all continuing costs regardless of the production volume
for a given item, such as salaries, building rent, real estate taxes, and insurance.
Variable burden: Includes costs that increase or decrease as the volume rises or
falls. Indirect materials, indirect labor, electricity used to operate equipment,
water, and certain perishable tooling are also included in this classification.
Cost: The amount of money, time, labor, etc. required to obtain anything. In
business, the cost of making or producing a product or providing a service.
Design cost: The sum of material, labor, and variable burden. An understanding
of the elements of design is essential for an understanding of cost visibility
techniques.
Fixed cost: Cost elements that do not vary with the level of activity (insurance,
taxes, plant, and depreciation).
Incremental cost (sometimes called a marginal cost): Not all variable costs vary
in direct proportion to the change in the level of activity. Some costs remain
the same over a given number of production units, but rise sharply to new
plateaus at certain incremental changes. The costs thus affected are incremental
or marginal costs.
Labor: Manpower expended in producing a product or performing a service.
Labor may be direct or indirect.
Direct labor: Labor that can be traced directly to a specific part. Wages paid the
stamping press operator would be classified direct labor.
Indirect labor: Labor that is necessary in the manufacturing process but is not
directly traceable to a specific part (material handling, inspection, receiving,
shipping, etc.); it is generally included in burden.
Manufacturing cost: The sum of material, labor, and variable burden. An under-
standing of the elements of manufacturing is essential for an understanding of
cost visibility techniques.
Material: All hardware, raw (steel, zinc, plastic powders) and purchased (instru-
ment panel knobs, decals, rivets, screws, etc.) items consumed in manufacturing
a part. Material may be direct or indirect.
Direct material: Raw and purchased material which becomes an integral part of
an end item. (The cost of the metal from which a fender is formed would always
be a direct material.)
Indirect material: Material that is necessary in the manufacturing process but is
not directly traceable to a specific part (lubricants, wiping cloths, marking pens,
etc.); it is generally included in burden.
Profit: Amount earned in producing a part or a service. It is usually applied as a
percentage of manufacturing cost.
Standard cost: A theoretical manufacturing cost developed by engineers and
accountants. It is based on manufacturing processes, work measurements, and
material sizes and weights developed by engineers and historical and current
actual costs furnished by accountants. Standard costs are used to measure the
amount of materials, labor, and overhead factors that enter into the manufacture
of a product. Dollar values are assigned to these theoretical costs as a common
denominator. Standard costs are important as control aids although they are
based primarily on historical data.
Total cost: Includes manufacturing cost plus a profit and other expenses. The
following expenses are usually added to manufacturing cost by sales and/or
accounting departments to make up the total cost:
a. Administrative and Commercial Costs: Costs incurred in the administration
of the company, research, and selling of the product. They are usually a factor
represented as a percentage of manufacturing cost.
b. Freight Costs: Costs incurred in getting materials, sub-assemblies, and
purchased parts to manufacturing or assembly plants.
Sources of Cost Information
The application of cost visibility techniques begins with an analysis of total cost,
progresses through an analysis of cost elements, and finally analyzes component or
process costs. To perform these steps the best cost information available is required.
This information will be available from sources such as:
Accounting: Current and historical costs (actual costs)
Purchasing: Cost of purchased items and tools, tool breakdowns, operation
line-ups, and material weights, both gross and net
Operating facilities cost planning: Estimated costs of parts and tools, process
sheets, and material weights
Suppliers: Estimates and/or quotations, costs, process information, and
material prices
Feasibility and value guidance: Manufacturing feasibility and cost trends
All requests to previously mentioned sources should be channeled through
an appropriate Value Guidance Group to coordinate and follow.
Cost Visibility Techniques
These techniques are not necessarily used in chronological order. We must always
use our judgment, not only in utilizing the techniques that indicate high cost, but
also in utilizing all the other tools of value control.
Cost visibility analysis is based on the information shown in Figure 12.3. Based
on the information gathered, the team makes the appropriate recommendations.
Technique 1: Determine Manufacturing Cost
First look at manufacturing costs and, basing your judgment on experience or
comparison, determine if the part is worth the cost. This technique is simple and
obvious but perhaps it has been overlooked as an indicator as to where high costs
may be. If this technique tells us that the cost is high, it is necessary to go to one
or more of the other techniques to find out specifically where that high cost is located.
Technique 2: Determine Cost Element
If the objective is to locate the high cost, use the second technique which involves
determining cost elements. Here, the total direct material, total direct labor, and total
burden are broken out of the total manufacturing cost. The elements of total cost
again offer a basis for comparison:
Determine Cost Elements
Material $______________ Labor $______________ Burden $_________
Compare material content with labor dollar content. Compare these elements of
cost with those for another similar manufactured item. If the elements of cost vary,
it is an indication one may be high in cost, and the reason for the difference must
be found.
This technique can also be used to arrive at a normal distribution of cost.
Accounting can usually determine the normal distribution of cost in material, labor,
and overhead for a specific department or profit center. Every part can then be
compared to the distribution cost to determine if the cost elements are high or low.
Again, comparison is being used to find high cost.
The cost breakdown may show that $10 worth of material and $.10 worth of
labor are being expended on a certain part. If this is the case, it can be asked if we
are in business to spend $.10 on labor for $10 worth of material. Perhaps the material
supplier should be asked to perform the labor operation. This could eliminate the
labor which may be used more productively elsewhere.
Conversely, it may be found that $.10 worth of raw material requires $10 worth
of labor. If this is the case, the overhead should be broken down into variances,
setups, tooling, direct labor, indirect labor, etc. The manufacturing area should be
questioned about methods and processes, profit centers being used, overhead, capital
equipment, labor grades, etc.
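A small sketch of this comparison (the part costs and the departmental norm below are hypothetical) shows how far each cost element sits from the normal distribution of cost:

# Hypothetical part cost elements and a hypothetical departmental norm.
part = {"material": 10.00, "labor": 0.10, "burden": 0.40}
norm = {"material": 0.55, "labor": 0.25, "burden": 0.20}   # normal share of total cost

total = sum(part.values())
for element, dollars in part.items():
    share = dollars / total
    print(f"{element:>8}: ${dollars:6.2f}  ({share:5.1%} of total vs. norm {norm[element]:.0%})")
# A large gap between the actual share and the norm flags the element to question,
# for example, asking the material supplier to perform the $.10 labor operation.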
Technique 3: Determine Component or Process Costs
The third technique goes one step further in breaking down material, labor, and
overhead. To determine component and element costs as they occur in the manu-
facture of a part, break down each component as shown in Figure 12.3.
Figure 12.3 shows the components broken down in elements. From this list, you
examine the reasonable costs versus the unreasonable. The process may sound very
subjective at this stage, but it is important to differentiate an item that does not fit
the pattern of other items. When the examination ends, more than likely you have
identified a most probable high cost item. Circle this amount, and examine it in
detail. Determine why this cost is so far out of line with other operations. This
technique gives a very precise and accurate cost visualization. It shows where costs
are being created on a component and element basis. Almost every analysis would
include the use of techniques 1, 2, and 3.
Now think of the third technique in more depth. If we study technique 3 in
depth, we will see that it can be used to analyze parts being assembled into a major
sub-assembly, major sub-assemblies being put together into a final assembly, and a
number of final assemblies being put together to make the total product. Good
judgment must be used in the application of this technique, and it will also dictate
the way the techniques should be used.
Technique 4: Determine Quantitative Costs
This technique analyzes cost on the basis of some measurable unit such as time,
weight, size, area, etc., and then makes a comparison with the cost per unit of a
known good value. It is sometimes surprising how seemingly complex products will
fall into a pattern.
One of the most convenient ways to use this technique is to build a cost curve
for the product under study. A comparison to the curve will indicate whether the
product is high or low. Techniques 1, 2, and 3 can then be used to zero in on the
specific cause of the cost deviations.
Cost per period of time: This is good for high volume production. It can
also be used to describe cost per similar product class. Simply determine the number
produced in a convenient time period, minute, hour, day, etc. This can then be
compared to a similar unit. A simple example would be the cost per unit of a specific
class and size fastener.
Cost per pound: This is a basis for comparison usually applied to castings,
weldments, or forgings, but it can be applied to anything that will plot on a graph.
Determine the cost per pound of each item and plot these on a graph, and the high
cost items will be immediately apparent. Again, this is a basis for comparison,
another way to find a meaningful cost basis. Remember, even though weight may not
be an important design criterion, it still costs money to ship every pound of unnec-
essary weight.
Cost per dimension: Some examples of the use of this unit would be as
follows: the cost per unit length for a simple extrusion, the cost per unit volume in
a tank, the cost per unit length of wiring, or the cost per square foot of area covered
by a high-cost epoxy paint. These are convenient cost figures to have available as a
basis for comparison. Cost per unit of length, area, and volume are the key words
of this technique.
Cost per functional property: Determine the actual amount spent per func-
tional property. For example, in wiring harnesses, what is the cost per ampere
conducted, or on a mechanical component, per pound of weight supported or per inch-
pound of torque transmitted? This gives a basis for a direct comparison. The function
can then be evaluated by comparison. This is a basic value control technique.
The use of these cost analysis techniques will literally explode costs in such a
way that a circle can be drawn around the areas that show where work is required.
The functional approach techniques can be used to study the high cost area. It does
not follow automatically that high cost is unnecessary cost. High cost may be
unnecessary cost, but we must use other value tools to find out if it really is.
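As an illustration of the quantitative approach (the harness data below are hypothetical), computing a cost per functional property makes the out-of-pattern item stand out:

# Hypothetical wiring harnesses: cost per ampere conducted as the comparison unit.
harnesses = [
    ("harness A", 3.60, 30),   # (name, cost in dollars, amperes conducted)
    ("harness B", 5.10, 60),
    ("harness C", 9.80, 40),
]

for name, cost, amps in harnesses:
    print(f"{name}: ${cost / amps:.3f} per ampere")
# The item whose cost per ampere sits well above the others becomes the starting
# point for techniques 1, 2, and 3; high cost is not automatically unnecessary cost.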
Technique 5: Determine Functional Area Costs
One purpose of this technique is to help answer the question, where should effort
be applied? If the study item is a part or a simple assembly (two or three parts),
then the scope is already defined. If the project is a complex assembly which could
have its principle of operation changed by a new design concept, questions such as
available time, savings potential, type of improvements, stage of product maturity,
etc., should be considered.
Divide the present cost into functional areas to define the project scope. Division
of cost into functional areas will pinpoint high cost differently than usual cost
visibility analysis, and will help to broaden or narrow the scope of study. This will
direct effort to more profitable areas. An example is shown in Figure 12.5.
FUNCTION DETERMINATION
Function analysis is the foundation of value control. A product or system is not
analyzed from a part or action standpoint; it is analyzed from a function standpoint
to break down the barriers to visualization for improved creativity and the develop-
ment of the maximum number of practical alternatives. The objective is to obtain
the maximum benefit possible; cost reduction is often as high as 30 to 100 percent.
Function analysis makes it possible to set high cost reduction goals and to meet
them. This can be done because basic functions are identified and isolated, and other
methods to perform them are developed through the use of applied creativity. The
function approach requires that certain definitions, ground rules, techniques, and
relationships be understood.
FIGURE 12.5 A form that may be used to direct effort. [Form listing the part name and each functional area (X, Y, and so on) with its present cost, columns to flag high or low cost, and a total cost line.]
Experience has shown that function analysis combined with the systematic
approach of the job plan will almost invariably produce desired cost reductions.
However, the goal of eliminating all unnecessary costs is dependent upon the skill,
training, dedication, and organizational support attained.
What Is Function?
Function is the property that makes something work or sell. Function states what
the product or system does. It is the objective of the action, the result to be accom-
plished, and can be defined in some unit of measure such as weight, quantity, time,
money, space, or some other practical measure.
Functions are expressed in two words, a verb and a noun. The use of only two
words forces a brief or terse definition of the necessary characteristics. The use of
two words avoids the possibility of combining functions and of attempting to define
more than one simple function at a time. The two word requirement aids in achieving
the broadest level of abstraction. It is a forcing technique that causes a struggle to
clarify understanding and aid visualization for creativity.
Proper identification of function involves a point of view. The function must be
identified in such a way that it is stripped of all restrictions that would inhibit devel-
opment of new and better ways to provide the function.
For example, consider the fastening of a simple nameplate to a part. One might
describe the function that applies as "attach nameplate." It would be far better to
describe the function as "identify product," because a nameplate is only one of many
ways to achieve the desired function. Nameplates might be riveted, welded, or
cemented. However, it is also possible to identify products by etching, stamping,
molding, or printing on the part, thereby eliminating the nameplate altogether. Some
examples of functions are: support weight, transmit torque, enclose part, conduct
current, amplify voltage, improve appearance, establish style, increase prestige,
decrease cost, create style, create design, evaluate information, develop plan, survey
market, and change attitude.
Identifying the function in broadest terms provides the greatest potential for
value improvement because it allows greater freedom to creatively develop better
value alternatives. Further, it tends to overcome any preconceived ideas of the manner
by which the function is to be accomplished.
Basic and Secondary Functions
Basic Functions
There are two types of functions: basic and secondary. Basic function is the specific
purpose for which a device is designed and made. Or stated another way: basic
function is the performance feature that must be attained if the total item or system
is to work or sell. Consider a screwdriver. Transfer torque is the basic function.
If this function is eliminated, the screwdriver will not work.
A clear understanding of the user's need is necessary if a satisfactory basic
function definition is to be developed. For example, if the desired application is to
pry open paint cans, the function would be defined in terms of the transfer of a linear
force. A screwdriver could perform this function but the transfer force function
may be provided at lower cost if transfer torque is eliminated. A plain, flat strip
will transfer force without the costly handle. Make sure your study item has a basic
function; otherwise, it can be eliminated.
Secondary Functions
Secondary functions are the result of performance features of a system or item that
have been added because of the method chosen to accomplish the basic function. They
may help the product work a little better and sell better; in other words, they support
the basic function. In the case of the screwdriver, its secondary functions would be:
transmit information, upgrade appearance, resist force, multiply torque, prevent slip,
reduce wear, resist corrosion, increase leverage, and insulate user.
Can you determine what parts perform these functions in a typical screwdriver?
Secondary functions are sometimes unwanted or unnecessary. An example would
be "make noise." We have a complete sound laboratory trying to eliminate or control
noise on our cars. On the other hand, money was added to the turn signal flasher to
increase noise and then later to control noise.
In the automobile business, styling is a major factor. Styling features may be
basic or secondary. However, whether they are basic or secondary is more subjective
than in a mechanical part. For this reason, good supporting marketing data are
required to guide and advise the stylist of the consumer's attitude and requirements.
FUNCTION ANALYSIS AND EVALUATION
There are six distinct function evaluation techniques to help clarify problems and
identify unnecessary cost. The problem will dictate which techniques are needed.
The order in which they are given here has no particular significance; skill develops
through application. Practice will eventually provide a methodology that will best
fit problem needs. The six techniques are:
1. Identify and evaluate function
2. Evaluate principle of operation
3. Evaluate basic function
4. Theoretical evaluation of function
5. Input-output method
6. Function analysis system technique
Technique 1: Identify and Evaluate Function
This is a simple technique that asks the question, what must the part or assembly
do? It applies to all projects and requires a clear determination of all use and esteem
functions. Each function should be expressed in two words (see Figure 12.4).
After all functions have been listed, classify them as basic or secondary (refer
to definitions of basic and secondary in the ground rules). This technique clarifies
the function, prevents combining of functions, and reveals the relationship of basic
and secondary functions.
Technique 2: Evaluate Principle of Operation
This technique is essentially the same as technique 3, except the emphasis is on
principle of operation. This technique requires a detailed examination of the physical
laws or effects upon which the function could be based, to allow a simpler, more
reliable, and less costly operation. For example, to provide data on auto engine
temperature, a transmit information function based on laws and effects that respond
to heat might be replaced by one based on magnetic principles.
This approach has broad application on new items and can be a useful tool in
the advance departments or research departments on developmental items. For exam-
ple, in the development stage, the decision to provide mechanical, electrical, vacuum,
or other means to provide automatic temperature control would have an effect on
the system design.
Technique 3: Evaluate Basic Function
This technique imposes the strictest discipline and requires the acceptance of a
forcing assumption: only basic function has value. The assumption is made as a
mental step in order to force our thinking to search for new and simpler designs that
will provide the basic function in such a way that the least number of secondary
functions is required to make it work and sell.
This technique is best applied to assemblies; however, it can be modied to
single parts. The blast-create-refine technique as described in detail in the creative
phase is an example of a special case of this technique. The value, as it is developed
here, is the combined result of individual judgment, creativity, and past experience
that expresses what the function should cost based on the work it performs (and the
way it could be done).
There are many variations to this technique. One is to expand the scope of study
and eliminate imposed functions by revising each listed function determined in
technique 2 and asking the question, "Is this function performed this way as a result
of the basic design concept?" Redesign to eliminate imposed functions means
expanding the scope, thereby causing adjoining components to dictate new limits.
Some of the largest savings, 50 to 80%, will come using this technique.
Technique 4: Theoretical Evaluation of Function
The theoretical evaluation of function places a precise value on a function by using
appropriate mathematical relationships. It applies to measurable parameter functions
only, such as create heat and resist bending, as opposed to functions that provide
appearance, maintain decor, etc.
For example, if we were to plot the cost in cents per foot against the torque
carrying capacity for various materials, we would see that the graph instantly will
highlight the cost required to satisfy the function transmit torque. This approach
takes value engineering from an art to a science and opens the door for value research.
While the basic concept is still the same, equating cost to function, a considerable
grasp of basic value techniques and mathematics is required.
Technique 5: Input-Output Method
This technique is useful in highlighting the basic function of a product by viewing
it as a black box item that receives certain inputs and transforms them into outputs.
These inputs and outputs are not functions and therefore do not have to be defined
in terms of two words. The function itself is a result of the input and it causes the
output; hence, the function is positioned between the input and the output.
In the example below, 6 volts DC is the input to the transformer and 12 volts
DC is the output. The function that fits between the input and the output is amplify
voltage. Additional examples of this technique are listed below.
Item: transformer. Input: 6 volts DC. Function: amplify voltage. Output: 12 volts DC.
Item: hot water heater. Inputs: cold water and power. Functions: heat water and
convert energy. Outputs: hot water and heat.
Item: pipeline. Input: fluid. Function: transmit fluid. Output: fluid.
It should be noted that any item may have more than one input or output, and
that unless inputs are transformed into outputs, the item has no value. Since function
is the key link between input and output, this is equivalent to stating that only function
can have value.
Technique 6: Function Analysis System Technique
This technique is the primary function analysis technique used in most cases. This
system was developed to assist in performing function analysis on a complete system.
The use of determination logic helps to identify and verify the basic functions and
also helps identify higher and lower level functions and supporting systems. The
technique requires construction of a FAST (Functional Analysis System Technique)
diagram by the use of determination logic questions: How? Why? and When? The
steps necessary to complete a FAST diagram are:
1. List all functions performed by the assembly or system on Figure 12.4.
Be sure to identify each function by a verb and noun. Review and check
proper columns to identify basic and secondary functions. This is actually
technique 2.
2. Prepare a 1 × 2 card (or a Post-it note) for each function listed in Step
1. Take a close look at the functions and indicate the relationship of all
functions to each other. This requires determination of the next higher
level function for each known function. In other words, find the functions
that cause other functions to be performed. In order to do this, ask three
questions about each of the functions listed in Step 1 to identify the
functions that will link other functions together. Each question must be
answered specifically.
The logic questions are:
How? How is this function accomplished?
Why? Why is this function performed?
When? When is this function performed?
Select the function you think is the basic function and apply the logic questions
to the right and left of the basic function. Ask how the function is performed to
determine the function to the right. Ask why this function is performed to determine
the function to the left. It may be necessary to select more than one of the functions
to get the correct basic function.
Ask why? < ----------------------Control Air ----------------------> Ask how?
In the example, the function control air is selected as the basic function. How
is air controlled? The reply is direct air and modulate air. Both answer the how
question. Do they both answer the why question? Why is the air modulated? Why
is the air directed? The answer is to control air. The logic questions are satisfied
and we can add the next FAST diagram block (see Figure 12.6).
Now the question why do we control air? must be answered. The reply is
achieve comfort. How do we achieve comfort? Control air. So the basic logic
questions have been satisfied for the basic function control air.
The basic function has been isolated, and the rest of the primary path functions
can be determined. These primary path functions become the basic framework for
developing a complete FAST diagram. The how and why logic questions must
now be applied to every function. Each must satisfactorily answer the question
relative to its position in the diagram. For example: If we take the function modulate
air, we can further analyze it into vary opening, direct force, apply torque, apply
effort.
FIGURE 12.6 Second step in the FAST diagram block process. [Block diagram read from why (left) to how (right): achieve comfort, then control air, then direct air and modulate air.]
Whenever these questions are answered satisfactorily, the position of a known
function is established within the FAST diagram. In some cases a new function is
discovered. Then the primary path questions must be asked of the new function.
This step identifies the relationship between a low-level and a high-level func-
tion, with the highest-level functions on the left. It identifies functions that are the
result of other functions and functions that cause other functions. Unless you under-
stand these relationships, it will not be possible to develop a FAST diagram, which
is necessary to stimulate creativity and to clarify the relationships of parts or actions.
When the primary path has been selected and positioned on the chart, position
all secondary functions that did not fit into the primary path by applying the when
question and adding them below the primary path. All of the functions listed may
not be functions; some may be specifications or objectives. Show the objectives and
specifications on your FAST diagram in phantom blocks in the upper left corner of
the diagram.
This completes the construction of the FAST diagram but does not complete the
information that can be added to it to provide the total assembly picture. The function
cost worksheet (Figure 12.3) can now be completed by listing all functions horizon-
tally and all parts and process costs as determined from the detailed cost data.
Remember: the cost information should be for a specific function. A partial cost
function FAST diagram is shown in Figure 12.7.
FIGURE 12.7 A partial cost function FAST diagram. [FAST diagram laid out along how, when, and why directions, with the concept and project scope boundaries indicated and blocks such as achieve comfort, control air, control assembly, modulate air, direct air, meet specs, apply torque, and apply effort, continuing with further functions.]
The parts that perform each function can also be added to the FAST diagram.
This step will dene the high cost areas and point out where to concentrate creative
effort. By analyzing the FAST diagram, you can nd interesting creative relation-
ships. The function to the right of the selected function tells how this function is
performed. The function to the left tells why this function is performed. The function
below or above tells when, and that listed immediately below the function tells what
performs this function.
These simple words (how, why, when, and what) stimulate creativity. The
answers also keep your thinking close to the area in which a change is being sought.
In further utilization of your FAST diagram, try incorporating secondary func-
tions into existing parts by modification to the part. You will have the most success
if the functions are next to each other or happening at the same time.
This technique may be applied to existing or proposed designs, concepts, pro-
cedures, processes, documents, or any type of software. The primary purpose is to
identify functional relationships to stimulate creativity.
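For readers who find it convenient, the how/why relationships of the worked example can be sketched as a simple data structure. This is only an illustrative sketch, not part of the formal technique, and it uses the function names from the example above:

# Each function maps to the functions that answer "How is it accomplished?"
fast = {
    "achieve comfort": ["control air"],
    "control air": ["direct air", "modulate air"],
    "modulate air": ["vary opening", "direct force", "apply torque", "apply effort"],
}

def how(function):
    # Functions to the right on the FAST diagram (how it is accomplished).
    return fast.get(function, [])

def why(function):
    # Function to the left on the FAST diagram (why it is performed).
    return [parent for parent, children in fast.items() if function in children]

print(how("control air"))   # ['direct air', 'modulate air']
print(why("modulate air"))  # ['control air']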
Cost Function Relationship
The FAST diagram clearly identifies functions and their relationship to each other.
The techniques of cost visibility identify high cost areas. These techniques can now
be combined to clearly identify the relationship between cost and function. This will
make it possible to identify the areas of unnecessary cost for the application of
creative problem-solving techniques.
The cost function worksheet (Figure 12.3) is the basic tool. The functions are
listed across the top from the FAST diagram. The parts, processes, or actions are
listed vertically with their actual costs (see format in Figure 12.7).
It is now necessary to determine the actual cost of each function by applying
the cost for the part or action that causes the function to be performed. In many
cases, it may be necessary to break the cost down into several functions. For example,
say in a foldout sample, the thumbwheel costs $.0957. This is distributed over three
functions: provides decor, apply torque, limit rotation. The percentage of cost applied
to each is a matter of qualified judgment unless a detailed breakdown can be obtained.
In our example, let us assume that the function modulate air is made up of
the cost of three items totaling $.1100. This is the cost of the function modulate air.
In order to find the cost of the system to modulate air, all of the functions in the
critical path plus the supporting functions must be totaled. This cost is, say, $0.2699
or X% of the total assembly cost. In other words, if the modulate air function could
be eliminated, $.2699 could be removed from the assembly.
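A small sketch of this redistribution (the thumbwheel cost is taken from the example above; the percentage split is a hypothetical judgment call) shows how part costs roll up into function costs:

# Each part's cost is split across the functions it performs.
parts = {
    "thumbwheel": (0.0957, {"provides decor": 0.40, "apply torque": 0.35, "limit rotation": 0.25}),
}

function_cost = {}
for part, (cost, split) in parts.items():
    for function, share in split.items():
        function_cost[function] = function_cost.get(function, 0.0) + cost * share

for function, cost in function_cost.items():
    print(f"{function}: ${cost:.4f}")
# Summing the functions along the critical path, plus the supporting functions,
# gives the cost of a function such as modulate air for the whole system.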
These function costs can be applied to the FAST diagram for convenience. This
enables a ready determination of what can be accomplished by eliminating or
combining functions to provide a less costly assembly.
When used in conjunction with the FAST diagram, the cost function worksheet
provides an accurate function cost. This can then be evaluated in terms of its value
or worth. By the application of creative techniques, new ways to perform the desired
function can be developed.
Evaluate the Function
After a FAST diagram is complete, with part or action costs assigned to the proper
functions, values can be assigned to each of the functions. By assigning cost first,
the task force members become familiar with detail costs and are therefore better
prepared to assign values to the functions by comparison.
Value is defined as the lowest cost to reliably perform a function. In evaluating
a function, the value or worth used must be the intrinsic value, not the result or
effect of that function, and it does not include other functions on the FAST diagram.
(During this phase of the job plan, the team must be optimistic, just as in the creative
phase; if not now, when will the team be optimistic?)
One of the easiest ways to determine value of a function is by comparison to
another method to perform the function at a lower cost. For example, in a given
design, the support weight function was performed by a columnar part of its
attachment, for a function cost of 23 cents. The team assigned a value of 5 cents
for the support weight function because the team members reasoned that the
specified load could be supported in suspension for that amount. At this time, the
team did not have a solution to the problem, but during the brainstorming session
the team generated proposed changes that were developed to accomplish the overall
target.
In many cases, function values cannot be assigned by comparison, and other
means must be used such as:
1. Apply the test for value: How much of my own money would I pay for
that function?
2. Rate function numerically: Apply ratios to function cost to arrive at
new values.
3. Apply VE techniques for lower cost: Set a goal or target for functions
(percentage reduction).
4. Others: Make use of other information, such as noticeable differences,
value standards, and mathematical comparisons.
The sum of the individual function values establishes a product or total system
value; this becomes the team's new target. Now the team knows which functions to
attack during their creative sessions: the high cost and low value functions.
Once these values have been established by the team, place this assigned value
in the upper right-hand corner of the function box. The team has isolated the problem
and set its own goal(s) for improvement.
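A minimal sketch of this comparison (the support weight figures come from the example above; the other functions and their values are hypothetical) ranks the functions by their value index and totals the team's new target:

# Function cost versus assigned value; a low index marks a creative target.
functions = {
    "support weight": {"cost": 0.23, "value": 0.05},
    "transmit torque": {"cost": 0.08, "value": 0.07},
    "improve appearance": {"cost": 0.12, "value": 0.04},
}

for name, f in functions.items():
    print(f"{name}: value index {f['value'] / f['cost']:.2f}")

system_value = sum(f["value"] for f in functions.values())
print(f"New target (sum of assigned function values): ${system_value:.2f}")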
Remember these ten tests for value:
1. Can we do without it?
2. Does it need all of its features?
3. Does it cost more than it is worth? Is anyone buying it for less?
4. Is there something better that can do the job?
5. Can it be made by a less costly method?
6. Can a standard item be used?
7. Does it cost more than the total of reasonable costs for material, labor,
burden, and prot?
8. Can a less costly tooling method be used, considering the quantities
involved?
9. Can another dependable supplier provide it for less?
10. Would you pay the price if you were spending your own money?
CREATIVE PHASE
The creative phase requires the use of your imagination to develop alternative
solutions to the functions defined in Phase I. The systematic value control approach
makes use of brainstorming as a principal technique; however, the blast-create-
refine technique must frequently be used in conjunction with others.
Brainstorming is defined as the combined effort of two or more people to
determine all possible methods for performing the required functions. There is no
attempt at evaluation; this will come later. The requirement is to develop any and
all ideas that may include the outstanding alternative to satisfy the required functions.
It is necessary to become free from the constraints of past habits and attitudes
and apply thought needlers (see Table 12.2) to increase the ideas when they
begin to slow down. Refer to specialty processes, products, or materials for ideas.
Apply the use of standards. Seek ideas from plant specialists and supplier represen-
tatives. Use catalog files such as Thomas Register and Sweets. Remember:
Ideas come from every place and anybody. Do not restrict your thinking!
Conduct a brainstorming session on each required function. List all ideas.
Try to eliminate or combine functions. Be as flexible as possible.
There is no end to change. Change is in fact necessary to survival; therefore,
people must constantly advance. Our concern in value control is advancement in
engineering and manufacturing in a creative and productive sense.
Although we would all readily agree with the comments in the preceding para-
graph, the individual effort necessary to expand our creative contributions is not
automatic but rather requires concentrated effort and deliberate practice.
There are several stifling factors that prevent creative productivity from being
as free as it could be. For one, customs and traditions that have become a part of
our everyday life bind us whether we realize it or not. Second, habits that can be
good or bad, depending on the situation, can limit creative productivity. One way
to control habits is to first realize that much of what we do and observe in others is
determined by habit, and then make a conscious effort to appraise the value of our
problem solving habits and attempt to discard those that minimize creative thinking.
Unless we progress in this effort, we can become enclosed in a prison of com-
placency. Inappropriate habits in problem solving can also build a wall of pride
about the way we are currently doing things and completely smother our will.
In addition, factors that stifle our own creativity are present in those around us.
The attitudes of others can be encouraging and stimulating. On the other hand, the
attitudes of associates can be stifling when our creative efforts are met with com-
placency or defensive reactions.
Some individuals who have presented good original ideas and then encountered
dogma, inertia, minimizers, rationalizers, complacency, apathy, negativism, autocracy,
or other stifling conditions will freeze creative thought. Others will transfer their
creativity to other parts of their lives: home, church, recreation, any place but the job.
If we are to encourage creative productivity, we must eliminate the notion that
the instant an idea is proposed it must be bitten, broken, or kicked. In order to break
ineffective habits and overcome stifling environments, a helpful technique is
to firmly commit ourselves to a goal before our associates, superiors, or even the
general public. In actual practice, this technique resolves itself into the establishment
of firm deadlines and numerous subdeadlines in the course of a project. You will
experience that process during every value control experience.
Another technique is the inversion technique. It is used to solve the what-causes-
it type of problem. This technique concentrates on inverting the problem. For example,
if the problem is how to cut cost, the technique would ask how you could effectively
increase the cost.
TABLE 12.2
Idea Needlers or Thought Stimulators
How much of this is the result of custom, tradition or options?
Why does it have this shape?
How would I design it if I had to build it in my home workshop?
What if this were turned inside out? Reversed? Upside down?
What if this were larger? Higher? Wider? Thicker? Lower? Longer?
What else can it be made to do?
Suppose this were left out?
How can it be done piecemeal?
How can it appeal to the senses?
How about extra value?
Can this be multiplied?
What if this were blown up?
What if this were carried to extremes?
How can this be made more compact?
Would this be better symmetrical or asymmetrical?
In what form could this be? Liquid, powder, paste, or solid? Rod, tube, triangle, cube, or sphere?
Can motion be added to it?
Will it be better standing still?
What other layout might be better?
Can cause and effect be reversed? Is one possibility better than the other?
Should it be put on the other end or in the middle?
Should it slide instead of rotate?
Can you demonstrate or describe it by what it is not?
Has a search been made of the patent literature? Trade journals?
Could a supplier supply this for quicker assembly?
What other materials would do this job?
What is similar to this but costs less? Why?
What if it were made lighter or faster?
What motion or power is wasted?
Could the package be used for something afterward?
If all specifications could be forgotten, how else could the basic function be accomplished?
Could these be made to meet specifications?
How do competitors solve problems similar to this?
Yet another technique for breaking through our judgment controls of creative
expression is that of blast, create, and refine. This technique is extremely helpful
in reaching value objectives. For years, we have been trying to reduce cost by 5,
10, or 15% through normal cost reduction procedures (material, fabrication methods,
etc.). This has become more and more difficult.
If we try to take out a larger percentage, say 50%, we are immediately forced
to take a new approach to the problem. The blast, create, and refine (BCR) technique
combines the function approach with creativity and evaluation of ideas in order to
find new, more effective ways to accomplish the required function in products,
processes, or procedures. There are several reasons to use the BCR approach;
however, the three major ones are:
1. It makes possible more creative problem solutions by eliminating details
of the existing product and freeing the mind for thought that could lead
to more productive solutions.
2. It directs thinking to basic considerations.
3. It provides a mechanism for building on these basic considerations to
develop a final product satisfying all necessary requirements.
Intense study of any product shows that it is, to a greater or lesser degree, the
result of a chain of happenings (evolution). Even the new products that value
engineering may bring forth will, to some extent, also exhibit this type of evolution.
Therefore the search for better value requires that we ask the following vital ques-
tions: How can this chain of influence be stopped? How can we objectively look at
a function? The technique of blasting, creating, and then refining is especially
directed toward accomplishing these objectives. Its application is in three phases,
which are:
PHASE 1. BLAST
This phase consists of specifically identifying that portion of the problem under
study that does, in fact, perform the basic function (or part or most of it). Next, we
blast that portion out of the problem (isolate it) so that we can think about it clearly
and specifically. The basic function is the first block in the FAST diagram.
PHASE 2. CREATE
In this phase we try to answer the question: What do I have to add to that which I
isolated, in the blast phase, to make it capable of performing the required functions
or to have it work and sell? Alternatives are developed and costs are put on each
one. Make no attempt to evaluate alternatives at this time.
PHASE 3. REFINE
We evaluate the ideas developed in the create phase and, through an objective process
of refining, develop an approach that will meet all the performance, cost, and
delivery parameters required.
EVALUATION PHASE
Evaluating the ideas developed during the creative phase is a critical step in the job
plan. The ideas generated will include practical suggestions as well as wild ideas.
Each and every idea must be evaluated without prejudice to determine if it can be
used or what characteristics the idea has that may be useful.
Proper evaluation of the ideas is a critical step. Remember, if an idea is discarded
without thorough evaluation, the key to a successful solution may be lost. The time
to create ideas is in the creative phase. If an idea is discarded, there may not be
another opportunity to develop it again.
Evaluation processes can range from the simple to the complex. The methods
selected depend to some degree on the number and quality of the ideas generated.
(It is not uncommon to have several hundred ideas to evaluate.) In the evaluation
process, do not be too critical. Look for the good rather than the bad and do not
present unnecessary roadblocks.
The initial screening will weed out worthless ideas and sometimes generate new
ideas or variations of the present ones. The initial screening will also begin to classify
the ideas into basic groups that, in effect, constitute a second stage in the screening
process. After the initial screening, it may be necessary to resort to systems designed
to aid the process. Two favored, because of their simplicity, are paired comparison
and Pareto voting. When the initial list of ideas has been screened and evaluated
and reduced to a choice between several alternatives, evaluate the good and bad
features of each alternative. Watch out for roadblocks, and try to determine whether
and how they can be eliminated.
Experience has shown that this evaluation process is a difficult task. The impulse
to quickly screen through the list to zero in on the best ideas must be controlled.
The mass of data must be handled systematically to obtain maximum benet from
the creative phase. Careful screening is essential to isolating the best concept to
carry over into the planning phase where the idea will be developed into a practical
recommendation for action.
SELECTION AND SCREENING TECHNIQUES
A difficult problem that frequently confronts decision makers is the need to organize
a large amount of data so that one or several of the most important items may be
identied. It may be required to determine which of several alternatives appears to
be the best, or it may be necessary to select a number of items so that they may be
ranked and weighted by order of importance or some other criteria. Experience has
shown that most people are not able to handle this task quickly and effectively. For
this reason, it was decided to develop a simple method that would be applicable in
most cases. More complex situations may require more sophisticated methods. How-
ever, experience has shown that a combination of two simple methods, Pareto voting
and paired comparisons, will satisfy a majority of requirements.
Pareto Voting
Pareto voting is based on Pareto's law of maldistribution. Vilfredo Pareto
(1846-1923), a political economist, observed a common tendency of wealth and
power to be unequally distributed. This observation has been refined to the degree
that it can be said that there is an 80/20 percent relationship between similar elements.
For example, twenty percent of the parts in an assembly contain eighty percent of
the cost. This is most useful information in cost estimating; however, the relationship
holds for many diverse examples such as the following:
Twenty percent of the states use eighty percent of the fuel.
Twenty percent of the activities create eighty percent of the budgeted
expense.
Twenty percent of the items sold generate eighty percent of the prot.
In value engineering, it is frequently necessary to select the best ideas, the highest
value functions, the highest potential projects, or any of a number of other require-
ments. It has been found that the application of Pareto voting can help to simplify
the list and will in most cases ensure that the most important items have been
selected. It also produces results quickly and can be incorporated into the value
engineering process to allow continuous operations without undue disruptions.
Pareto voting is conducted by requesting each team member to select what he
or she believes are the items or elements that have the greatest effect on the system.
This list of items is limited to twenty percent of the total number of items. For
example, each team member would be allowed to select six items out of a list of
30. The vote is on an individual basis to obtain as much objectivity as possible.
The resultant lists are then compared and arranged into a new consolidated list
in descending order by the number of votes each item received. Usually, several
items will have been selected by two or more team members. The top 10 to 15 items
are then ranked and weighted in a second step by using paired comparisons.
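For readers who want to automate the tallying described above, the following minimal sketch (in Python) consolidates individual Pareto-voting ballots into a list ranked by votes received. The idea names, the helper function, and the 20 percent rounding rule are illustrative assumptions, not part of the text.

```python
from collections import Counter

def pareto_vote(ballots, total_items):
    """Consolidate Pareto-voting ballots into a list ranked by votes received.

    ballots: one list per team member holding the items that member selected
             (roughly 20 percent of total_items).
    Returns (item, votes) pairs in descending order of votes.
    """
    limit = max(1, round(0.20 * total_items))  # each member picks about 20%
    votes = Counter()
    for ballot in ballots:
        for item in ballot[:limit]:            # enforce the 20% limit
            votes[item] += 1
    return votes.most_common()

# Example: five members each pick 6 items from a list of 30 ideas
ballots = [
    ["idea 3", "idea 7", "idea 12", "idea 1", "idea 9", "idea 20"],
    ["idea 7", "idea 3", "idea 15", "idea 9", "idea 2", "idea 28"],
    ["idea 3", "idea 9", "idea 7", "idea 11", "idea 5", "idea 30"],
    ["idea 12", "idea 7", "idea 9", "idea 3", "idea 18", "idea 6"],
    ["idea 9", "idea 3", "idea 22", "idea 7", "idea 14", "idea 1"],
]
for item, n in pareto_vote(ballots, total_items=30):
    print(item, n)
```

The items at the top of the consolidated list would then move into the paired-comparison step described next.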
Paired Comparisons
Paired comparisons, or numerical evaluation as it is sometimes called, compares a
list of items to rank and weight them in order of importance or some other criteria.
Ranking is the assignment of a preferred order of importance to a list of items.
Weighting is the determination of the relative degree of difference between items.
In paired comparisons, each item is compared to every other item on the list in
turn, using a simple matrix. It is most convenient for up to 15 items; however, the
limit is only for convenience. In most cases, ranking and weighting of long lists
may be more practically done by direct magnitude estimation (DME).
A comparative decision is made between any two items on a two-level basis.
There is either a great difference or a minor difference. The decision can be made
based on the length of time it takes to decide. If there is no question as to which
item to select, there is a great difference. If thought must be put into the decision,
it would then be a minor difference. A major difference is weighted a 2, a minor
difference a 1.
The paired comparison worksheet provides for the list to be evaluated and the
evaluation grid. Start by transferring the list of items to the worksheet. Now compare
A to B, A to C, etc. comparing A to each item of the list. A is then dropped and B
compared to C, to D, etc. on through the list. B is then dropped and C is compared
to each item on the list until every item has been compared to every other item. The
following example will illustrate the process.
It is desired to select a vacation from among the following areas: Majorca,
Florida, Colorado, or Greek island cruise. The first step is to list the locations on
the evaluation summary area of the worksheet as shown in Table 12.3. The second
step is to begin to compare the items.
Evaluation Summary
From the evaluation summary list, compare A to B, Majorca to Florida, and place
the selected location letter in the A-B box of the evaluation grid. If the difference
is major or clearly in favor of A, place a suffix 2 after the letter A; the A-B box
should read A2. Now compare A to C. If A is the selection, place an A in the A-C
box; if the difference is great, again add the suffix 2. Next compare A to D. If A is
again the selection, place the A in the A-D box; if it requires thought to make the
decision, the numerical suffix should be 1, minor. Drop the A and now compare B
to C and B to D. Lastly, drop the B and compare C to D (see Table 12.4).
To determine the ranking and weighting, add up the A's, B's, C's, etc. In the
example the result is as shown in Table 12.5.
This analysis shows Majorca to be the most desirable. It is 40 percent more
desirable than a Greek island cruise and 60 percent more desirable than Colorado.
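The bookkeeping behind Tables 12.3 through 12.5 can also be expressed as a short routine. The sketch below is in Python; the pairwise decisions are taken directly from the vacation example, while the function name and data layout are our own illustrative assumptions.

```python
def paired_comparison(items, decisions):
    """Rank and weight items from pairwise decisions.

    items: dict mapping a key letter to its description.
    decisions: dict mapping a pair such as ("A", "B") to (winner, weight),
               where weight is 2 for a major difference and 1 for a minor one.
    Returns (key, description, score) tuples sorted by score, highest first.
    """
    scores = {key: 0 for key in items}
    for (first, second), (winner, weight) in decisions.items():
        assert winner in (first, second)       # winner must be one of the pair
        scores[winner] += weight
    ranked = sorted(items, key=lambda k: scores[k], reverse=True)
    return [(k, items[k], scores[k]) for k in ranked]

# The vacation example from Tables 12.3 to 12.5
items = {"A": "Majorca", "B": "Florida", "C": "Colorado", "D": "Greek island cruise"}
decisions = {
    ("A", "B"): ("A", 2), ("A", "C"): ("A", 2), ("A", "D"): ("A", 1),
    ("B", "C"): ("C", 2), ("B", "D"): ("D", 2), ("C", "D"): ("D", 1),
}
for key, name, score in paired_comparison(items, decisions):
    print(key, name, score)
```

Running the sketch reproduces the weights of Table 12.5: Majorca 5, Greek island cruise 3, Colorado 2, Florida 0.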
Matrix Analysis
Although Pareto voting and paired comparison satisfy the screening and evaluation
process in most cases, there are times when a more detailed analysis is required.
TABLE 12.3
The Worksheet for Setting the List
Key Letter    Alternatives             Weight
A             Majorca
B             Florida
C             Colorado
D             Greek island cruise
TABLE 12.4
Evaluation Summary
      B     C     D
A     A2    A2    A1
B           C2    D2
C                 D1
Two such cases could be when a decision involves large financial outlays or when
serious consequences could result from a change. In these cases, every effort must
be made to base a decision on the most objective data possible. For many of these
decisions, there is a need to rank and weigh a number of alternatives against a series
of specic criteria. By doing this, we learn which trade-offs must be made for the
TABLE 12.5
Ranking and Weighting
Key Letter    Alternatives             Weight
A             Majorca                  5
B             Florida                  0
C             Colorado                 2
D             Greek island cruise      3
TABLE 12.6
Criteria Affecting Car Purchase XXXX: Paired Comparison

      B     C     D     E     F     G        Coding and Results
A     A1    A1    D1    E1    F1    A1       F  Cost         6
B           B1    B1    E1    F1    G1       G  Economy      4
C                 C1    E1    F1    G1       E  Image        4
D                       E1    F1    G1       A  Styling      3
E                             F1    G1       B  Comfort      2
F                                   F1       C  Reliability  1
                                             D  Selection    1
TABLE 12.7
Criteria Weighing
Criteria
A B C D E F G Total Rank
Weight 3 2 1 1 4 6 4
Alternatives
Ford
Chrysler
Chevy
Honda
Audi
various requirements of the project, enabling us to make the best decision. In these
cases, a combinex method is recommended.
Combinex was developed by Fallon (1971) and is based on comparing a number
of alternatives to a series of criteria. Each alternative is compared to the criteria in
turn and given a specic numerical rating. The resultant analysis clearly ranks and
weighs each alternative against each criterion, which allows for trade-offs based on
clearly defined data. This makes it an excellent tool in decision making.
Example
To illustrate the process, a typical problem familiar to most people will be used.
The problem is to select an automobile for purchase. The criteria for selection have
been taken from a list of factors affecting the sale of most products. The criteria
selected will have a different value for each individual and have been chosen to
illustrate several points. The selection criteria are:
In other instances, the criteria used could be the factors affecting the purchase
of manufacturing equipment, location of a plant, construction of various types of
facilities, or any other requirement involving a series of criteria for selection.
The alternatives to be considered for purchase are the XXXX models listed
below along with their fictitious base prices. The analysis was made in April XXXX.
The same analysis made in September XXXX might have resulted in a different
conclusion as time and opinions change.
Rank and Weigh Criteria
The first step in the process is to decide the importance of the various criteria since
each does not have an equal weight or bearing on the selection. In other words, the
selection criteria must be ranked and weighed. To do this we will use the method
of paired comparisons. A team of five persons applied paired comparisons as seen
in Table 12.6.
The result of the group's analysis is their opinion. Another group would apply
their own values and probably produce a different result. This group's ranking and
weighing shows cost to be the most important criterion. Cost was three times more
important than comfort and 50 percent more important than economy.
A. Styling
B. Comfort
C. Reliability
D. Selection (models available)
E. Image
F. Cost
G. Economy (mi/gal)
Alternatives
1. Ford        $14,000
2. Plymouth    $13,600
3. Chevrolet   $14,500
4. Honda       $15,000
5. Audi        $28,000
Evaluate Each Alternative
The criteria values are entered into the combinex scoreboard as illustrated in
Table 12.7.
Next, the team compares each alternative, in turn, to each of the criteria. A value
is then placed in the upper section of its respective box. These values are based on
the criteria weighing scale shown below.
In this example, the comparison was made as shown in Table 12.8.
How does the Ford satisfy the styling criteria in the opinion of the selection
team? The team decided it was fair and rated it a 2. For reliability, the team said
the Ford was average and weighed it a 3. After the Ford was compared to each
TABLE 12.8
Criteria Comparison
Criteria
A B C D E F G Total Rank
Weight 3 2 1 1 4 6 4
Alternatives
Ford 2/ 4/ 3/ 4/ 3/ 3/ 4/
Chrysler 4/ 4/ 4/ 5/ 3/ 3/ 4/
Chevy 4/ 4/ 3/ 4/ 3/ 3/ 4/
Honda 4/ 3/ 3/ 3/ 3/ 3/ 5/
Audi 3/ 4/ 3/ 3/ 4/ 1/ 3/
TABLE 12.9
Criteria Weight Comparison Completed Matrix
Criteria
A B C D E F G Total Rank
Weight 3 2 1 1 4 6 4
Alternatives
Ford 2/6 4/8 3/3 4/4 3/12 3/18 4/16 67 4
Chrysler 4/12 4/8 4/4 5/5 3/12 3/18 4/16 75 1
Chevy 4/12 4/8 3/3 4/4 3/12 3/18 4/16 73 3
Honda 4/12 3/6 3/3 3/3 3/12 3/18 5/20 74 2
Audi 3/9 4/8 3/3 3/3 4/16 1/6 3/12 57 5
5 Superior
4 Good
3 Average
2 Fair
1 Poor
criterion in turn, the second alternative, the Chrysler, was compared. In each case,
each team member expressed an opinion individually. In some instances, it was
necessary to develop an average. In other cases, the decision was unanimous. This
was done until each alternative was compared to each criterion.
The third step of the process is to multiply the criteria weight by the comparison
value as shown in Table 12.9. For example, the Ford styling weight of 3 was
multiplied by the value of 2. The resultant product of 6 is inserted in the lower
section of the box. After completion of each individual weighing, the score is
summed under the total column.
The total score is shown in the column at the right, and the choices are ranked
in the far right column. This analysis shows the first choice to be the Chrysler and
the last choice to be the Audi, as illustrated in the completed combinex scoreboard
(Table 12.9).
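The arithmetic of the completed scoreboard is simply a weighted sum per alternative. A minimal sketch follows (Python; the weights and ratings are copied from Tables 12.7 through 12.9, while the function name and data layout are our own assumptions); running it reproduces the totals of 75, 74, 73, 67, and 57.

```python
def combinex(weights, ratings):
    """Score each alternative against weighted criteria (combinex scoreboard).

    weights: criterion -> weight.
    ratings: alternative -> {criterion: rating}, with ratings 1 (poor) to 5 (superior).
    Returns (alternative, total) pairs sorted best first.
    """
    totals = {
        alt: sum(weights[c] * r for c, r in scores.items())
        for alt, scores in ratings.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Weights and ratings from Tables 12.7 through 12.9
weights = {"A": 3, "B": 2, "C": 1, "D": 1, "E": 4, "F": 6, "G": 4}
ratings = {
    "Ford":     {"A": 2, "B": 4, "C": 3, "D": 4, "E": 3, "F": 3, "G": 4},
    "Chrysler": {"A": 4, "B": 4, "C": 4, "D": 5, "E": 3, "F": 3, "G": 4},
    "Chevy":    {"A": 4, "B": 4, "C": 3, "D": 4, "E": 3, "F": 3, "G": 4},
    "Honda":    {"A": 4, "B": 3, "C": 3, "D": 3, "E": 3, "F": 3, "G": 5},
    "Audi":     {"A": 3, "B": 4, "C": 3, "D": 3, "E": 4, "F": 1, "G": 3},
}
for alt, total in combinex(weights, ratings):
    print(alt, total)   # Chrysler 75, Honda 74, Chevy 73, Ford 67, Audi 57
```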
Analyze Results
An analysis of the table shows that although the Audi was a poor fifth in the selection
process, the primary reason was cost. If the cost had been rated average, the additional
12 points would have raised Audi's total above that of the Ford. The table also shows
that if the Ford styling had been rated as good, 4, this car would have scored 73,
tying the Chevy for third. Although styling was originally ranked fourth in impor-
tance with a 3, other factors may now be considered. An improvement in reliability
would not have a major effect on the overall rating, but a reduction in cost or an
improvement in economy could have. Cost could be negotiated; economy would
require some basic product changes.
IMPLEMENTATION PHASE
The objective of a value engineering study is the successful incorporation of rec-
ommendations into the product or operations. However, a successful project often
starts back at the beginning. Each project must be thoroughly analyzed to determine
its potential for benefit and the probability of implementation. This is as important
as the knowledge and skill required to apply the system to attain successful results.
An excellent idea is worthless unless it can be properly implemented. If it is not
implemented, no one will obtain any benefit. It must also be implemented in the
manner intended. Unfortunately, there have been many cases on record where the
idea could not be implemented because of the high cost to make the change. There
are other cases where the recommendations were not properly understood and
implementation resulted in increased cost. This often results in disillusionment or
the feeling that value engineering "does not work on our problems." Actually, in most
cases the real issue was that the problem was not properly diagnosed. It was not
that value engineering does not work; it was a matter of inefficient preliminary
analysis and preparation.
It does not seem reasonable to expend the effort and funds required to make a
value study without first having done the necessary work to ensure that the project
is practical, that it can be implemented, and that the necessary funds and people will
be available.
Selection of projects is a part of the entire value engineering implementation
process. Many times, management will assume that any project will prove profitable.
This is not always the case. The project must be practical in relationship to its effect
on the organization (see the discussion on SIVE under Project Selection).
To aid in the selection of projects, development of people, implementation of
projects, and all the other aspects necessary to successfully achieve the stated objective,
we have prepared some guidelines. They are guidelines, not rules, as every organization
is different and successful value engineering efforts must be integrated into operations
to become part of the day-to-day decision-making process of the company.
To begin with, we will look at the overall organization and implementation of
value engineering operations. Then we will look at some of the details that make
for success.
GOAL FOR ACHIEVEMENT
What do we want to get from value engineering? What will be the objective? This
is the rst question to answer. Value engineering can increase productivity, reduce
product cost, improve quality, reduce administrative costs, or produce a number of
other benefits that may be critical to operations. Whatever the goal, it should be
defined in specific terms, such as increase productivity by a specific percent, reduce
product cost by a specic number of dollars per unit, and so on. Whatever the initial
goal may be, it can be revised and broadened as skill in application and implemen-
tation of the process develops and understanding and credibility increase.
Value engineering is a people-oriented program designed to help people to do
a better job by aiding them to break down constraints to understanding. It provides
some very specific methods and systems to achieve results. Since people perform a
wide range of jobs in an organization, it is certainly logical to expect that if they
can be provided with a system that can help them to do a better job, anything that
they are expected to do can be improved. In the end it is people who do the thinking.
If they can improve their performance, everyone will benefit.
This has been our experience. Many people who are highly skilled in their jobs
have developed new insights that have created breakthroughs in technology as well
as major organizational and operational improvements. The goal for achievement
should be known to everyone. It can be product oriented or directed towards man-
ufacturing or administrative operations. It need not be company wide. However, the
scope can be broadened at any time. Once the goal has been determined, the means
to achieve the objective can be developed.
DEVELOPING A PLAN
There are five steps to incorporating value engineering into operations. They are as
follows:
1. Evaluate the system.
2. Define an objective.
3. Develop a plan and organization to achieve the objective.
4. Understand the principles.
5. Implement the plan.
Each step can be approached in a number of different ways. However, certain
specific problems must be considered, and pitfalls must be avoided in each. Under-
standing the problems and pitfalls rather than outlining a specic method or procedure
should provide the necessary guidelines for an effective operation. In many cases, a
consultant can aid in the initial stages and support each step of the process by providing
a broad range of experience for the client to build upon. However, it is important that
the consultant have the type and quality of experience to ensure success.
EVALUATION OF THE SYSTEM
Evaluation of value engineering can be a very tricky process. Some companies have
spent large sums of money for educational training/seminars and are not using the
systems in any way. Some companies did not understand the principles and, when
they tried to apply them, found that they had neither the skill nor the discipline to
achieve success. There are still others who feel a highly organized cost reduction
program is value engineering.
As a result, there are some who feel that value engineering works but not on
their product. There are others who feel that value engineering is nothing new; it is
the same thing they have done for years under a different name. And, of course,
there are some who ask the classic question, "Who has the time for all this?"
Evaluation of the benefits to be obtained from value engineering should therefore
be based on at least some prior knowledge of the methods and disciplines so questions
can be asked to determine what is being done. Are the principles of function analysis
and evaluation being applied? Is the function analysis system technique (FAST) used?
How is the creative stage handled? How are the projects selected and organized? How
is the team approach used? What authority does a value engineering team have to
implement projects? How are teams selected? How is the operation organized?
These are key questions that are required to evaluate whether the company
actually has been using value engineering based on the principles established by
Miles (1961) and supported by the Society of American Value Engineers (SAVE).
A major element of the evaluation process should be a one-day orientation for
key management, that is, those who will be required to support operations with time,
manpower, and funds. The orientation should be presented by one who has had
successful experience conducting value engineering operations within the constraints
and limitations of daily operations. Preferably, the person should be certified as a
Value Specialist (CVS) by SAVE. To just understand the principles is often not
enough. How to make them work in an operating environment is frequently at least
of equal importance. As in everything, future success is based on a firm foundation.
UNDERSTANDING THE PRINCIPLES
Very early in the plan to introduce value engineering into operations, high-level and
operating management must be introduced to the system. The intention is not to
teach them value engineering but to demonstrate the benefits to be achieved and
how they are produced. This establishes the need to apply the process and defines
the necessary commitments for success. Those who should attend would be everyone
who will be expected to support operations with time, manpower, and funds.
It is difficult for a large group of high-level people to attend a one-day seminar.
However, it is essential for successful operations. Attendance also broadcasts the mes-
sage of importance to all levels of the organization. In addition, the managers attending
often derive substantial benefit from the session that can lead to immediate results.
The one-day orientation should be a case study, so participants can try the various
methods and systems. The result will be understanding of the system and how it
may be applied to various projects. It will identify the organizational and operational
pitfalls and in many cases define projects for future workshops.
Completion of the management orientation will create a need for a decision to
determine how operations will proceed from this point. If a consultant has been
brought in to aid in progressing to this point, the consultant will now be able to
assist in getting down to brass tacks. If one has not been brought in, now would be
the time. The consultant's experience can ensure success from the start and increas-
ingly successful performance as skill develops. At this point there are two ways to
go. However, in the long run, the same objective will be achieved. One approach is
a large multi-team workshop or series of workshops directed towards indoctrinating
a large group of people (30-40) in the system at one time. These people would learn
the process while applying the methods and systems to projects of current interest
to the company. These workshops usually develop substantial monetary benefit for
the company. The second approach is one or two teams working on a specific project.
Both methods can be successful. However, the first is better suited to very large
organizations with large amounts of manpower. The second can be used in both
large and small organizations and produces substantial benefit that can be used for
further development. In many cases, a combination of the two plus a series of
orientations can be used effectively. The specific plan depends entirely upon the
organization and should be tailored to fit.
ORGANIZATION
The first step is to determine the objective, as was discussed earlier. The second
should be to develop a plan to achieve the objective and set up the necessary
organization. The third step is implementation of the plan; the fourth, follow-up and
audit operations.
The essential elements are:
1. Define the objective
2. Develop the plan
3. Implement the plan
4. Conduct follow-up and audit operations
Upon completion of the evaluation and the making of a decision to implement
value engineering operations, the first step should be to appoint a coordinator. A
brief outline of factors to be considered in selecting a value engineering coordinator
or manager is:
Primary purpose of position
Establish the value engineering business discipline as part of the fiber
and decision-making process of the company to increase the opportu-
nity to maximize the profitability of all products marketed by the
company.
Plan, staff, and direct a value engineering program to provide maximum
product value by the application of recognized techniques to identify
and eliminate unnecessary cost in products and operations.
Develop and implement a program to educate key employees, man-
agement, and suppliers in the value engineering approach to problem
solving with particular emphasis on function and value.
Publicize and demonstrate the use of value engineering techniques to
company management and suppliers to develop support and participa-
tion in the use of value engineering and in the implementation of
recommendations.
Knowledge and skills requirements
Degree in engineering, business, or economics with a thorough under-
standing of technical aspects of product design and development, busi-
ness operations and economic factors involved
Value engineering training
Three or more years in value engineering program operations and a
thorough understanding of the techniques and methodology as applied
to both product development and manufacturing operations
Minimum of ten years combined experience in product management,
project engineering, manufacturing management, or product develop-
ment with a thorough understanding of procurement practices, systems
analysis, cost, estimates, or any of a number of other broad rather than
specialized product areas
Creativity and flexibility in planning and thinking, with demonstrated
leadership abilities necessary to organize and guide persons of widely
divergent backgrounds into an effective team
Ability to communicate effectively in both oral and visual techniques
The coordinator will develop and organize a plan for management approval.
Inherent in the plan should be education and application programs for all who will
be involved in operations. The coordinator should be required to select a consultant,
develop an educational plan, aid in organizing and conducting workshops, and
identify people who may be developed into value specialists. The extent of these
programs will depend upon the size and scope of the company.
From what we have noted here, it is obvious that the problem is complex from
the standpoint of options. However, successful operations do not have to be extensive.
Starting small and developing successfully is preferred to a lot of noise and a big
crash because of poor planning.
ATTITUDE
One of the most important factors in value engineering is attitude: attitude on the
part of both management and people on task teams. A positive, cooperative, sup-
portive attitude is required. In many cases, value engineering actually requires a new
management style. It cuts across organizational lines, looks at taboo aspects of a
problem, and recommends drastic changes compared to the past. To accept these
disruptions to the old way of doing business requires faith and understanding, a
positive attitude.
In many cases, whenever a new idea is presented to an American management
team the initial reaction is negative. The first remarks are, "It is interesting, but let
me tell you what is wrong with it." The best approach to this reaction is to listen
carefully. The managers may have some ideas you overlooked. After all negative
reaction has run out, be prepared to ask some specific positive questions of the group
that will develop positive responses. For example, "I understand your difficulty in
producing this part in the plant. What do you think we would have to do to make
this practical? Do you see any changes we might make to satisfy our methods?"
This will usually work to a positive result.
Never argue. In many cases it is beneficial to solicit negative ideas, but be
prepared to develop positive questions. Our attitude is that we must begin to ask,
"What's good about this idea? How will it help us to do a better job?"
Changing people's attitudes is difficult and may never happen, but understanding
the reasons behind the negative reaction should make it possible to persuade most
people that they can benet from success. Remember, there is a risk of failure in
new ideas. New ideas require change, and they may not work. People want proof
that something will work before they will support it. However, maybe you can show
that the benefits are greater than the risk.
The best way to change people's attitudes is to show that top management is
interested in value engineering and expects participation and results in achieving the
stated goals.
VALUE COUNCIL
The value council is a small group of high-level executives who oversee operations.
In a small company, it might be chaired by the president; in a large company, by a
division manager. The council should be staffed with people who have the authority
to make decisions relative to acceptance and/or rejection of proposals, authorizing
funds, and manpower. They set the attitude, develop the environment, break the
bottlenecks, and by their interest and visibility create credibility to participation and
provide authority to operations.
It is important that members of the council make every effort to attend council
meetings except in cases of dire emergency. A member who is unable to attend
should authorize a key assistant to act on his or her behalf. If the council attendance
degenerates, the message sent is that we are losing interest.
The council should be made up of five to six people. Their duties are as follows:
Set objectives
Guide operations
Monitor progress
Eliminate roadblocks
Recommend/approve projects
AUDIT RESULTS
There are two reasons to audit results. The first is to determine the actual benefit
received. Is it in accordance with expectations? If not, why not? The second is to
determine how to improve operations. A periodic status report on a project tends to
move it along. This is especially true of cost reductions.
PROJECT SELECTION
1. Develop awareness to potential
Products
Operations
Planning
Investments
2. Selection methods
Intuitive
Scientific
3. Considerations
Noncompetitive product
Low volume
High warranty
Quality problems
Vendor problems
Manufacturing difficulties
Capital investments
High manpower requirements
Bottlenecks
Potential market
Government regulations
4. SIVE analysis
List potential projects
Potential saving (S)
Implementation cost (C)
Confidence factor (F)
Project priority (R), where R = (S/C) × F (see the sketch following the example below)
Confidence factor (F)
Poor 1
Questionable 2
Fair 3
Good 4
Very good 5
5. Example
             Project 1    Project 2    Project 3
   S =       $60,000      $20,000      $2,000
   C =       $10,000      $10,000      $500
   F =       1            5            4
   R =       6            10           16
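As a quick check of the priority formula, the short sketch below (Python; the project labels and function name are placeholders we introduce for illustration) computes R = (S/C) × F for the three example projects and sorts them, yielding priorities of 16, 10, and 6.

```python
def project_priority(saving, cost, confidence):
    """SIVE project priority: R = (S / C) * F, with F a 1-5 confidence factor."""
    return (saving / cost) * confidence

# The three example projects from the outline above
projects = [
    {"name": "Project 1", "S": 60_000, "C": 10_000, "F": 1},
    {"name": "Project 2", "S": 20_000, "C": 10_000, "F": 5},
    {"name": "Project 3", "S": 2_000,  "C": 500,    "F": 4},
]
for p in sorted(projects, key=lambda p: project_priority(p["S"], p["C"], p["F"]), reverse=True):
    print(p["name"], project_priority(p["S"], p["C"], p["F"]))
```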
CONCLUDING COMMENTS
This is a very brief outline of some of the factors to be considered to implement
value engineering operations in your organization. The complete subject would
require an entire book but even then there would be many exceptions.
Value engineering is a task force type system. Set up the group, get the job done,
dissolve the group, get on with the next problem. It is people oriented; it is designed
to get maximum performance from the individual and capitalize on that person's
performance by supplementing it with the group. Of course, there must be some
type of staff, and they must be skillful in application or know how to get the people
who can produce results.
Remember, success is based on the three A's: attitude, awareness, and application.
There must be a positive attitude in the organization, an awareness of the need to
change, and the skills to apply systems for effective results.
If these guidelines are followed, it has been proven that the benefits will be almost
immediate and far greater than the usually expected results. They are often outstanding.
REFERENCES
Fallon, C., Value Analysis to Improve Productivity, Wiley, New York, 1971.
Miles, L., Techniques of Value Analysis and Engineering, McGraw-Hill, New York, 1961.
SELECTED BIBLIOGRAPHY
Fowler, T.C., Value Analysis in Design, Van Nostrand Reinhold, New York, 1990.
Mendelson, S. and Greenfield, H.B., Taking value engineering/value analysis into the twenty-first century, Cost Engineering, Vol. 37, No. 8, August, pp. 33-34, 1995.
Mudge, A.E., Numerical evaluation of functional relationships, Proceedings, Society of American Value Engineers, Vol. 2, pp. 111-123, 1967.
Penza, P., Measuring Market Risk with Value at Risk, Wiley, New York, 2000.
Shillito, M.L. and De Marle, D.J., Value: Its Measurement, Design and Management, John Wiley & Sons, New York, 1992.
Stakgold, I., Green's Functions and Boundary Value Problems, 2nd ed., John Wiley & Sons, New York, 1997.
13
Project Management (PM)

Project management (PM) is the application of knowledge, skills, tools, and tech-
niques in order to meet or exceed stakeholder requirements from a project. Meeting
or exceeding stakeholder requirements means balancing competing demands among:
1. Scope, time, cost, quality, and other project objectives
2. Stakeholders/customers with differing requirements
3. Identified requirements and unidentified requirements (expectations)
Knowledge about project management can be organized in many ways. In fact,
the official Guide to the Project Management Body of Knowledge (PMBOK) has
identified 12 subsections (Duncan, 1994). They are:
1. Project management
2. The project context
3. The process of project management
4. Key integrative processes
5. Project scope management
6. Project time management
7. Project cost management
8. Project quality management
9. Project human resource management
10. Project communications management
11. Project risk management
12. Project procurement management
It is beyond the scope of this book to cover the entire discipline of project
management. However, this chapter will address PM as it may be used in six sigma
and design for six sigma (DFSS) initiatives within an organization. Towards that
end, this chapter will discuss some of the basic concepts of project management and
how the methodology of project management may be used.

WHAT IS A PROJECT?

Projects are tasks performed by people, constrained by limited resources, describable
as processes and subprocesses, that are planned, executed, and controlled within
definite time limits. Above all, they have a beginning and an end. Projects differ
from operations primarily in that operations are ongoing and repetitive while projects


are temporary and unique. A project can thus be defined in terms of its distinctive
characteristics: it is a temporary endeavor undertaken to create a unique product
or service. Temporary means that every project has a definite ending point. Unique
means the product or service is different in some distinguishing way from all similar
products or services.
Projects are undertaken at all levels of the organization. They may involve a
single person or many thousands. They may require less than 100 hours to complete
or over 10 million. Projects may involve a single unit of one organization or may
cross organizational boundaries as in joint ventures and partnering. Examples of
projects include:
1. Developing a new product or service
2. Effecting a change in structure, staffing, or style of an organization
3. Designing a new product
4. Developing a new or modified product or service
5. Implementing a new business procedure or process
Temporary means that every project has a definite ending point. The ending
point is when the project's objectives have been achieved, or when it becomes clear
that the project objectives will not or cannot be met and the project is terminated.
Temporary does not necessarily mean short in duration. It means that the project is
not an ongoing task and is therefore finite. This point is very important, since many
undertakings are temporary in the sense that they will end at some point, but not in
the same sense that projects are temporary.
For example, assembly work at an automotive plant will eventually be discon-
tinued, and the plant itself decommissioned. Projects are fundamentally different
because the project ceases work when its objectives have been attained, while non-
project undertakings adopt a new set of objectives and continue to work. The
temporary nature of the project may apply to other aspects of the endeavor as well:
The opportunity or market window is usually temporary: most projects have
a limited time frame in which to produce their product or service.
The project team seldom outlives the project: most projects are performed
by a team created for the sole purpose of performing the project, and the
team is disbanded and members reassigned when the project is complete.
A product or service is considered unique if it involves doing something that has
not been done before. The presence of repetitive elements
does not change the fundamental uniqueness of the overall effort. Because the
product of each project is unique, the characteristics that distinguish the product or
service must be progressively elaborated. Progressively means "proceeding in steps;
continuing steadily by increments," while elaborated means "worked out with care
and detail; developed thoroughly" (American Heritage Dictionary, 1992). These
distinguishing characteristics will be broadly defined early in the project and will
be made more explicit and detailed as the project team develops a better and more
complete understanding of the product.


Progressive elaboration of product characteristics must not be confused with
proper scope definition, particularly if any portion of the project will be performed
under contract. In contrast to a project, there is also a program. A program is a group
of projects managed in a coordinated way to obtain benefits not available from
managing them individually (Turner, 1992). Most programs also include elements
of ongoing operations, as well as a series of repetitive or cyclical undertakings. (It
must be noted, however, that in some applications program management and project
management are treated as one and the same; in others, one is a subset of the other.
It is precisely this diversity of meaning that makes it imperative that any discussion
of program management versus project management must have a clear, consistent,
and agreed-upon definition of each term.)

THE PROCESS OF PROJECT MANAGEMENT

The process of project management is an integrative one. The interactions may
be straightforward and well understood, or they may be subtle and uncertain. These
interactions often require trade-offs among project objectives. Therefore, successful
project management requires actively managing these interactions, so that the appro-
priate and applicable objectives may be attained within budget, schedule, and other
constraints.
A process from a project management perspective is the traditional dictionary
definition, which is a series of actions bringing about a result (American Heritage
Dictionary, 1992). In the case of a project, there are five basic management processes:
1. Initiating: Recognizing that a project should be begun and committing to do so
2. Planning: Identifying objectives and devising a workable scheme to accomplish them
3. Executing: Coordinating people and other resources to carry out the plan
4. Controlling: Ensuring that the objectives are met by measuring progress and taking corrective action when necessary
5. Closing: Formalizing acceptance of the project and bringing it to an orderly end
Operational management (the management of ongoing operations) also
involves planning, executing, and controlling. However, the temporary nature of
projects requires the addition of initiating and closing. To be sure, these processes
occur at all levels of the enterprise, in many different forms, and under many different
names. However, even though there are many variations, it is imperative to under-
stand that operational management is an ongoing activity with neither a clear begin-
ning nor an expected end.
Finally, it must be understood that these processes (initiating, planning, execut-
ing, controlling and closing) are not discrete, one-time events. They are overlapping
activities that occur at varying levels of intensity throughout each phase of the
project. In addition, the processes are linked by the results they produce: the result


or outcome of one becomes an input to another. Among the central processes, the
links are iterated: planning provides executing with a documented project plan
early on and then provides documented updates to the plan as the project progresses.
It is imperative that the basic process interactions occur within each phase such that
closing one phase provides an input to initiating the next. For example: closing a
design phase requires customer acceptance of the design document. Simultaneously,
the design document defines the product description for the ensuing implementation
phase. For more information on this concept see Duncan (1994), Kerzner (1995),
and Frame (1994).

KEY INTEGRATIVE PROCESSES

In project management, the key integrative processes are:
Project plan development: taking the results of other planning processes
and putting them into a consistent, coherent document
Project plan execution: carrying out the project plan by performing or
having performed the activities included therein
Overall change control: coordinating changes across the entire project
Although the processes seem to be discrete from each other, that is not the case
in practice. In fact, they overlap and interact in ways that are beyond the scope of
this book. A typical summary of the key integrative processes is shown in Table 13.1.

TABLE 13.1
Key Integrative Processes

Project Plan Development
1. Inputs: Outputs of other processes; historical information; organizational policies; constraints and assumptions
2. Tools and techniques: Project planning methodology; stakeholder skills and knowledge; project management information systems
3. Outputs: Project plan; supporting detail

Project Plan Execution
1. Inputs: Project plan; supporting detail; organizational policies
2. Tools and techniques: Technical skills and knowledge; work authorization system; status review meetings; project management information system; organizational procedures
3. Outputs: Work results; change requests

Overall Change Control
1. Inputs: Project plan; progress report; change request
2. Tools and techniques: Change control system; progress measurement; additional planning; computer software; reserves
3. Outputs: Project plan updates; corrective action; lessons learned


PROJECT MANAGEMENT AND QUALITY

As we have seen, project management is a problem-solving methodology. On the
other hand, both six sigma and DFSS are a process project that require total
acceptance for improvement. For that improvement to occur, six sigma and DFSS
commitment must be understood and implemented in the entire organization as a
culture change rst and then for the project itself. As such, it ts the prole of project
management. Every component of it is designed to facilitate the solving of complex
problems. It uses teams of specialists. It makes use of a powerful scheduling method.
It tightly tracks costs. It provides a mechanism for management of total improvement
and customer satisfaction. It depends on the integration of several skills and disci-
plines. It encourages monitoring of processes and depends on feedback for evalua-
tion. It requires leaders with clear vision and doable objectives. It requires knowledge
of appropriate and applicable tools. And it plans for success.
Project management makes and at the same time facilitates change(s). By def-
inition, projects have a start, work accomplished, and a finish. The finish comes
when the objectives for the project are satisfied. Project objectives always address
changes that will be made in some current situation. If an organization does not
want to make a change, then project management is not an appropriate management
method. This does not imply that changes should not be made there, only that there
is no motivation for change. In such an organization, the introduction of project
management would have little support and may even encounter resistance. For a
discussion on change and when change actually takes place see Stamatis (1996).
Since the implementation of both six sigma and DFSS is a project, with a
beginning, work changes, and an end, project management is indeed a method that
can be used in the implementation process. (It is very important to differentiate the
concept of six sigma and DFSS, which are philosophical in nature, and the imple-
mentation of six sigma and DFSS, which is a project. Here we are talking about the
physical implementation of both six sigma and DFSS projects.)

A GENERIC SEVEN-STEP APPROACH
TO PROJECT MANAGEMENT

Much has been written about how to use project management in a variety of
industries and specific situations. Many articles and books have proclaimed specific
approaches for the best results in a given situation. Rather than dwell on a particular
approach, we will present a summary discussion of a generic seven-step approach
for using project management in a quality orientation for any organization. The
seven steps are based on the four-phase cycle of any project.

PHASE 1. DEFINE THE PROJECT

Step 1. Describe the Project

Describing a project is not as simple as it might seem. In fact, this step may be the
most difficult and time consuming. To be successful, the project description should


include: simple specifications, goals, projected time frame, and responsible individ-
uals, as well as constraints and assumptions. Capturing the essence of highly complex
projects in a few words is an exercise in focus and delineation. However, we must
be vigilant about avoiding becoming too simple and in the process failing to convey
the scope of the project. On the other hand, a detailed, complex description may
cloud the big picture. The key is clarity without an excess of volume or jargon.

Step 2. Appoint the Planning Team

After describing the project, begin to identify the right players. Too many people
on a team can stifle the decision-making process and reduce the number of accom-
plishments. Cross-functional teams are among the most difficult to appoint. Except
in the pure project organization, where the team is solely dedicated to completing
the project, roles and priorities can cause conflict. In cross-functional teams the
project leader must seek support from the functional managers and identify team
goals.

Step 3. Define the Work

Once the planning team is in place, team members must define the work. Since each
member hails from a different department, there will be many different concepts of
the project's work content. There are many ways to divide the work for convenient
use in planning. Two common ways are the process flow diagram and the work
breakdown structure (WBS). The method should be chosen to reflect the most useful
division and summarization for the situation. After all, the objective of this step is
to define the tasks to be done, not the order of doing them.

PHASE 2. PLAN THE PROJECT

Step 4. Estimate Tasks

Before a project schedule is created, each task must be evaluated and assigned an
estimate of duration. There are essentially two ways of looking at this process. The
first way is to establish the duration of the task by estimating the time it takes to
complete the task with given resources. The second way is to estimate the type and
amount of resources needed and the effort in terms of resource hours that is necessary
to complete the task.
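To illustrate the second, effort-based approach, the toy calculation below (Python; all numbers are hypothetical and introduced only for illustration) converts an estimate of resource hours into a duration given the people assigned and their available hours per day.

```python
# Way 1 (hypothetical): direct duration estimate with the given resources.
duration_days_direct = 12

# Way 2 (hypothetical): effort-based estimate, resource hours divided by capacity.
effort_hours = 240                 # estimated total work content
people = 3                         # resources assigned to the task
hours_per_person_per_day = 5       # allowing for non-project time

duration_days_from_effort = effort_hours / (people * hours_per_person_per_day)
print(duration_days_direct, duration_days_from_effort)   # 12 vs 16 days
```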

Step 5. Calculate the Schedule and Budgets

The next step is to construct a network logic diagram or a performance evaluation
review technique (PERT) and a budget. The focus of the logic diagram or the PERT
is to develop appropriate scheduling deadlines and, more importantly, to define the
critical path. The focus of the budget is to estimate the costs of the project based
on all activities. The identification of the critical path will zero in on the bottleneck
areas as well as opportunities for improvement. Tasks not on the critical path may
have a float that can be calculated and may be used to facilitate the efficiency and
utilization of resources without affecting the project final date.

SL3151Ch13Frame Page 604 Thursday, September 12, 2002 6:01 PM

Project Management (PM)

605

PHASE 3. IMPLEMENT THE PLAN

Step 6. Start the Project

The kick-off of the project can really make an impact on project team members'
attendance, performance, and evaluation. Kick-off meetings should convey the following
ideas:
This is a new project.
Project management is going to be used to manage the project.
A plan exists, open to all, which is going to be followed.
The focus will be on the starts of activities (ends cannot happen without starts).
Realistic status is needed to allow timely decisions.
The focus will always be on forecasting and preventing problems.

PHASE 4. COMPLETE THE PROJECT

Step 7. Track Progress and Finish the Project

The essence of this step is to bring the project to closure. That means that the project
must be officially closed, and all deliverables must be handed over to the
stakeholders (customers). In addition, a review of the lessons learned must take
place, and a thank you for the project team is the appropriate etiquette. Key questions
of this step are:
Where are we?
Where should we be?
What do we have to do to get there?
Did it work?
Where are we now?
Can the process employees take over?
Can the process employees maintain the new system?
What have we learned from the successes in this project?
What have we learned from the failures in this project?
What would we have done differently? Why? Why not?

A GENERIC APPLICATION
OF PROJECT MANAGEMENT
IN IMPLEMENTING SIX SIGMA AND DFSS

Project management brings together and optimizes (the focus is always on allocation
of resources) rather than maximizes (concentrating on one thing at the expense of
something else; maximization leads to suboptimization) resources, including skills,
cooperative efforts of teams, facilities, tools, information, money, techniques, sys-
tems, and equipment.


Why should project management, as opposed to other management principles,
be used in the six sigma and DFSS implementation process? There are at least two
reasons. First, project management focuses on a project with a finite life span,
whereas other organizational units expect perpetuity. Second, projects need resources
on both part-time and full-time bases, while permanent structures require resource
utilization on a full-time basis. The sharing of resources may lead to conflict and
requires skillful negotiation to see that projects get the necessary resources to meet
objectives throughout the project life.
Since we already have defined the process of both six sigma and DFSS implementation
as a project, project management will ensure the success of the implementation
process by following the generic four phases of a project's life.
A typical approach is shown in Table 13.1. Table 13.2 and Table 13.3 show the
characteristics of the six sigma and DFSS implementation model and process using
project management.

TABLE 13.2
The Characteristics of the DFSS Implementation Model Using Project Management

Phase 1 (Management Commitment):
Establish a six sigma and DFSS implementation team of one person from each functional area
Train those selected in the six sigma and DFSS requirements
Capture company objectives
Define: mission, values, goals, strategy
Focus on continual improvement
Develop policies and procedures
Reconfirm quality management commitment

Phase 2 (Structure Setup):
Make the goal of six sigma and DFSS total improvement
Examine internal structure and compare it to the goals of six sigma and DFSS
Determine departmental objectives
Review structure of the organization
Review job descriptions
Review current processes
Review control mechanisms
Review training requirements
Review all communication methods
Review all approval processes
Review supplier relationship(s)
Review risk considerations and how they are addressed
Review all outputs
Review all action plans

Phase 3 (Implementation) and Phase 4 (Working with Employees):
Provide applicable and appropriate training
Prepare the organization for both internal and external audits
Provide and/or develop appropriate and applicable methodology for corrective action
Continue focus on improvement

THE VALUE OF PROJECT MANAGEMENT IN THE IMPLEMENTATION PROCESS

Project management is a tool that helps an organization to maximize its effort in
implementing a project. Since the process of implementing both six sigma and
DFSS (or any other quality initiative) is a project, the value of project management
can be appreciated in at least two areas:
1. Planning the process
2. Setting reliable, realistic, and obtainable goals

Planning the Process

Four steps define the planning process from a project management perspective. They are:

TABLE 13.3
The Process of Six Sigma/DFSS Implementation Using Project Management

Phase 1 (Management Commitment; Initiate Project):
Management planning and goal setting
Departmental commitment
Quality team selection and active participation
Training philosophy and tools of quality

Phase 2 (Structure Setup; Understand Process):
Process definition and selection
Identification of critical processes and characteristics
Team flow charting for process understanding and analysis
Cause and effect analysis
Critical in-process parameters identified
Standard operating procedures review, equipment repair, preventive maintenance, and calibration
Process input and measurement evaluation
Static process data collection
Process evaluation

Phase 3 (Implementation of Plan; Provide Six Sigma and DFSS Training):
Executive training
Departmental training
Identification of shortcomings in the system of quality (specific areas)
Definition of boundaries of responsibility
Definition of limitations of resources
Review of system for completeness
Worker/operator control in process

Phase 4 (Working with Employees and Suppliers; Monitor Progress):
Define quality system as it relates to current policies and practices (quality manual, procedures, instructions, and so on)
Internal audits conducted
Definition of key characteristics and monitoring of process variables
Application of statistical process control in all key processes
Initiation and follow up of corrective action

1. Identify and prioritize the customer base by contribution to current and
future organizational profits.
2. Identify and weight criteria key customers use in selecting organizations
and assess what changes to criteria or weighting are likely to occur in the
future.
3. Assess the organization's competitive advantages and disadvantages in
each area important to decision makers.
4. Establish long-term strategic objectives by identifying where the biggest
gap exists between what is important to key customers and the organization's
own strengths and weaknesses relative to competition.
To optimize the output of these four steps, the following questions may be raised:
Is there a true management commitment for the project?
Does the project address needs of the organization's top-priority customer groups?
Does the project address important needs of the customer?
Is the organization far ahead of competition in this area already?
Does this project truly offer the organization a good chance of making an
improvement large enough to change customer behavior?
Will the project require investment large enough to wipe out potential gain?
How does the project rank on the above criteria in relation to other possible projects?
Once the project is selected, is the team continuously assessing whether
or not the project is the best one to move the department and organization
toward their goals?

Goal Setting

There are three basic steps in goal setting from a project perspective. They are:
1. Translate corporate strategy into concrete organizational goals that are
attainable within a reasonable time.
2. Involve department managers in internal audit and benchmarking exer-
cises to identify problem areas related to the goals.
3. With department managers, set specific improvement goals for each
department and each team.

PM AND SIX SIGMA/DFSS

Harry (1997, p. 21.14) posed five questions in reference to projects. They are:
1. What do you want to know?
2. How do you want to see what it is that you need to know?


3. What type of tool will generate what it is that you need to see?
4. What type of data are required of the selected tool?
5. Where can you get the required type of data?
These questions, upon further probing, will deliver some very impressive results.
However, the concern remains: how would a Black Belt or even a Master Black
Belt go about getting the correct answers to these questions? We believe the answer
lies with strategic planning and adherence to the basic format of PM. That is, in
the language of PM, identify:
Work breakdown structure
Work packages
Time-scheduled network diagrams
Responsibility-assignment matrix
Risk analysis and quantification
Earned value analysis
Project integration management plans
Resource costing
Project change management
And, in the language of six sigma/DFSS:
Select key project/product.
Define performance variables.
Create the SIPOC model.
Measure current performance and capability.
Conduct benchmarking.
Identify and evaluate the gap.
Identify success factors and goals of the project.
Select the performance variables.
Evaluate new performance.
Confirm causal variables.
Establish operating limits and verify performance improvement.
Validate the control system.
Implement the control system.
Audit.
Monitor.
Ultimately, all projects in the six sigma/DFSS world are managed in the follow-
ing four categories:
1. Project justification and prioritization techniques
2. Project planning and estimation
3. Monitoring and measurement of project activity
4. Project documentation and related procedures


Project Justification and Prioritization Techniques

Justification and prioritization of projects are based upon the following methods:
Benefit-cost analysis:
Return on investment (ROI)
Internal rate of return (IRR)
Return on assets (ROA)
Payback period
Net present value (NPV)
Decision analysis and portfolio analysis as applied to project decisions

Benefit-Cost Analysis

Project benefit-cost analysis is a comparison to determine if a project will be (or
was) worthwhile. The analysis is performed prior to implementation of project plans
and is based on time-weighted estimates of costs and predicted value of benefits.
The benefit-cost analysis is used as a management tool to determine if approval
should be given for the project go-ahead. The actual data are analyzed from an
accounting perspective after the project is completed to quantify the financial impact
of the project. The sequence for performing a benefit-cost analysis is:
Identify the project benefits.
Express the benefits in dollar amounts, timing, and duration.
Identify the project cost factors, including materials, labor, and resources.
Estimate the cost factors in terms of dollar amounts and expenditure period.
Calculate the net project gain (loss).
Decide if the project should be implemented (prior to start) or if the project
was beneficial (after completion).
If the project is not beneficial using this analysis, but it is management's
desire to implement the project, what changes in benefits and costs are
possible to improve the benefit-cost calculation?

Return on Assets (ROA)

Johnson and Melicher (1982) give an equation for return on assets (ROA) as:

    ROA = Net Income / Total Assets

where net income for a project is the expected earnings and total assets is the value
of the assets applied to the project.

Return on Investment (ROI)

    ROI = Net Income / Investment

where net income for a project is the expected earnings and investment is the value
of the investment in the project.
There are several methods used for evaluating a project based on dollar or cash
amounts and time periods. Three common methods are the net present value (NPV),
the internal rate of return (IRR), and the payback period methods. Project risk or
likelihood of success can be incorporated into the various benefit-cost analyses as well.

Net Present Value (NPV) Method

Weston and Brigham (1974) and Johnson and Melicher (1982) give the following
equations:

    NPV = \sum_{t=0}^{n} \frac{CF_t}{(1 + r)^t}

where n = the number of periods; t = the time period; r = the per-period cost of
capital for the organization (also denoted as i if an annual interest rate is used); and
CF_t is the cash flow in time period t. Note that CF_0, the cash flow in period zero, is
also denoted as the initial investment.
The cash flow for a given period, CF_t, is calculated as:

    CF_t = CF_{B,t} - CF_{C,t}

where CF_{B,t} is the cash flow from project benefits in time period t and CF_{C,t} is the
project costs in the same time period. The standard convention for cash flow is
positive (+) for inflows and negative (-) for outflows.
The conversion from an annual percentage rate (APR) per year, equal to i, to a
rate r for a shorter time period, with m periods per year, is:

    r = (1 + i)^{1/m} - 1

If the project NPV is positive, for a given cost of capital, r, the project is normally
approved.

Internal Rate of Return (IRR) Method

The internal rate of return (IRR) is the interest or discount rate, i or r, that results
in a zero net present value, NPV = 0, for the project. This is equivalent to stating
that time-weighted inflows equal time-weighted outflows. The equation is

    NPV = \sum_{t=0}^{n} \frac{CF_t}{(1 + r)^t} = 0

The IRR is that value of r that results in NPV being equal to 0 and is calculated
by an iterative process. Once calculated for a project, the IRR is then compared with
that for other projects and investment opportunities for the organization. The projects
with the highest IRR are approved, until the available investment capital is allocated.
Most real projects would have an IRR in the range of 5 to 25% per year. Managers
given the opportunity to accept a project that has calculated values for IRR higher
than the company's return on investment (ROI) will normally approve, assuming
the capital is available.
The above equations for net present value and internal rate of return have ignored
the effects of taxes. Some organizations make investment decisions without considering
taxes, while others look at the after-tax results. The equations for NPV and
IRR can be used with taxes, if the cash flow effect of taxes is known.
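A minimal sketch of these two calculations follows. It is not from the text; the cash-flow stream, the 10% cost of capital, and the function names are hypothetical illustration values, and the IRR is found by simple bisection rather than any particular financial package.

```python
# Illustrative sketch (not from the text): NPV and IRR for a hypothetical
# project cash-flow stream. CF[0] is the initial investment (an outflow).

def npv(rate, cash_flows):
    """Net present value: sum of CF_t / (1 + r)^t for t = 0..n."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, low=-0.99, high=10.0, tol=1e-6):
    """Internal rate of return: the rate r at which NPV = 0, found by bisection."""
    for _ in range(200):
        mid = (low + high) / 2.0
        if npv(mid, cash_flows) > 0:
            low = mid          # NPV still positive: discount harder
        else:
            high = mid         # NPV negative: discount less
        if high - low < tol:
            break
    return (low + high) / 2.0

# Hypothetical cash flows: $250,000 invested now, $80,000 net benefit per year for 5 years
flows = [-250_000, 80_000, 80_000, 80_000, 80_000, 80_000]
print(f"NPV at 10% cost of capital: {npv(0.10, flows):,.0f}")
print(f"IRR: {irr(flows):.1%}")
```

For this hypothetical stream the NPV at a 10% cost of capital is positive and the IRR is roughly 18% per year, so under the rules above the project would normally be approved.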

Payback Period Method

The payback period is the length of time necessary for the net cash benefits or
inflows to equal the net costs or outflows. The payback method generally ignores
the time value of money, although the calculations can be done taking this into
account. The main advantage of the payback method is the simplicity of calculation.
It is also useful for comparing projects on the basis of quick return on investment.
A disadvantage is that cash benefits and costs beyond the payback period are not
included in the calculations.
Organizations using the payback period method will set a cut-off criterion, such
as 1, 1.5, or 2 years maximum for approval of projects. Uncertainty in the future status
and effects of projects, or rapidly changing markets and technology, tend to reduce
the maximum payback period accepted for project approval. If the calculated payback
period is less than the organization's maximum payback period, then the project
will be approved. (Quite often, in the six sigma/DFSS world, the payback is figured
on a preset project savings rather than time. The most common figure floating around
is a $250,000 per-project savings.)
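A short sketch of the simple (undiscounted) payback calculation is given below. It is not from the text; the initial cost and annual inflows are hypothetical, and the function interpolates within the year in which cumulative inflows first cover the outlay.

```python
# Illustrative sketch (not from the text): simple payback period for a
# hypothetical project, ignoring the time value of money.

def payback_period(initial_cost, annual_inflows):
    cumulative = 0.0
    for year, inflow in enumerate(annual_inflows, start=1):
        cumulative += inflow
        if cumulative >= initial_cost:
            shortfall_before = initial_cost - (cumulative - inflow)
            return (year - 1) + shortfall_before / inflow
    return None  # never paid back within the horizon given

print(payback_period(250_000, [80_000, 80_000, 80_000, 80_000, 80_000]))  # about 3.1 years
```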

Project Decision Analysis

In addition to the benefit-cost analysis for a project, the decision to proceed must
also include an evaluation of the risks associated with the project. To manage project
risks, first identify and assess all potential risk areas. Risk areas include:

Business risks: technology changes, competitors, material shortages, health and safety issues, environmental issues
Insurable risks: property damage, indirect consequential loss, legal liability, personnel

After the risk areas are identified, each is assigned a probability of occurrence
and a consequence of risk. The project risk factor is then the sum of the products
of the probability of occurrence and the consequence of risk:

    Project Risk Factor = \sum \{(probability of occurrence) \times (consequence of risk)\}

Risk factors for several projects can be compared if alternative projects are being
considered. Projects with lower risk factors are chosen in preference to projects with
higher risk factors. A more extensive description of risk management is found in
Kerzner (1995).
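The risk-factor sum is easy to tabulate. The sketch below is not from the text; the risk areas echo the list above, but the probabilities and consequence scores are hypothetical values chosen only to show the arithmetic.

```python
# Illustrative sketch (not from the text): project risk factor as the sum of
# (probability of occurrence) x (consequence of risk) over identified risk areas.
# The probabilities and consequence scores below are hypothetical.

risks = {
    "technology changes":          (0.30, 4),
    "competitors":                 (0.20, 3),
    "material shortages":          (0.10, 5),
    "health and safety issues":    (0.05, 8),
    "property damage (insurable)": (0.02, 6),
}

risk_factor = sum(p * consequence for p, consequence in risks.values())
print(f"Project risk factor: {risk_factor:.2f}")  # compare across candidate projects
```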

WHY PROJECT MANAGEMENT SUCCEEDS

The single most important characteristic of project management is the consistent
ability to get things done. It is a results- or goal-oriented approach, where other
considerations are secondary, so the single-minded concentration of resources greatly
enhances prospects for success. This also implies that the results, success or failure,
are quite visible. Integrative and executive functions of the project manager provide
another inherent advantage in the project management approach that improves the
likelihood for success because of the single point of responsibility for those func-
tions. Specific advantages of the single point integrative characteristic include:
Placing accountability on one person for the overall results of the project
Assurance that decisions are made on the basis of the overall good of the
project, rather than the good of one or another contributing functional
department
Coordination of all functional contributors to the project
Proper utilization of integrated planning and control methods and the
information they produce
Advantages of integrated planning and control of projects include:
Assurance that the activities of each functional area are being planned
and carried out to meet the overall needs of the project
Assurance that the effects of favoring one project over another are known
Early identification of problems that may jeopardize successful project
completion, to enable effective corrective action to prevent or resolve the
problem
Project management is a specialized management form. It is an effective man-
agement tool that is used because something is gained by departing from the normal
functional way of doing things in terms of people, organizations, and methods.
Conflict, confusion, and additional costs are often associated with significant changes
of this nature. Poorly conceived or poorly executed project management can be
worse than no project management at all. Project management should be used well
or not at all. Executives should not permit a haphazard, misunderstood use of project
management principles.
Although simple in its concepts, project management can be complex in its
application. Project management is not a cure-all intended for all projects. Before
project management can succeed, the application must be correct. Executives should
not use project management unless it appears to be the best solution. The use of
project management techniques seems most appropriate when:


1. A well-defined goal exists.
2. The goal is significant to the organization.
3. The undertaking is out of the ordinary.
4. Plans are subject to change and require a degree of flexibility.
5. The achievement of the goal requires the integration of two or more
functional elements or independent organizations.
Even where formal project management may not be feasible, its principles have contributed
to the success of thousands of small and medium-sized projects. Many managers
of such projects have never heard of project management but have used the principles.
A wider application of these principles will also help achieve success in smaller projects.
Executives play a key role in the successful application of project management.
A commitment from top management to ensure it is done right must be combined
with the decision to use this approach. Top management must realize that establishing
a project creates special problems for the people on the project, for the rest of the
organization, and for top managers themselves. If executives decide to use this
technique, they should expend the time, decision-making responsibility, and execu-
tive skills necessary to ensure that it is planned and executed properly. Before it can
be executed properly, sincere and constructive support must be obtained from all
functional managers. Directives or memos are not enough. It takes personal signals
from top management to members of the team and functional managers to convey
that the project will succeed and that team members will be rewarded by its success.
In addition, necessary and desirable changes in personnel policies and procedures
must be recognized and established at the onset of the project.
The human aspect of project management is both one of its greatest strengths
and one of its most serious drawbacks. In order for project management to succeed,
it requires capable staff. Only good people can make a project successful. In the
long run, this is true for any organization. Good people alone cannot guarantee
project success; a poorly conceived, badly planned, or inadequately resourced project
has little hope for success. Great emphasis is placed on the selection of good people.
The project leader, more than any other single variable, seems to make the difference
between success and failure. Large projects will require one person to be assigned
the full-time role of project manager. If a number of projects exist but not enough
project managers are available for full-time assignment to a project, assign several
projects to one full-time project manager. This approach has the advantage that the
individual is continually acting in the same role, that of a project manager, and is
not distracted or encumbered by functional responsibilities.
To conclude, project management is an effective management tool used by
business, industry, and government, but it must be used skillfully and carefully. In
review, the following major items are necessary for successful results from project
management in the field of quality:
Executives provide wholehearted support and commitment when the deci-
sion is made to use this approach.
Project management is the best solution or right application for imple-
menting any quality program.


Emphasis is placed on selecting the best people for staff, especially the
project leader.
Good principles of project planning and control are applied.
Effective use of project management will reduce costs and improve efficiency.
However, the main reason for the widespread growth of project management is its
ability to complete a job on schedule and in accordance with original plans and
budget.

REFERENCES

American Heritage Dictionary of the English Language, 3rd ed., Houghton Mifflin, Boston, 1992.
Duncan, W.R., A Guide to the Project Management Body of Knowledge, Project Management Institute, Upper Darby, PA, 1994.
Harry, M., The Vision of Six Sigma: A Roadmap for Breakthrough, 5th ed., Vol. II, Tri Star Publishing, Phoenix, AZ, 1997.
Johnson, R.B. and Melicher, R.W., Financial Management, 5th ed., Allyn and Bacon, Inc., Boston, 1982.
Kerzner, H., Project Management: A Systems Approach to Planning, Scheduling and Controlling, 5th ed., Van Nostrand Reinhold, New York, 1995.
Stamatis, D.H., Total Quality Service, St. Lucie Press, Delray Beach, FL, 1996.
Turner, J.R., The Handbook of Project-Based Management, McGraw-Hill, New York, 1992.
Weston, J.F. and Brigham, E.F., Essentials of Managerial Finance, 3rd ed., Dryden Press, Hinsdale, IL, 1974.

SELECTED BIBLIOGRAPHY

Frame, J.D., The New Project Management, Jossey-Bass, San Francisco, 1994.
Geddes, M., Hastings, C., and Briner, W., Project Leadership, Gower, Brookfield, VT, 1993.
Lock, D., Gower Handbook of Project Management, Gower, Brookfield, VT, 1994.
Michael, N. and Burton, C., Basic Project Management, Singapore Institute of Management, Singapore, 1993.
Stamatis, D.H., TQM Engineering Handbook, Marcel Dekker, New York, 1997.
Stamatis, D.H., Total Quality Management and Project Management, Project Management Journal, Sept. 1994, pp. 48-54.

14
Limited Mathematical Background for Design for Six Sigma (DFSS)

EXPONENTIAL DISTRIBUTION AND RELIABILITY

EXPONENTIAL DISTRIBUTION

(Figure: the exponential pdf f(t) = λ e^{-λt} decays from its initial value at t = 0 and falls to 37% of that value at t = 1/λ; the cdf F(t) = 1 - e^{-λt} rises from 0 and reaches 0.63 at t = 1/λ.)

Probability Density Function and Cumulative Distribution Function

Probability density function (decay time):

    f(t) = E(t; λ) = λ e^{-λt}, for t ≥ 0, λ > 0; 0 elsewhere

Cumulative distribution function (rise time):

    F(t) = \int_0^t λ e^{-λτ} dτ = 1 - e^{-λt}

Mean time:

    μ_t = 1/λ  (some use θ = 1/λ)

Variance:

    σ_t^2 = (1/λ)^2

One parameter: λ

Reliability Problems

The exponential distribution is used in reliability problems.
The exponential distribution can describe the probability of a failure prior to some
specified time t, assuming that failure occurs at a constant rate (λ = constant) over time.
Reliability, the chance of no failure in time t, is expressed as

    R(t) = e^{-λt}

Failure is the complement of the cumulative probability of reliability:

    F(t) = 1 - R(t) = 1 - e^{-λt}

F(t) is used to compute the probability of failure prior to t.
The derivative of the cumulative distribution function is the probability density function (pdf):

    f(t) = \frac{dF(t)}{dt} = λ e^{-λt}

Mean time to failure (MTTF):

    T_{MF} = \frac{1}{λ}

Failure rate:

    λ = \frac{1}{T_{MF}}

CONSTANT RATE FAILURE

Exponential function:

    A e^{-λt}

Evaluated at any time t, the time rate of decrease in amplitude is a constant fraction of the amplitude:

    \frac{d(A e^{-λt})}{dt} = -λ A e^{-λt}

If we consider equal time increments Δt, then the exponential has a consistent amplitude ratio between increments:

    \frac{A e^{-λ(t + Δt)}}{A e^{-λt}} = \frac{A e^{-λ(t + (n+1)Δt)}}{A e^{-λ(t + nΔt)}} = e^{-λΔt}

(Figure: a time waveform with a pass/fail threshold A_T; outcomes above the limit at a given time are classed as Fail (Bad) and those below as Pass (Good).)

Example

Data from 100 pumps demonstrated an average life of 5.75 years and that failures
followed an exponential distribution.
1. Determine the probability of failure during the first year.
2. Determine the probability of failure during the first 3 months.
3. Determine the probability of failure prior to the average life.
4. Determine the probability of reliably operating for at least 10 years.
5. Plot the reliability curve and compare it with the pdf curve.

Solutions:
Given: MTTF = T_{MF} = 1/λ = 5.75 years
Compute the failure rate: λ = 1/T_{MF} = 0.174 per year
Exponential pdf:

    f(t) = λ e^{-λt} = 0.174 e^{-0.174 t}

Failure cdf:

    F(t) = 1 - R(t) = 1 - e^{-λt}

1. Probability of failure during the first year:

    F(1) = 1 - e^{-0.174(1)} = 1 - 0.84 = 0.16, i.e., 16%

2. Probability of failure during the first 3 months (0.25 year):

    F(0.25) = 1 - e^{-0.174(0.25)} = 1 - 0.957 = 0.043, i.e., 4.3%

3. Probability of failure prior to the average life, MTTF = T_{MF} = 5.75 years:

    F(5.75) = 1 - e^{-0.174(5.75)} = 1 - e^{-1.0} = 1 - 0.368 = 0.632, i.e., 63.2%

4. Probability of reliably operating for at least 10 years:

    R(10) = e^{-0.174(10)} = 0.176, i.e., 17.6%

5. Plot of the reliability curve compared with the pdf curve:
(Figure: R(t) = e^{-λt} and f(t) = λ e^{-λt} plotted versus t in years for λ = 0.174; both decay exponentially from their values at t = 0.)
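A minimal sketch reproducing these four probabilities follows; it assumes only the exponential model and MTTF stated in the example.

```python
# Sketch reproducing the pump example above: exponential failure model with
# MTTF = 5.75 years, so lambda = 1/5.75 = 0.174 per year.
import math

mttf = 5.75
lam = 1.0 / mttf                      # failure rate, per year

def F(t):                             # probability of failure by time t
    return 1.0 - math.exp(-lam * t)

def R(t):                             # reliability (survival) to time t
    return math.exp(-lam * t)

print(f"Failure within 1 year:   {F(1.0):.3f}")    # about 0.16
print(f"Failure within 3 months: {F(0.25):.3f}")   # about 0.043
print(f"Failure before MTTF:     {F(mttf):.3f}")   # about 0.632
print(f"Survive 10 years:        {R(10.0):.3f}")   # about 0.176
```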


PROBABILITY OF RELIABILITY

The exponential distribution as the basis of the reliability function is based on the
probability of samples of an event that describes a general physical situation, i.e.,
time to a (bad) occurrence.

CONTROL CHARTS

Continuous Time Waveform
(Figure: a continuous amplitude waveform A versus time T with a pass/fail threshold A_T; amplitudes above the limit are Fail (Bad) and those below are Pass (Good).)

Discrete Time Samples
(Figure: the same waveform sampled at discrete times t_1, t_2, ..., t_k = kΔt, ..., t_n, with each sample classed as Pass or Fail against the threshold A_T.)

Digital Signal Processing

Uniform sampling time or time increment: t_s = Δt
The time increment Δt is small, so as to include only one sample.
Total sampling time: t_n = n t_s = n Δt
Total number of samples taken: n = t_n / t_s
Current time sample: t_k = k t_s = k Δt
Designate the RV event as the occurrence of a bad sample, X.
We seek the probability of a bad sample occurring only in the n-th interval.

SAMPLE SPACE

Sample space: n = g + b
(Figure: from a population N = G + B, a sample of size n = g + b is drawn; g denotes the good samples and b the bad samples.)

A sample is Bad when the outcome exceeds a limit A_T.
A sample is Good when the outcome is less than the limit A_T.
The random variable for this experiment is X. Good and Bad are the only two possible states for X. Assign:
X = 0 for Good
X = 1 for Bad
Sample space: n = g + b
One bad sample {b} is assumed to occur exactly on the n-th sample:

    {b} = {X_n = 1}

This single bad sample is preceded by a sequence of (n - 1) good samples {g}:

    {g} = {X_{n-1} = 0, X_{n-2} = 0, ..., X_1 = 0}

(Figure: the sampled waveform with samples 1, 2, ..., k, ..., n; samples 1 through n - 1 form the set of good samples {g} and the n-th sample is the bad sample {b}.)
If each sample is independent of the preceding sample, then

    P(\{b\} \cap \{g\}) = P(\{b\} \mid \{g\}) P(\{g\}) = P(\{b\}) P(\{g\})

ASSIGNING PROBABILITY TO SETS

Assume only one sample can be measured in any interval Δt.
The probability that one bad sample occurs in the interval (t < T ≤ t + Δt] is assumed to be constant:

    P(\{b\}) = P(X = 1; t < T ≤ t + Δt) = λΔt = p

Conversely, the probability of one good sample in this interval is

    P(X = 0; t < T ≤ t + Δt) = 1 - P(X = 1; t < T ≤ t + Δt) = 1 - λΔt = q

The probability of (n - 1) good samples in the range [0 ≤ T ≤ t] is the reliability:

    P(\{g\}) = P(X = 0; 0 ≤ T ≤ t) = R(t)

Probability of the total set:

    P(\{g\} \cap \{b\}) = P(X = 0; 0 ≤ T ≤ t) P(X = 1; t < T ≤ t + Δt)

Note: There are two types of probabilities or variables, one when X = 0 for the set {g} and one when X = 1 for the set {b}. To establish an equation, we need to deal with only one variable.
Assume the sample in the increment (t < T ≤ t + Δt] is also good; then we can write directly:

    P(X = 0; T ≤ t + Δt) = P(X = 0; T ≤ t) P(X = 0; t < T ≤ t + Δt) = P(X = 0; T ≤ t)[1 - λΔt]

so that

    P(X = 0; T ≤ t + Δt) - P(X = 0; T ≤ t) = -λΔt \, P(X = 0; T ≤ t)

Differential equation form (take the limit as Δt → dt):

    P(X = 0; T ≤ t + dt) - P(X = 0; T ≤ t) = -λ \, dt \, P(X = 0; T ≤ t)

Dividing by dt puts the left-hand side into the form of a derivative:

    \frac{P(X = 0; T ≤ t + dt) - P(X = 0; T ≤ t)}{dt} = -λ \, P(X = 0; T ≤ t)

This is a first-order (homogeneous) differential equation, which can be conveniently expressed in terms of reliability:

    \frac{dR(t)}{dt} = -λ R(t)

Solution:

    R(t) = C_1 e^{-λt}

Initial condition at t = 0:

    R(0) = 1 = C_1, so C_1 = 1

Hence, the reliability is:

    R(t) = e^{-λt}
GAMMA DISTRIBUTION

The gamma distribution gives the probability that the n-th event (e.g., failure) will occur exactly at the (end) time
t, when the events are assumed to occur at a constant rate λ.
The idea of a constant event rate λ is the same assumption used for both the
exponential and Poisson distributions.
The variable time, t, is said to have a gamma distribution.

GAMMA DISTRIBUTION (PDF)

Two parameters:
Shape parameter: n (changes shape, not scale)
Scale (rate) parameter: λ (changes scale, not shape)

Probability density function:

    f(t) = G(t; n, λ) = \frac{λ (λt)^{n-1} e^{-λt}}{Γ(n)}, for t > 0; 0 elsewhere

Gamma function:

    Γ(n) = \int_0^{∞} x^{n-1} e^{-x} dx

Mean:

    μ = \frac{n}{λ}

Variance:

    σ^2 = \frac{n}{λ^2}

If n > 1, the pdf has a unimodal shape with mode at:

    m = \frac{n - 1}{λ}

If n ≤ 1, the pdf is non-modal (monotonically decreasing), with mode at m = 0.

GAMMA FUNCTION

Properties of gamma functions:

    Γ(n) = \int_0^{∞} x^{n-1} e^{-x} dx

If n is a positive integer:

    Γ(n) = (n - 1)!

In general:

    Γ(n) = (n - 1) Γ(n - 1), with Γ(1) = 0! = 1 and Γ(1/2) = \sqrt{π}

Degrees of freedom (e.g., see the chi-square distribution): let n = ν = degrees of freedom (always a positive integer).
If n is an even integer:

    Γ\left(\frac{n}{2}\right) = \left(\frac{n}{2} - 1\right)!

If n is an odd integer:

    Γ\left(\frac{n}{2}\right) = \left(\frac{n}{2} - 1\right)\left(\frac{n}{2} - 2\right) \cdots \left(\frac{5}{2}\right)\left(\frac{3}{2}\right)\left(\frac{1}{2}\right)\sqrt{π}

Gamma Distribution and Reliability

The gamma distribution is used in reliability where a number of partial failures n
must occur before a system or item completely fails.
The time to the n-th failure is estimated assuming that the times to individual
(partial) failures are exponentially distributed.
The two parameters have the following interpretation:
n is the number of partial failures per complete failure
λ is the failure rate
Limiting case: When total system failure occurs at the time of the first partial
failure, n = 1 and the gamma distribution reduces to the exponential distribution.

EXAMPLE 1: TIME TO TOTAL SYSTEM FAILURE

To ensure reliability, an important computer system is controlled by a set of four
switches. Each switch has a constant failure rate of two per year. The computer
system is said to totally fail when there has occurred a total of three switch failures.
(Figure: timeline showing two partial failures followed by the final (n-th) failure at the time to total failure t.)
State the parameters and plot the pdf with its mean and mode for the time to
total system failure.

Solution: The two parameters are λ and n.
Failure rate: λ = 2 failures/year
Number of partial failures to total system failure: n = 3
Mean time to system failure:

    T_{MT} = \frac{n}{λ} = \frac{3 \text{ failures}}{2 \text{ failures per year}} = 1.5 \text{ years}

Mode:

    m = \frac{n - 1}{λ} = \frac{3 - 1}{2} = 1.0 \text{ year}

1. Probability density function
For the case of n = 3 [failures] and λ = 2 [failures/year]:
Gamma function: Γ(3) = (3 - 1)! = 2! = 2

    f(t) = \frac{λ (λt)^{n-1} e^{-λt}}{Γ(n)} = \frac{2 (2t)^2 e^{-2t}}{2} = 4 t^2 e^{-2t}

Time (years)    pdf
t = 0.0         f(0.0) = 0.0000
t = 0.5         f(0.5) = 0.3679
t = 1.0         f(1.0) = 0.5416
t = 1.5         f(1.5) = 0.4481
t = 2.0         f(2.0) = 0.2931
t = 2.5         f(2.5) = 0.1685
t = 3.0         f(3.0) = 0.0892
t = 4.0         f(4.0) = 0.0215

(Figure: plot of f(t) versus t for n = 3, λ = 2; the curve rises from zero to a peak near t = 1 year and decays thereafter.)

2. The case when the failure rate is reduced to 1 per year and total system
failure occurs after only two switches fail.
Failure rate: λ = 1 failure/year
Number of partial failures to total system failure: n = 2
Mean time to system failure:

    T_{MT} = \frac{n}{λ} = \frac{2 \text{ failures}}{1 \text{ failure per year}} = 2.0 \text{ years}

Mode:

    m = \frac{n - 1}{λ} = \frac{2 - 1}{1} = 1.0 \text{ year}

Gamma function: Γ(2) = (2 - 1)! = 1! = 1

    f(t) = \frac{λ (λt)^{n-1} e^{-λt}}{Γ(n)} = \frac{1 (1 \cdot t)^1 e^{-t}}{1} = t e^{-t}

Time (years)    pdf
t = 0.0         f(0.0) = 0.0000
t = 0.5         f(0.5) = 0.3033
t = 1.0         f(1.0) = 0.3679
t = 1.5         f(1.5) = 0.3347
t = 2.0         f(2.0) = 0.2707
t = 2.5         f(2.5) = 0.2052
t = 3.0         f(3.0) = 0.1494
t = 4.0         f(4.0) = 0.0733

(Figure: plot of f(t) versus t for n = 2, λ = 1; the curve peaks at t = 1 year.)

3. For the special case of n = 1, the gamma distribution reduces to the exponential distribution:

    f(t) = E(t; λ) = λ e^{-λt}, for t > 0; 0 elsewhere

Consider the case of n = 1 [failure] and λ = 2 [failures/year]:
Gamma function: Γ(1) = (1 - 1)! = 0! = 1 [unitless]

    f(t) = λ e^{-λt} = 2 e^{-2t}

Time (years)    pdf
t = 0.2         f(0.2) = 1.3406
t = 0.5         f(0.5) = 0.7358
t = 0.6         f(0.6) = 0.6024
t = 1.0         f(1.0) = 0.2707
t = 1.5         f(1.5) = 0.0996
t = 2.0         f(2.0) = 0.0366
t = 2.5         f(2.5) = 0.0135
t = 3.0         f(3.0) = 0.0050
t = 4.0         f(4.0) = 0.0007

(Figure: plot of f(t) versus t for n = 1, λ = 2; the curve starts at f(0) = 2 and decays exponentially.)
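The tabulated pdf values above are easy to regenerate. The sketch below is a minimal check, assuming only the gamma pdf with integer shape parameter as defined earlier (so that Γ(n) = (n - 1)!).

```python
# Sketch checking the switch-failure example: gamma pdf for the time to the
# n-th failure at constant rate lambda, f(t) = lam*(lam*t)**(n-1)*exp(-lam*t)/(n-1)!
import math

def gamma_pdf(t, n, lam):
    return lam * (lam * t) ** (n - 1) * math.exp(-lam * t) / math.factorial(n - 1)

for n, lam in [(3, 2.0), (2, 1.0), (1, 2.0)]:          # the three cases above
    mean, mode = n / lam, max(n - 1, 0) / lam
    print(f"n={n}, lambda={lam}: mean={mean:.2f} yr, mode={mode:.2f} yr, "
          f"f(1.0)={gamma_pdf(1.0, n, lam):.4f}")
```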
Reliability Relationships

Sample data approach:
n identical systems are placed in operation at t = 0
g(t) = good; the number operating (surviving) to time t
b(t) = bad; the number failed by time t
where b(t) = n - g(t)
Random variable approach:
T is a continuous random variable representing the time to failure (failure time) of a system.

Reliability Function

Sample space: n = g(t) + b(t)
Set of bad samples: {b(t) = x}
Set of good samples: {g(t) = n - x}
(Figure: the sampled waveform against the pass/fail threshold A_T; of the n samples taken up to time t, those exceeding the limit are counted in b(t) and the remainder in g(t).)
Reliability is the probability that a system can operate successfully over the time
interval from 0 to t. Reliability can also be viewed as the probability that the system
will survive beyond a given point in time, t:

    R(t) = P(T > t) = \frac{g(t)}{n} = \frac{n - b(t)}{n} = 1 - \frac{b(t)}{n}

As time increases, the chance of failure increases and reliability decreases; as t → ∞
(or n → ∞), R(t) → 0.

DATA FAILURE DISTRIBUTION

Probability of system failure as a function of time:

    Q(t) = P(T ≤ t) = 1 - R(t) = \frac{b(t)}{n}

FAILURE RATE OR DENSITY FUNCTION

The failure rate is the ratio of the number of failures occurring in the time interval Δt to the
size of the original population, divided by the time interval. It is a measure of the overall speed at which failures are occurring:

    f(t) = \lim_{Δt → 0} \frac{b(t + Δt) - b(t)}{n \, Δt} = \lim_{Δt → 0} \frac{[n - g(t + Δt)] - [n - g(t)]}{n \, Δt} = -\frac{1}{n} \frac{d g(t)}{dt}

Alternatively, the failure rate can be expressed as the probability that a failure occurs
in any given time interval Δt by taking the time derivative (rate) of the failure
distribution Q(t):

    f(t) = \frac{d Q(t)}{dt} = \lim_{Δt → 0} \frac{Q(t + Δt) - Q(t)}{Δt} = \lim_{Δt → 0} \frac{P(t < T ≤ t + Δt)}{Δt}
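The sample-data definitions above lend themselves to a quick numerical estimate. The sketch below is illustrative only; the ten failure times are hypothetical values, and the rates are simple finite-difference estimates over one-year intervals.

```python
# Illustrative sketch (hypothetical data): estimating R(t), Q(t), f(t), and h(t)
# from the failure times of n identical systems, using the definitions above.

failure_times = [0.7, 1.3, 1.8, 2.4, 2.9, 3.6, 4.8, 5.5, 7.1, 9.0]  # years (made up)
n = len(failure_times)
dt = 1.0                                              # time interval, years

def b(t):                                             # number failed by time t
    return sum(1 for ft in failure_times if ft <= t)

def g(t):                                             # number surviving to time t
    return n - b(t)

for t in range(0, 9):
    R = g(t) / n                                      # reliability
    Q = b(t) / n                                      # failure distribution
    f = (b(t + dt) - b(t)) / (n * dt)                 # failure rate (density)
    h = (b(t + dt) - b(t)) / (g(t) * dt) if g(t) else float("inf")  # hazard rate
    print(f"t={t}: R={R:.2f}  Q={Q:.2f}  f={f:.2f}  h={h:.2f}")
```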
HAZARD RATE FUNCTION

The hazard rate is the ratio of the number of failures occurring in the time interval Δt to the
number of survivors at the beginning of the time interval t, divided by the time interval:

    h(t) = \lim_{Δt → 0} \frac{b(t + Δt) - b(t)}{g(t) \, Δt}
         = \lim_{Δt → 0} \frac{[n - g(t + Δt)] - [n - g(t)]}{g(t) \, Δt}
         = -\frac{1}{g(t)} \frac{d g(t)}{dt}
         = \frac{n f(t)}{g(t)} = \frac{f(t)}{g(t)/n} = \frac{f(t)}{R(t)}

The hazard rate can also be written as the conditional probability of failure in the interval (t, t + Δt]
given that the system has survived up to time t:

    h(t) = \lim_{Δt → 0} \frac{P(t < T ≤ t + Δt \mid T > t)}{Δt}
         = \lim_{Δt → 0} \frac{P(t < T ≤ t + Δt)}{P(T > t) \, Δt}
         = \lim_{Δt → 0} \frac{Q(t + Δt) - Q(t)}{R(t) \, Δt}
         = \frac{1}{R(t)} \frac{d Q(t)}{dt} = \frac{f(t)}{R(t)}

RELATIONS BETWEEN RELIABILITY AND HAZARD FUNCTIONS

Recall the last derivation relating the hazard to the reliability function:
    h(t) = \frac{f(t)}{R(t)} = \frac{1}{R(t)} \frac{d Q(t)}{dt} = -\frac{1}{R(t)} \frac{d R(t)}{dt} = -\frac{d}{dt} \ln R(t)

This is a separable first-order differential equation whose solution is:

    \ln R(t) = -\int_0^t h(τ) \, dτ

Raising both sides to the power of the exponential e gives the reliability function:

    R(t) = e^{-\int_0^t h(τ) \, dτ}

and therefore

    f(t) = h(t) R(t) = h(t) \, e^{-\int_0^t h(τ) \, dτ}

Failure cumulative function:

    Q(t) = \int_0^t f(τ) \, dτ = 1 - R(t) = 1 - e^{-\int_0^t h(τ) \, dτ}

POISSON PROCESS

The Poisson process gives the probability of exactly x failures occurring, in any order, within a given time
interval (or spatial region):
1. Time interval: [0 ≤ T ≤ t]
2. Spatial region (e.g., typos on a page): [a ≤ y ≤ b]
Sample space: n = g + b
Set of bad samples: {b = x}
Set of good samples: {g = n - x}
(Figure: a time axis divided into n increments of width Δt, with x of the increments containing a failure.)
Characteristics of Poisson Process

1. The number of outcomes in a given time interval, nΔt, is independent of the
number that occurs in any other interval (no memory).
2. At most one single outcome can occur within a small time increment
of duration Δt (or spatial increment Δy).
3. The probability of an outcome is therefore proportional to the duration of the
sample time increment Δt and is given by

    p = λΔt

Poisson Distribution

The probability distribution of a Poisson random variable, X, representing the number
of outcomes occurring in the time interval t, is given by

    P(x; λt) = \frac{(λt)^x e^{-λt}}{x!}, x = 0, 1, 2, ...

where λ = the average number of outcomes per unit time (the reciprocal of the mean time between
events or failures) and λt = λ nΔt = np = the mean number of outcomes in time t.
There are two possible (mutually exclusive) situations for having x failures in
this interval, depending upon whether or not a failure occurs in the last Δt increment.
1. One failure in the last increment; then there must be
   (x - 1) failures in the interval [0 ≤ T ≤ t]
   1 failure in the increment (t < T ≤ t + Δt]
2. No failure in the last increment; then there must be
   x failures in the interval [0 ≤ T ≤ t]
   0 failures in the increment (t < T ≤ t + Δt]

The probability of configuration (1), where one failure occurs in the last increment, is given by:

    P(X = x; 0 ≤ T ≤ t + Δt) = P(X = x - 1; 0 ≤ T ≤ t) \, P(X = 1; t < T ≤ t + Δt)
                             = P(X = x - 1; 0 ≤ T ≤ t) \, [λ \, dt]

where we again assume that the probability of finding one bad sample in the increment (t < T ≤ t + Δt]
is a constant, p = λΔt. Likewise, the average number of bad samples in an interval [0 ≤ T ≤ t],
which is composed of n Δt increments, is a = np = nλΔt = λt.

The probability of configuration (2), where no failure occurs in the last increment, is given by:

    P(X = x; 0 ≤ T ≤ t + Δt) = P(X = x; 0 ≤ T ≤ t) \, P(X = 0; t < T ≤ t + Δt)
                             = P(X = x; 0 ≤ T ≤ t) \, [1 - λ \, dt]

where we again assume that the probability of finding no bad sample in the increment is a constant,
q = 1 - p = 1 - λΔt. As before, the average number of bad samples in the interval [0 ≤ T ≤ t] is a = np = nλΔt = λt.

The combined probability of these two mutually exclusive configurations of x failures is given by the sum
of the probabilities associated with each configuration:

    P(X = x; 0 ≤ T ≤ t + Δt) = P(X = x - 1; 0 ≤ T ≤ t)[λ \, dt] + P(X = x; 0 ≤ T ≤ t)[1 - λ \, dt]

Regrouping terms of like kind and dividing by dt:

    \frac{P(X = x; 0 ≤ T ≤ t + Δt) - P(X = x; 0 ≤ T ≤ t)}{dt} + λ P(X = x; 0 ≤ T ≤ t) = λ P(X = x - 1; 0 ≤ T ≤ t)

Defining a reliability of having exactly x failures in the given time interval [0 ≤ T ≤ t] as the probability

    R(x; t) = P(X = x; 0 ≤ T ≤ t)

we can write the non-homogeneous differential equation:

    \frac{d R(x; t)}{dt} + λ R(x; t) = λ R(x - 1; t)

where the non-homogeneous term depends upon the values of R obtained for all
previous values of x. That is, for x = 2, we need to successively find the solutions
for R(1; t) and R(0; t).
The initial condition at t = 0, satisfying the certain and impossible outcomes, respectively, is

    R(x; 0) = 1 if x = 0; R(x; 0) = 0 if x > 0

This yields a Poisson distribution probability density function (with a = λt):

    R(x; t) = P(X = x; 0 ≤ T ≤ t) = \frac{(λt)^x e^{-λt}}{x!}, x = 0, 1, 2, ...

Example

An average of four (4) private aircraft land per hour at a small local airport,
λ = 4 L/h. What is the probability that six (6) aircraft will land in one hour?
The Poisson distribution is

    P(x; a) = \frac{a^x e^{-a}}{x!}, x = 0, 1, 2, ...

One parameter: a (= np = λt)
Mean: μ = a; Variance: σ² = a
Solution:
Number of aircraft landing in one hour: x = 6 L
Average number of landings in one hour: a = 4 L
That is, a = λt = 4 L/h × 1 h = 4 L.
The probability of six aircraft in one hour is then given by

    P(6; 4) = \frac{4^6 e^{-4}}{6!} = 0.1042

There is a 10.4% chance that six aircraft will land in one hour.
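A one-line computation reproduces this result; the sketch assumes only the Poisson formula stated above.

```python
# Sketch reproducing the airport example: Poisson probability of x outcomes
# when the mean count in the interval is a = lambda * t.
import math

def poisson(x, a):
    return a ** x * math.exp(-a) / math.factorial(x)

a = 4.0                          # average of 4 landings per hour, over 1 hour
print(f"P(6 landings in 1 h) = {poisson(6, a):.4f}")   # about 0.1042
```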
WEIBULL DISTRIBUTION

One of the more popular models for time-to-failure (TTF), the Weibull distribution takes
many shapes and is typically identified as in the following illustration.
(Figure: Weibull pdf f(t) versus t for λ = 1 and shape parameters a = 0.5, 1, 2, and 4.)

Weibull probability density function (pdf):

    f(t) = W(t; a, λ) = a λ (λt)^{a-1} e^{-(λt)^a}, for t ≥ 0; 0 for t < 0

Cumulative distribution:

    F(t) = \int_0^t a λ (λτ)^{a-1} e^{-(λτ)^a} dτ = 1 - e^{-(λt)^a}

Two parameters:
Shape parameter: a (changes shape, not scale)
Scale parameter: λ (changes scale, not shape)
Some authors define θ = 1/λ and a = β.
In a typical Weibull distribution there are some general characteristics:

Mean:

    μ = \frac{1}{λ} Γ\left(1 + \frac{1}{a}\right)

Variance:

    σ^2 = \frac{1}{λ^2} \left[ Γ\left(1 + \frac{2}{a}\right) - Γ^2\left(1 + \frac{1}{a}\right) \right]

θ = 1/λ is also referred to as the characteristic life or time constant, the life or
time at which 63.2% of the population has failed.
If a = 1, the Weibull reduces to the exponential distribution.
If a = 2, the Weibull reduces to the Rayleigh distribution.
If a ≈ 3.5, the Weibull approximates the normal distribution.
For a < 1, the reliability function decays less rapidly.
For a > 1, the reliability function decays more rapidly.
The Weibull is a useful model for the failure time (or length of life) distributions of
products and processes.
It does not assume that the failure rate, λ, is a constant, as do the exponential
and gamma distributions.
It has the advantage that the distribution parameters can be adjusted to fit
many situations; because of this adaptability it is widely used in reliability
engineering.
The cumulative distribution has a closed-form expression that can be used
to compute areas under the Weibull curve.
Estimates of the two parameters, λ and a, can be obtained when ranked
sample data are plotted on a scale-adjusted cumulative percentile plot (see
Probability Plots).
(Figure: Weibull probability plot of occurrence (CDF) versus mileage on logarithmic scales, with fitted eta, beta, and r-squared values and the sample count n/s; the characteristic life t = 1/λ corresponds to the 63.2% point.)

Weibull reliability or survival function:

    R(t) = P(T > t) = \int_t^{∞} f(τ) dτ = \int_t^{∞} a λ (λτ)^{a-1} e^{-(λτ)^a} dτ
         = \int_{(λt)^a}^{∞} e^{-u} du  (letting u = (λτ)^a)
         = e^{-(λt)^a}

Weibull failure distribution (same as the cumulative distribution):

    Q(t) = P(T ≤ t) = \int_0^t f(τ) dτ = 1 - R(t) = 1 - e^{-(λt)^a}

Weibull hazard rate function:

    h(t) = \frac{f(t)}{R(t)} = \frac{a λ (λt)^{a-1} e^{-(λt)^a}}{e^{-(λt)^a}} = a λ (λt)^{a-1}

The shape parameter a can be used to adjust the shape of the Weibull distribution
to allow it to model a great many life (time) related distributions found in engineering.

THREE-PARAMETER WEIBULL DISTRIBUTION

If failures do not have the possibility of starting at t = 0, but only after a finite time
t_O, a time-shift variable can be used to redefine the Weibull reliability function:

    R(t) = e^{-[λ(t - t_O)]^a}

where the time t_O is called the failure-free time or minimum life.
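A minimal sketch of the two-parameter Weibull relations defined above follows. The shape and scale values are hypothetical, chosen only to show that at the characteristic life 1/λ the reliability equals e^{-1}, so 63.2% of the population has failed.

```python
# Illustrative sketch of the two-parameter Weibull model defined above:
# R(t) = exp(-(lam*t)**a). The parameter values below are hypothetical.
import math

def weibull_pdf(t, a, lam):
    return a * lam * (lam * t) ** (a - 1) * math.exp(-(lam * t) ** a)

def weibull_reliability(t, a, lam):
    return math.exp(-(lam * t) ** a)

def weibull_hazard(t, a, lam):
    return a * lam * (lam * t) ** (a - 1)

a, lam = 2.0, 0.5                 # hypothetical shape and scale (rate) parameters
t = 1.0 / lam                     # characteristic life
print(f"R(characteristic life) = {weibull_reliability(t, a, lam):.3f}")  # 0.368 -> 63.2% failed
print(f"hazard at t = 1: {weibull_hazard(1.0, a, lam):.3f}")
```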
TAYLOR SERIES EXPANSION

The Taylor series determines the value of a function f(x) at any x from the value of the function
and all of its derivatives at a given location x_0 (provided no discontinuities occur).
The Taylor series expansion evolves into a power series.

1. Series about x_0:

    f(x) = \sum_{n=0}^{∞} \frac{(x - x_0)^n}{n!} f^{(n)}(x_0)
         = f(x_0) + (x - x_0) f^{(1)}(x_0) + \frac{(x - x_0)^2}{2!} f^{(2)}(x_0) + \cdots + \frac{(x - x_0)^n}{n!} f^{(n)}(x_0) + \cdots

In words: f(x) ≈ f(x_0) + slope at x_0 times (x - x_0) + curvature at x_0 times (x - x_0)^2/2 + ...
(Figure: a curve f(x) approximated near x_0 by the height f(x_0), the tangent-line term f^{(1)}(x_0)[x - x_0], and the curvature term f^{(2)}(x_0)[x - x_0]^2/2.)

2. Series about the origin, x_0 = 0:

    f(x) = \sum_{n=0}^{∞} \frac{x^n}{n!} f^{(n)}(0)
         = f(0) + x f^{(1)}(0) + \frac{x^2}{2!} f^{(2)}(0) + \cdots + \frac{x^n}{n!} f^{(n)}(0) + \cdots
         = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n + \cdots

Observations:
1. An arbitrary function f(x) can be expressed as a power series.
2. The coefficients of the power series are related to the derivatives of the function
evaluated at the origin:

    a_n = \frac{f^{(n)}(0)}{n!}

3. A linear function consists of only the first two terms:

    f(x) = a_0 + a_1 x

TAYLOR SERIES EXPANSION

To establish a linear relationship about an ambient state:
Stress to strain constitutive relation in elasticity
Pressure to density equation of state
Voltage to current about a quiescent point
Input to output
Linear implies: the input disturbance (x - x_0) is small enough that the output responds linearly:

    f(x) ≈ f(x_0) + \frac{df(x)}{dx}\Big|_{x_0} (x - x_0) = f(x_0) + m (x - x_0)
Output is a linear function of input:

    f(x) = m x + c

Recombining: changes in the independent variable (input), (x - x_0), produce linear changes
in the dependent variable (output):

    [f(x) - f(x_0)] = m (x - x_0)

The slope m serves to adjust units and is called the sensitivity.
(Figure: block diagram of a linear system mapping input x to output f(x), and a response curve showing the linear approximation about x_0.)

Exponential function e^{ax}: Taylor series about x = x_0 in the interval -∞ < x < ∞:

    e^{ax} = e^{ax_0} + a e^{ax_0}(x - x_0) + \frac{a^2 e^{ax_0}}{2!}(x - x_0)^2 + \cdots + \frac{a^n e^{ax_0}}{n!}(x - x_0)^n + \cdots

Factoring out the common exponential term:

    e^{ax} = e^{ax_0}\left[1 + a(x - x_0) + \frac{a^2}{2!}(x - x_0)^2 + \cdots + \frac{a^n}{n!}(x - x_0)^n + \cdots\right]

MacLaurin series about x_0 = 0 in the interval -∞ < x < ∞:

    e^{ax} = 1 + ax + \frac{(ax)^2}{2!} + \frac{(ax)^3}{3!} + \cdots = \sum_{n=0}^{∞} \frac{(ax)^n}{n!}
Normal density-like function e^{-bx^2}: Taylor series about x = 0:

    e^{-bx^2} = 1 - b x^2 + \frac{b^2 x^4}{2} - \frac{b^3 x^6}{6} + \cdots

Standard Normal Distribution:

    N(z; 0, 1) = \frac{1}{\sqrt{2π}} e^{-z^2/2}

so that, term by term,

    \frac{1}{\sqrt{2π}} e^{-z^2/2} ≈ \frac{1}{\sqrt{2π}} \left[1 - \frac{z^2}{2} + \frac{z^4}{8} - \frac{z^6}{48} + \cdots \right]

                    Exact     Two Terms   Three Terms   Four Terms
z = 0.5             0.3521    0.3490      0.3522        0.3519
z = 0.675 (Q1,3)    0.3177    0.3080      0.3184        0.3178
z = 1.0             0.2420    0.1995      0.2494        0.2427

(Figure: standard normal density N(z; 0, 1) with μ = 0 and σ = 1; the areas within ±1, ±2, and ±3 standard deviations are 68.26%, 95.46%, and 99.74%, respectively.)

Derivatives of the exponential e^{-bx^2} about the origin x = 0:
Zero:

    e^{-bx^2}\Big|_{x=0} = 1

First:

    \frac{d}{dx} e^{-bx^2} = \frac{de^u}{du}\frac{du}{dx} = -2bx \, e^{-bx^2}, \quad \frac{d}{dx} e^{-bx^2}\Big|_{x=0} = 0

Second:

    \frac{d^2}{dx^2} e^{-bx^2} = (-2b + 4b^2 x^2) e^{-bx^2}, \quad \frac{d^2}{dx^2} e^{-bx^2}\Big|_{x=0} = -2b

Third:

    \frac{d^3}{dx^3} e^{-bx^2} = (12b^2 x - 8b^3 x^3) e^{-bx^2}, \quad \frac{d^3}{dx^3} e^{-bx^2}\Big|_{x=0} = 0

Fourth:

    \frac{d^4}{dx^4} e^{-bx^2} = (12b^2 - 48b^3 x^2 + 16b^4 x^4) e^{-bx^2}, \quad \frac{d^4}{dx^4} e^{-bx^2}\Big|_{x=0} = 12b^2

Fifth:

    \frac{d^5}{dx^5} e^{-bx^2}\Big|_{x=0} = 0

Sixth:

    \frac{d^6}{dx^6} e^{-bx^2}\Big|_{x=0} = -120b^3

These values give the MacLaurin coefficients a_n = f^{(n)}(0)/n! used in the expansion above
(for example, a_2 = -2b/2! = -b, a_4 = 12b^2/4! = b^2/2, a_6 = -120b^3/6! = -b^3/6).
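A short check of the truncated-series values tabulated earlier is sketched below; it is illustrative only, and because it uses the exact MacLaurin coefficients the last digits of the four-term column may differ slightly from the printed table.

```python
# Sketch checking the truncated-series approximations to the standard normal
# density tabulated earlier (two, three, and four terms of exp(-z^2/2)).
import math

def normal_exact(z):
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def normal_series(z, terms):
    # partial sum of exp(-z^2/2) = sum_k (-z^2/2)^k / k!
    s = sum((-z * z / 2.0) ** k / math.factorial(k) for k in range(terms))
    return s / math.sqrt(2.0 * math.pi)

for z in (0.5, 0.675, 1.0):
    print(f"z={z}: exact={normal_exact(z):.4f}  "
          f"2 terms={normal_series(z, 2):.4f}  "
          f"3 terms={normal_series(z, 3):.4f}  "
          f"4 terms={normal_series(z, 4):.4f}")
```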
Sine function sin x: Taylor series about x = x_0 in the interval -∞ < x < ∞:

    \sin x = \sin x_0 + \cos x_0 (x - x_0) - \frac{\sin x_0}{2!}(x - x_0)^2 - \frac{\cos x_0}{3!}(x - x_0)^3 + \cdots

MacLaurin series about x_0 = 0 in the interval -∞ < x < ∞:

    \sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots = \sum_{n=0}^{∞} \frac{(-1)^n x^{2n+1}}{(2n+1)!}

Partial Derivatives

When the dependent variable has two or more independent variables, f(x, y), we differentiate
with respect to only one independent variable while holding the other variable constant, e.g., y = y_0:

    f_x(x, y_0) = \frac{∂f(x, y)}{∂x}\Big|_{y = y_0}

Taylor Series in Two Dimensions

Taylor series of f(x, y) about the point (x_0, y_0):
(Figure: a surface f(x, y) above the (x, y) plane; the lines f(x, y_0) and f(x_0, y) through the point (x_0, y_0) have slopes f_x(x_0, y_0) and f_y(x_0, y_0), respectively, and heights f(x_0, y_0) and f(x, y).)
Linear terms:

    f(x, y) ≈ f(x_0, y_0) + (x - x_0)\frac{∂f(x, y)}{∂x}\Big|_{x_0, y_0} + (y - y_0)\frac{∂f(x, y)}{∂y}\Big|_{x_0, y_0}
            = f(x_0, y_0) + (x - x_0) f_x(x_0, y_0) + (y - y_0) f_y(x_0, y_0)

Taylor Series of Random Variable (RV) Functions

Arbitrary function of two random variables X_1 and X_2: Y(X_1, X_2)

Mean:

    μ_Y = E[Y(X_1, X_2)] ≈ Y(μ_{X_1}, μ_{X_2})

Variance and Covariance

Consider only the linear terms of the Taylor series expansion about the mean of each
random variable, (μ_{X_1}, μ_{X_2}):

    Y(X_1, X_2) ≈ Y(μ_{X_1}, μ_{X_2}) + (X_1 - μ_{X_1})\frac{∂Y}{∂X_1} + (X_2 - μ_{X_2})\frac{∂Y}{∂X_2}

Variation of the function about its mean:

    Y - μ_Y ≈ (X_1 - μ_{X_1})\frac{∂Y}{∂X_1} + (X_2 - μ_{X_2})\frac{∂Y}{∂X_2}

Variance and covariance:

    σ_Y^2 = E[(Y - μ_Y)^2]
          ≈ σ_{X_1}^2 \left(\frac{∂Y}{∂X_1}\right)^2 + σ_{X_2}^2 \left(\frac{∂Y}{∂X_2}\right)^2 + 2 σ_{X_1 X_2} \left(\frac{∂Y}{∂X_1}\right)\left(\frac{∂Y}{∂X_2}\right)

where the partial derivatives are evaluated at the means.
Note: If X_1 and X_2 are independent RVs, the covariance σ_{X_1 X_2} = 0.
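The first-order propagation formula above is straightforward to evaluate numerically. The sketch below is not from the text: the function, the means, and the standard deviations are hypothetical, and the sensitivities are estimated by central differences rather than derived analytically.

```python
# Illustrative sketch (not from the text): first-order (linear Taylor series)
# propagation of mean and variance for Y(X1, X2), with numerically estimated
# sensitivities. The function and parameter values below are hypothetical.

def propagate(Y, mu1, mu2, s1, s2, cov12=0.0, eps=1e-6):
    dY_dX1 = (Y(mu1 + eps, mu2) - Y(mu1 - eps, mu2)) / (2 * eps)   # partial wrt X1
    dY_dX2 = (Y(mu1, mu2 + eps) - Y(mu1, mu2 - eps)) / (2 * eps)   # partial wrt X2
    mean_Y = Y(mu1, mu2)
    var_Y = (dY_dX1 ** 2) * s1 ** 2 + (dY_dX2 ** 2) * s2 ** 2 + 2 * cov12 * dY_dX1 * dY_dX2
    return mean_Y, var_Y

# Hypothetical product example, Y = X1 * X2:
mean_Y, var_Y = propagate(lambda x1, x2: x1 * x2, mu1=10.0, mu2=5.0, s1=0.5, s2=0.2)
print(f"mean(Y) ~ {mean_Y:.2f}, sd(Y) ~ {var_Y ** 0.5:.2f}")
```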
Functions of Random Variables

Sum or difference: Y = a₁X₁ ± a₂X₂

Mean:

μ_Y = a₁ μ_X₁ ± a₂ μ_X₂

Variance and covariance (here ∂Y/∂X₁ = a₁ and ∂Y/∂X₂ = ±a₂):

σ_Y² = a₁² σ_X₁² + a₂² σ_X₂² ± 2 a₁ a₂ σ_X₁X₂

Again, if X₁ and X₂ are independent RVs then the covariance is zero.

Product: Y = a₀ X₁ X₂

∂Y/∂X₁ = a₀ X₂ ≅ a₀ μ_X₂;   ∂Y/∂X₂ = a₀ X₁ ≅ a₀ μ_X₁

Mean:

μ_Y ≅ a₀ μ_X₁ μ_X₂

Variance and covariance:

σ_Y² ≅ (a₀ μ_X₂)² σ_X₁² + (a₀ μ_X₁)² σ_X₂² + 2 a₀² μ_X₁ μ_X₂ σ_X₁X₂

Division of Random Variables: Y = a₀ X₁ / X₂

∂Y/∂X₁ = a₀ / X₂ ≅ a₀ / μ_X₂;   ∂Y/∂X₂ = -a₀ X₁ / X₂² ≅ -a₀ μ_X₁ / μ_X₂²

Mean:

μ_Y ≅ a₀ μ_X₁ / μ_X₂

Variance and covariance:

σ_Y² ≅ (a₀ / μ_X₂)² σ_X₁² + (a₀ μ_X₁ / μ_X₂²)² σ_X₂² - 2 (a₀² μ_X₁ / μ_X₂³) σ_X₁X₂
or, normalizing by the square of the mean of the quotient, μ_Y²:

(σ_Y/μ_Y)² ≅ (σ_X₁/μ_X₁)² + (σ_X₂/μ_X₂)² - 2 σ_X₁X₂ / (μ_X₁ μ_X₂)

Again, if X₁ and X₂ are independent RVs, then the covariance is zero.
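For independent inputs, the product and quotient approximations above can be checked quickly by simulation. The sketch below assumes NumPy is available; the constant a0 and all means and standard deviations are made-up illustration values.

```python
# Monte Carlo check of the linearized standard deviations for a product and a quotient.
import numpy as np

rng = np.random.default_rng(1)
a0 = 2.0
mu1, s1 = 100.0, 2.0     # hypothetical mean and standard deviation of X1
mu2, s2 = 50.0, 1.0      # hypothetical mean and standard deviation of X2

X1 = rng.normal(mu1, s1, 1_000_000)
X2 = rng.normal(mu2, s2, 1_000_000)

# Product Y = a0*X1*X2: linearized sigma_Y^2 = (a0*mu2)^2*s1^2 + (a0*mu1)^2*s2^2
sd_prod_lin = np.sqrt((a0 * mu2) ** 2 * s1**2 + (a0 * mu1) ** 2 * s2**2)
print("product:  linearized", sd_prod_lin, " simulated", np.std(a0 * X1 * X2))

# Quotient Y = a0*X1/X2: (sigma_Y/mu_Y)^2 = (s1/mu1)^2 + (s2/mu2)^2
mu_q = a0 * mu1 / mu2
sd_quot_lin = mu_q * np.sqrt((s1 / mu1) ** 2 + (s2 / mu2) ** 2)
print("quotient: linearized", sd_quot_lin, " simulated", np.std(a0 * X1 / X2))
```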
Powers of a Random Variable

Single RV X₁: Y = a₀ X₁^b

Mean:

μ_Y ≅ a₀ μ_X₁^b

Since ∂Y/∂X₁ = a₀ b X₁^(b-1) = bY/X₁ ≅ b μ_Y / μ_X₁,

Variance:

σ_Y² ≅ (b μ_Y / μ_X₁)² σ_X₁²

or, normalizing by the square of the means,

(σ_Y/μ_Y)² ≅ b² (σ_X₁/μ_X₁)²
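The normalized form of the power rule says that, to first order, the fractional standard deviation of Y = a₀X₁^b is simply |b| times that of X₁. A minimal numerical sketch (NumPy assumed; all values are hypothetical):

```python
# Fractional standard deviation of Y = a0 * X1**b versus the first-order rule |b|*(s1/mu1).
import numpy as np

rng = np.random.default_rng(2)
a0, b = 3.0, 2.5
mu1, s1 = 20.0, 0.2                      # hypothetical mean and standard deviation of X1

X1 = rng.normal(mu1, s1, 1_000_000)
Y = a0 * X1**b

print("simulated   sigma_Y/mu_Y:", np.std(Y) / np.mean(Y))
print("first-order |b|*s1/mu1  :", abs(b) * s1 / mu1)
```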
Exponential of a Random Variable

Single RV X₁: Y = a₀ e^(bX₁), where the units of the RV X₁ are those of 1/b, and the units of the RV Y are the same as those of a₀.

Mean:

μ_Y ≅ a₀ e^(b μ_X₁)

Since ∂Y/∂X₁ = a₀ b e^(bX₁) = bY ≅ b μ_Y,

Variance:

σ_Y² ≅ (a₀ b e^(b μ_X₁))² σ_X₁² = b² μ_Y² σ_X₁²

or, normalizing by the square of the means,

(σ_Y/μ_Y)² ≅ b² σ_X₁²

Constant Raised to RV Power

Single RV X₁: Y = c^(bX₁), where the units of the RV X₁ are those of 1/b and the units of the RV Y are the same as those of c. Consider the constant raised to the RV power rewritten as

Y = c^(bX₁) = e^((b ln c) X₁),

then

Mean:

μ_Y ≅ c^(b μ_X₁)

Variance (since ∂Y/∂X₁ = b (ln c) c^(bX₁) ≅ b (ln c) μ_Y):

(σ_Y/μ_Y)² ≅ (b ln c)² σ_X₁²

Logarithm of Random Variable

Single RV X₁: Y = a₀ ln(bX₁), where the units of the RV X₁ are those of 1/b and the units of the RV Y are the same as those of a₀; then

∂Y/∂X₁ = a₀ / X₁ ≅ a₀ / μ_X₁
Mean:

μ_Y ≅ a₀ ln(b μ_X₁)

Variance:

σ_Y² ≅ (a₀ / μ_X₁)² σ_X₁²

Example: Horizontal Beam Deflection

Deflection of the center of a beam of length L [m] under uniform loading W [N/m] is deterministically given by:

Y = W L³ / (48 E I) = a₀ W L³,  with a₀ = 1/(48 E I),

where E = elastic modulus of the beam material [N/m²] and I = moment of inertia of the beam cross section about its center of area [m⁴].

Load and length can be considered random variables whose means and one-standard-deviation values are given as:

μ_W = 4000 N, σ_W = 40 N;   μ_L = 20 m, σ_L = 0.2 m

Find: the fractional standard deviation of the deflection Y.

Mean deflection:

μ_Y ≅ a₀ μ_W μ_L³

Variance of deflection (∂Y/∂W = a₀ L³ and ∂Y/∂L = 3 a₀ W L², evaluated at the means):

σ_Y² ≅ (a₀ μ_L³)² σ_W² + (3 a₀ μ_W μ_L²)² σ_L²

Fractional variance of the deflection of the beam (divide by μ_Y²):

(σ_Y/μ_Y)² ≅ (σ_W/μ_W)² + 3² (σ_L/μ_L)²
For the case given, the fractional standard deviations of the two variables are equal:

σ_W/μ_W = 40/4000 = 0.01;   σ_L/μ_L = 0.2/20 = 0.01

Numerical value for the fractional variance of the deflection:

(σ_Y/μ_Y)² ≅ (0.01)² + 3²(0.01)² = 10(0.01)² = 0.001

Numerical value for the fractional standard deviation:

σ_Y/μ_Y ≅ 0.032

Observations:
1. Although W and L have the same fractional standard deviation (0.01), the length, because it is a third-power term in the deflection, is seen to have more significance on the standard deviation of the deflection.
2. The fractional standard deviation of the deflection Y is considerably larger than those of either the weight W or length L.
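The arithmetic of the beam example can be reproduced in a few lines. This sketch only re-computes the fractional numbers above; it does not need a value for a₀ = 1/(48EI), which cancels out of the fractional result.

```python
# Fractional standard deviation of the beam deflection Y = a0 * W * L**3.
mu_W, s_W = 4000.0, 40.0     # load [N]
mu_L, s_L = 20.0, 0.2        # length [m]

frac_var = (s_W / mu_W) ** 2 + (3 * s_L / mu_L) ** 2   # length enters as a cube, hence the factor 3
print("fractional variance:", frac_var)                   # 0.001
print("fractional standard deviation:", frac_var ** 0.5)  # about 0.032
```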
Example: Difference between Two Means

Examples:
1. Clearance
2. Before and after comparison (e.g., treated vs. untreated)
3. Comparison of two suppliers

Y = X̄₁ - X̄₂

Mean:

μ_Y = μ_X₁ - μ_X₂,  or  Ȳ = X̄₁ - X̄₂

Variance (assume independent, so covariance is zero):

σ_Y² = σ_X₁² + σ_X₂²,  or  s_Y² = s₁² + s₂²

Standardized form of the sample difference:

Z = (X̄₁ - X̄₂) / √(σ₁²/n₁ + σ₂²/n₂)

t-distribution form of the sample difference: introduce the effective (pooled) sample variance

s_Y² = [(n₁ - 1)s₁² + (n₂ - 1)s₂²] / [(n₁ - 1) + (n₂ - 1)],

then

T = (X̄₁ - X̄₂) / √(s_Y²/n₁ + s_Y²/n₂)
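The difference-of-means statistics above are easy to script. The sketch below uses two small hypothetical samples (for example, parts from two suppliers) and only NumPy; it follows the pooled-variance form just given.

```python
# Pooled-variance t statistic for the difference between two sample means.
import numpy as np

x1 = np.array([10.2, 9.9, 10.4, 10.1, 10.0])   # hypothetical supplier A measurements
x2 = np.array([9.7, 9.8, 10.0, 9.6, 9.9])      # hypothetical supplier B measurements

n1, n2 = len(x1), len(x2)
s1_sq, s2_sq = x1.var(ddof=1), x2.var(ddof=1)

# Effective (pooled) sample variance
s_y_sq = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / ((n1 - 1) + (n2 - 1))

t = (x1.mean() - x2.mean()) / np.sqrt(s_y_sq / n1 + s_y_sq / n2)
print("pooled variance:", s_y_sq)
print("t statistic:", t, "with", n1 + n2 - 2, "degrees of freedom")
```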
MISCELLANEOUS

In Chapter 11, we discussed axiomatic design and its four mapping domains (CAs, FRs, DPs, and PVs). Now, let us examine some of the mathematical relations for these domains. If, for example, we are interested in the functional requirements [FR(CTS)], then this can be expressed in the traditional six sigma notation of y = f(x) as FR = f(DP), where DP (design parameter) is an array of the mapped-to DPs of size m. If we let each DP in the array be written as DP_i = g(PV_i), where PV_i, i = 1,...,m, is an array of process variables that map to DP_i, soft changes may be implemented using sensitivities in physical (FR and DP) and process (DP and PV) mapping. Using the chain rule, we have:

∂FR_i/∂PV_ij = (∂FR_i/∂DP_j)(∂DP_j/∂PV_ij) = f′[g(PV_ij)] · g′(PV_ij),

where PV_ij is a process variable in the array PV_j that can be adjusted to improve the problematic FR. The first term represents a design change while the second one represents a process change. An efficient DFSS strategy should utilize both terms in all potential improvements. After all, the ideal DFSS outcome is a design that a) exceeds customer wants, needs and expectations, b) exceeds competition market
performance as measured by reliability, robustness, and life cycle cost indices, and c) delivers the rest of the desired product features.
This is very important because, as we said earlier, DFSS is not for all designs and processes. We must be selective in how we use it. Table 14.1 may be of help.
A final point about axiomatic designs: the design, or problem, matrix is important from many perspectives. The main one is the revealing of coupling among the CTs. Knowledge of coupling is important because it gives the designer clues about where to find solutions, how to make adjustments or changes, and how to maintain them over the long term with minimal drift.
TABLE 14.1
Possibilities of Selecting a DFSS Problem

                      Z_s does not exist                          Z_s exists
X_s does not exist    No problem; this type of design             Need conceptual change; DFSS has potential
                      may not exist                               while six sigma has no potential
X_s exists            Trivial problem; this type may be solved    Both six sigma and DFSS have potential
                      with design of experiments (DOE)

So, for the uncoupled matrix we have

  | y₁  |   | A₁₁   0    ...   0    | | x₁  |
  | y₂  | = |  0   A₂₂   ...   0    | | x₂  |
  | ... |   | ...  ...   ...  ...   | | ... |
  | y_m |   |  0    0    ...  A_mm  | | x_m |

For the coupled matrix we have

  | y₁  |   | A₁₁   A₁₂   ...  A₁p  | | x₁  |
  | y₂  | = | A₂₁   A₂₂   ...  A₂p  | | x₂  |
  | ... |   | ...   ...   ...  ...  | | ... |
  | y_m |   | A_m1  A_m2  ...  A_mp | | x_p |

For the decoupled matrix we have

  | y₁  |   | A₁₁    0    ...   0   | | x₁  |
  | y₂  | = | A₂₁   A₂₂   ...   0   | | x₂  |
  | ... |   | ...   ...   ...  ...  | | ... |
  | y_m |   | A_m1  A_m2  ...  A_mm | | x_m |
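The practical consequence of the three matrix structures is how the design parameters can be set: an uncoupled (diagonal) matrix lets each x be adjusted independently, a decoupled (triangular) matrix can be solved in a fixed sequence, and a coupled (full) matrix forces a simultaneous solution. The small numerical sketch below assumes NumPy, and the matrix entries are invented purely for illustration.

```python
# Solving y = A x for the three design-matrix structures.
import numpy as np

y = np.array([4.0, 10.0, 3.0])

A_uncoupled = np.diag([2.0, 5.0, 1.5])                       # adjust each x independently
A_decoupled = np.array([[2.0, 0.0, 0.0],
                        [1.0, 5.0, 0.0],
                        [0.5, 2.0, 1.5]])                    # solve x1, then x2, then x3
A_coupled = np.array([[2.0, 1.0, 0.3],
                      [1.0, 5.0, 0.8],
                      [0.5, 2.0, 1.5]])                      # all x's must be solved together

print(np.linalg.solve(A_uncoupled, y))
print(np.linalg.solve(A_decoupled, y))   # forward substitution, row by row, would also work
print(np.linalg.solve(A_coupled, y))
```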

These design matrices are obtained in a hierarchy when the zigzagging method
is used (see Chapter 11). At lower levels of hierarchy, sensitivities can be obtained
mathematically as the CTSs take the form of basic physical and engineering quan-
tities. In some cases they are not available, and that means that the experimenter
has to rely on some kind of simulation or modeling.
CLOSING REMARKS
This chapter, especially, has focused on some mathematics that will allow the
experimenter to pursue design for six sigma (DFSS). The rationale for this mathe-
matical background (review) was to present a case for the integration of six sigma
methodology with scientifically based design methods, in particular reliability, axiomatic design, and the Define, Characterize, Optimize and Verify (DCOV) model
in general.
In Volume VII of this series we are going to use this background to show how
important the mathematical base is and how one may apply this knowledge to
optimize designs over two phases: 1. the conceptual design for capability phase, and
2. the tolerance optimization phase. Needless to say, all that may be done with
understanding and application of robustness in our designs, products, processes,
and so on.
SELECTED BIBLIOGRAPHY
Chase, K.W. and Greenwood, W.H., Design issues in mechanical tolerance analysis, Manufacturing Review, 1, 50-59, 1988.
El-Haik, B. and Yang, K., An Integer Programming Formulation for the Concept Selection Problem with an Axiomatic Perspective (Part I): Crisp Formulation, Proceedings of the First International Conference on Axiomatic Design, MIT, Cambridge, MA, Oct. 21-23, 2000.
El-Haik, B. and Yang, K., An Integer Programming Formulation for the Concept Selection Problem with an Axiomatic Perspective (Part II): Fuzzy Formulation, Proceedings of the First International Conference on Axiomatic Design, MIT, Cambridge, MA, Oct. 21-23, 2000.
Hubka, V., Principles of Engineering Design, Butterworth Scientific, London, 1980.
Hughes-Hallett, D., et al., Calculus, 2nd ed., Wiley, New York, 1998.
Kacker, R.N., Off-line quality control, parameter design, and the Taguchi method, Journal of Quality Technology, 17, 176-188, 1985.
Kapur, K.C., An approach for the development of specifications for quality improvement, Quality Engineering, 1(1), 63-77, 1988.
Kapur, K.C., Quality engineering and tolerance design, in Concurrent Engineering: Automation, Tools and Techniques, Kusiak, A., Ed., John Wiley & Sons, New York, 287-306, 1992.
McCormick, N.J., Reliability and Risk Analysis, Academic Press, New York, 1981.
Stewart, J., Multivariable Calculus, 4th ed., Brooks/Cole Publishing Co., New York, 1999.
Strang, G., Linear Algebra and Its Applications, 2nd ed., Academic Press, New York, 1980.
Suh, N., Design and operation of large systems, Journal of Manufacturing Systems, 14(3), 1995.
Suh, N.P., Development of the science base for the manufacturing field through the axiomatic approach, Robotics & Computer Integrated Manufacturing, 1(3/4), 397-415, 1984.
Suh, N.P., The Principles of Design, Oxford University Press, New York, 1990.
15

Fundamentals of Finance and Accounting for Champions, Master Black Belts, and Black Belts

This chapter is unique in the context of the six sigma and design for six sigma
(DFSS) methodology. Our intent is not to present a complete course in nancial
management, but to introduce some key nancial concepts for the Black Belt and
Master Black Belt in dealing with projects, Champions, and management in general.
As we have repeated many times, the intent of six sigma/DFSS is to satisfy the
customer and make a prot (however dened) for the organization. Well, for Black
Belts as well as Master Black Belts, that may be a goal, but the truth of the matter
is that the majority of them have no clue about accounting or nancial issues. In
this chapter, we hope to sensitize all those individuals who are about to x or improve
or even contemplate a change in the system of operations with some understanding
of the consequences of their recommendations to the organization as a whole. We
do not pretend to have covered the topic exhaustively, but we believe that this is the
minimum information that Champions, Shoguns (Master Black Belts), and Black
Belts must have to be effective not only in selecting their projects but also in
evaluating their outcome.
We hope that the reader will understand that the discussion here is very broad
and covers small and large organizations. As a consequence, not everyone pursuing
six sigma/DFSS will encounter all the issues presented here. However, regardless
of the organization, regardless of the project, somebody, somewhere, somehow in
the organization will be asking or being asked the questions addressed in this chapter.

THE THEORY OF THE FIRM

Ask a roomful of business people what the goal of their business is. Maximize
prot will be the answer you hear most often, maybe exclusively. But is the
individual manager really concerned with maximizing profit, or maximizing any-
thing for that matter? Are you?
The basic nancial decisions of a company are concerned with (a) capital
investments for plants, equipment, working capital, etc., (b) pricing, (c) the level
of production, and (d) the source of the money (either debt or equity) to do it all. How we make those decisions was the subject of Adam Smith's book, Wealth of Nations, published in that distinguished year 1776 (also known for the issuance of Gibbon's Decline and Fall and some local political events).

Smith told how the pernicious sins of covetousness, gluttony, sloth, and greed
were somehow led by an invisible hand to benet society. His remarkable work
was the cornerstone of those studies now called microeconomics, and a good many
business decisions can still be explained with Smiths basic doctrine: Knowing their
products demand, competition, and cost, business people will act to maximize
prots.
However, despite its simplicity and power, the theory suffers in real world
application for two reasons. First, our information about product demand and com-
petition is usually slight, at best, and even costs though largely under our own
control occasionally veer away from expectations. Second, the wide separation
of ownership and management in the modern corporation has brought additional
motives to the mix; the invisible hand now guides by remote control, and the guided
managers have ideas of their own.
New theories have come forth since the 1950s to improve and update the original
model. They attempt to include the impact of the managers motives, and because
they concern you and me and what abides in our hearts, they are rather interesting.
One theory has it that companies are concerned more with maximizing sales than profits. That might explain, for example, the current fascination with mergers
and acquisitions that produce instant sales growth yet from a prot standpoint are
often failures, about one third of them according to experts.
You cannot be in management very long without seeing examples of prots
sacriced to sales: special discounts, loose credit terms, prestige products, low
bids. Why? Because the size of a company, as measured by sales (a la the Fortune
500 list), is what brings managers the greatest satisfaction, salary, distinction, and
seeming success.
Moreover, we all identify with the company we keep and the one that keeps us.
We take unto ourselves a bit of the power, reputation, and recognition associated
with our employer. That may be only a small satisfaction, but it is considerably
larger than the one we get from making prots for unknown shareholders. In fact
prots, if they are too large, may be thought unseemly and become an embarrassment
to us. When we are offered a bonus that is tied to net income, it is in part an attempt
to overcome our natural qualms about excess prots.

BUDGETS

Another modern theory suggests it is the size of his or her budget that gives a
manager the most satisfaction. How often have you ascribed this motive to your
governmental and not-for-prot colleagues? But it might apply to most corporate
middle managers, as well. The number of employees you direct has a bearing on
your salary; the amount of money that ows into and out of your control is a measure
of your importance; the size of your department often dictates the size of your
expense account, company car, office, etc. These things create far stronger urgings
than a few extra pennies added to the EPS.

OUR ROMANCE WITH GROWTH

A third model styles rate of growth as the principal objective of management. In
annual reports, the obligatory sales graphs (that look like stairways to heaven), the
inevitable percentage change gures, and the discussions of future products all
bear witness to our fascination with growth. On the darker side of this enchantment
are some questionable effects: the use of creative accounting to shift prots from
one year to another (in order to smooth out the growth curve); the sacrice of research
and development (R&D) and other long term efforts so as to maintain current prot
growth; and the indiscriminate purchase of earnings through acquisition and merger.
Corporate growth, particularly fast growth, is a stage, not a characteristic. Its
prime cause is customer demand for our products, something we can exploit but do
little to create. Nothing we are aware of, including the universe itself, can expand
forever, and a fast rate of growth (say 25+%) for businesses rarely lasts more than
a decade.

THE NEW INDUSTRIAL STATE

In his writings, John Kenneth Galbraith has argued that the executives of our larger
corporations were moved more by a desire to remain secure and expand their
inuence than a longing to maximize the gain of a faceless, uncaring, avarice-driven,
constantly changing body of shareholders. Security is a rare commodity in American
business management, and, like atmosphere, it thins out the higher you go. Yet so
potent is anxiety that when our security is threatened we may sacrice our dignity,
our better judgment, our friends, even our health to regain it.
Now I am the rst one to admit that everyone (me, too) needs a kick in the
fanny now and then. But those rms that promote or even permit a rat race mentality
can expect, and deservedly so, that their executives will make the securing of their
own positions the rst order of their business.

BEHAVIORAL THEORY

Finally, there is the theory that suggests that maybe neither prots nor anything else
is being maximized in the modern corporation. The rm is not one body under a
single direction, but at least four bodies, each contributing a required input, and each
seeking a different reward. The basic four are the shareholders, executives, employ-
ees, and government. They cluster together as does a cloverleaf, four distinct parts
joined at the center. That center is a shifting axis representing the economic prot
of the business. Each group demands its share in the form of taxes, dividends,
security, better working conditions, and the like. Each has the power to close down
or sabotage the business.
If you accept this theory, then the task of management is not to maximize the
shareholders immediate prots but by satisfying all groups, to forge a cooperative
effort (optimize resources) that will yield a bigger reward for each. It is rare when

the various factions of a business pull together, but when they do the results are
astonishing.

ACCOUNTING FUNDAMENTALS
ACCOUNTING'S ROLE IN BUSINESS

To understand accountings role in business we might rst look at the principal task
of management. The managers job is to control and direct the business affairs under
his or her command. To do so, the manager must understand the effects of past
business transactions and thereby be able to estimate the effects of proposed future
undertakings. Accounting has the dual role of (1) recording every occurrence that has a financial impact on the business, and (2) reporting these financial data in a
form useful to management. Let us rst look at the reports that accounting prepares
for management, then later at the way transactions are recorded.

FINANCIAL REPORTS

The balance sheet, income statement, and other reports summarize the results of a
companys activities. When all of the talking is done, it is to them you look to see
how well the company is really doing. This is done through an evaluation of the
assets, liabilities and owners equity or a balance sheet.
Accountants are nancial historians. Their task is rst, to record every event in
the life of a business that has a monetary impact; and second, to report those
proceedings in forms that show management how far the company has come and in
which direction it is heading.

The Balance Sheet

Balance sheet is the age-old name of a report that sets forth the assets, liabilities,
and equity of a company. As accountants have become more educated and higher
priced they have tried to substitute fancier names such as statement of nancial
position or statement of condition, but the old name lingers on. There are two
important balancings or equalities in this report. The rst is usually referred to as
the balance sheet equation:
Assets = Liabilities + Equity
The counterpoise of these factors is the essence of the double-entry bookkeeping
system, which says that for every action there is a reaction, for every benet received
a benet bestowed, or an obligation to do so. Thus, for every dollar of assets owned
by a company, someone among the creditors and shareholders holds a claim check.
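A trivial way to internalize the balance sheet equation is to compute one side from the other. The snippet below is only an illustration with invented figures; it is not drawn from any statement discussed in this chapter.

```python
# The balance sheet equation: Assets = Liabilities + Equity.
assets = 1_700_000
liabilities = 800_000

equity = assets - liabilities          # what the owners' claim must be
print("Equity:", equity)               # 900,000
print("Balances?", assets == liabilities + equity)   # True for any consistent statement
```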
The other equality in the balance sheet is hidden, or rather, undisclosed. Although
each item listed has a dollar value, there is another quality about it that is not revealed:
The dollar amount is either a debit balance or a credit balance. The assets have debit
balances while the liability and equity accounts have credit balances, so that on
every balance sheet

Debits = Credits
In our system of accounting, debits also represent expenses on the income
statement, while revenues normally have a credit balance.
The balance sheet represents the condition of the company on a particular day
in fact, the last working moment of that day. Every subsequent transaction changes
it to some degree; an employee coming through the door to work the next morning,
for example, starts the meter running on the liability called accrued salary expense.
An undated balance sheet, therefore, is meaningless.
Most condition reports actually give two balance sheets the current one and
one from the year before so that a quick comparison can be made. Often, the
changes in a balance sheet, from one year to the next, are more signicant than the
ending numbers themselves.

Current Assets and Liabilities

The balance sheet is normally divided by debits and credits; that is, the assets appear
on the left side (or at the top), while the liabilities and equity accounts are on the
right (or on the bottom half of the page). On each side (or in each section), the items
are listed in the order of their exigency: their nearness to being converted to cash
in the case of assets; their nearness to being paid off in the case of liabilities and
equity.
In the evolution of the various assets to cash and the liabilities to maturity, a sharp line is drawn at one year beforehand. Those assets, such as inventory, accounts receivable, and cash itself, that are expected to convert to cash in the ensuing twelve months are called current assets. Likewise, those debts that will come due before the next annual financial statement are classified as current liabilities. The accuracy of these classifications is important in measuring a company's liquidity, that is, its ability to pay debts
on time.

Fixed Assets

Items that are used in running the business, as distinguished from those things that
are made or held for resale, are called xed assets. Fixed assets are typically listed
below the current assets, something like this:
Property, plant, and equipment
Less: accumulated depreciation
Net xed assets
Accumulated depreciation shows how much of the cost of existing xed assets
has been expensed. It amortizes the cost over a period roughly akin to the useful
life of the assets. Here is the accounting entry for the yearly write-off:
Debit: depreciation expense
(An income statement account)
Credit: accumulated depreciation
(The balance sheet account)

Accumulated depreciation has a credit balance. When it is listed, therefore,
among the assets (which are debit balances), it is a negative amount.

Other Slow Assets

Slow refers to the fact that in the ordinary course of business these assets are not
likely to be converted to cash in the coming year.
Goodwill represents the premium over book value paid by one company when
buying the assets of another. Down-to-earth accountants call goodwill, goodwill.
Others label it something like, Excess of cost over book value of acquired assets...
For example, Company A has assets with a book value of $1 million. Company B negotiates to buy those assets for $1.2 million. The accounting entry on B's books would look like this:

Debit: Assets (various kinds)   1,000,000
Debit: Goodwill                   200,000
Credit: Cash                                1,200,000

Because goodwill represents one buyer's estimate of a worthwhile premium, bankers and other financial analyzers often eliminate it from consideration as an asset. (More on that later.)

Current Liabilities

Obligations that are due to be paid are called current liabilities. Current liabilities is an important classification to analysts because it represents money that must be paid from future receipts. Most current liabilities are renewable (or revolving), as long as creditors have confidence in the debtor.

Working Capital Format

Now and then you will see a balance sheet in a working capital format. (Working capital equals current assets minus current liabilities.) The layout might be something like this:

Current assets          $1000
Current liabilities       500
Working capital          $500
Other assets             1200
                        $1700
Other liabilities        $800
Shareholders' equity      900
                        $1700

While the creators of this format probably had good intentions, it is confusing to read and even irritating because there is no figure for total assets. The working capital figure is of little use and may even be harmful if it is taken to be something it is not. You may subtract current liabilities from current assets on paper, but you cannot do it in real life; current liabilities are reduced only by cash.

Noncurrent Assets

The noncurrent assets are those that take longer than a year to liquidate (e.g., long-
term receivables), and those that the company has no intention of selling, such as
property, plant, equipment, vehicles, and other so-called xed assets. The xed
assets are listed at what they cost, less depreciation, and on the balance sheet itself
no attempt is made to show their current market or replacement value.
Intangible assets such as patents, organization expense, and goodwill (usually
called something like cost in excess of book value of acquired assets) are also
shown in the noncurrent assets section, although they may not be labeled as intan-
gible.

Noncurrent Liabilities

Among the noncurrent liabilities are bonds payable and other long-term debts,
deferred compensation, and maybe accrued pension liabilities. Any part of these
obligations that falls due within the next 12 months is listed in the current liability
section. Also frequently found here is the deferred income tax account, which is a
liability in theory but seldom in practice; accountants (and everybody else) are so
unsure about how to categorize this account that they usually skip giving a total
liability gure on the balance sheet just to avoid having to classify it.
On about one out of ve balance sheets you will run into minority interest. It
is usually found in between liabilities and equities because it is neither one nor the
other. Minority interest represents the outside shareholders of not fully owned
subsidiary corporations; the amount is not payable to them unless the subsidiary is
closed down and liquidated.

Shareholders Equity

The remainder of the balance sheet is given over to the equities. Some accountants
refer to them as a form of liability. They are if you strain a little and reason that
the company assets that are not owed to the creditors are owed to the stockholders.
But in modern usage, equity is distinguished from liabilities, which are obligations
to make payments on specied dates. Shareholders may be entitled to the equity
share of the assets, but cashing out is a practical impossibility unless a majority
of them act to liquidate the company.
Of course, shareholders may sell their interest if the stock is publicly traded or
they can nd a buyer. Stockholders of today think of themselves more like depositors
in an institution than owners of a company. The security, comfort, and convenience
of modern investing has been purchased with the power and inuence shareholders
once had.
As we stressed earlier, the balance sheet is constantly changing, and the changes
year to year often give a clue as to where the business is heading. That information
is given in the statement of changes, discussed below, but one item in the equity
section retained earnings has a whole separate report to show how much and
why it changed. That report is called the income statement.

The Income Statement

This statement is a report of a companys sales, less the expense involved in getting
those sales, and the resulting prots. It used to be called the prot and loss

statement and was nicknamed P&L but in the turbulent sixties corporations
became sensitive to the word prot, and it has all but disappeared from their public
utterances. (Nonprot organizations are supersensitive about the word, as you might
imagine, and refer to their prots as the excess of revenues over expenses, or some
such dignied euphemism.)
Income statements begin with the grandest number found in the business, rev-
enues...the fount of all prots. In most rms the term revenues means sales, but
there may be other forms of revenue, too interest income, rents, royalties, and so
on. The sales gure is usually net of returns and allowances.

Gross Prot

The rest of the income statement is a process of distilling the revenues by boiling
off expenses at various stages until you are left with the essence of net prot. The
figures you get along the way vary in importance. The first step is the deduction of the cost of sales (or cost of goods sold), the largest expense in most companies.

Sales - Cost of Sales = Gross Profit

From these figures you can derive the gross profit margin (it is rarely given in the report):

Gross Profit Margin = Gross Profit / Sales
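Expressed as code, the two gross profit figures are one subtraction and one division. The numbers below are invented purely to show the calculation.

```python
# Gross profit and gross profit margin from sales and cost of sales.
sales = 1_200_000
cost_of_sales = 780_000

gross_profit = sales - cost_of_sales
gross_profit_margin = gross_profit / sales

print("Gross profit:", gross_profit)                               # 420,000
print("Gross profit margin: {:.1%}".format(gross_profit_margin))   # 35.0%
```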
Gross prot (GP) and gross prot margin (GPM) are important because they
reect the basic climate of the business. In the typical rm, the gross prot margin
will not vary more than two or three percentage points from year to year. If the
gure is trending down, it may mean the companys product line is getting old or
the pressure from competitors is increasing both of which are major problems.

A Gaggle of Prots

Like a gaggle of geese whose symmetrical formation in ight points gracefully
toward their goal, prot calculations often taper gently inward as they descend to a
point on the bottom line. Along the way you might nd gures for:
Operating income
EBIT (earnings before interest and taxes)
Income before nonrecurring items
Income before extraordinary items
Income from continuing businesses
Income before taxes
You might well ask whether all these numbers clarify the profit picture or deform it. Perhaps the biggest benefit of EBIT is that it gives management a bigger number to talk about. Most companies borrow money and pay interest with regularity; they

would not stop if they could, which they cannot, so there is not much point to
deriving a prot without such a routine expense.
The same could be said for prot before income taxes. It is like saying look
how much money we could make if we did not have to pay taxes. So what? It
would be as useful, and perhaps more interesting, to show us income before
presidents salary, or income before expense accounts.
On the other hand, the income before nonrecurring expense or rather, the non-
recurring expense itself can sometimes be revealing. Most often these charges are
the bite-the-bullet kind; the company has a losing product or division or subsidiary
that management decides to dump.
There is some psychology at work here. The thought of prots being attrited
year after year by some feeble division is depressing; the cost of getting rid of such
a ball and chain is almost inconsequential, so long as it can be tagged nonrecurring.

Management is saying sure, there have been some problems or mistakes, but now
they are behind us and we can look to a brighter future.
If you nd such a write-off in some companys glossy annual report, just turn
to the front pages where the recent acquisitions and new products are described with
unfettered optimism; see if you can guess which of them will be tomorrow's non-
recurring expense.

Earnings per Share

The income statements of public corporations also give an earnings per share (EPS)
gure. From the net income is deducted dividends, if any, on the preferred stock,
and the remainder is divided by the number of common shares outstanding.
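In formula form, the EPS calculation just described is the net income remaining for common shareholders divided by the common shares outstanding. A minimal sketch with invented figures:

```python
# Earnings per share = (net income - preferred dividends) / common shares outstanding.
net_income = 2_500_000
preferred_dividends = 100_000
common_shares_outstanding = 1_200_000

eps = (net_income - preferred_dividends) / common_shares_outstanding
print("EPS: ${:.2f}".format(eps))   # $2.00
```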

The Statement of Changes

The Statement of Changes in Financial Position is descended from a family of funds
statements that include (a) the sources and applications of funds, (b) the sources
and uses of cash, and (c) the where-got, where-gone statement.
The purpose of the report is to describe whence money has come into the
business, and how it has been used. There are, of course, thousands or millions of
little pieces to that puzzle, so the statement of changes does some wholesale netting
to get the report down to a manageable size.
All of the transactions involving sales and expenses are combined in a net income
or loss gure; to this are added back those deducted expenses that did not take any
cash, such as depreciation, amortization, and deferred income taxes. The total of
these items is often called the sources of funds from operations. The changes in
the current accounts may be grouped together as a change in working capital (current
assets minus current liabilities).

Sources of Funds or Cash

Besides the prots and noncash expenses, any increase in a companys liabilities is
considered a source of cash. Think of borrowing from a bank; you sign a note that
increases your debts, and you walk out with a pocket full of cash. On the other hand,
any decrease in assets is also a source of funds as when you sell one of your
trucks for cash.

Use of Funds

Typically, the principal use of funds is for additions to property, plant, and equipment;
also found here are increases in other slow assets, dividends paid, and net reductions
in debts. A balancing gure the change in working capital is either included
here or listed just below this section.

Changes in Working Capital Items

Some statements of changes have a section showing the changes in current assets
and current liabilities. The net changes in the current assets please pay attention,
this is not easy the net changes in the current assets minus the net changes in the
current liabilities equals the net change in working capital. This gure will match
the change in working capital calculated in the sections dealing with noncurrent
assets and liabilities and equity.
Say what? If after this simple explanation of the statement of changes you feel
as if your brain is turning to mush, be assured it is not your brain that is the problem;
it is the statement. Anyone can have a dud in his or her bag of tricks, and this is
one the accountants have. The statement is hard to understand and has so many
exchangeable opposites that the words increase and decrease tend to lose their
meaning after a few minutes.
Not many people, I have found, bother to read this report; but of the non-
accounting stalwarts who do, most fail to understand it, or worse, they misinterpret
it. Nevertheless, the changes in the balance sheets, one year to the next, may be
important. If that is the case, you can usually get just as good information and
sometimes better by simply subtracting the side-by-side numbers in the two
balance sheets listed, rather than from struggling with this unfortunate report.

The Footnotes

There is a cliche among analysts and accountants that the real lowdown on a rm
will be found in the footnotes. There is usually plenty of information there, all right,
maybe four times again as much as in the nancial statements themselves. But the
footnotes in a nancial report are, like footnotes anywhere else, related information
of lesser importance. Anything with a serious nancial consequence will be
expressed on the statements, and while additional details can often be found in the
footnotes, they may or may not be of interest to you.
A classic example on footnotes is the 1986 annual report of General Motors
Corporation. It had a total stockholders equity gure on its balance sheet of $30.7
billion. In the footnotes, however, there was more than a full page of crammed data
that reconciled changes in amounts for ve different classes of stock, capital surplus,
and retained earnings. While it may give some people comfort knowing that the
extra information is there, for most readers it is not likely to add anything to the
impression made by the single number. We suggest, therefore, not to bother with
the footnotes unless you have a particular need for more details about an item.
Another reason to go easy on the footnotes is that the language there is largely
technical, and if you are unfamiliar with it you might be led to a wrong conclusion.
Besides, all of us in business these days have more information available to us than
we have the time to look at it. Excessive information is no friend to a good decision,
and it is an enemy to action.

Accountants Report

Financial reports that have been audited by independent CPA rms will contain
a letter from them stating the scope of their involvement and giving their opinion
about the nancials. It is usually written in accounting boilerplate.
If the letter is signed by an accounting rm, and it contains in our opinion, if
it does not contain except for or subject to, and if it has no more than a few
sentences, you are looking at a clean opinion and can feel very comfortable about
the gures. For any exceptions to the above you had better wade through the whole
letter depending on how important it is to you.

How to Look at an Annual Report

Since I have looked at quite a few nancial reports over the last 30 years, allow me
to recommend a best way to go about it. The fact is that there is no one best way for everybody. It is an individual thing, a little like the way you observe a member of the opposite sex walking toward you. You look first at one thing, then another, and if you are still interested you may turn around and look at a third. But each person develops the pattern that suits his or her own individual needs. Same with financials, so here is my pattern.

Step 1. Look first to see if the statement has the independent clean opinion described earlier. Anything less means that you need to take a more careful look at the numbers (i.e., testing them against poor common sense and experience), and read every single word on the statements.
Step 2: Turn to the income statement and look at:
a. The latest net income figure. A loss is a red flag.
b. The prior year's net income to see the direction of profits. Two years' losses back to back means standby the lifeboats.
c. Total revenues or sales to see the direction they are heading. Sales are a proxy for demand for the product, the single most important requirement for success in business.
Step 3: Now the balance sheet. Here is the sequence I follow:
a. Right side, second to the last figure from the bottom: shareholders' equity. Compare it with last year's figure; if the company was profitable, equity should have gone up. Now compare it with the bottom number (total debt plus equity) just below; if the bottom figure is more than twice the equity, the firm may have too much debt.
b. Run your eye up the page to the total current liabilities amount; remember the number (roughly rounded).
c. Now over to the current assets. Check the total of that section; if it is only slightly higher than the total current liabilities, that is bad; twice as much is good.
d. Finally, look at cash (plus marketable securities). If it is less than the current liabilities, it is not so hot; a good ratio would be 30% or more.
Step 4: The next step is to sit back and reflect for a moment. You have made mental tests of statement reliability, profitability, leverage, and liquidity; now form a preliminary opinion of the overall condition: excellent, good, fair, poor, or lousy. If you have a mixture of good news, bad news and/or you have to explain your judgment to others, go on to Step 5.
Step 5 (Optional): For a second opinion, ask for professional advice.
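The sequence in Steps 2 and 3 can be written down as a rough screening routine. The function below is only a sketch of the rules of thumb above (a loss this year, two losses in a row, falling sales, debt more than twice equity, current assets less than twice current liabilities, cash under about 30% of current liabilities); the field names and threshold values are our own choices, not prescribed by the text.

```python
# A rough annual-report screen following the reading pattern described above.
def quick_screen(net_income, prior_net_income, sales, prior_sales,
                 equity, total_liabilities_and_equity,
                 current_assets, current_liabilities, cash):
    flags = []
    if net_income < 0:
        flags.append("latest year shows a loss")
    if net_income < 0 and prior_net_income < 0:
        flags.append("two losses back to back")
    if sales < prior_sales:
        flags.append("sales are heading down")
    if total_liabilities_and_equity > 2 * equity:
        flags.append("debt may be too high (total more than twice equity)")
    if current_assets < 2 * current_liabilities:
        flags.append("current assets less than twice current liabilities")
    if cash < 0.30 * current_liabilities:
        flags.append("cash below about 30% of current liabilities")
    return flags or ["no obvious red flags"]

# Invented figures, purely for illustration:
print(quick_screen(net_income=120, prior_net_income=150, sales=900, prior_sales=950,
                   equity=400, total_liabilities_and_equity=1000,
                   current_assets=450, current_liabilities=300, cash=80))
```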
You will recall that we started this section by saying that accountants have a
dual role in business: (1) to record every nancial transaction, and (2) to report this
nancial data in a form useful to management. We have looked at the reports prepared
by accountants. Now let us examine how transactions are recorded.
The history of humankind, that is, the written record of human activities, goes
back about ten thousand years. The earliest evidence of writing that we have dis-
covered consists of some lumps of clay on which Sumerian farmers recorded their
livestock what we might fairly term accounting records.
Today almost all accounting is done by the double-entry bookkeeping method,
which was developed by the Roman Catholic Church. Thus the term accounting
clerk derives from the word cleric. The rst evidence of this system dates back
to Genoa, Italy in the fourteenth century.
I think we would all agree that the accounting profession has had plenty of time
to settle on all the right procedures. But judging by the changes still going on in
accounting, and the liveliness of the debates about them, you wonder if the devel-
opment of accounting is even half complete.
One reason for the continual changes may be that there is no unifying theory
of accounting similar to, say, the supply-demand concept of economics or the ego-
-id theory of personality. Instead, accounting is based on conventions, that is, rules
established by general consent, usage, and custom. These rules are called generally accepted accounting principles (GAAP), and they change from time to time.
Accounting, then, is very much alive if not completely well and the challenges
and opportunities it offers to good management are as fresh as ever.

RECORDING BUSINESS TRANSACTIONS

Every business transaction involves both give and take. The double-entry bookkeep-
ing system is an ingenious method of recording these activities in a complete, quick,
and puzzling manner.
The double-entry bookkeeping system (there are also single-entry systems
your checkbook is an example) dominates accounting in all the industrialized coun-
tries. The outstanding characteristic of this system is that it records both sides of
every transaction, and every commercial transaction has two sides: there is the thing
you give to the other party, and the thing you take in return. To account for a piece
of business this give and take must be expressed in dollar values, and the double-
entry system always records an equal dollar amount. Prot or loss is frequently part
of the transaction the factor that makes the give and take balance.

Debits and Credits

The terms debit and credit were devised to represent the take and the give; they are
from the Latin language because our modern accounting system traces its origin to
Catholic clerics of the fourteenth century. In general, debits represent what is taken
from a transaction, and credit what is given up. Debits and credits have no more
meaning than that, and we could just as easily have chosen other terms in their place,
black and red, for example, or left and right.
Of course the words debit and credit have other meanings in our language, but
it will only confuse you to try to match them with the narrow usage in accounting.
Table 15.1 summarizes the use of these terms in accounting.

Sources and Uses of Cash

In order not to make this come too easily to the uninitiated, nancial people some-
times dene debits as the uses of cash, and credits as the sources of cash. These are
what I call denitions +; normal denitions plus one mental broad jump. Debits are
said to be uses of cash because when cash is spent (that is the credit part of the
transaction), something such as an asset is taken and recorded as a debit. They make
a similar convolution to label credits as a source of cash. If you borrow money from
a bank, the cash they give you is a debit something received, an asset but the
source of that cash was the bank loan a liability and a credit.

How Debits and Credits Are Used

Every item asset, liability, or equity on the balance sheet has a dollar value
assigned to it; it is the companys and their CPAs best estimate of the worth of that
item. But each account also has another quality about it, one that is hidden and not
expressed: Every dollar amount on the balance sheet is also either a debit balance
or a credit balance. If you look back at Table 15.1, you will see that assets are debit
balances, while the liabilities and equity accounts are credits.

The Balance Sheet Equations

You will recall our earlier discussion of the balance sheet equation:

TABLE 15.1
A Summary of Debits and Credits

                                        Debits              Credits
Abbreviations                           Dr                  Cr
Represent what is                       Taken               Given
They often designate what is            Owned               Owed
Or sometimes                            Benefits received   Money spent
They are also the normal balances of    Assets, Expenses    Liabilities, Equity, Revenues
Assets = Liabilities + Equity
And from the foregoing you can see also that
Debit balances = Credit balances
They will stay that way so long as every future transaction is recorded with
equal amounts of debit and credit dollars.
Forget + and . In the use of debits and credits, they do not stand for plus and
minus. Both may be either; it depends on the account they are applied to. If a debit
is applied to an account that already has a debit balance, the two amounts are added
together, and a larger debit balance results. If a credit is applied to an account with
a debit balance, then the amounts are subtracted from one another. A similar rule
holds for accounts with credit balances. Debits and credits, in other words, are added
to their own kind but subtracted from their opposite number.

CLASSIFICATION OF ACCOUNTS

In accounting there are five basic types of accounts. On the balance sheet: assets, liabilities, and equity. On the income statement: revenues and expenses. In Table 15.2 is a summary of their normal debit/credit balances.

TABLE 15.2
Summary of Normal Debit/Credit Balances

                              Normal Balance            Appears on
Type        Definition        Debit    Credit    Balance Sheet    Income Stmt
Asset       What is owned       x                      x
Liability   What is owed                 x             x
Equity      The rest                     x             x
Revenue     Sales, etc.                  x                             x
Expense     Costs               x                                      x

The Balance Sheet                      The Income Statement
Assets                                 Sales
  Current                              Less: Cost of Sales
  Miscellaneous                        Gross Profit
  Fixed                                Selling and Administrative Expense
  Intangible                           Interest Expense
Liabilities                            Other Income and Expense
  Current                              Income Taxes
  Long term and Deferred               Net Income
Equity                                 Less: Dividends
  Common and Preferred Stock           Added to Retained Earnings
  Retained Earnings

Recording Transactions

Remember the basic rule in accounting that in the recording of a transaction debits
must equal credits. We can readily see that every business dealing has both a give
and a take to it. When a company buys merchandise it takes the goods and gives
money in return. The opposite occurs when the goods are re-sold. When it hires a
worker, a company takes the fruits of his or her labor and gives back cash in the
form of wages. In a broad sense, debits represent the take in a business transaction,
and credits the give.
For example, your company buys a new computer, paying $900 in cash. The entry is:

Debit (Dr): Office equipment   $900
Credit (Cr): Cash              $900

The give and take aspects of double-entry accounting are easily seen here (they are not always so readily apparent). The debit is what has been received; the credit is what has been given in exchange; the debit also represents a use of cash. In this example, both of the affected accounts are assets. The cost of the computer will be added to the cost of previously acquired office equipment, which is already shown on the balance sheet as an asset (debit). As we are combining a debit entry with a debit balance, the result will be a larger asset account on the next balance sheet.
However, the cash used to buy the computer was also an asset (debit). Now when we combine the credit to cash with our beginning debit balance of cash, the result is a smaller debit balance of cash. Since only assets were affected by this transaction, the total of assets was unchanged. There were no effects at all on liabilities, equity, or expenses.
Another example: Your company makes a sale amounting to $2150. As soon as an invoice is issued, the accountants will record the transaction. If the sale is for cash the entry will be:

Dr: Cash    $2150
Cr: Sales   $2150

In this situation, both entries are plus amounts. The debit is added to Cash and the credit is added to Sales, for Sales is a revenue account that normally has a credit balance.
Note: In this case there are no effects on liabilities, equity, or expenses. The cost of the goods that were sold will be recorded in a separate transaction at the time we derive a new inventory figure.

The Two Books of Account

The transactions we just looked at and similar ones are recorded in a book called the General Journal. It sets out in chronological order all of the firm's business dealings. It is like a diary of business transactions. Large firms have special journals, such as the Cash Receipts Journal, for recording certain classes of transactions, which are summarized at the end of the month or year in the general journal.

The second book of account is the General Ledger (GL), in which each and
every account has its own page, on which all of the journal entries relating to that
particular account are transcribed. With the GL you can look up an account such as
Cash or Notes Payable or Salaries and see all of the transactions made during the
year and the current balance of the account.

The Trial Balance

The process of closing out the books at the end of the year can be rather elaborate.
There are often many adjusting entries to be made, and various accounts must be
combined and fitted to form the final financial statements.
The rst step in that process is the preparation of the Trial Balance (TB). The
TB is a listing of all the accounts in the General Ledger with the current balances
shown in either a debit or a credit column. Since debits and credits are equal in
every transaction, the two columns of the trial balance should also be equal. Finding
the two columns equal is the trial part, for if they are not, the accountants must
locate and correct the errors before they can proceed. The accounts listed in the trial
balance are then divided among the balance sheet and the income statement to form
those reports.
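The journal-to-ledger-to-trial-balance flow can be mimicked with a few lines of code. This is only a toy sketch; the account names and amounts are the examples used earlier in this section, and the convention of keeping separate debit and credit totals per account is our own simplification.

```python
# Toy general journal, general ledger, and trial balance.
from collections import defaultdict

# Each journal entry: (account, debit_amount, credit_amount)
journal = [
    ("Office equipment", 900, 0), ("Cash", 0, 900),      # buy a computer for cash
    ("Cash", 2150, 0), ("Sales", 0, 2150),               # cash sale
]

ledger = defaultdict(lambda: [0, 0])        # account -> [total debits, total credits]
for account, dr, cr in journal:
    ledger[account][0] += dr
    ledger[account][1] += cr

total_dr = sum(dr for dr, _ in ledger.values())
total_cr = sum(cr for _, cr in ledger.values())

for account, (dr, cr) in ledger.items():
    print(f"{account:20s}  Dr {dr:6d}  Cr {cr:6d}")
print("Trial balance OK?", total_dr == total_cr)   # True: debits equal credits
```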

The Mirror Image

To the neophyte, the discussion about debits and credits may contribute to some
confusion and inconsistencies to the understanding of these two concepts. For
example, when we say that a debit to cash adds to our cash balance, it can create
understandable confusion. And when we put money into our checking account the
teller, if he or she speaks to us at all, may tell us that the bank is crediting our
account. But is this not debiting instead?
The confusion arises from the fact that our accounting entry in recording a
transaction is often a mirror image of the other partys. The cash that we take in
(our debit) is the same cash that the other person has paid out (his credit). And the
goods we delivered to him are deducted (by a credit) from

our

balance sheet, and
added (by a debit) to his. When the bank tells you that your deposit is being credited
to your account, they are speaking from their viewpoint, not yours. Their accounting entry is:

Dr: Cash
Cr: Demand deposits

Since your demand deposit is a credit on their books, an entry crediting your account is one that increases the balance.

ACCRUAL BASIS OF ACCOUNTING

Accrual accounting has made us as dependent upon CPAs as we are on MDs and JDs. But if you want to know how much your business is really earning, it is the only way to go. First of all, accrual is hard to pronounce without sounding as if you had a mouth full of bubble gum. Even most accountants, who are steeped in reverence
for the accrual basis, give the quick two-syllable pronunciation, a-krool rather
than the proper three-step version, a-kroo-al. Second, it rouses no sense of rec-
ognition or meaning. It is one of those words that requires you suspend all other
thoughts while you struggle for its gist. And third, even when you remember that
accrue means to accumulate or increase, it is still hard to make the connection with
the accrual basis of accounting.
Now that that is out of my system, accrual is the accounting principle that counts
sales as income even though the cash has not yet been received, and records expenses
in the period they produced sales although they may have been paid in some other
period.
If we look at the income statement in a companys annual report, the rst item
is sales, for the whole year, though it is likely the invoices from the nal month are
still uncollected. By the same token some expenses, such as telephone and utilities,
are counted although those incurred in the last month may not be paid until the
following year. Sometimes there is the opposite effect, where the cash moves rst,
and the recording of income and expense comes later. For example, a customer sends
in a check along with an order; the check may be deposited right away, but no sale
is recorded until the goods are delivered.
Or, say a company builds an elaborate display in December 2001 for a convention
in January 2002. Assuming the company operates on a calendar year basis, the
money spent in December would not be counted as an expense until 2002, the year
in which the benets of the expense are derived.
Accrual Basis versus Cash Basis
As individuals, most of us use a cash basis for tax purposes. We do not report money
owed to us as income only the cash we have received. Nor do we take a deduction
for a medical expense before we have paid the bill. Some businesses usually
small operate that way, too. There are a few advantages. It is a clean and simple
way of accounting; the cash receipts and payments records serve for the income and
expense statement as well and for tax purposes. The cash basis allows some maneu-
vering through the use of delayed billing or accelerated payments. The big disad-
vantage of the cash basis is that it is a crude measure of a fast-paced activity so
crude that a lot of damage can be done before a true assessment is made.
Details, Details
The accrual basis of accounting sometimes appears overly concerned with particulars. When the year ends one day after payday and an accountant spends hours calculating that one day's accrued salaries so they can be charged to the old year, you may well wonder if the accounting profession is not feathering its bed.
But if there is any one thing about business that is essential for management to know, it is an accurate picture of profit and loss. And given the large numbers we deal with and the slender profit margins that accompany them, getting accurate income figures is worth a lot of expense and bother.
Birth of the Balance Sheet
Accrual accounting is father to the balance sheet. Look at the assets side of a balance sheet, and draw a line under accounts receivable. Just about everything beneath that line is a prepaid expense, an expenditure waiting to take its place in the expense section of some future income statement.
On a cash basis, these assets would be counted as lost costs, a gross distortion of the truth. But with accrual-basis accounting we assign them a value commensurate with their potential for producing future revenue. The accrual method is often a pain both to apply and to understand, but consider the investment and credit decisions (and if those do not move you, the management bonuses) that are dependent upon it. The more accurate the measure of our past activities, the better will be our future decisions.
Profits versus Cash
Businesses are in business to make a profit, but they run on cash. And if you think I am kidding, try getting on a bus with just your income statement. Because accrual accounting distinguishes between the profit effect and the cash effect of transactions, it is necessary for management to have a cash plan as well as a profit plan. Yes, it is possible to be profitable and still go broke, that is, run out of cash. The problem can be acute for highly seasonal businesses or those with a fluctuating cash/credit sales mix.
Things Are Measured in Money
Most annual reports begin with a hymn of praise by management for themselves, followed by colorful pictures of shiny products and smiling workers. It is the CPA's job to express all this in terms of dollars in the income statement and balance sheet.
From time to time, sentimentalists wonder why the human worth of the employees is not reflected on the financial statements. Most of us know workers who might qualify as assets, and others more properly described as liabilities, but no one has yet come up with an acceptable way of putting a number to these characteristics.
Values Are Based on Historical Costs
The value of an asset is continually changing as a result of wear and tear on the one hand and inflation on the other. Perhaps the true value of an asset is revealed only at those moments in time when it is sold. Only at those times are we certain of the asset's hard cash value.
For convenience, accountants have fastened on the moment of acquisition to value the fixed assets. The amount originally paid for an asset is the balance sheet value, and no attempt is normally made to adjust that value except for scheduled depreciation.
For this convenience we pay a price, particularly in times of high inflation when assets often appreciate. One of the most often-heard criticisms of CPAs is their failure to value fixed assets at the current or replacement value. Leaving aside the controversy, there are two things to keep in mind:
1. Most assets are difficult to appraise, and there is often a wide difference of opinion.
2. Business assets acquire value from the revenue stream they are able to produce. If the value of assets does indeed rise during inflation, the proof should be found in higher earnings.
UNDERSTANDING FINANCIAL STATEMENTS
ASSETS
Most assets are little more than deferred expenses. Their value lies in the future sales they can generate. And, as with baking a cake, that takes the right proportion of ingredients. Assets are the things a company owns. All assets have value, but not all things of value are assets; under present accounting rules, only those that have a money value qualify. This means that one will find no listing on a balance sheet for the trust customers might have in a company or the team spirit of the employees, because those qualities cannot be measured in dollars.
What gives an asset its true value is not the materials and workmanship that went into its creation, but rather its ability to help generate a future stream of income. It is not the bricks and mortar, glass, and metal that make a fast food restaurant on a busy street an asset, but the sales that will result from operating the place. The same facility lying in the middle of a wheat field would be of no value at all. The dollar amounts shown on the balance sheet are the original costs (accountants like to say the historical costs) of the assets. From the original costs is deducted the accumulated depreciation, if any, against those assets.
THE INFLATION EFFECT
For the time being, balance sheets do not reflect the replacement costs of the assets, a fact that has inspired much derision of the accounting profession, especially in periods of high inflation. Beleaguered with various theories of value, accountants have at least picked one where the numbers can be verified. Every dollar on the balance sheet can be traced back to an invoice, contract, or some piece of paper in the files.
SUMMARY OF VALUATION METHODS
Historical Cost
Historical cost is the present basis of balance sheet values. We can all agree that a
thing is worth what someone will pay for it, so for at least one moment in time this
was the unquestioned value.
Liquidation Value
Liquidation value is the under-the-hammer price, as in a bankruptcy sale. Most assets
sold off this way would bring only a small fraction of their balance sheet value.
(Nine cents on the dollar is a historical average in bankruptcy liquidations.)
Investment or Intrinsic Value
This is the present value of all future income expected to be derived from the asset, discounted at a rate commensurate with business risk (typically 10% to 15%). While most financial experts would agree that this is an asset's truest value, it is all based on the tricky task of estimating future income. When you apply mathematical precision to a guess (the forecast), you simply get another version of a guess (the estimate), with angular numbers instead of round ones.
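As a rough sketch of the arithmetic behind intrinsic value, the following discounts an assumed income stream at rates in the 10% to 15% range mentioned above; the income figures are, of course, exactly the kind of guess the text warns about.

```python
# Present value of an assumed future income stream, discounted at a rate
# commensurate with business risk. The cash flows below are hypothetical.

def intrinsic_value(future_income, rate):
    """Discount each year's expected income back to today and sum."""
    return sum(income / (1 + rate) ** year
               for year, income in enumerate(future_income, start=1))

expected_income = [20_000, 22_000, 24_000, 25_000, 25_000]  # a five-year guess
print(round(intrinsic_value(expected_income, 0.10)))  # roughly 87,000 at 10%
print(round(intrinsic_value(expected_income, 0.15)))  # roughly 76,500 at 15%
```

Note how much the "truest value" swings just by moving the discount rate within the quoted range.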
Psychic Value
Often a factor in mergers and acquisitions, psychic value looks to the buyer's state of mind rather than any characteristic of the asset. Unfortunately, trying to divine the hopes and dreams rattling around in the mind of a potential buyer is not any easier than estimating future income.
Current Value or Replacement Cost
These, as a result of double-digit inflation a while back, got a lot of attention from the Securities and Exchange Commission and the accounting profession, if not businesspeople themselves. The burden of their studies, however, was not that asset values are really much higher than stated, but that depreciation allowances based on historical costs understated the true expense and thus led to overstated profits.
The current value issue, like inflation itself, is about as predictable as the common cold, and as frustrating to cure. Bad as historical costing is, CPAs just have not found anything they like better.
Assets versus Expenses
Granted that it may sound like a contradiction, assets and expenses are very much alike. Except for financial assets (discussed below) and land, assets are little more than prepaid expenses. The reason we do not just call them expenses is that they still have some juice left in them, some power to generate future sales.
All expenditures, except payments of debt, result in either an expense or an asset. The distinction rests on how long the item purchased will be of use. If it will be used up by the end of the year, it is an expense; if its usefulness extends beyond the present accounting year, it is an asset. Therefore, money spent for wages, electricity, or travel results in an expense, while money spent to acquire carpeting, a lathe, or a jet liner creates an asset.
Some distinctions are not so easy. Money spent to incorporate a business may be listed as an asset (organization expense) on the theory that it will benefit the company throughout its life. On the other hand, it might be written off at once as just another legal expense. Most of the asset/expense decisions will be made by your CPA using established principles, but there are always some arguable cases. The key question is, do you want to bear the entire expense now or stretch it out? Since most managements exist at the sufferance of the bottom line, it is more than an academic issue. More often than not, if it is a borderline case, the course of action
is to expense off what you can and still keep your job. Remember that the issue will not affect your cash balances; the money has already been spent.
TYPES OF ASSETS
Assets may be classified according to their tangibility. This is not the usual way we distinguish them in financial reports, but it can add depth to our understanding of the nature of modern business.
Financial Assets
These include cash, marketable securities, accounts and notes receivable, and investments.
Cash is the premier asset; it always gets the first position on the balance sheet. There is little need to explain why, for while other assets may interest us, cash generates something more akin to a fascination. I am reminded of something attributed to the Roman poet Ovid, who is best known for writing "The Art of Love" and its antidote, "The Remedies of Love." He said: "How little you know this world if you fancy that honey is sweeter than cash in the hand." Now if the poet Ovid sounds a little like an economist, the economist John Kenneth Galbraith sounds a little like a poet when he discusses money: "It ranks with love as the source of our greatest pleasure, and with death as the source of our greatest anxiety."
Accounts receivable are the monies due from customers. Nearly all firms that sell to other businesses sell on open account credit, so receivables usually represent one of the larger kinds of assets you need to run a company. Receivables are claims on money, and as such are maybe halfway to being cash. Some people are fond of reminding us that you still have to collect the account before you have something, but the average amount that ends up as bad debts is only about a third of a percent.
Other cash claims, such as receivables from and investments in affiliated companies, may or may not be financial assets, depending on how readily redeemable they are. Like a loan to your brother-in-law, these may be more in the nature of gifts than financial assets.
Financial assets make a shiny impression on those you deal with; they give you tactical flexibility; they invite opportunities to knock on your door; and they give you a sense of security and well-being. On the other hand, these financial assets are given to you (management) to do something with besides bathe in their glow, and by themselves they produce limited income. Later we will discuss the question of how much cash is too much. Unlike property and equipment, financial assets do not wear out or become obsolete. They do, however, suffer from inflation and, in the case of marketable securities, from fluctuation in market price.
Physical Assets
These include the inventory, land, buildings, equipment, and anything else you can paint. Most physical assets are subject to depreciation, the process of writing off (accountants say expensing) the assets over their useful life.
Exceptions are inventory, which is not kept long enough to depreciate (that is,
it had better not be), and land, which is assumed not to depreciate (but do not try
convincing the folks in the vicinity of Mt. St. Helens).
Physical assets other than these two can be viewed as lost costs or prepaid
expenses. Their value is manifest not in their cost, size, sturdiness, or beauty, but in
their ability to create customers.
Operating Leverage
For the past 250 years, business has gradually increased its physical assets in proportion to the people it employs, through the process we call automation, the substitution of machines for people. This is referred to as operating leverage. In general, higher operating leverage (a higher assets-to-people ratio) results in higher profits in an expansion and higher losses in a sales decline. That factor, so often neglected in capital budgeting decisions, can have a profound effect on a company's long-term prospects.
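A small illustration of the point with purely hypothetical cost structures: fixed costs stand in for machines, variable costs for people, and both firms are built to earn the same profit at the base sales level.

```python
# Hypothetical comparison of a high-leverage (automated) and a low-leverage
# (labor-intensive) firm over a 20% sales swing in either direction.

def profit(sales, fixed_cost, variable_rate):
    return sales - fixed_cost - variable_rate * sales

for sales in (800_000, 1_000_000, 1_200_000):  # -20%, base, +20%
    high = profit(sales, fixed_cost=400_000, variable_rate=0.40)  # automated
    low = profit(sales, fixed_cost=100_000, variable_rate=0.70)   # labor-intensive
    print(f"sales {sales:>9,}  high-leverage profit {high:>9,.0f}  low-leverage profit {low:>9,.0f}")
```

At base sales both firms earn 200,000; in the downturn the automated firm earns 80,000 against 140,000, and in the expansion 320,000 against 260,000, which is the swing the paragraph describes.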
Determining the Value of Inventory
Businesses use three principal methods to assign a value to their inventories: FIFO, LIFO, and the weighted average method. We may think of inventory as a reservoir of goods for sale. At the beginning of the year it stands at a certain level; during the year we add to it by purchasing or manufacturing more goods; from it we take the goods that we sell. And at the end of the year, we measure the level at which it stands. The value of our inventory is what it cost us to make (not what we think we can sell it for), but during the year costs may have fluctuated because of inflation or changes in the supply of and demand for the raw materials or goods we purchase. Moreover, in most companies businesspeople are not sure which goods, the higher costing or the lower costing, were sold and which remain in inventory at the end of the year. Therefore, the amount at which we value our ending inventory, as well as the cost of goods sold during the year, will depend on the valuation method we choose.
FIFO
The First-in, First-out (FIFO) method assumes that the oldest goods on hand are the first to be sold, and the inventory remaining consists of the latest goods to be purchased or made. This is a very reasonable assumption, since most companies will sell their products in roughly the same chronological order they acquired them.
LIFO
The Last-in, First-out (LIFO) method assumes that the latest goods acquired were the first ones sold, and the year-end inventory consists of the oldest goods on hand. In most of the firms that use LIFO, this is clearly a fiction. The reason it is accepted is to defer the payment of income taxes. How that works can be explained in a three-step thought process:
1. The history of the world is inflation; we have always had it (except for brief periods), and there is no sign of it disappearing.
2. In inflation, the goods purchased or made earlier in the year are likely to cost less than those acquired near the end of the year. By assuming the
year-end inventory comprises the old lower-cost goods, we tend to understate the value of the inventory. Conversely, we tend to overstate the cost of goods sold by assuming that the products delivered were the new higher-cost goods.
3. By overstating the cost of goods sold we will understate profits, and with lower reported profits we will have lower income tax payments.
Weighted Average
With this method, the goods remaining in inventory will be valued at the same
average cost as those that have been available for sale during the year.
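A compact sketch of all three methods against one hypothetical purchase history; only the assumed order in which layers are sold differs, which is why cost of goods sold and ending inventory differ by method.

```python
# Hypothetical purchases during an inflationary year: (units, unit cost).
purchases = [(100, 10.00), (100, 11.00), (100, 12.00)]
units_sold = 180
total_units = sum(u for u, _ in purchases)
total_cost = sum(u * c for u, c in purchases)

def cost_of_layers(layers, units):
    """Cost of the first `units` taken, in order, from the given cost layers."""
    cost, remaining = 0.0, units
    for qty, unit_cost in layers:
        take = min(qty, remaining)
        cost += take * unit_cost
        remaining -= take
    return cost

cogs_fifo = cost_of_layers(purchases, units_sold)        # oldest goods sold first
cogs_lifo = cost_of_layers(purchases[::-1], units_sold)  # newest goods sold first
cogs_avg = (total_cost / total_units) * units_sold       # weighted average cost

for name, cogs in [("FIFO", cogs_fifo), ("LIFO", cogs_lifo), ("Weighted avg", cogs_avg)]:
    print(f"{name:12s} cost of goods sold {cogs:8.2f}  ending inventory {total_cost - cogs:8.2f}")
```

With rising costs, LIFO reports the highest cost of goods sold (2080 here, versus 1880 under FIFO and 1980 under the average), hence the lower profit and the tax deferral described in the three-step thought process above.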
Depreciation
One of our most useful financial concepts for setting prices, providing funds to replace capital assets, and postponing taxes is depreciation. Depreciation is the value a fixed asset loses through our use of it and the passage of time. In business, we recognize that depreciation, along with the cost of labor, materials, and taxes, is an expense of running the company. Accountants recognize (record) depreciation in an unimaginative, mechanical way that approximates real life in the long run but may vary widely from it in the short.
We must recognize the expense of depreciation in order to correctly price our products and get a true picture of profits or losses. Suppose we bought an ice cream machine for $2000 and started selling cones for 50 cents, after determining that our out-of-pocket expense to make them was 30 cents. It is obvious we are not making a profit of 20 cents on each cone, even though we have that much extra in our pocket, because the 2000-dollar machine is gradually wearing out and losing its value (especially when making my favorite, pecan-praline, because the little crunchies in there wear it out faster). It is possible that at a half a buck a cone we will wind up losing money.
Useful Life Concept
Under present accounting practices, a company that buys equipment or some other fixed asset must estimate the number of years it will use the item and what salvage or residual value it will have when the company is finished with it. The depreciable amount, that is, the cost minus the salvage value, is apportioned to expenses over the useful life of the asset in either equal or formulated amounts.
Because depreciation is a legitimate expense, because it does not involve a cash payment to anyone, because it is based upon estimates of future wear and tear, and because a variety of depreciation methods are acceptable to tax authorities and CPAs, the depreciation process is almost an irresistible invitation to tax strategies and fiscal manipulation. The manipulators are themselves manipulated by the government, which frames depreciation rules so as to encourage businesses to buy new production equipment. (Not all organizations, however, bother with depreciation. A list of those that do not would include tiny companies with too little income to deduct depreciation from, as well as giant nonprofit institutions that neither charge for their services nor pay any taxes.)
Understanding depreciation can be tricky. In the typical business it has four different applications, each one giving a separate and sometimes opposite aspect, and it is easy to be exactly wrong about depreciation. Here are the four viewpoints:
Depreciation as an Expense
Depreciation reduces profits. It always makes them lower. It never adds to or in any other way benefits the bottom line. Got that? In that way, depreciation is like rent, salaries, income tax, or any other expense: the more you have of it, the lower your net income. Yes, depreciation is a non-cash expense, but in the preparation of the income statement or the profit plan of the future, depreciation expense reduces profits.
Depreciation as a Valuation Reserve
The word depreciation is also found in the phrases "accumulated depreciation" or the slightly old-fashioned "reserve for depreciation." In this guise, it represents the total amount of depreciation expense recorded for an asset since it was acquired. Accountants have a peculiar way of recording depreciation. You might think that the accounting entry would be something like this:

Debit Depreciation expense XXX
Credit The asset XXX

Not so. For reasons best known to themselves, accountants like to preserve the original cost of the asset. And so they create a valuation reserve that accumulates the depreciation expensed each year; the accumulation is then deducted on the balance sheet from the original cost of the fixed assets to produce a book value. Here is the accounting entry:

DR Depreciation expense XXX
CR Accumulated depreciation XXX

As you can see, accumulated depreciation has a credit balance; it is located, however, in the fixed asset section of the balance sheet as a negative figure, a subtraction from the original costs of the assets. As a credit nestled in among the debits, it is spoken of as a contra account or, being where it is, a contra asset.
Do not think of accumulated depreciation as any sort of cash fund. It is simply a number which, when deducted from the original cost of the assets, gives their current book value, that is, their undepreciated value. The depreciation of an asset continues until (a) the accumulation equals the depreciable amount, or (b) the asset is disposed of, in which case both the asset and the accumulated depreciation are written off the books.
Depreciation as a Tax Strategy
Imagine your income tax for this year. Imagine an expense that you could legitimately deduct. Now imagine this expense could be varied up or down within a certain range, thereby giving you some flexibility in setting your income and, therefore, your income tax. If you have any imagination left, think of this expense
as existing merely on paper and not requiring any cash. Can you see how, by adjusting this non-cash expense upward, you can actually save yourself cash by lowering your income tax bill? This is the magic of depreciation. Increase any other expense and you have less cash; increase this one and you have more.
Behind the enchantment, however, lie some essential truths, realities that are frequently overlooked (to the extent, that is, that real life permits such a thing):
1. Depreciation is not so much a saving of cash as a recovery of cash already spent. Money had to be laid out in the first instance to acquire the assets. In recognition of this, some countries (and to a very minor extent our own) permit depreciating the entire cost of the asset in the year it is acquired.
2. When an organization accelerates its depreciation, it is merely borrowing tax deductions from future years, so that the cash saved out of reduced taxes is, in theory at least, only a loan that will have to be repaid when the company has used up all of its deductible expense.
Depreciation as Part of Cash Flow
A simplified definition of cash flow is profit + depreciation. The idea behind it is to measure the extra cash generated by a business: cash received from sales minus cash paid out for expenses. Depreciation expense reduces profits in the first place, but since it is a non-cash expense it is restored to profits when estimating cash flow.
The positive role of depreciation in cash flow is so impressive that it often leads people to the mistaken notion that depreciation is a benefit to profits, also. As you can see, however, it is merely a restoration of money that was taken from profits in the first place. It is sort of like what they do when they make white bread: they mill out 65 nutrients, put back a half dozen, and call the bread enriched.
Investing in a business is not unlike making a loan to someone. When a payment is made to you on the loan, only part of it is interest income; the rest is principal, for the balance due you afterward is less. In a similar way, the surplus cash generated by a business comprises "interest" and "principal." Profit is like the interest income, but the rest of the cash flow, represented by depreciation expense, is a partial return of the original investment, for the value of the assets is now less. The most common methods of depreciation are:
Straight Line
This is the standard method used by most companies for financial (but not tax) purposes. Straight line depreciation is the easiest to compute and understand. You simply spread the amount to be written off equally over the years of useful life. The formula is:

(Original Cost - Salvage Value) / Years of Useful Life
Suppose, for example, you purchased an office copier for $5000, estimated its useful life at 4 years, and thought you could afterward trade it in for $1000. The depreciation would be:

($5000 - $1000) / 4 yr = $1000 per year
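The same copier figures, expressed as a short function (a sketch; the function name is mine, not a standard accounting API):

```python
def straight_line(cost, salvage, life_years):
    """Equal depreciation expense in each year of useful life."""
    return [(cost - salvage) / life_years] * life_years

print(straight_line(5000, 1000, 4))  # [1000.0, 1000.0, 1000.0, 1000.0]
```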
Sum-of-the-Years' Digits (SYD)
This is a modest form of accelerated depreciation. That is, it makes the charges heavier in the early years, lighter in the later. There are two reasons we may want accelerated depreciation. First, it more nearly matches the way the market value of used equipment drops; just think of the drop in value of a new car the minute you drive it out of the showroom. Second, there may be a tax advantage to speeding up the deductions.
The calculation of SYD is a cunning little arithmetic exercise that has nothing to do with real life except that it gets the job done. We start by adding the years' digits in the estimated useful life. Sticking with our copier example, the calculation would be:

1 + 2 + 3 + 4 = 10

The 10 becomes the denominator in a fraction, the numerator of which for the first year is the last number in the sum: 4. The fraction is applied to the depreciable amount, thus:

(4/10) × $4000 = $1600

The second year's calculation is (3/10) × $4000 = $1200, and so on through each digit until a total of 10/10, or 100%, of the depreciable amount has been expensed.
As you can see, the first year's depreciation under SYD is significantly greater than under straight line. More expense means less income and income tax. Since the total deductible amount is $4000 in either case, however, SYD will have to compensate later on for the big numbers in the early years.
Getting the sum of the years' digits can be tedious if it is a big number, such as 15 years, so here is a formula you can use. Where N = the number of years of useful life,

SYD = [N × (N + 1)] / 2

For example,

[15 × (15 + 1)] / 2 = 240 / 2 = 120
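A sketch of the full SYD schedule for the copier example (again, the function name is illustrative only):

```python
def sum_of_years_digits(cost, salvage, life_years):
    """Accelerated schedule: year 1 gets life/SYD of the depreciable amount, year 2 gets (life-1)/SYD, and so on."""
    syd = life_years * (life_years + 1) // 2   # e.g. 4 x 5 / 2 = 10
    depreciable = cost - salvage
    return [depreciable * (life_years - year) / syd for year in range(life_years)]

print(sum_of_years_digits(5000, 1000, 4))  # [1600.0, 1200.0, 800.0, 400.0]
```

The four charges still total the $4000 depreciable amount; only the timing changes.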
Double Declining Balance (DDB)
This method, which accelerates depreciation even more than SYD, also has the blessing of the IRS. It depreciates at twice the rate of the straight line method, applied to the full cost. In our example, you can see that the rate of the straight line
depreciation was 25% per year, because the useful life is 4 years and 4 × 25% = 100%. If the useful life had been 5 years, the rate would be 20%, and so on. Under DDB this rate is doubled and applied to the beginning book value each year. For the first year in our example, the depreciation is:

(25% × 2) × $5000 = $2500

As you can see, we have stopped kidding around; we are really talking depreciation now. At the end of Year 1 the book value of our copier is:

$5000 - $2500 = $2500

and the second year's depreciation is 50% × $2500 = $1250.
The calculations continue in that manner until the book value is reduced to the salvage value. That usually means there is no depreciation at all in the last years.
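And a sketch of the DDB schedule, stopping once book value reaches salvage (names are mine, for illustration):

```python
def double_declining_balance(cost, salvage, life_years):
    """Depreciate at twice the straight-line rate on the declining book value, never below salvage."""
    rate = 2 / life_years
    schedule, book_value = [], cost
    for _ in range(life_years):
        expense = min(book_value * rate, book_value - salvage)  # do not dip below salvage value
        schedule.append(expense)
        book_value -= expense
    return schedule

print(double_declining_balance(5000, 1000, 4))  # [2500.0, 1250.0, 250.0, 0]
```

Note the zero in the final year, which is exactly the "no depreciation at all in the last years" effect described above.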
Unit of Production
This is a non-accelerated method based on usage, the number of units produced or the hours of use. Suppose you thought your copier would give you a half million copies before you traded it in, and the first year you got 100,000 copies. The depreciation under this method would be:

(100,000 / 500,000) × $4000 = $800
Replacement Cost
Replacement cost is a theoretical method not used for either financial or tax purposes; the depreciation is based on future replacement rather than original cost. If you thought that at the end of five years you would have to pay $10,000 to replace your copier, you might consider the true depreciation cost to be:

($10,000 - $1000) / 5 = $1800 per year

Keep in mind, though, that this method is not authorized for financial reporting or income tax purposes.
Advantages of Accelerated Depreciation
Depreciation expense is known as a non-cash expense because there is no out-of-pocket payment associated with it, as there is with almost every other expense. Of course, a payment is made at the time the asset is acquired. Afterward, however, the amount or rate of depreciation has no effect on a company's cash except as it affects profits, and profits affect the amount of income tax that is paid.
By selecting an accelerated method of depreciation, a company can postpone the payment of some taxes that would be paid using the standard straight line method. This postponement is very like an interest-free loan from the government.
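A back-of-the-envelope illustration of that postponement, using the copier schedules above and an assumed 35% tax rate (the rate is mine, purely for illustration):

```python
# Extra first-year deduction from accelerating depreciation, and the tax deferred.
straight_line_yr1 = 1000
ddb_yr1 = 2500
assumed_tax_rate = 0.35                            # hypothetical rate, for illustration

extra_deduction = ddb_yr1 - straight_line_yr1      # 1500 of additional year-1 expense
tax_deferred = extra_deduction * assumed_tax_rate  # 525.0 of tax pushed into later years
print(extra_deduction, tax_deferred)
```

The 525 is not saved outright; as point 2 above says, it is a loan from future years, repaid when the accelerated schedule runs out of deductions.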
FINANCIAL STATEMENT ANALYSIS
Financial statement analysis has long been considered an art. Though the numbers that analysts deal with are specific, the interpretation of those numbers is not, and even the question of which numbers relate in a meaningful way is largely unsettled. For example, in my own research, I have come across nearly 150 different ratios used to gauge financial condition. Each has its coterie of followers (bankers, financial managers, investment analysts, and CPAs) who daily seek some clue to the future by examining the financial statements of the past.
Is it possible to predict the outcome of a business venture? Many people think so. Yet there is very little evidence that businesses evolve in a linear way. The random walk of the stock market, the whims of fashion, the variety of life's experiences all testify that if the future held no surprises, that would be the biggest surprise of all.
In recent years, however, a number of developments have acted to make financial statement analysis more scientific and less unpredictable. Included among these are
1. A broader use of mathematics and statistics in defining the major elements of a business and their relationships
2. The use of computers, with their enormous capacity for storing and classifying business data
3. The refinement of accounting practices, which has given us more reliable financial statements
To be sure, the main difficulty with financial statements is what may be called the good news/bad news syndrome. Seldom do financial statements look completely good or completely bad; they nearly always exhibit both qualities. There are two principal ways of analyzing the financial strength of a company. One is through a ratio analysis of recent financial statements. The other involves a financial forecast of the near future. Ratio analysis is the easiest to learn and the fastest to use, and that is the method we will examine first. Financial forecasting is more difficult to learn and complex to apply, but it gives superior results.
Forecasts often require us to make difficult estimates of unknowns, but they deal in specific goals and dates, such as earnings in the coming year or cash flow in the next 15 months. Ratios, on the other hand, are usually easy to calculate, but the results are often abstractions that may be hard to apply to real-world problems. Does knowing that the current assets are 200 percent of the current liabilities tell you if you can pay your bills on time?
As we discuss ratios, keep in mind that they are nothing but little numbers unless
we have some standard by which to measure them. The 2:1 current ratio mentioned
above does not help you much unless you know what number constitutes a good
current ratio and whether it gets better as it gets higher, or vice versa.
RATIO ANALYSIS
The dollar values of items on the income statement and balance sheet have little significance by themselves. Rather, it is the proportion of accounts, or groups of
accounts, one to another, that tells us whether a company is financially viable or not. For example: Suppose a businessperson tells you his company has $85,000 in its checking accounts. The figure means virtually nothing unless you can relate it to other aspects of the business. If the man runs a local shoe store, he may be well fixed, but if he turns out to be the president of the Eastman Kodak Company, he is talking about the amount of cash that flows in and out of the company approximately every minute of the business day.
A major problem with this kind of analysis has been the proliferation of different ratios. Every financial statement lists several accounts; they may be compared to each other or to the same accounts in previous periods; combinations of accounts are related to individual items or to other combinations; and ratios themselves are often divided by other ratios to produce "super ratios" for determining trends. The possibilities and the confusion seem to be without limit. As an example let us look at a balance sheet and income statement for a hypothetical Company X.
BALANCE SHEET Company: X
Analyst: Christine R. Date: May 15, 2002 $Millions
Statement Date: 12/28/00 12/28/01
ASSETS
Cash & Short Term Investments 1585 613
Accounts Receivable (Net) 1678 2563
Inventories 1703 2072
Deferred Taxes 230 348
Prepaid Expense 50 215
Current Assets 5246 5811
Property, Plant & Equipment 6861 12919
Less: (Accumulated Depreciation) 3426 6643
Net Fixed Assets 3435 6276
Investments
Goodwill and Other Intangibles 5 383
Other Assets 68 432
Total Assets 8754 12902
DEBT + EQUITY
Notes Payable
Current Maturities of LT Debt
Accounts Payable 1564 3440
Accrued Liabilities
Taxes Payable 482 209
Dividends Payable 201 142
Current Liabilities 2247 3791
Long Term Debt 66 911
Deferred Taxes 271 1209
Other 142 603
Total Liabilities 2726 6514
Preferred Stock
Common Stock 674 936
Retained Earnings 5354 6533
Less: (Treasury Stock) 1081
Equity 6028 6388
Debt + Equity 8754 12902
INCOME STATEMENT Company: X
Analyst: Christine R. Date: May 15, 2002 $Millions
Statement Date: 12/28/00 12/28/01
Period: Year Year
Net Sales or Revenues 9734 11550
Less: Cost of Sales 6085 7613
Gross Profit 3649 3937
Expenses:
Selling, G&A 1753 2693
Depreciation
Research and Development
Operating Expenses 1753 2693
Operating Income [EBIT] 1896 1244
Interest Expense or (Income) 86 71
Other Expense or (Income) 19 55
Nonrecurring Expense or (Income) 520
Income Taxes 809 224
Net Income 1154 374
Cash Dividends 517 551
Ratio analysis encompasses scarcely a half dozen generations of analysts, so ratio names are not well settled or precisely defined. They are more so within particular groups such as CPAs, bankers, and stock market analysts. But between those groups the same ratio may have different names and the same name may be used for different ratios. Since this book is meant for a broad range of executives, I have used names and definitions as I found them in general business use, rather than in the specialty fields.
Liquidity Ratios
Liquidity refers to the ease with which an asset can be converted to cash. The liquid assets in a business are cash itself and those things that are near to being cash, such as accounts receivable, or that are readily convertible, such as marketable securities. The Securities and Exchange Commission and countless analysts have defined liquidity as the ability to pay debts when they come due. A gutsier definition might be simply "enough cash." But enough for what? The answer to that is usually found in the denominator of liquidity ratios: enough cash to pay the bills coming due; enough to pay recurring expenses such as payroll; and enough to cover unexpected needs and opportunities. In addition, that simple question often yields a perplexing answer. The elements of liquidity are in an active state of flux. Both the amount of cash a business has on hand and the amount it is obligated to pay change with virtually every transaction that occurs. And even a modest-sized company may have 1000 employees spending its money and 10,000 customers sending cash in.
It is difficult, if not time-wasting, therefore, to contemplate cash needs moment to moment. Most firms try to forecast cash flows in and out for a day, a week, or a month, and then add a cushion to cover normal variances. An even more serious problem in managing liquidity is that we are obliged to weigh an uncertainty against a certainty.
There is little in life that is as fixed, certain, and unremitting as a debt owing. On the other hand, few things are as inconstant, fickle, and capricious as payments promised, loans pending, and sales forecasted. In using liquidity ratios it helps to identify the certain and uncertain elements, and how much of the latter it takes to balance the former. For example, in the popular current ratio (current assets/current liabilities), we see a blend of uncertainties in the numerator. We can count on the cash we already have, but the timing of receivable collections is somewhat uncertain, and the sale of inventory even more so. The amounts and due dates of the current liabilities, however, are known and fixed. In matching up the two elements, therefore, we know instinctively there should be more current assets than current liabilities in order to offset the uncertainty of the former.
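Using the Company X balance sheet shown earlier, the current ratio works out as follows (a quick sketch):

```python
# Company X current ratio for the two statement dates shown earlier ($ millions).
current_assets = {"2000": 5246, "2001": 5811}
current_liabilities = {"2000": 2247, "2001": 3791}

for year in ("2000", "2001"):
    ratio = current_assets[year] / current_liabilities[year]
    print(f"12/28/{year[2:]}: current ratio = {ratio:.2f}")
# 12/28/00: current ratio = 2.33
# 12/28/01: current ratio = 1.53
```

The cushion of current assets over current liabilities shrank noticeably between the two dates, which is exactly the kind of movement this ratio is meant to flag.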
Can a firm be too liquid? Can it have too much cash for its own good? There are certain conventional truths that circulate among businesspeople that do not bear close scrutiny so well. One of them says that the company with abnormally high cash balances may be missing investment opportunities that could bring growth and profits. In my view, cash is the beginning asset in business and the final asset. And during the game it is like the queen on a chessboard. It travels in all directions, any number of squares. That is, it is the most powerful and flexible of assets, and
as long as you have plenty of it you are in a superior position for taking advantage of opportunities that float your way.
Financial Leverage
Financial leverage is the mix of debt and equity in a business. The perfect mix is one that exactly balances the entrepreneur's love of leverage and the creditors' fear of it.
Leverage is the relationship between the amount of money creditors put in a business and the amount the owners contribute. Where there is plenty of debt and not much equity, we speak of high leverage. Where there is little debt and lots of equity, we talk of low leverage. Since leverage refers to the relationship of a firm's debt and equity, it stands to reason that a ratio of debt to equity will measure it. And debt to equity is in fact the most popular ratio for gauging leverage.
Other well-known leverage ratios include equity/debt, assets/equity, and debt/assets. All of these ratios have a direct mathematical link and tell exactly the same story. Only the scale is different. The problem is not in measuring leverage so much as it is in knowing when a company reaches a reasonable debt limit. Unfortunately, we cannot tell how much leverage is enough except by noting when there is too much. When a company goes bankrupt, we can say with a measure of confidence that the company should have had a little less leverage. At that point, however, the question itself is usually academic.
Coverage Ratios
Coverage ratios are intended to measure a company's ability to pay the interest on its debt from its earnings. Some financial people consider these a form of leverage ratios, but in reality they are nothing more than earnings ratios, when they are useful at all. The most popular coverage ratio is the Times Interest Earned ratio. The formula is

(Profit before interest and taxes) / Interest
Earnings
Of the three major financial characteristics (liquidity, leverage, and earnings) the last is the most complex but the easiest to understand. Simply stated, earnings are derived out of accounting's most fundamental formula:

Sales - Expenses = Earnings

It is a formula that applies to the largest oil company, the smallest lemonade stand, and everything in between.
The complex nature of earnings becomes apparent when you try to analyze them. Why are some companies profitable while others are not? And why do some firms, profitable for decades, suddenly turn stagnant? The key elements of profitability are:
Demand for the firm's products or services
The severity of competition
The effectiveness of cost control
Employee motivation
Management knowledge, experience, and judgment
To a larger extent than we usually acknowledge, luck
Of these, demand for the company's products or services is not only the greatest influence on earnings, but whatever is in second place (probably luck) is way behind it. Demand is the condition of being sought after, and it is made manifest in business by the willingness, coupled with the ability, of customers to buy what you are selling. This is the reason this section on accounting and finance is included in a discussion of six sigma/DFSS. Unless we internalize the concept of demand in relation to the functional requirements that the customer is ever seeking, we are not going to be profitable.
Demand is a fickle friend; it comes often without warning and disappears the same way. It is not something we have a great deal of control over. Rather, it is a condition that arises within our customers and is difficult to predict, even by the customers themselves, unless we spend some time and investigate their needs, wants, and expectations. Businesses can stimulate demand a little with advertising and other marketing efforts, but by and large it is created by the customer in a way that we do not completely understand.
All of this leads us to the basic business risk, the reason companies are deserving of making a profit. When you start a business, you have to create a product, gather the people and materials needed to make it, set up a distribution system, and advertise the product, all before you have the first evidence that people will buy the product from you.
Earnings Ratios
There are a number of earnings or profitability ratios in current use. Just about all of them use numbers from both the balance sheet and the income statement. Some are better than others, and we will touch on the most popular ones.
Le ROI
The king of the earnings ratios is often referred to as ROI: Return on Investment. That is the ratio of profit to equity. But in recent years, the interest in these measurements has multiplied so that there is now a whole family of ROI ratios, and ROI has become a generic term for several different kinds of measures.
Most earnings ratios are called Return on Something, and the method of calculation is fairly standard. "Return on" indicates that some profit figure is in the numerator, and the "something" is the denominator of the fraction. The result usually falls into a range between 0 and .5 and is normally expressed as a percentage figure.
Many of the return ratios come in two colors, profit before tax and profit after tax (PAT). Both types are commonplace, but the former is about twice the size of the latter, so you have to pay attention to what you are looking at. I will always be
referring to PAT unless I say otherwise. Here is a brief description of the three most popular Return ratios, all three of which are calculated by most companies.
ROE: Return on Equity

ROE = Profit / Equity

ROE is the last word in profitability ratios. When the smoke and mirrors of this special factor and that extra adjustment are put aside, this is the measurement that tells you whether you really have a business or not.
ROA: Return on Assets

ROA = Profit / Assets

You will remember that Assets = Debt + Equity, so ROA is like ROE except that the denominator is bigger and the percentage return is therefore smaller than ROE. This ratio is popular among larger companies for measuring the performance of subsidiaries and divisions. ROE would be a better measure, but when companies cannot determine a true equity figure for a division, this works pretty well.
ROS: Return on Sales

ROS = Profit / Sales

ROS tells how many cents out of each sales dollar go into the owners' pockets. Nearly every company calculates this ratio, but it is not really very useful because there is no standard to gauge it by, as there is with ROE. For example, Company A may cater to the carriage trade and have limited sales but a high ROS. Company B may be a mass merchandiser with a low ROS, yet B could have a higher ROE than A because profit is the product of ROS and the volume of sales.
The average ROS for all companies in the US is between 5 and 6%, but the number varies widely from company to company, even in the same industry.
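Applying these definitions (plus the Times Interest Earned ratio defined earlier) to the Company X statements above, a quick sketch using the 2001 after-tax figures:

```python
# Company X, 12/28/01 figures (in $ millions) taken from the statements above.
net_income, equity, assets, sales = 374, 6388, 12902, 11550
ebit, interest = 1244, 71

print(f"ROE  {net_income / equity:.1%}")                 # ~5.9%
print(f"ROA  {net_income / assets:.1%}")                 # ~2.9%
print(f"ROS  {net_income / sales:.1%}")                  # ~3.2%
print(f"Times interest earned {ebit / interest:.1f}x")   # ~17.5x
```

Note how the ranking works out as the text predicts: ROE is larger than ROA because equity is only part of total assets, and ROS by itself says nothing about whether a 3.2% margin is good for this kind of business.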
Other Return Ratios
There are dozens of other return ratios in active use, but their definitions are not well settled. Here are a few of the more common three-plus-letter jobs, but even with these, definitions vary among users.
RONCE: Return on Net Capital Employed. The denominator, net capital employed, usually refers to total debt plus equity minus non-interest-bearing debt such as accounts payable. But this is not always what it means, so if it is important for you to know the precise meaning, ask the user to define the ratio.
ROAM: Return on Assets Managed. Often used in management bonus plans; similar to RONA, which follows.
RONA: Return on Net Assets. Here the denominator starts out with total assets and then certain ones are deducted. Often the assets taken out are those not directly related to the running of the business, for example, investments.
ROGA: Return on Gross Assets. This is likely to be the same as ROA, Return on Assets.
FINANCIAL RATING SYSTEMS
We first encounter rating systems on our grade school report cards and spend the rest of our lives complaining about their inadequacies. Financial rating systems, too, have their limitations, but they can be an effective means of getting a quick measure of financial strength.
Financial rating companies have been around for most of this century. Even now, in this age of rigid accounting standards and computer-assisted analyses, businesspeople continue to rely heavily on rating services rather than doing the rating themselves. Apparently, the virtues of the rating companies (thoroughness, consistency, and conservatism) outweigh their drawbacks (old information, obscure criteria, and cost) in the minds of many users.
Rating services are best employed where the user is dealing with a large number of companies, and the investment or credit risk the user is taking with any one of them is small. But if the risks are concentrated or a lot of money is at stake with particular companies, the user should learn to make the analysis personally or at least be able to confirm the judgment of the rating companies. Research studies have shown that the rating firms are often slow to react when a company's financial strength is on a downslide.
BOND RATING COMPANIES
The generally wealthy institutions and investors who buy most bonds make extensive use of rating services; over 4000 issues are rated on a regular basis. Stripping it to its essentials, the analytical process is one of comparing:
1. Total debts against expected future profits, which are the primary source of interest and principal payments
2. Total debts against total assets, the liquidation of which is a secondary, albeit dire, source of repayment
Moody's et al.
Moody's and Standard and Poor's are the best known of the bond rating companies. Neither of these firms reveals exactly how it arrives at its ratings, but the following criteria figure prominently in their classifications.
1. Financial leverage; the lower the leverage, the better the rating.
2. Profitability, or rather, the avoidance of losses.
3. Steadiness of profits, the importance of which has led many firms to attempt "managing" or smoothing out their year-to-year earnings.
4. Total revenues or extent of market share; being an industry leader makes you a stronger competitor, or is it the other way around?
Both Moody's and Standard and Poor's use letter ratings beginning with triple A. Here are samples of the definitions they give their classifications.
Moody's
Aaa: Bonds carry the smallest degree of investment risk and are generally referred to as "gilt edge." Interest payments are protected by a large or an exceptionally stable margin, and principal is secure.
Caa: Bonds are of poor standing. Such issues may be in default or there may be present elements of danger with respect to principal or interest.
Standard and Poor's
AAA is the highest rating and indicates an extremely strong capacity to pay principal and interest. Bonds rated BB, B, CCC, and CC are regarded, on balance, as predominantly speculative with respect to the issuer's capacity to pay interest and repay principal in accordance with the terms of the obligation.
As one can see, neither company gets very specific about the meaning of the ratings, and no attempt is made to predict the future of the subject firm. Rather, the ratings convey a feeling about it. That feeling represents the risk side of the investment equation:

Risk = Return

The return, on the other hand, is represented by a specific number: the bond's yield. Now, if the bond's rating could also be expressed as a specific number (the percentage probability of loss), investment decisions would be greatly simplified.
The rating agencies maintain a rigid independence from the companies they analyze. Rigidity is necessary because of the millions of dollars of higher or lower interest costs that often ride on the change of a rating (that is, in subsequent bond issues, not the ones outstanding). Now and then you will see a company emit an outraged howl at being downgraded.
RATINGS ON COMMON STOCKS
In addition to rating bonds, these firms rate preferred stock and commercial paper on similar scales. As to common stocks, there are hundreds of companies that dispense investment advice. While a few confine themselves to issuing data, most have some subjective or objective method of selecting stocks for purchase or sale. None give full descriptions of the selection process, of course, because their advice, and the mystique that surrounds it, are all that they have to sell.
Among the major services there are two that have established rating systems for use in selection of common stocks: Standard and Poor's and The Value Line Investment Survey. Standard and Poor's publishes a monthly stock guide that is crammed with
financial data on about 5000 common and preferred stocks. Most of the stock issues are rated on an eight-level scale running from A+ (highest) to D (in reorganization).
The S&P Rating Method
The rating formula is based on a computerized scoring system that traces the trends of earnings and dividends over the previous ten years. The basic scores are then adjusted for growth, stability, and cycles; final scores are measured against a matrix of a large sample of stocks. The Standard and Poor's (S&P) rating serves well enough as a measure of a company's past performance; but as it ignores the condition of the balance sheet and future earnings estimates, it is only a starter in an investment analysis. Considering the price of the analyses, however, about ten cents a gross, there hardly seems to be grounds for complaint.
The Value Line Method
The Value Line Investment Survey tracks over 1700 companies on a regular basis. Using unpublished equations, it rates each company for safety and investment timeliness. Both factors are ranked on a scale of 1 (highest) to 5 (lowest), the rankings being relative to all 1700 stocks, not to some absolute standard. The Value Line safety ranking is based on such factors as leverage, fixed charge coverage (the number of times over that profits could pay the annual interest expense), liquidity, and the riskiness of that type of business. The timeliness factor is a comparison of a stock's price trend against its expected earnings. A company may have a terrific near-term profit outlook, but if the market price of its stock is hovering somewhere in the stratosphere, it may not be a timely buy.
Good Ole Ben Graham
Among the published formulas for investing in stocks, one of the most famous and enduring is based on the intrinsic value theory of the late Benjamin Graham. Professor Graham, a pioneer in fundamental analysis, did most of his research in pre-computer days, when computations were made on those clunky mechanical calculators and laboriously recorded by hand. Graham's notion was that stocks at any given time are likely to be undervalued or overvalued; that is, many investors are buying and selling shares for reasons other than their fundamental value. The smart investor will appraise that value (relative to the price) and snap up the undervalued bargains. Among the criteria Graham proposed were the following:
1. For financial soundness, a ratio of total liabilities to current assets of no more than 60%.
2. A ratio of equity to total debt of at least 100%; Mr. Graham often said a company should not owe more than it is worth.
3. One buy signal: a price less than 2/3 of the net-current-asset value, defined as

[Current assets - (Total liabilities + Preferred stock)] / Number of shares outstanding
4. Another price criterion: the earnings-to-price ratio (the reciprocal of the P/E ratio) should be at least double the average Triple-A bond yield for industrials. So if the bonds were averaging 10%, the E/P ratio should be at least 20%, which means a P/E ratio of 5 or less.
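A hedged sketch of how those four criteria might be coded as a screen; the sample company figures and the assumed bond yield are invented, while the thresholds are as Graham stated them above.

```python
# Hypothetical company figures for screening against Graham's criteria.
stock = dict(price=18.0, eps=4.0, current_assets=90e6, total_liabilities=40e6,
             preferred_stock=0.0, equity=70e6, total_debt=40e6, shares=2e6)
aaa_bond_yield = 0.10   # assumed average Triple-A industrial yield

ncav_per_share = (stock["current_assets"]
                  - (stock["total_liabilities"] + stock["preferred_stock"])) / stock["shares"]

checks = {
    "liabilities no more than 60% of current assets": stock["total_liabilities"] <= 0.60 * stock["current_assets"],
    "equity at least 100% of total debt":             stock["equity"] >= stock["total_debt"],
    "price below 2/3 of net-current-asset value":     stock["price"] < (2 / 3) * ncav_per_share,
    "E/P at least double the AAA yield":              (stock["eps"] / stock["price"]) >= 2 * aaa_bond_yield,
}
for test, passed in checks.items():
    print(f"{'PASS' if passed else 'FAIL'}  {test}")
```

For this invented company three tests pass and the net-current-asset test fails, which is typical: the full set of criteria is deliberately hard to satisfy.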
Special warning: Before you rush out and sell the farm to try one of these stock market systems, be aware that no system has ever had a consistent, long-term run of higher than average profits. Even Ben Graham's common-sense method suffers from the perverse nature of the market: No matter how adept you become at finding undervalued stocks, the only way you can make money on them is if the rest of the investors come to the same realization soon after you have bought some shares. Sometimes they never do, and you could be left sitting on top of an undiscovered gold mine until it is, well, too late.
COMMERCIAL CREDIT RATINGS
Dun & Bradstreet
Dun & Bradstreet is the granddaddy of commercial credit rating firms. While its main business is furnishing information about a company's finances and paying habits, it also assigns two ratings to those firms about which it has sufficient information. The first is a 15-level scale of a firm's estimated financial strength or equity. The highest classification is 5A, for companies with a net worth of $50 million or more; midway down the list is BB, $200,000 to $299,999; the lowest rank is HH, covering an equity less than $5,000. D&B will only issue a rating when it can obtain an equity figure, usually from the subject company, and has no reason to think it is inaccurate.
The second rating is a composite credit appraisal, which is derived by an unpublished formula, presumably taking into account a company's liquidity, leverage, and profitability, as well as paying habits and any adverse events such as tax liens. This appraisal has four grades:

Grade Meaning
1 High
2 Good
3 Fair
4 Limited

Each credit appraisal is done in conjunction with the financial strength rating, so that the 1 in a rating of EE1 does not reflect the same standards as the 1 in 4A1. While there are smaller credit agencies that also issue ratings, D&B's system stands virtually alone. So extensively is it used by vendors granting trade credit that for thousands of firms a good D&B rating is all that is necessary to establish an open account.
Other Systems
Sometimes, a do-it-yourself approach is used in evaluation. That means the analyst uses multiple statistical tools to create a scoring system. Two major approaches are known for this endeavor.
The first type consists of systems that base their ratings on a statistical analysis. The methodology is to gather a bunch of financial statements, several years' worth, from companies that have gone bankrupt, and a bunch from otherwise similar companies that have remained afloat. Various ratios are then calculated in an effort to find those indicators that best distinguish the one group from the other.
Perhaps the best-known work in this field has been done by Professor Edward
I. Altman of New York University, who in 1968 developed a statistical model for
the prediction of corporate bankruptcy. Altman's formula, known as the Z score,
combined five ratios selected by an advanced statistical method called discriminant
analysis, which searches out the best combination of ratios rather than the single
best ratio for predicting the future. The five ratios, shown in Table 15.3, are multiplied
by the corresponding factors, and the products are added together to get the score.
The critical Z score was 1.81, according to the study, which found that all of
the firms in its group with a score that low had gone bankrupt. Moreover, the model
correctly classified 95% of the total sample one year prior to bankruptcy. (A note for
this analysis: the test did not hold its effectiveness for the long term, as an increasing
proportion of its predictions became false. In 1977, Professor Altman revised the Z
score, but again it has not caught on.)
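Because the Z score is just a weighted sum of the five ratios in Table 15.3, it is easy to compute once the statement figures are in hand. The sketch below (Python, with purely illustrative figures that are not drawn from any real company) applies those factors; it assumes Altman's original convention that the first four ratios are entered as percentages, which is why the published factors look so small.

```python
# A minimal sketch of the Z score computation from Table 15.3.
# All input figures are illustrative only, not from any real company.
# Assumption: the first four ratios are entered as percentages (Altman's
# original convention); sales/assets is used as a plain ratio.

def z_score(working_capital, retained_earnings, ebit,
            equity_market_value, sales, total_assets, total_debt):
    x1 = working_capital / total_assets * 100       # working capital / total assets (%)
    x2 = retained_earnings / total_assets * 100     # retained earnings / total assets (%)
    x3 = ebit / total_assets * 100                  # EBIT / total assets (%)
    x4 = equity_market_value / total_debt * 100     # shares' market value / total debt (%)
    x5 = sales / total_assets                       # sales / total assets (ratio)
    return 0.012*x1 + 0.014*x2 + 0.033*x3 + 0.006*x4 + 0.999*x5

z = z_score(working_capital=150_000, retained_earnings=200_000, ebit=90_000,
            equity_market_value=400_000, sales=1_200_000,
            total_assets=1_000_000, total_debt=500_000)
print(f"Z = {z:.2f} ({'below' if z < 1.81 else 'above'} the 1.81 critical value)")
```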
COMPANY AND PRODUCT LIFE CYCLE
Businesses, like governments, like all institutions created by human beings, are mortal.
In the course of time they are created, enjoy their season, and then vanish. Their
evolution can be traced through four stages: tryout, growth, maturity, and decline.
Products, too, have a limited life, and if a demand for them develops, they can
be expected to obey a somewhat similar pattern, that depicted in Figure 15.1.
Although few companies or products will imitate this design exactly, the concept is
useful in forecasting sales and cash flow.
TABLE 15.3
The Z Score
Ratio Factor
Working Capital/Total Assets .012
Retained Earnings/Total Assets .014
EBIT/Total Assets .033
Shares Market Value/Total Debt .006
Sales/Total Assets .999
The tryout period is one of experimentation: finding the product, the price, the
method of distribution, the niche that will create customers. If and when the growth
stage develops, a heavy investment in promotion and production is needed. During
this period, which may last a decade or two, it is usual for more cash to be spent
than received, even though the operation is highly profitable. With maturity the cash
flow turns positive as sales level off. The last stage is the least predictable, some
companies going out with a bang, some with a whimper, others merging themselves
quietly into the operations of a more viable firm.
CASH FLOW
Cash flow has been defined as:
Profit + Depreciation
In recent years a new definition has been taking shape:
Profit + Depreciation + Deferred Taxes
Cash flow is intended to represent discretionary funds that are over and above
what is needed to continue running the business, and may, therefore, be used to
expand the company, pay off loans, pay extra dividends, and so on.
When it was first conceived, the idea of adding profit and depreciation to get
cash flow found overnight acceptance among business executives. If profit was
vanilla ice cream, cash flow was a chocolate sundae. But it also produced an
unwanted side effect: the non-cash illusion.
Cash flow is a popular term with business managers. It is a phrase that is vague
enough to make you sound like you know what you are talking about even when
you do not. It is also useful in cases where you do know what you are talking about
but do not want to talk about it. As when a supplier calls you about an overdue bill.
Which would you rather say?
FIGURE 15.1 Life cycle of a typical company or product (tryout, growth, maturity, and decline plotted over time).
We do not have the money right now.
or
We are experiencing a temporary cash flow problem.
The latter statement conveys your understanding of mysterious economic forces
and implies that the solution to the problem is just around the corner. Unfortunately,
such a fine phrase as cash flow has been used in so many different ways that you have
to verify its meaning each time you hear it. Often, as in the example above, it means
the same as cash; but sometimes it is the amount of cash flowing into the business
each month or year; at other times it is the difference between the inflow of cash
from sales and the outflow for expenses; and to those who enjoy elevating obscurity
to its highest plane it is the funds available as working capital and for expansion.
Most of the time when professionals, especially financial professionals, speak of
cash flow they are talking about the specific dollar amount derived by adding
depreciation back to profit.
When we analyze cash flow we are asking what activities brought money into
the business and what activities caused it to flow out. In the simplest terms, where
did the cash come from, and where did it go? Most of the cash coming into a business
is the proceeds of sales; most of it going out is to pay expenses. So we can start our
cash flow analysis with the difference between the two, which is profit.
Profit = Sales - Expenses
Included in the expenses is depreciation, and maybe amortization, but these are
non-cash expenses; that is, there is no money paid out for this expense because it
was all paid out at the time the asset was bought.
The concept of cash flow is lame in one respect. It fails to recognize the need
to replenish fixed assets. Plants and equipment must be replenished, just as inventory
is. To say that funds from depreciation do not have to be spent on new fixed assets
is as deceptive as saying that cash received from a sale does not have to be spent
to buy new merchandise. (If you need any further convincing, look at some published
annual reports, in the statistical section, where you often find figures for depreciation
and new capital investment going back five or ten years. Count the number of years
that the value of new equipment exceeded the depreciation charges; chances are it
will be at least nine times out of ten.) In other words, companies are not only using
all of the depreciation money to buy new fixed assets, they need a good deal more
besides. A better formula for calculating cash flow would be:
Profit + Depreciation - New Fixed Assets
Even if a company is not growing, chances are that inflation will push replacement
costs higher than depreciation rates.
A FINAL THOUGHT ABOUT CASH FLOW
Because of the non-cash illusion, the concept of cash flow has little relevance to
day-to-day management. It is useful in calculating the return on proposed capital
investments, as we will see later, but for the management of cash and cash planning
it is not. Those activities are best managed with detailed budgets and forecasts.
Moreover, the mystique of cash flow has been known to replace common sense, as
in the airline industry, where enormous depreciation charges often mask treacherous
losses; likewise in some tax shelter schemes, where non-cash charges are used to
reduce taxes and thus appear to actually generate money.
In the last analysis all firms, all tax schemes, must be profitable to be successful.
Profits are the true test of any investment, and to the extent that cash flow confuses
this ultimate reckoning it does us a disservice.
It is apparent that a company can lose money and still have a positive cash flow.
What is not so clear is that a firm can have a positive cash flow and still go broke,
a common hazard for rapidly growing companies.
The term working capital, like the term cash flow, is frequently heard in the
daily chatter about business finance. It, too, suffers from liberties taken with its
definition and usage. Most often, and especially when financial people are talking,
working capital means the specific dollar amount derived from the formula
Current assets - Current liabilities
However, it has also been used to mean cash, or cash + receivables, or
current assets, or funds. When it comes to applying the idea of working capital
in some useful business way, we encounter two nearly fatal flaws.
1. Working capital is a concept that has no existence in the real world. You
cannot hold working capital in your hand or put it in your pocket. Nor
can you actually offset current liabilities with current assets. Nearly all
current liabilities can only be satisfied with cash.
2. While businesspeople are fond of calculating working capital, no one has
yet come up with a rule stating how much of it a company should have.
It seems reasonable enough that the more sales a business has, the more
working capital there should be also. But we cannot seem to pin it down
to an actual standard. Working capital, therefore, is a measurement without
much meaning.
In business there is only one excuse for an expense: it will help to produce revenue.
Some expenses, however, have nothing to do with producing revenue (the entire
accounting department, for example), but they are necessary nevertheless. Others,
such as income tax and vacation pay, have only a roundabout effect on your sales but
are also unavoidable. Some pay our debts to society, such as the expense of unemployment
insurance, or make us good neighbors, such as a little landscaping, while the sole
purpose of some expenses is to reduce overall expense, that is, to increase productivity.
Unlike revenues, which are the result of a customer taking action, expenses
result when you take action. They are largely controllable and therefore a direct
reflection of your management ability. It may be hard to measure the value of what
you do, but the cost of your doing it is right there in the printout for all to see.
A HANDY GUIDE TO COST TERMS
Actual: Actual costs are distinguished from standard costs; the latter are
estimates used for convenience, and an adjustment must be made to the
actual costs at least yearly.
Alternative: The costs of optional solutions; used in what-if analyses.
Controllable: Costs for which some manager can be held responsible.
Cost of Sales: Also called cost of goods sold; the cost of making or buying
the products a business sells. In a manufacturing firm it comprises direct
labor, materials, and manufacturing overhead.
Differential: The difference in the costs of two or more optional activities.
Direct: Costs that can be laid solely to a particular activity. In manufacturing,
the wages of the workers who make the products and the cost of the
materials used are direct costs; they are often referred to as direct labor
and direct materials.
Discretionary: More or less unnecessary but desirable outlays, such as the
office Christmas party or management seminars.
Estimated: Predetermined by an informed guess.
Extraordinary: Expenses due to abnormal events, such as an earthquake.
Costs in this category should be not only unexpected, but rare.
Fixed: Costs that remain the same despite changes in sales or some other
output. Examples are lease payments on property and depreciation on
equipment. Compare to variable costs. As used here the fixedness is a matter
of degree; almost every cost is affected somewhat by the volume of sales.
Historical: The original cost of an asset.
Imputed: The imagined or estimated cost of a sacrifice; not a cash outlay
but the giving up of something you could have had; a cost often recognized
in the decision process but not recorded on the books. When a company
has accounts receivable, for example, there is an imputed cost of the interest
it could be earning on the funds tied up in receivables.
Incremental: A cost that will be added or eliminated if some change is
made. Similar to differential cost.
Indirect: A general or overhead cost that is allocated to a product or
department on the theory that the receiver shares in the benefit of the thing,
and, besides, somebody has to pay for it.
Joint: Also called common cost; a cost shared by two or more products
or departments, as for example the expense of a company lunchroom.
Noncontrollable: Costs that are prerequisites to doing business, such as a
city license or smog control equipment. These costs are often allocated to
a department, but there is no point in holding the manager accountable for
them.
Opportunity: A theoretical cost of not using an asset in one way because
you are using it in another. For example, the opportunity cost of a company-
owned headquarters building is the money the company could get by renting
it to others.
Out of Pocket: Expenses requiring a cash outlay, as opposed to the expense
of using facilities you already own. When you use your car for company
business, the cost of gas, tolls, and parking are out of pocket expenses, but
not the depreciation on the car.
Period: Costs related to a time period rather than an amount of output or
activity. Examples are rent and the controller's salary.
Prepaid: Expenses paid before, rather than as, they are used. A prepaid
expense, such as a year's insurance premium, is really a current asset that
gradually converts to an expense with passing time.
Prime: In manufacturing, direct labor and material costs.
Product: Costs related to output or the amount of activity, as opposed to
period costs.
Production: A term used by the oil and gas industry in referring to the cost
of operating a well.
Replacement: An estimate of the current cost of an asset as contrasted with
its historical cost. It is often used in estimating the true cost of the current
year's depreciation.
Standard: The estimated average or budgeted cost of making a product.
When a product is finished, it is often more convenient to record its value
at a standard rather than its actual cost. At least once a year the total actual
costs are compared with the total standard costs recorded, and the latter is
adjusted to the real figure. Standard costs must be changed from time to
time if labor and material costs change.
Sunk: A cost already incurred that cannot be undone or readily put to some
other use.
Variable: A cost closely tied to the level of output or activity. Most of these
costs vary directly with sales. Classifying costs between variable and fixed
is necessary in order to calculate a breakeven point.
USEFUL CONCEPTS FOR FINANCIAL DECISIONS
THE MODIFIED DUPONT FORMULA
The duPont system of financial analysis combines profit margin and asset turnover
to produce the return on assets. The modified version brings financial leverage into
the equation to produce return on equity as well. The formulas are:
1. Asset Turnover × Return on Sales = Return on Assets
   Sales/Assets × Profit/Sales = Profit/Assets
2. Return on Assets × Financial Leverage = Return on Equity
   Profit/Assets × Assets/Equity = Profit/Equity
A visual approach to duPont's concept is shown in Figure 15.2.
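A minimal sketch of the modified duPont chain, using made-up figures, shows how turnover, margin, and leverage multiply out to the return on equity:

```python
# Modified duPont decomposition (illustrative figures only).

sales, profit, assets, equity = 2_000_000, 120_000, 1_000_000, 400_000

asset_turnover = sales / assets                          # Sales / Assets
return_on_sales = profit / sales                         # Profit / Sales
return_on_assets = asset_turnover * return_on_sales      # = Profit / Assets
financial_leverage = assets / equity                     # Assets / Equity
return_on_equity = return_on_assets * financial_leverage # = Profit / Equity

print(f"ROA = {return_on_assets:.1%}, ROE = {return_on_equity:.1%}")
```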
BREAKEVEN ANALYSIS
The breakeven point for a business is that volume of sales at which the revenues
equal the expenses. Above that point lie glory and profit; below lie infamy and loss.
At least that is the theory. In real life, it is very difficult to calculate a breakeven
point because the expenses of most businesses do not fit comfortably into just a
fixed or variable category. Breakeven analysis can be done visually using a graph
like the one in Figure 15.3, or mathematically.
Profit = Sales - Fixed Costs - Variable Costs
If Fixed Costs = $12,000 and Variable Costs = 40% of Sales, then
Profit = Sales - $12,000 - .4*Sales
or Profit = .6*Sales - $12,000
At breakeven, profit will be 0; therefore 0 = .6*Sales - $12,000 and Sales =
$12,000/.6 = $20,000 at that point.
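The same arithmetic in a few lines of code, reusing the $12,000 fixed-cost, 40% variable-cost example (a sketch only):

```python
# Breakeven sales: fixed costs divided by the contribution margin ratio.

def breakeven_sales(fixed_costs, variable_cost_ratio):
    contribution_margin = 1.0 - variable_cost_ratio  # share of each sales dollar left after variable costs
    return fixed_costs / contribution_margin

print(breakeven_sales(fixed_costs=12_000, variable_cost_ratio=0.40))  # 20000.0
```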
FIGURE 15.2 A pictorial approach to duPont's formula (costs, sales, and assets combined into return on sales, asset turnover, and return on total assets).
CONTRIBUTION MARGIN ANALYSIS
We have seen that a firm will break even when its total sales exactly equal the sum
of its variable and fixed costs. Beyond breakeven only the variable costs need be
paid, the fixed costs having been taken care of for the year. The difference between
the sales price of the company's products and the variable costs is called the
contribution margin. In our example, where variable costs equaled 40% of sales, the
contribution margin was the other 60%. Knowing the contribution margin gives you
another way of calculating the breakeven point:
Breakeven sales = Fixed Costs/Contribution Margin
Using the figures in our illustrative example,
Breakeven sales = $12,000/.60 = $20,000
Most firms have a variety of products or services that contribute to profits at
different rates. In comparing margins, you must also take into account the proportionate
fixed costs associated with each. It is possible that a product with a smaller
contribution margin would be the more profitable because the other product bears
enormous fixed costs.
FIGURE 15.3 Breakeven analysis (revenue and expense plotted against units sold; fixed costs, loss, and profit regions marked).
PRICE-VOLUME VARIANCE ANALYSIS
A price-volume analysis of profit plan variances often helps management zero in
on problems or quickly exploit market advantages. Here is a simple example of such
an analysis.
Suppose a firm planned to sell 1000 units of Product A at $20 each, but in fact
1100 units were sold at a price of $21 each. The planned revenue was
1000 × $20 = $20,000
Actual revenue was 1100 × $21 = $23,100, and the revenue variance = $3100.
That variance can be broken down as follows:
Effect of price change only: $1 × 1000 = $1000
Effect of quantity change only: $20 × 100 = $2000
Effect of both price and quantity changes: $1 × 100 = $100
Total effect = $3100
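The decomposition works for any plan-versus-actual comparison; here is a small sketch that reproduces the example's numbers:

```python
# Price-volume decomposition of a revenue variance.

def revenue_variance(plan_units, plan_price, actual_units, actual_price):
    d_price = actual_price - plan_price
    d_units = actual_units - plan_units
    return {
        "price effect": d_price * plan_units,       # price change at planned volume
        "quantity effect": plan_price * d_units,    # volume change at planned price
        "joint effect": d_price * d_units,          # both changing together
        "total variance": actual_units * actual_price - plan_units * plan_price,
    }

print(revenue_variance(plan_units=1000, plan_price=20, actual_units=1100, actual_price=21))
# {'price effect': 1000, 'quantity effect': 2000, 'joint effect': 100, 'total variance': 3100}
```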
INVENTORY'S EOQ MODEL
The Economic Order Quantity model is designed to minimize the total cost of
ordering and carrying inventory items. Here is the formula:
EOQ = (2*Q*P/C)^0.5
where Q = quantity needed for the period; P = the cost of placing one order; and C =
the cost of carrying one unit for one period.
EXAMPLE
Standard Office Furniture sells 1800 B desks more or less evenly over 12 months.
The cost of placing and receiving an order from the manufacturer is $45. Standard's
annual carrying costs are 20% of the inventory value. The B wholesales for $75,
so the annual carrying cost per desk is
.20 × $75 = $15
The economic order quantity can then be calculated using the model:
EOQ = (2*45*1800/15)^0.5 = 104 desks
We can also calculate Standard's optimal inventory cycle for these desks:
[365 × 104]/1800 = 21 days
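A short sketch of the same calculation, using the desk example's inputs:

```python
# Economic Order Quantity and the implied reorder cycle (desk example).
import math

def eoq(annual_quantity, order_cost, carrying_cost_per_unit):
    return math.sqrt(2 * annual_quantity * order_cost / carrying_cost_per_unit)

q = eoq(annual_quantity=1800, order_cost=45, carrying_cost_per_unit=0.20 * 75)
cycle_days = 365 * q / 1800              # days of demand covered by one order
print(f"EOQ = {q:.0f} desks, reorder about every {cycle_days:.0f} days")
# EOQ = 104 desks, reorder about every 21 days
```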
RETURN ON INVESTMENT ANALYSIS
EXAMPLE: ROI CONCEPTS
This is an example where a company has to decide between two different manufacturing
machines it wants to purchase. The costs and benefits of each are set out below.
Payback Method: Payback answers the question, how long will it take us to recover
our original investment?
Average Rate of Return: Average rate of return is our old friend ROE, Profit/Equity
(or in this case, Profit/Investment), except we call for the average return over the
period covered.
MACHINE A ($50,000)
End of Year                      1       2       3       4       5
Revenues                    30,000  30,000  30,000  30,000  30,000
Direct cost (mtl, labor, etc.)   5,000   5,000   5,000   5,000   5,000
Operating exp (selling, G&A)     5,000   5,000   5,000   5,000   5,000
Depreciation (straight line)    10,000  10,000  10,000  10,000  10,000
Profit before tax               10,000  10,000  10,000  10,000  10,000
Income tax (50%)                 5,000   5,000   5,000   5,000   5,000
Net income                       5,000   5,000   5,000   5,000   5,000
Cash flow                       15,000  15,000  15,000  15,000  15,000
Investment                      40,000  30,000  20,000  10,000       0

MACHINE B ($50,000)
End of Year                      1       2       3       4
Revenues                    45,000  40,000  32,000  25,000
Direct cost (mtl, labor, etc.)   7,500   7,500   5,000   2,500
Operating exp (selling, G&A)     5,000   5,000   5,000   5,000
Depreciation (straight line)    12,500  12,500  12,500  12,500
Profit before tax               20,000  15,000  10,000   5,000
Income tax (50%)                10,000   7,500   5,000   2,500
Net income                      10,000   7,500   5,000   2,500
Cash flow                       22,500  20,000  17,500  15,000
Investment                      37,500  25,000  12,500       0

Payback:
Year                             1       2       3       4       5
MACHINE A
Balance to recover          50,000  35,000  20,000   5,000       0
Cash flow                   15,000  15,000  15,000  15,000  15,000
Cumulative years              1.00    2.00    3.00    3.33    3.33
MACHINE B
Balance to recover          50,000  27,500   7,500       0
Cash flow                   22,500  20,000  17,500  15,000
Cumulative years              1.00    2.00    2.43    2.43

Average rate of return:
End of Year                      1       2       3       4       5
MACHINE A
Profit                       5,000   5,000   5,000   5,000   5,000
Average profit: 5,000
Beginning investment = $50,000; ending investment = 0; average investment = $25,000
Average rate of return = $5,000/$25,000 = 20%
MACHINE B
Profit                      10,000   7,500   5,000   2,500
Average profit: 6,250
Beginning investment = $50,000; ending investment = 0; average investment = $25,000
Average rate of return = $6,250/$25,000 = 25%
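Both payback and the average rate of return can be computed directly from the cash-flow and profit rows above. The sketch below interpolates within the year in which the balance is recovered, and uses the usual assumption that the average investment is half the original cost; it reproduces the Machine A and Machine B results.

```python
# Payback period (with within-year interpolation) and average rate of return.

def payback_years(investment, cash_flows):
    balance = investment
    for year, cf in enumerate(cash_flows, start=1):
        if cf >= balance:                    # the balance is recovered during this year
            return year - 1 + balance / cf
        balance -= cf
    return None                              # never recovered

def average_rate_of_return(investment, profits):
    average_profit = sum(profits) / len(profits)
    average_investment = investment / 2      # straight-line from full cost down to zero
    return average_profit / average_investment

print(payback_years(50_000, [15_000] * 5))                          # 3.33 years (Machine A)
print(payback_years(50_000, [22_500, 20_000, 17_500, 15_000]))      # 2.43 years (Machine B)
print(average_rate_of_return(50_000, [5_000] * 5))                  # 0.20 (Machine A)
print(average_rate_of_return(50_000, [10_000, 7_500, 5_000, 2_500]))  # 0.25 (Machine B)
```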
Net Present Value (NPV)
NPV equals the cash receipts from an investment minus the cash outlays, all discounted
at an acceptable rate, sometimes called the hurdle rate. The formula is
NPV = Σ (from t = 0 to n) CF_t/(1 + r)^t
where n = the number of periods; t = the time period; r = the per-period cost of
capital; and CF_t = the cash flow in time period t.
Internal Rate of Return (IRR)
IRR is at present the truest rate of return we know how to calculate. Technically,
it is the hurdle or discount rate that produces an NPV equal to zero. The formula is
NPV = 0 = Σ (from t = 0 to n) CF_t/(1 + r)^t
A special caution is needed here. The IRR calculation can turn awkward when
there is more than one sign change in the cash flow stream. You may get more than
one answer for the same series of payments.
One way around the problem is to do a modified IRR, in which you calculate
the present value of all the outflows (negatives) using, say, the company's average
interest rate on loans; then compute the IRR using the single outflow figure (CFCM).
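Both measures are easy to evaluate numerically. The sketch below computes NPV by direct summation and finds IRR by bisection, one of several possible root-finding approaches; as cautioned above, it assumes the cash-flow stream changes sign only once. The flows shown are Machine A's.

```python
# NPV by direct summation; IRR by bisection on the NPV function.

def npv(rate, cash_flows):
    # cash_flows[0] is the time-0 flow (usually the negative outlay)
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, low=-0.99, high=10.0, tol=1e-7):
    # assumes exactly one sign change, so NPV crosses zero once between low and high
    for _ in range(200):
        mid = (low + high) / 2
        if npv(low, cash_flows) * npv(mid, cash_flows) <= 0:
            high = mid
        else:
            low = mid
        if high - low < tol:
            break
    return (low + high) / 2

flows = [-50_000, 15_000, 15_000, 15_000, 15_000, 15_000]  # Machine A's outlay and cash flows
print(f"NPV at a 10% hurdle rate: {npv(0.10, flows):,.0f}")
print(f"IRR: {irr(flows):.1%}")
```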
The Financial Management Rate of Return (FMRR), developed by Findlay and
Messner in 1973, goes one step further. It starts by calculating the present value of
all cash outlays, as does the modified IRR, and then calculates a future value for
the positive cash flows (inflows). The rate for this future value calculation is the
expected rate at which the inflows will be employed.
PROFIT PLANNING
The most critical element of all in financial planning is the revenue, or sales, forecast.
It is the basis of the cost and profit forecasts, and the key element in planning a
firm's people, money, and material needs. At the same time, the revenue or sales
forecast is the most difficult to make of all forecasts. Anyone who can project sales
a year ahead and come in less than 10% off the mark is doing pretty well.
THE NATURE OF SALES FORECASTING
Sales are mostly a function of demand for the product, and demand is largely out
of the control of the seller. Numerous factors influence a company's sales; most of
them cannot be controlled, and many of them are not even known.
But the overall effect of these factors usually changes rather slowly over time.
For that reason a naive projection of sales always merits consideration in the forecasting
process. A naive forecast is one that simply extrapolates past figures and
trends into the future.
However, most companies use a "goals down, plans up" approach to sales forecasting.
Top management defines the ballpark by specifying what is in their opinion
an achievable sales goal; it is then up to the sales staff to submit detailed plans on
how that goal will be met.
Another form of forecasting is the sales goal form. This form or template helps
top management set a sales goal. It starts with the latest 12 months' actual sales and
a tentative goal for the coming year. The difference between the two will have several
causes. Here are some causes for a change in sales volume and a typical annual
impact of each cause:
Inflation: up 2 to 12 percent
Demand for the product: up or down 0 to 10%
State of the economy: up or down 0 to 10%
New products: up 0 to 10%
The sales goal form also aids in the forecasting of gross profit, operating profit,
net income, and earnings per share. In addition, it permits what-if analyses, showing
the effect on net income of a change in the sales or cost inputs.
The Plans Up Form
This format is meant as an aid to individual salespeople who forecast the revenues
in their own territories. The cumulative totals of these estimates must eventually be
reconciled to the sales goal figures issued by top management.
Statistical Analysis
A few things in business lend themselves to statistical projections. Chief among
them is revenues or sales forecasting. Statistics are tricky. They can describe with
precision the behavior of two or more business components, but it is left to you to
decide if the activities are related, unrelated, or coincident, and in the case of the
former, which is the cause and which the effect. A statistical forecast of sales, on
the other hand, is useful mainly as an anchor for the subjective guesses of salespeople
and top management, or as a starting point in developing judicious estimates. While
the end purpose of a sales forecast is a projection of profits, direct statistical analyses
of past profits are normally useless, if not worse.
Compound Growth Rates
Sales patterns usually trace a gentle curve rather than a straight line. All products,
and for that matter, all companies and the people who run them, move through life
cycles that can often be represented by a lazy S curve. Its components are tryout,
growth, maturity, and decline (see Figure 15.1). The compound growth rates over
short segments of the life cycle can be useful in forecasting 12 months or so ahead.
The rates will vary depending on how far back you go, and it is up to you to decide
which rate is best to project. While you are working that out, remember that the
sales trend is a curve that changes direction now and then. It is also up to you to
figure out if such a change is about to occur.
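A compound growth rate over a segment of the history is just the geometric average of the period-to-period changes. A minimal sketch with made-up sales figures:

```python
# Compound annual growth rate over a slice of sales history, then a naive projection.

def cagr(first, last, periods):
    return (last / first) ** (1 / periods) - 1

sales_history = [410, 450, 500, 540, 600]        # illustrative annual sales, oldest first
growth = cagr(sales_history[0], sales_history[-1], periods=len(sales_history) - 1)
forecast = sales_history[-1] * (1 + growth)      # project one more year at the same rate
print(f"growth = {growth:.1%}, next-year forecast = {forecast:.0f}")
```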
Regression Analysis
Regression analysis is a mathematical procedure that figuratively plots past sales on
a graph and then draws a line through the middle of the points; extending the line
into the future gives the forecast. The results will vary depending on how much past
data you use, and the whole process is meaningful only if the past plot points form
a linear pattern. Even then, we must be mindful that unusual past events can skew
the pattern, and that nothing in life really travels in a straight line.
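In code, the procedure amounts to fitting a least-squares line through the past points and reading it one period ahead; the sketch below does that for a short, made-up series and says nothing about whether the linear pattern will hold.

```python
# Ordinary least-squares trend line through past sales, extended one period ahead.

def trend_forecast(sales):
    n = len(sales)
    xs = range(n)                                  # periods 0, 1, 2, ...
    x_bar = sum(xs) / n
    y_bar = sum(sales) / n
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, sales)) \
            / sum((x - x_bar) ** 2 for x in xs)
    intercept = y_bar - slope * x_bar
    return intercept + slope * n                   # fitted line read at the next period

print(trend_forecast([410, 450, 500, 540, 600]))   # about 641
```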
Revenues and Costs
Once revenues have been forecast, the task of projecting costs is largely routine.
Most costs, even those we consider fixed, relate to sales or revenues. When sales
rise, costs will soon follow. When sales go down, however, costs tend to follow more
slowly, due to a natural human reluctance to do without something we once had and
a natural hope that conditions will soon be better again.
The largest expense in most businesses is the cost of sales, sometimes called
the cost of goods sold, and it can usually be determined with fair precision. The task
of manufacturing a product is well defined, as are the materials and labor that lend
it value.
Departmental Budgets
Beyond the estimates for the direct costs of manufacturing or buying products for
resale lie the budgets for administrative expense, overhead, marketing, R&D, and
so on. We speak of these as budgets rather than forecasts because we have some
control over them.
The computer is a helpful tool in the process because such budgets are negotiated
rather than merely extrapolated from revenues, planned rather than merely projected.
There are often several versions prepared before the final plan is hammered out. A
good deal of posturing and gamesmanship goes into the process, and a not untypical
scenario finds the department manager padding his or her budget, secure in the
knowledge that top management will cut it, and top management cutting it because
they know it is always padded a little.
How to Budget
The easiest way to budget is to increase last year's budget by some percentage that
will allow a comfortable salary raise for everyone, plus a few more bodies to ease
the workload, and some extra gadgets and trips to make it more fun. A more
businesslike method is to adjust last year's budget to the expected level of next year's
principal activities of the department. Every department has its tasks, and if the
output or the activities can be quantified, this will furnish a standard for setting the
new budget. The best lever for controlling department expense is the number of
people employed. Employees cost not only salaries, fringe benefits, and taxes, but
desks, computers, food service, and parking spaces, too.
Zero-Growth Budgeting
This is not to be confused with zero-based budgeting, which deservedly died a quiet
death a few years back. Zero-growth budgeting is a term for a plan that seeks to
hold expenses at the current level while revenues grow. If it works, it will obviously
mean more profit. Of the two ways to get rich in life, the more excusable is to
increase your productivity: work harder or faster or smarter so that the value of
your output goes up.
To the department manager, zero-growth budgeting says, "Look, we expect sales
to rise 10% this year, but we would like to handle the increased business with the
same budget as last year. What is more, we think the better people should get nice
raises, but out of the same pot as last year. That means you have to handle the work
more efficiently, and perhaps not replace people so fast when they leave."
The idea is to challenge people to work smarter but not threaten them with the
annihilation that was implicit in the zero-based budgeting concept. Most productivity
gains are made inch by inch, and if too much is asked of people, they tend to give up.

16 Closing Thoughts about Design for Six Sigma (DFSS)

Design for six sigma (DFSS) is really a breakthrough strategy for improvement as
well as for customer satisfaction. In the new millennium, it is the most advantageous,
as well as the most economical, way to plan. Fundamentally, the process of DFSS is
a four-step approach: it recognizes the customer, progressively builds robustness
into the system concept during product or service development, and finally tests
and verifies the results against the design. Some of the essential tools
used in DFSS are:
Define
   Customer understanding
   Market research
   Kano model
   Organizational knowledge
   Target setting
Characterize
   Concept selection
   Pugh selection
   Value analysis
   System diagram
   Structure matrix
   Functional flow
   Interface
   QFD
   TRIZ
   Conjoint analysis
   Robustness
   Reliability checklist
   Signal process flow diagrams
   Axiomatic designs
   P-diagram
   Validation
   Verification
   Specifications
Optimize
   Parameter and tolerance design
   Simulation
   Taguchi
   Statistical tolerancing
   QLF
   Design and process failure mode and effects analysis (FMEA)
   Robustness
   Reliability checklist
   Process capability
   Gauge R & R
   Control plan
Verify
   Assessment (validation and verification score cards)
   Design verification plan and report
   Robustness/reliability
   Process capability
   Gauge R & R
   Control plan
The concept of DFSS may be translated into a model shown in Figure 16.1. This
model not only identifies the components of DCOV (define, characterize, optimize,
verify), but it also identifies the key characteristics of each one of the stages.
To understand and appreciate how and why this model works, one must understand
the purpose and the deliverables of each stage in the model. So, let us give a
summary of what DFSS is all about.

FIGURE 16.1 The DFSS model (Define, Characterize, Optimize, Verify, with the key characteristics of each stage).


In the Define (D) stage, it is imperative to make sure that the customer is
understood. The spoken and the unspoken requirements must be accounted for,
and then the definition of the CTS drivers takes place. It is very tempting to jump
right away into the Yi without really knowing what the critical characteristics (or
functionalities) are for the customer. Unless they are understood, establishing operating
window(s) for these Ys will be fruitless.
So the question then is, How do we get to this point? And the answer in
general terms (the specific answer depends on the organization and its product or
service) is that the inputs must be developed from a variety of sources, including but
not limited to the following (the order does not matter):
Consumer understanding
Kano model application
Regulatory requirements
Corporate requirements
Quality/customer satisfaction history
Functional and serviceability expectations
Understanding of the integration targets process
Brand profiler/DNA
Benchmarking
Quality Function Deployment (QFD)
Product Design Specifications (PDS)
Business strategy
Competitive environment
Market segmentation
Technology assessment
Once these inputs have been identified, developed, and understood by the DFSS
team, then the translation of these functionalities may be articulated into the Ys,
and thus the iteration process begins. How is this done? By making sure all the
active individuals participate and have ownership of the project as well as technical
knowledge. Specifically, in this stage the owners of the DFSS project will be looking
to make correlated connections of what they expect and what they have found in
their research. Thus, the search for a transformation function begins, and the
journey to improvement begins in a formal way. Some of the steps to identify the
Ys are:
Define customer and product needs/requirements.
Relate needs/requirements to customer satisfaction; benchmark.
Prioritize needs/requirements to determine CTS Ys.
Review and develop consensus.
Once the technical team has finished its review and come up with a consensus
for action, the following deliverables are expected:


Kano diagram
Targets and ranges for CTS Ys
Y relationship to customer satisfaction
Benchmarked CTSs
CTS scorecard
At this point, one of the most important steps must be completed before the
DFSS team goes officially into the next step, characterize. This step is the
evaluation process. A thorough question and answer session takes place with focus
on what has transpired in this stage. It is important to ask questions such as: Are
we sure that our CTS Ys are really associated with customer satisfaction? Did we
review all attributes and functionalities? And so on. Typical tools for the basis of
the evaluation are:
Consumer insight
Market research
Quality history
Kano analysis
QFD
Regression modeling
Conjoint analysis
When everyone is satisfied and consensus has been reached, then the team
officially moves into the characterize (C) stage. In this stage, all participants must
make sure that the system is understood. As a result of this understanding, the team
begins to formalize the concepts. The process for this development proceeds as
follows:
Flow CTS Ys down to lower level ys, e.g., Y = f(y1, y2, ..., yn).
Relate ys to CTQ parameters (xs and ns), y = f(x1, ..., xk, n1, ..., nj) (x is the characteristic and n is the noise).
Characterize robustness opportunities (optimize characteristics in the presence of noise).
Specifically, the inputs for this discussion are:
Kano diagram
CTS Ys, with targets and ranges
Customer satisfaction scorecard
Functional boundaries and interfaces from system design specification(s) and/or verification analysis
Existing hardware FMEA data
Once these inputs have been identified, developed, and understood, then the
formal decomposition of Y to y to y1, as well as the relationship of X to x to x1 and
SL3151Ch16Frame Page 718 Thursday, September 12, 2002 5:57 PM

Closing Thoughts about Design for Six Sigma (DFSS)

719

ns to the Ys begins. How is this done? By making sure all the active individuals
participate and all have ownership of the project as well as technical knowledge.
Specically, in this stage the owners of the DFSS project will be looking to make
correlated connections of what they expect and what they have found in their
research. Thus, the formal search for the transformation function, preferably the
ideal function, gets underway. Some of the steps to identify both the decomposition
of the Ys and its relationship to x are (order does not matter, since in most cases
these items will be worked on simultaneously):
Identify functions associated with CTSs
Identify control and noise factors
Create function structure or other model for identified functions
Select Ys that measure the intended function
Create general or explicit transfer function
Peer review
The deliverables of this activity are:
Function diagram(s)
Mapping of Y → functions → critical functions → ys
P-diagram, including critical control factors (xs), technical metrics (ys), and noise factors (ns)
Transfer function
Scorecard with target and range for ys and xs
Plan for optimization and verification (R&R checklist)
At this point, one of the most important steps must be completed before the
DFSS team goes officially into the next step, optimization. This step is the
evaluation process. A thorough question and answer session takes place with focus
on what has transpired in this stage. It is important to ask questions such as: Have
all the ys (technical metrics) been accounted for? Are all the CTQ xs measurable
and correlated to the Ys of the customer? Are all functionalities accounted for? And
so on. Typical tools for the basis of the evaluation are:
Function structures
P-diagram
Robustness/reliability checklist
Modeling using design of experiments (DOE)
TRIZ
When everyone is satisfied, then the team officially moves into the optimization
(O) stage. In this stage, we make sure that the system is designed with robustness
in mind, which means the focus is on


1. Minimizing product sensitivity to manufacturing and usage conditions
2. Minimizing process sensitivity to product and manufacturing variations
In essence, here we design for producibility. The process for this development
follows these steps:
Understand capability and stability of present processes.
Understand the high time-in-service robustness of the present product.
Minimize product sensitivity to noise, as required.
Minimize process sensitivity to product and manufacturing variations, as
required.
The inputs for this process are based on the following processes and information:
Present process capability (target and σ)
P-diagram, with critical ys, xs, ns
Transfer function (as developed to date)
Manufacturing and assembly process flow diagrams, maps
Gage R&R capability studies
PFMEA and DFMEA data
Verification plans: robustness and reliability checklist
Noise management strategy
Once these inputs have been identified, developed, and understood, then the
formal optimization begins. Remember, there is a big difference between maximization
and optimization. We are interested in optimization because we want to
equalize our input in such a way that when we do the trade-off analysis we are still
ahead. That means we want to decrease variability and satisfy the customer without
adding extra cost. How is this done? By making sure all the active individuals
participate and all have ownership of the project as well as technical knowledge.
Specifically, in this stage, the owners of the DFSS project will be looking to make
adjustments in both variability and sensitivity, using optimization and modeling
equations and calculations to optimize both product and process. The central formula
is

σ²y ≈ (∂y/∂x₁)²·σ²x₁ + (∂y/∂x₂)²·σ²x₂ + ...

Whereas the focus of the DMAIC model is to reduce σ²x (the variability), the focus
of the DCOV model is to reduce ∂y/∂x (the sensitivity). This is very important, and it
is why we use the partial derivatives of the xs to define the Ys. Of course, if the
transformation function is a linear one, then the only thing we can do is to control
variability. Needless to say, in most cases we deal with polynomials, and that is why
DOE and especially parameter design are very important in any DFSS endeavor.
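The role of the partial derivatives can be seen numerically: the transmitted variance of y is approximately the sum of each x's variance weighted by the squared sensitivity at the chosen nominal. The sketch below uses a made-up transfer function and made-up standard deviations; it shows that moving a nominal to a flatter region of the function shrinks σy without tightening σx, which is exactly the DCOV lever.

```python
# Variance transmission through a transfer function y = f(x):
# var(y) ≈ sum over i of (df/dx_i)^2 * var(x_i), evaluated at the nominal.
# The transfer function and numbers below are made up for illustration only.

def f(x1, x2):
    return 3.0 * x1 ** 2 + 0.5 * x2           # hypothetical transfer function

def transmitted_sigma(nominals, sigmas, h=1e-5):
    var_y = 0.0
    for i, (x0, s) in enumerate(zip(nominals, sigmas)):
        hi = list(nominals); lo = list(nominals)
        hi[i] += h; lo[i] -= h
        dydx = (f(*hi) - f(*lo)) / (2 * h)     # numerical partial derivative at the nominal
        var_y += (dydx ** 2) * s ** 2
    return var_y ** 0.5

sigmas = (0.05, 0.05)
print(transmitted_sigma((2.0, 1.0), sigmas))   # steep region of f: larger sigma_y
print(transmitted_sigma((0.5, 1.0), sigmas))   # flatter region: smaller sigma_y, same sigma_x
```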
Some of the steps to identify this optimizing process are (order does not matter,
since in most cases these items will be worked on simultaneously):
Minimize variability in y by selecting optimal nominals for xs.
Optimize process to achieve appropriate σx.
Ensure ease of assembly and manufacturability (in both steps above).
Eliminate specific failure modes.
Update control plan.
Review and develop consensus.
The deliverables of this stage are:
Transfer function
Scorecard with estimate of σy
Target nominal values identified for xs
Variability metric for CTS Y or related function, e.g., range, standard deviation, S/N ratio improvement
Tolerances specified for important characteristics
Short-term capability, z score
Long-term capability
Updated verification plans: robustness and reliability checklist
Updated control plan
At this point, one of the most important steps must be completed before the
DFSS team goes officially into the next step, testing and verification. This step
is the evaluation process. A thorough question and answer session takes place with
focus on what has transpired in this stage. It is important to ask questions such as:
Have all the z scores for the CTQs been identified? How about their targets and
ranges? Is there a clear definition of the product variability over time metric? And
so on. Typical tools for the basis of the evaluation are:
Experimental plan with two-step optimization and confirmation run
Design FMEA with robustness linkages
Process FMEA (including noise factor analysis)
Parameter design
Robustness assessment
Simulation software
Statistical tolerancing
Tolerance design
Error prevention: compensation, eliminate noise, Poka-Yoke
Gage R&R studies
Control plan


After the team is satisfied with the progress thus far, it is ready to address the issues
and concerns of the last leg of the model: verification of results (V). In this stage,
the team focuses on assessing the performance, the reliability, and the manufacturability
of what has been designed. The process for developing the verification begins by
emphasizing two items:
1. Assessing the actual performance, reliability and manufacturing capability
2. Demonstrating customer-correlated performance over time.
The inputs to generate this information are based on but not limited to the
following:
Updated verification plans: robustness and reliability checklist
Scorecard with predicted values of y, σy, based upon x and σx
Historical design verification plan(s) and reliability if available
Control plan
Once these inputs have been identified, developed, and understood, then the
team is entering perhaps one of the most critical phases in the DFSS process. This
is where the experience and knowledge of the team members, through synergy, will
indeed shine. This is where the team members will be expected to come up with
physical and analytical performance test(s) as well as key life testing to verify the
correlation of what has been designed and the functionality that the customer is
seeking. In other words, the team is actually testing the ideal function and the
model generating the characteristics that will delight the customer. Awesome responsibility
indeed, but doable. The approach of generating some of the tests is:
Enhance tests with key noise factors.
Improve ability of tests to discriminate good/bad commodities.
Apply test strategy to maximize resource efficiency.
Review and develop consensus.
The deliverables are test results, such as:
Product performance over time
Weibull, hazard plot, etc.
Long-term process capabilities
Completed robustness and reliability checklist with demonstration matrix
Scorecard with actual values of y, σy.
Lessons learned captured in system design specifications, component design specifications, and verification design system.
design specications, and verication design system.
To say that we have such and such a test that will do this and that, and will
conform to such and such a condition or circumstance, is not the important issue.
What is important and essential is to be able to assess the performance of what you
have designed against the customer's functionalities. In other words: Are all your


xs correlated (and if so, to what degree) to Xs, which in turn correlate to y, which
in turn correlates to the Y (the real functional definition of the customer)? Have the
phases D, C, and O of the model been appropriately assessed in every stage? How
reliable is the testing? And so on. Some of the approaches and methodologies used
are (order does not matter, since in most cases these items will be worked on
simultaneously):
Reliability/robustness plan
Design verification plan with key noises
Correlation: tests to customer usage
Reliability/robustness demonstration matrix


Appendix: The Four Stages of Quality Function Deployment

STAGE 1: ESTABLISH TARGETS

Develop a planning matrix (multiple functions).
Recognize the voice of the customer.
Analyze major product features.
Perform market and technical evaluation of competitive products.
Establish targets for major features.
Evaluate strengths and weaknesses of your product offering (engineering leadership role).
Design, technology, reliability, cost
Major selling features, critical targets, necessary breakthroughs

Note: First QFD meeting held: Vice president of engineering chairs meeting for the purpose of critiquing the design concept and target setting.

STAGE 2: FINALIZE DESIGN TIMETABLES
AND PROTOTYPE PLANS

Discuss/evaluate all possible means of achieving important characteristics.
Decision on technology to be used
Targets and tolerances for critical components identified
Fault tree analysis/design FMEA/Taguchi optimization methods used
Final characteristic deployment matrix generated

Note: Second QFD management meeting held: Discuss process of targeting and mass production planning.

STAGE 3: ESTABLISH CONDITIONS
OF PRODUCTION

Primary emphasis is process design.
Critical component targets and tolerances related to prototype processing conditions
Optimization experiments performed as needed
Significant control items and means of control established
Process FMEA/FTA/mistake proofing methods established
Further need for breakthroughs defined
Trial runs used to verify forecasted process stability, capability, adequacy of control points, and product quality
Output should be QA method providing excellent results with minimum factory effort
Process quality planning matrices generated
Leadership of the QFD process transferred from engineering to manufacturing when the decision to go ahead with mass production is made

Note: Third QFD management meeting held:
Evaluate probability of success of pending mass production program
Discuss details of quality assurance system

STAGE 4: BEGIN MASS PRODUCTION STARTUP

Develop database on mass production capabilities versus plan.
Identify problems and areas for further improvement.
Integrate operator-suggested efficiency or effectiveness improvements into plan.
Identify additional customer inputs.

Note: Fourth and final quality management meeting held three months after start of mass production. Manufacturing leads; engineering participates.
Actual performance data integrated into QFD package
Additional study needs detailed and commitments made
The importance and focus on the voice of the customer results in:
1. Reinforcing the quality linkage between departments
2. Setting the priorities for achieving marketplace advantage
3. Reducing the time from concept to product delivery

TANGIBLE BENEFITS

Major reduction in development time
Virtual elimination of late engineering changes
Lower cost designs at outset
Enhanced design reliability
Economical factory controls


INTANGIBLE BENEFITS

Increased customer satisfaction
Stable quality assurance planning activity
QFD documentation package
Often applies to generic family
Transferable storehouse of engineering know-how
Basis for improvement planning

SUMMARY VALUE

Strengthens current development process
Clear targets defined early based on market business demands
Simultaneous focus on product and process technologies
Key issues remain visible for prioritizing resource allocation
Communication and teamwork enhanced
Desired output efficiently achieved
Products meet customers' needs.
Products provide a competitive edge.

THE QFD PROCESS

1. The project
a. Selection
Broad appeal
Simple but not trivial
Opportunity to improve
Management support
Available expertise
Available market information
b. Scope/targets
Project limitations, operating constraints, product constraints
Market segment
Regulatory requirements
Cost
Mass
c. Objectives
Reason for doing
Expected results/outcome
d. Timing
Spans full product cycle
Months work
Concentrated effort
Hours meetings/members
Significant time commitment


2. Team
a. Selection
Cross-functional
Members
Membership
Product planning
Styling
Marketing
Product/manufacturing engineering
Operations
Key supplier
Service
Product assurance/testing
Expertise (not position)
Keep ranks about equal
Open-minded members
b. Operation
Facilitator/leader
Recorder
Regular meetings
Meeting to organize
Team consensus
Agreement
Not voting
No one dominates
No factions
c. Agree to support group decisions
d. Team training
At least one person knowledgeable of QFD
QFD overview
Other training as needed
Team building
Creative thinking
Problem solving
Meeting skills
Facilitator skills
Interview/survey methods
Employee involvement team skills

MANAGING THE PROCESS

1. Timing
Process spans a major portion of the product development process.
Identify intermediate measures of progress.
Major projects will require 50 to 60 hours of meetings.


Meetings are used to coordinate activities and update charts.
Most of the work happens outside the meetings.
2. Supporting the team
Provide the time.
Demonstrate your commitment.
Push for progress, but not too hard.
Be realistic.
Review charts; make sure you understand.
Set priorities if needed.
Help team through the rough spots.
Keep asking the right questions.
3. What to look for
Blank rows → unfulfilled customer wants
Blank columns
Unnecessary requirements
Incomplete customer wants
Rows or columns with only weak relationships (banking a lot on maybes)
Unmeasurable hows
Too many relationships (Less than 50% white space makes it hard
to prioritize.)
Opportunities to excel
Negative correlations (Try to eliminate, trade off if needed.)
Conflicting competitive assessments
4. Common pitfalls
QFD on everything
Inadequate priorities
Lack of teamwork
Wrong participants
Turf issues
Lack of team skills
Lack of support
Too much chart focus
Handling trade-offs
Internal focus
Stuck on tradition
Hurry up and get done
Failure to integrate QFD
5. Some right questions
How was the voice of the customer determined?
How were the design requirements (etc.) determined? (Challenge the
usual in-house standards.)
How do we compare to our competition?
What opportunities can we identify to gain a competitive edge?
What further information do we need? How can we get it?
How can we proceed with what we have?

SL3151ZAppAFrame Page 729 Thursday, September 12, 2002 5:56 PM

730

Six Sigma and Beyond

What trade-off decisions are needed?
What can I do to help?
6. Points to remember
The process may look easy but requires effort.
Many of the entries look obvious after they are written down.
If there are no tough spots, it probably is not being done right.
Focus on the end-user customer.
Charts are not the objective.
Charts are the means of achieving the objective.
Find reasons to succeed, not excuses for failure.


Heil, G., Parker, T., and Stephens, D.C.,

One Size Fits One. Building Relationships One
Customer and One Employee at a Time

, Wiley, New York, 1999.
Heiser, D.R. and Schikora, P., Flowcharting with Excel,

Quality Management Journal

, 2001,
pp. 2635.

SL3151ZAppAFrame Page 732 Thursday, September 12, 2002 5:56 PM

Selected Bibliography

733

Hoerl, R.W., Six Sigma and the Future of the Quality Profession,

Quality Progress

, June
1998, pp. 3544.
Holmes, D.S. and Mergen, A.E., Building an Acceptance Chart.,

Quality Digest

, June 2000,
pp. 3536.
Holpp, L.,

Managing Teams,

McGraw-Hill, New York, 1999.
Hoyer, R.W. and Hoyer, B.B.Y., What Is Quality?

Quality Progress

, July 2001, pp. 5262.
Hunter, J.S., Metrics for Uncertainty: A Look at Probability, Evidence and a Seldom Used
Additive Metric,

Quality Progress

, Dec. 2000, pp. 7273.
Imparato, N. and Harari, O.,

Jumping the Curve. Innovation And Strategic Choice in an Age
of Transition,

Jossey-Bass, San Francisco, 1994.
Ireson, W.G., Coombs, C.F., and Moss, R.Y.,

Handbook of Reliability Engineering and
Management,

2

nd

ed., Quality Press, Milwaukee, 1996.
Isaacson, J. and Chambers, W., An Introduction to Optical Measurement,

Quality Digest,

Oct.
2000

,

pp. 2832.
Janov, J.,

The Inventive Organization. Hope and Daring at Work

, Jossey-Bass, San Francisco,
1994.
Kales, P.,

Reliability: For Technology, Engineering, and Management

, Quality Press, Milwau-
kee, 1998.
Kalfut, M., Riding the Benchmark,

Technology Century

, Dec. 1997/Jan. 1998, pp. 3031.
Kall, J., Manufacturing Execution Systems: Leveraging Data for Competitive Advantage (Part
I),

Quality Digest

, Aug. 1999, pp. 3134.
Kall, J., Manufacturing Execution Systems: Leveraging Data for Competitive Advantage (Part
II),

Quality Digest,

Sept. 1999, pp. 3133.
Kanyamibwa, F., Christy, D.P., and Fong, D.K.H., Variable selection in product design,

Quality
Management Journal

, 8(1), 6279, 2001.
Kaplan, R.S. and Norton, D.P.,

The Balanced Scorecard,

Harvard Business School Press,
Boston, 1996.
Kapur, K.C. and Lamberson., L.R.,

Reliability in Engineering Design

, Wiley, New York, 1977.
Kay, M., Applying Six Sigma in a Public Service Organization, QFD and Design for Six
Sigma, Proceedings 45

th

EOQ Congress 2001, Instanbul, Sept. 20, 2001.
Kelada, J.N.,

Intergrading Reengineering with Total Quality

, Quality Press, Milwaukee, 1996.
Kelly, C. and Kachatorian. L., Robust Design for Six Sigma Manufacturability, 1996 SAE
Reliability, Maintainability, Supportability and Logistics Conferences and Workshop,

Proceedings of the 8th Annual SAE RMS Workshop

, 1996, pp. 2528.
Kelly, L. and Morath, P., How Do You Know the Change Worked?

Quality Progress

, July
2001, pp. 6874.
Kish, L., Some statistical problems in research design,

American Sociological Review,

24,
328338, 1959.
Kish, F.J., Utilizing value engineering as a problem solving management tool, SAE National
Combined Farm Construction and Industrial Machinery, Powerplant, and Transpor-
tation Meetings, Society of Automotive Engineers, Milwaukee, Sept. 912, 1968,
paper 680567.
Knouse, S.B. and Strutton, H.D., Getting Employee Buy-In to Quality Management,

Quality
Progress

, Apr. 1999, pp. 6164.
Krishnamoorthi, K.S.,

Reliability Methods for Engineers

, Quality Press, Milwaukee, 1992.
Kume, H.,

Statistical Methods for Quality Improvement

, The Association for Overseas Tech-
nical Scholarship, Tokyo, 1985.
Lathin, D. and Mitchell, R., Learning from Mistakes,

Quality Progress

, June 2001, pp. 3946.
Lehmann, E.L.,

Testing Statistical Hypotheses

, Wiley, New York, 1986.
Lepi, S.M.,

Practical Guide to Finite Elements

, Marcel Dekker. New York, 1998.

SL3151ZAppAFrame Page 733 Thursday, September 12, 2002 5:56 PM

734

Six Sigma and Beyond

Levinson, W.A., How to Design Attribute Sample Plans on a Computer,

Quality Digest

, July
1999, pp. 4547.
Liberatore, R., Teaching the Role of SPC in Industrial Statistics,

Quality Progress,

July 2001

,

pp. 8994.
Livingston, S., Creating the Right Atmosphere: Setting the Stage for Innovative Thinking in
Ideation Sessions,

Quirks Marketing Research Review

, May 2001, pp. 3239.
Maier, N.R.F.,

Problem Solving and Creativity in Individuals and Groups,

Brooks/Cole
Publishing Co., Wadsworth Publishing Co., Belmont, CA, 1970.
Marash, S.A., A New Look at Six Sigma,

Quality Digest,

Mar. 1999, p. 18.
Mazur, G., QFD and Design for Six Sigma, Proceedings 45th EOQ Congress 2001, Instanbul,
Sept. 20, 2001.
McLean, H.W.,

HALT, HASS and HASA Explained: Accelerated Reliability Techniques

, Qual-
ity Press, Milwaukee, 2000.
Meeker, W.Q., Doganaksy, N., and Hahn, G.J., Using Degradation Data for Product Reliability
Analysis,

Quality Progress

, June 2001, pp. 6065.
Mitchell, R.H., Process Capability Indices,

ASQ Statistics Division Newsletter

, Winter 1999,
pp. 1620.
Mitchell, E., Web-Based APQP Keeps Everyone Connected,

Quality

, July 2001, pp. 4044.
Modares, M., Kaminski, M., and Krivtsov, V.,

Reliability Engineering and Risk Analysis: A
Practical Guide

, Marcel Dekker, New York, 1999.
Myers, R.E. and Torrance, E.P.,

Invitations to Thinking and Doing,

Ginn, Boston, 1964.
OConnell, V., Advertising,

Wall Street Journal

, Nov. 27, 2000, p. B21.
OConor, P.D.T.,

Practical Reliability Engineering

, 3

rd

ed., Quality Press, Milwaukee, 1995.
Orme, B., Assessing the Monetary Value of Attribute Levels with Conjoint Analysis: Warnings
and Suggestions, Quirks Marketing Research Review, May 2001, pp. 16, 4447
Osborn, A.F., Applied Imagination, 3rd ed., Scribener, New York, 1963.
Paul, L.G., Outsourcing and Analyzing the Value Proposition, CFO, Aug. 2001, pp. 6061.
Peterman, M., Lean Manufacturing Techniques Support the Quest for Quality, Quality in
Manufacturing, Jan./Feb. 2001, pp. 2425.
Peterman, M., Simulation Nation: Process Simulation Is Key in a Lean Manufacturing Com-
pany Hungering for Big Results, Quality Digest, May 2000, pp. 3942.
Porter, A. and Adams, L., Quality Begins with Good Data, Quality, May 2001, pp. 3234.
Porter, M.E., Competitive Advantage: Creating and Sustaining Superior Performance, Free
Press, New York, 1985.
Pylipow, P.E., Can It Be This Easy? You Can Alter Drawing Practices to Achieve Six Sigma,
But Only if You Understand All the Implications, Quality Progress, July 2001, pp.
139140.
Pyzdek, T., Considering Constraints, Quality Digest, June 2000, p. 22.
Pyzdek, T., The 1.5 Sigma Shift, Quality Digest, May 2001, p. 22.
Quesenberry, C., Statistical Gymnastics, Quality Progress, Sept. 1998, pp. 7578.
Rosen, R. and Digh, P., Developing globally literate leaders, T+D, 55(5), 7083, 2001.
Salzman, R.H. and Liddy, R.G., Product Life Predictions from Warranty Data, 1996 SAE
Reliability, Maintainability, Supportability and Logistics Conferences and Workshop,
Proceedings of the 8th Annual SAE RMS Workshop, 1996, pp. 4548.
Schuetz, G., Gaged and Confused, Quality Digest, May 2001, pp. 4447.
Schwarz, F.C., Managing Progress Through Value Engineering, SAE National Combined
Farm Construction and Industrial Machinery, Powerplant, and Transportation Meet-
ings, Society of Automotive Engineers. Milwaukee, Sept. 912, 1968, paper 680566.
Slater, R., Jack Welch and the GE Way: Management Insights and Leadership Secrets of the
Legendary CEO, McGraw-Hill, New York, 1999.
SL3151ZAppAFrame Page 734 Thursday, September 12, 2002 5:56 PM
Selected Bibliography 735
Smith, D., How Good Are Your Data? Quality Digest, June 2000, pp. 5051.
Stalk, G., Jr. and Hout, T.M., Competing Against Time: How Time-Based Competition is
Reshaping Global Markets, Free Press, New York, 1990.
Stamatis, D.H., The Nuts and Bolts of Reengineering, Paton Press, Red Bluff, CA, 1998.
Stamatis, D.H., TQM Engineering Handbook, Marcel Dekker, New York, 1997.
Stasiowski, F.A. and Burstein, D., Total Quality Management for the Design Firm: How to
Improve Quality, Increase Sales, and Reduce Costs, Wiley, New York, 1993.
Steel, J., Truth, Lies and Advertising, Wiley, New York, 1998.
Steele, J. M., Applied Finite Element Modeling: Practical Problem Solving for Engineers,
Marcel Dekker, New York, 1998.
Stein, P., All You Ever Wanted to Know About Resolution, Quality Progress, July 2001, pp.
141142.
Stevens, D.P., A stochastic approach for analyzing for analyzing product tolerances, Quality
Engineering, 6(3), 439449, 1994.
Sun, H., Comparing quality management practices in the manufacturing and service industries:
learning opportunities, Quality Management Journal, 8(2), 5371, 2001.
Subramanian, K., The System Approach: A Strategy to Survive and Succeed in the Global
Economy, Hanser Gardner, Cincinnati, 2000.
Taraschi, R., Cutting the Ties that Bind, Training and Development, Nov. 1998, pp. 1214.
Tichy, N.M. and Sherman, S., Control Your Destiny or Someone Else Will: Lessons in
Mastering Change From the Principles Jack Welch Is Using to Revolutionize GE,
Harper Business, New York, 1993.
Umble, E.J. and Umble, M.M., Developing Control Charts and Illustrating Type I and Type
II Errors, Quality Management Journal, 2000, pp. 2331.
Valance, N., Prices Without Borders? CFO, Aug. 2001, pp. 7173.
Van Mieghem, T., Lessons Learned From Alexander the Great, Quality Progress, Jan. 1998,
pp. 4148.
Vasilash, G.S., For Robust Products, Automotive Design and Production, Aug. 2001, p. 8.
Ward, S., How Much Data is Needed? Quality, July 2001, pp. 2629.
Wearing, C. and Karl, D.P., The Importance of Following GD&T Specications, Quality
Progress, Feb. 1995, pp. 9598.
Wetmore, D., The Juggling Act, Training and Development, Sept. 2000, pp. 6768.
White, D.A. and Kall, J., Coherent Laser Radar: True Noncontact Three-dimensional Mea-
surement Has Arrived, Quality Digest, Aug. 1999, pp. 3538.
Whiteld, K., The Current State of Quality at Honda and Toyota, Automotive Design and
Production, Aug. 2001, pp. 5052.
Yilmaz, M.R. and Chatterjee, S., Six sigma beyond manufacturing a concept for robust
management, Quality Management Journal, 7(3), 6778, 2000.
SL3151ZAppAFrame Page 735 Thursday, September 12, 2002 5:56 PM
SL3151ZAppAFrame Page 736 Thursday, September 12, 2002 5:56 PM

737

Index*

A

Abstracting and indexing services, 145
Accelerated degradation testing (ADT), 336
Accelerated depreciation, 687
Accelerated life testing (ALT), 336, 362
Accelerated stress test (AST), 310311
Accelerated testing, 305
ADT (accelerated degradation testing), 336
ALT (accelerated life testing), 336, 362
AST (accelerated stress test), 310311
constant-stress testing, 305306
denition of, 362
HALT (highly accelerated life test), 310
HASS (highly accelerated stress screens), 310
methods, 305306
models, 306309
PASS (production accelerated stress screen),
311312
progressive-stress testing, 306
step-stress testing, 306
Acceleration factor (A), 308309
Acclaro (software), 545547
Accountants, 663
clean opinions of, 671
reports of, 671672
Accounting
accrual basis of, 676677
books of account in, 675676
in business assessments, 138
cash basis of, 677
and depreciation, 684
earliest evidence of, 672
entries in, 675676
nancial reports in, 664
and nancial statement analysis, 688
as measure of quality cost, 492
recording business transactions in, 672675
roles in business, 664
valuation methods in, 679681
Accounts
books of, 675676
contra, 684
types of, 674
Accounts receivables, 681, 691
Accrual accounting, 676677
Accrued pension liabilities, 667
Accumulated depreciation, 665666, 684
Achieved availability, 292
Acquisitions, in product design, 196
Action plans, 161162
based on facts and data, 107
creative planning process in, 162
documenting, 162
in FMEA (failure modes and effects analysis),
253258
monitoring and controlling, 162163
prioritizing, 162
Action standards. see standards
Activation energy type constant (Ea), 308
Active repair time, 293
Activities in benchmarking
after visits to partners, 156
dening, 150
drivers of, 151
owcharting, 153154
modeling, 152153
output, 151
performance measure, 151152
resource requirements, 151
triggering events, 150
during visits to partners, 155
Activity analysis, 150152
Activity benchmarking, 123
Activity drivers, 151
Activity performance measure, 151152
Actual costs, 478480, 568, 703
Actual operating hours, 525
Actual size, 522
Actual usage, amount of, 525
Administrative process
cost of, 570
improving, 490492
as measure of quality cost, 493
ADT (accelerated degradation testing), 336
Advanced product quality planning. see APQP
Advanced quality planning. see AQP
Advanced Systems and Designs Inc., 405
Aerospace industry, 226
Aesthetics, 113

*Note: Italicized numbers refer to illustrations and tables

Aggressors, in team systems, 25
Aircraft, 196
Airline industry, 702
Allowance, 568
Almanacs, 145
Alpha tests, 266
ALT (accelerated life testing), 336, 362
Alternative costs, 703
Alternative lists, in trade-off studies, 472
Alternative rank, 476
Altman, Edward I., 699
Altshuller, Genrich, 549
Aluminum, 531
Amateur errors, 211
American National Standards Institute (ANSI), 61
Amortization, 669
Analysis of variance.

see

ANOVA
Angular dimensions, 520
Angular measurements, 527
Annual reports, 146, 671672, 678
ANOVA (analysis of variance), 396, 407–410
for cumulative frequency, 423
for NTB signal-to-noise ratios, 431–432
for raw data, 430, 434, 436
signal-to-noise (S/N) ratio as raw data, 434, 437
for transformed data, 438
typical table setup, 432
ANOVA-TM computer program, 405
ANSI (American National Standards Institute), 61
ANSYS program, 183185
Antifreeze, 289
Apollo program, 226
Apple Computer, 195
Appraisal costs, 101, 482,

489

APQP (advanced product quality planning), 266
vs. AQP (advanced quality planning), 43
in DFSS (design for six sigma), 4547
and product reliability, 298
AQP (advanced quality planning), 4042
vs. APQP (advanced product quality planning),
43
basic requirements for, 42
demonstrating, 42
pitfalls in, 4344
qualitative methodology in, 4445
reasons for using, 42
workable plans for, 43
Archiving, 40
Area sensors, 218
ARIZ algorithm, 549
Arrhenius model, 308309
Assembly lines, 207
simulation of, 170175
two-station,

173174

Assembly mistakes, 213
Assembly omissions, 214
Assembly process, 206
Assessment items, 476
Assets, 679
in balance sheet equation, 664, 674
buying, 666
contra, 684
current, 665
current value of, 680
vs. expenses, 680681
nancial, 681
in nancial statements, 679
xed, 665666
historical cost of, 679
ination effect on, 679
intrinsic value of, 680
as investments, 680
liquidation value of, 679
noncurrent, 667
physical, 681682
psychic value of, 680
replacement cost, 680
return on assets (ROA), 694
return on assets managed (ROAM), 695
return on gross assets (ROGA), 695
return on net assets (RONA), 695
selling, 669
slow, 670
types of, 681682
valuation methods, 679681
values based on historical costs, 678679
Assets/equity ratio, 692
Asset turnover, 704705
AST (accelerated stress test), 310311
AT&T, benchmarking in, 122123
Attributes, 116
Attributes tests, 313–314, 423
Auditing, 597
Automation, 682
Automobile industry, 545
AQP (advanced quality planning) in, 41
commonly used elements in, 176
product reliability in, 296297
six sigma philosophy in, 2
Automobile parts industry, 54
Availability, 292, 356
Axiomatic design
applying to cars, 543544
axioms in, 542
benets of, 545547, 545547
and change management, 545
changing existing designs with, 544545
creating new designs with, 544
diagnosing existing designs with, 544

SL3151ZIndex Page 738 Thursday, September 26, 2002 8:56 PM

Index

739

and project workow, 545
vs. robust design, 543
Axiomatic designs, 541, 715
Axioms, 542

B

Balance sheets, 664665
accrual accounting in, 678
in annual reports, 671
changes in working capital items in, 670
current assets and liabilities, 665
current liabilities, 665666
earning per share, 669
equation, 664, 673675
xed assets, 665666
footnotes in, 670
gross prot, 668
income statements, 667668
noncurrent assets, 667
noncurrent liabilities, 667
ratio analysis of,

689690

shareholder's equity, 667
slow assets, 666
sources of funds, 669
statement of changes in, 669
in summary of normal debit/credit balances, 674
use of funds, 670
working capital format in, 666
Bank lings, 147
Bankruptcy, 679
Barriers to market, 129
Barter, 53
Basic functions, 557, 574575
Basic manufacturing process, 206
Basic needs, 229
Basic quality, 68–69, 70
Bathtub curve, 293
Beams, 176
Beam sensors, 218
Behavioral theory, 663664
Beliefs, and change management, 127
Benchmarking, 97
alternatives, nancial analysis of, 163164
alternatives, identifying, 129132
alternatives, prioritizing, 139142
areas of application of, 9799
and business strategy development, 99102
and change management, 126129
classical approach to, 102103
common mistakes in, 166
continuum process, 98
and Deming management method, 110–111
and Deming wheel, 111112
in design FMEA, 267268
in DFSS (design for six sigma), 717
nancial, 157
in FMEA (failure modes and effects analysis),
230
gaining cooperation of partners in, 148
generic, 122
history of, 97
identifying candidates for, 129134
identifying cause of problems with, 134
in least cost strategy, 100101
making contacts for, 149
as a management tool, 119120
managing for performance, 164166
and national quality award winners,
107110
operations process, 123
and organizational change, 126129
organizations for, 123124
performance and process analysis, 149158
preparing proposals for, 149
activities before visiting partners, 149
understanding own operations, 149
activity analysis, 150152
activity modeling, 152153
owcharting process, 152153
activities during visit, 155
understanding partners' activities, 155
identifying success factors, 155156
activities after visit, 156
activities of partners, 155156
in process FMEA, 275276
project evaluations, 165
resistance to, 127
scopes of, 120121
and Shewart cycle, 111112
and six sigma, 105107
sources, 142149
and SQM (strategic quality management),
102105
success factors in, 124126, 164166
technical competitive, 78
ten-step process in, 121122
types of, 122123
Benet-cost analysis, 610
Beta tests, 266
Bibliographies, 145
Bilateral tolerance, 523
Binomial distribution
in xed-sample tests, 315316
in sequential tests, 317318
Biographical sources, 145
Black Belts, in dealing with projects, 661
Bladed wheel hopper feeders, 207

Blast-create-define method, 582–584
Block diagrams, 234, 323325
Blockers, in team systems, 25
Boeing Co., 169, 196
Bolted joints, 181
Boltzman's constant (K

b

), 308
Bond rating companies, 695696
Bonds, 695696
Bonds payable, 667
Bookkeeping, 672
Bookshelf data, 349
Books of account, 675676
Book value, 666, 687
Boothroyd, Geoffrey, 202203
Boundaries, in teams, 29
Boundary diagrams, 258260
Box, George, 404, 429
Brainstorming, 230, 267268
and concept of functives, 57
in creative phase of job plans, 582
in design FMEA, 267268
in determining causes of failures, 247
in developing alternatives to functions,
558559
in planning DOE (design of experiments),
372373
in process FMEA, 275276
in value control approach, 582
Branch transmissions, 535
Brand names, 8994
Breakdowns, 278
Breakeven analysis, 705706
Breakthrough strategies, 160
Buckling, 176, 179
Budgets, 711712
calculating, 604
departmental, 711712
managing, 712
and satisfaction of management, 662
zero-based, 712
zero-growth, 712
Burden, 568569
Business assessment forms, 135–139
Business assessments, 133
Business assets. see assets
BusinessLine, 146
Business meetings, 19
Business reviews, 145
Business strategy, and benchmarking,
99100
Business transactions, recording, 672, 675
BusinessWire, 146
Buyer/supplier relationship. see customer/supplier relationship
Buying groups, 148

C

Cadillac, 107108
Calendar elapsed time, 525
Calibration, 526
Calipers, 527
Capacitive tests, 530
Capital investments, 661
Capital surplus, 670
Carlzon, Jan, 125126
Case studies, 148
Cash, 681
in annual reports, 671
in business transactions, 675
vs. prots, 678
ratio analysis of, 691692
recording, 675
sources of, 669, 673
uses of, 673
Cash basis, of accounting, 677
Cash ow, 700
calculating, 701
and change management, 127
and current assets and liabilities, 702
denition of, 700
depreciation as part of, 685
forecasting, 691
as measure in TOC (theory of constraints), 462
in NPV (net present value) analysis, 610
present value of, 162163,

165

and tax shelter schemes, 702
and working capital, 702
Cashing out, 667
Cash receipts journals, 675
Casting, 204
Catalogs, 582
Categories, 477
Category lists, in trade-off studies, 472
Catholic clerics, 672, 673
Causality, 33
Cause and effect relationships, 33, 134,

376

CDI (customer desirability index), 77
Cendata, 146
Census, 146
Centerboard hopper feeders, 207
Center for Advanced Purchasing Studies, 157
Centralized benchmarking, 124
Centrifugal hopper feeders, 207
Chain rule, 656
Chambers of commerce, 147
Change, psychology of, 126127
Channel value, 54
Characteristic matrix, 6364
Characteristics, 254255
Charting, 133

Check sheets, 484
Chemical measurements, 527
Chi-square test, 334, 626
Classication, 254255
Classied attribute analysis, 422426
Classied data, analysis of, 421430
Classied responses, 422
Classied variable analysis, 426428
Clean opinions (accounting), 671
Clerical process, as measure of quality cost, 493
Closed systems, 35
CNC lathe, 6163
Coefcient of expansion, 531
COGS (cost of goods sold), 711
in nancial benchmarking, 157
in product design cycle, 194
reducing, 545
Color marking sensors, 218
Column interaction tables, 384,

386

Combination design, 415418
Combinex method, 589591

Commerce Business Daily, 147
Commercial cost, 570
Commercial credit ratings, 698
Commodity management organizations, 15
Common stocks, 696697
Company life cycle, 133
Comparators, 527
Compensation costs, 36
Competition
and DFSS (design for six sigma), 717
and earnings, 693
and product demand, 662
Competitive assessments, 82, 83–84
Competitive best performers, 143
Competitive bidding, 10
Competitive evaluations, 118119, 131
Competitive quality ratings, 487
Competitive Strategy (book), 99
Competitors, 118
Complaints
handling, 6162
indices for, 484
processing and resolution of, 484
Complex reliability block diagrams, 323325
Components, 205
costs of, 571572
levels of, 442
tolerance levels of, 447454
Component testing, 266
Component view, 238
Composite credit appraisal, 698
Compound growth rates, 711
Comptrollers, 503
Computer databases, 146
Computer formats, 339
Concept FMEA, 224, 262
Concept phase, 295
Concurrent engineering, 199,

468

Condition, statement of. see balance sheets
Conduction, 179
Conference method, 513515
Condence level
around estimation, 409
of demonstration tests, 312
Conguration, probability of, 637638
Conformance, 2930, 112
Conjoint analysis, 88
in DFSS (design for six sigma), 715, 718
empirical example of, 9094
hypothetical example of, 8990
managerial uses of, 9595
Constant dollars, 484
Constant rate failure, 619
Constant-stress testing, 305306
Constraints, 180, 457458, 463465
Construction contractors, 147
Consultants, 148
Consumer's risk, 313
Consumer groups, 147
Contamination, 289
Continuous production ow manufacturing,
207208
Continuous time waveform,

621

Continuous transfer manufacturing, 206
Contra account, 684
Contra asset, 684
Contractors,

358

Contribution margin analysis, 706
Control charts, 484,

621624

Control factors, 393, 411
in DFSS (design for six sigma), 719
in monitoring team performance, 33
and noise interactions,

337

Controlled radius, 522
Convection, 179
Conventional dimensioning, 518
Conventional tolerancing, 518
Conveyors, 206
Cooperation. see partnering
Coordinate measuring machines, 527
Copper, 531
Copper plating, six sigma in, 6
Corporate culture, 127128
Corporate general interest buyers, 117
Corporate growth, 663
Correlation matrix, 83
Corrosive materials, 289, 294
Cost analysis, 156, 485
Cost benchmarking, 123

Cost-function worksheets, 566
Cost of goods sold. see COGS
Cost of non-quality, 101
Cost of sales, 703, 711
Costs, 101
actual, 478480, 568, 703
alternative, 703
analyzing, 156
appraisal, 101, 482,

489

benets of DFM/DFA (design for
manufacturability/assembly) on, 189
comparison reports, 480
of components, 571572
denition of, 569
design, 569
differential, 703
direct, 36, 703
and earnings, 693
elements of, 571
of engineering changes, 297298
estimated, 703
external failure, 101, 483,

491

extraordinary, 703
xed, 569, 703
freight, 570
functional area, 573
and functions, 580
of goods, 683684
historical, 679, 703
imputed, 703
incremental, 569, 703
indirect, 36, 703
internal failure, 101, 483,

490

joint, 703
manufacturing, 569, 571
monitoring system, 478
noncontrollable, 703
opportunity, 703
out of pocket, 704
per dimension, 572
per functional property, 572573
period, 704
per period of time, 572
per pound, 572
prepaid, 704
presentation formats for, 485
prevention, 101, 482,

488

prime, 704
of processes, 571572
product, 704
production, 704
of product unreliability, 294
quantitative, 572573
reducing, 480
replacement, 680, 704
and revenues, 711
of sales, 711
sources of information, 570
standard, 478, 570, 704
sunk, 704
in theory of rm, 662
vs. throughput, 461
tolerance limit, 480
total, 570
and value, 558
and value control, 556
variable, 704
variance of, 480
visibility of, 564565, 568
Costs of quality. see quality costs
Cost/time analysis, 134
Cost visibility, 568
in cost-function worksheets, 564,

566

techniques, 571573
Counterpart characteristics, 73
Counters, 219
County courthouses, 147
Coupled matrix, 657
Court records, 147
Covariance, 650651
Coverage ratios, 692
Crashes, 177
Crash programs, 194196
Creative phase (job plans), 582584
Credit appraisal, 698
Credit balance, 664
Credits, 664665
in business assessments,

136

in business transactions, 672
recording, 675
revolving, 666
using, 673
Critical condition indicators, 219
Critical design review, 466
Critical success factors, 129
Crosby, P., 481
Cross-functional teams, 472, 604
CTP (process characteristics), 510
CTQ (quality characteristics), 510, 719
Cumulative density function,

618

Cumulative distributions, 170171, 640
Cumulative frequency,

422

Cumulative rate of occurrence, 425426
Current assets, 665
in annual reports, 671
net changes in, 670
ratio of total liabilities to, 697
Current controls, 282
Current liabilities, 665666
analysis of, 691

in annual reports, 671
and cash ow, 702
net changes in, 670
Current ratio, 691
Current value, 680
Customer attributes, 201202
Customer axis, 7778
Customer desirability index (CDI), 77
Customer duty cycles, 291
Customer requirement planning matrix, 72
Customer requirements, 8588
Customers
customer axis, 7778
in evaluation of competitive products,

203

fast response to, 107
overarching, 54
perception of performance vs. importance, 131
perception of quality, 117119
in process FMEA, 269
processing and resolution of complaints, 484
roles in customer/supplier relationship, 13
satisfaction of. see customer satisfaction
service hot lines for, 118
surveying, 118119
types of, 229
view on quality, 114
voice of, 73,

83

wants and needs of, 5354, 228230
Customer satisfaction, 74
and benchmarking, 104
vs. customer service, 4951
in expanded partnering, 25
levels of, 49
vs. loyalty, 49
in partnering, 23, 25
and product design, 113114
and product performance, 288
and product reliability, 292293
scorecard, 718
Y relationship to, 718
Customer service. see services
Customer/supplier relationship, 1114
checklists of, 2123
improving, 2021
interface meetings in, 16
major issues with, 1920
Customs and traditions, 582

D

DAA (dimensional assembly analysis), 199
DaimlerChrysler, 203, 296297
Dana Corp., 54
Databases, 146
Data failure distribution, 633
Data processing, 6, 493
Data recording, 526
DDB (double declining balance) method, 686687
Death spiral symptom, 461
Debit balance, 664
Debits, 664665
in business transactions, 672
recording, 675
using, 673
Debt
in annual reports, 671
and equity, 692, 697
long-term, 667
net reduction in, 670
in theory of rm, 661
Debt/assets ratio, 692
Decay time, 618
Decentralized benchmarking, 124
Decimal dimensions, 520
Decision analysis, 610

Decline and Fall

(book), 661
Decline period, of product life cycle, 699700
Decoupled designs, 542
Decoupled matrix, 657
Defective parts, 278
Defect matrices, 485
Defects, 209210
detecting, 216
examples of,

213

matrices for, 484
as measure in TOC (theory of constraints), 462
mistakes as sources of, 212213
preventing, 216
quality defects, 291
reliability defects, 291
zero, 483
Defense Technical Information Center,

340

Deferred compensation, 667
Deferred income taxes, 669
Deection, 654655
Deformations, 177
Degrees of freedom, 383, 407, 428
Dell Corp., 169
Delphi Automotive Systems, 169
Demand
and earnings, 693
factors affecting, 129
and sales forecasting, 710
Deming, W.E., 110, 480481
Deming management method, 110111
Deming wheel, 111112
Demographic data, 146
Density, 180
Density function, 633
Departmental budgets, 711712

Department benchmarking, 123
Department of Defense, 555
Depreciation, 684
accelerated, 687
accumulated, 665666, 684
in cash ow analysis, 701
as expenses, 665, 669, 684
as part of cash ow, 685
of physical assets, 681
replacement cost, 687
straight line, 685686
sum-of-the-years' digits (SYD), 686
as tax strategy, 684-685
as valuation reserve, 684
Derating, 359, 362
Descriptive feedback, 31, 33
Design controls, 249, 265266
Design cost, 569
Design customers, 229
Design engineering, as measure of quality cost,
493494
Design engineers, 269
Design FMEA, 224–225, 262; see also FMEA (failure modes and effects analysis)
calculating RPN (risk priority number) in, 267
describing anticipated failure modes in, 264
describing causes of failure in, 264265
describing effect of failure in, 264
describing functions of design/product in, 264
detection table,

252

in DFSS (design for six sigma), 721
estimating failure detection in, 266267
estimating frequency of occurrence of failure
in, 265
estimating severity of failures in, 265
failure modes, 240244
forming teams for, 263
functions of, 264
identifying system and design controls in, 265
linkages to process FMEA and control plans,
258260
objectives of, 263
occurrence rating,

249

purpose of, 265
in QFD (quality function deployment), 725
recommending corrective actions in, 267268
requirements for, 263
severity rating,

246

special characteristics for,

257

timing, 263
Design for manufacturability/assembly. see DFM/DFA
Design for six sigma. see DFSS
Design margins, 359360
Design of experiments. see DOE
Design optimization, 178, 182185
Design parameters, 336337, 542, 656
Design phase, 5
Design reliability, 313
Design requirements, 8788
Design reviews, 464
checklists of,

468

denition of, 362
FMEA (failure modes and effects analysis) in,
467
objectives of,

466

in R&M (reliability and maintainability),
352
sequential phases of, 465467
in system/component level testing, 266
Designs, 183
axiomatic, 541544
existing, 544
extensions and changes to, 544
new, 544
Design sets, 184
Design synthesis, 3839
Design variables, 183185
Destructive testing, 529
Detect delivery chutes, 219
Detection ratings, 250
and lowering risks, 267268, 276
vs. occurrence ratings, 253
in surrogate machinery FMEAs, 282
Detectors, 216219
Development risk, 195
Deviations, 9193
Dewhurst. Peter, 202203
DFM/DFA (design for
manufacturability/assembly), 187189
approach alternatives to, 198199
business expectations from, 189190
charters, 193
and cost of quality, 509510
effects on product design, 204
elements of success in, 192194
fundamental design guidance for, 204205
instruction manuals for, 199, 200203
mechanics, 199
objectives of, 187189
in product design, 195, 204
product design in, 194
product plans in, 194
and product reliability, 298
sequential approach to, 191
simultaneous approach to, 191
tools and methods for, 198199
use of human body in, 199200
DFSS (design for six sigma), 9

and APQP (advanced product quality
planning), 4547
and AQP (advanced quality planning), 4047
and cost of quality, 510
essential tools in, 715716
implementing with project management,
605609
model,

716

partnering in, 925
physical and performance tests, 722723
and project management, 608609
quality engineering approach in, 2533
and R&M (reliability and maintainability),
364365
and reengineering, 516517
and simulation, 185
stages in, 717723
systems engineering in, 3440
and TOC (theory of constraints), 463
transfer function in, 52
transformation functions in, 717718
Diameter, 522
Difference between two means, 655656
Differential costs, 703
Differentiation strategy, 101102, 110
Digital Equipment Corp., 203
Digital signal processing,

622

Digitizing, 178
Dimensional assembly analysis (DAA), 199
Dimensional mistakes, 214
Dimensioning, 518522
Dimensions, 522
Direct costs, 36, 703
Direct labor, 569
Direct magnitude evaluation (DME), 586
Direct materials, 569
Directories, 145146
Direct product competitor benchmarks, 122
Discrete time,

621

Discretionary funds, 700
Discriminant analysis, 699
Discriminators, 477
Displacements, 179180
Displacement sensors, 218
Disposable razor, failure modes in, 6667
Distribution, 54
Diversity, in team systems, 2930
Dividends, 670, 700
DMAIC model, 7
DME (direct magnitude evaluation), 586
Documentation, 40, 474
DOE (design of experiments), 367–370; see also experiments
analysis, 405420
ANOVA (analysis of variance), 407410
classied data, 421430
combination design, 415418
graphical analysis, 405407
signal-to-noise (S/N) ratio, 411415
comparisons using,

369

conrmatory tests in, 418421
denition of, 362
in DFM/DFA (design for
manufacturability/assembly), 199
dynamic situations in, 430441
group runs using,

369

loss function in, 397398
parameter design, 441447
planning, 372380
in reliability applications, 335336
setting up experiments, 380395
signal-to-noise (S/N) ratio in, 403404
Taguchi approach, 370371
tolerance design, 447454
Dominance factors, 273
Donneley demographics, 146
Door intrusion beams, 177
Double declining balance (DDB) method, 686687
Double-entry bookkeeping, 672, 675
Double feed sensors, 218
Dow Jones, 146
Downtimes, 238, 278
Dry run, 362
Duane model,

361

Dun and Bradstreet, 146, 698
Dupont Connector Systems, 6
DuPont system, modied, 157, 704705
Durability, 112
Durability life, 289
Dust, 289
Dynamic analysis, 176
Dynamic process, vs. static process, 12
Dynamic situations, 430441

E

Earning ratios, 693695
Earnings, 692693
and change management, 127
and luck, 693
retained, 670
Earnings before interest and taxes (EBIT), 668
Earnings per share (EPS), 662, 669
EBIT (earnings before interest and taxes), 668
Economic buyers, 117
Economic order quantity (EOQ) model, 707
Economies of scale, 155
Effort goals, 159
Eigenvalues, 177
Eigenvectors, 177

Eight-level factors, 389
Elastic buckling, 176
Elasticity, 177, 179
Elastic of modulus, 654
Electrical design margins, 359
Electrical discharges, 289
Electrical measurements, 527
Electroforming, 204
Electronics industry, 2, 5
Element connectivities, 180
Element data recovery, 181
Element properties, 180
Elevating hopper feeders, 207
Emission standards, 54
Employees, 663
and benchmarking, 104, 107
motivation and earnings of, 693
Enclosures,

358
Encyclopedia of Business Information Services

,
145
Encyclopedias, 145
Engineering
in business assessments,

136137

conformance elements in, 502
manufacturing, 494
nonconformance elements in, 503
plant, 494495
Engineering analysis, 266
Engineering changes, costs of, 297298
Enhancing functions, 5859, 264
Environmental controls, 526
Environmental FMEA, 225
Environmental laws, 54
Environmental Protection Agency (EPA), 147
EOQ (economic order quantity) model, 707
EPS (earnings per share), 662, 669
Equal bilateral tolerance, 523
Equipment, 362, 675
Equipment errors, 526
Equity
in balance sheet equation, 664, 674
and debt, 692
ratio to total debt, 697
of shareholders, 667
in theory of rm, 661
Equity/debt ratios, 692
Equity earnings, 484
Erlicher, Harry, 555
Errors, 526527
eliminating, 212
inevitability of, 212
proong, 208, 274
variables, 336337
Essential functions, 5859
Esteem value, 558
Estimated costs, 703
Euclid, 542
Euler buckling analysis, 176, 177
Evaluation phase (job plans), 585591
Evaluation summary, 587
Evidence books, 475, 477
Excel (software), 182
Exchange value, 558
Excitement needs, 229
Excitement quality, 6971
Executives, 663
Expanded partnering, 1214
Expansion, coefcient of, 531
Expected customer life, 289
Expenditures, 680
Expenses, 701
vs. assets, 680681
depreciation as, 684
and productivity, 702
Experiments, 249; see also DOE (design of experiments)
analysis in, 405410, 415418
column interaction tables in, 384
conrmatory tests in, 418421
degrees of freedom in, 383
dynamic situations in, 430441
factor levels in, 380382
factors with large numbers of levels in,
392
factors with three levels in two-level arrays in,
391
factors with two levels in three-level arrays in,
390391
hardware test setups in, 385386
inner and outer arrays in, 393
linear graphs in, 382384
nesting of factors in, 392
orthogonal arrays in, 383384
parameter design, 441447
planning, 372380
randomization of tests in, 394
test arrays in, 387389
tolerance design, 447454
Exponential distribution, 617, 641
in fixed-sample tests, 318–320
in reliability problems, 618
in sequential tests, 321323
Exponential function, 619
Extended interior penalty functions, 185
External failure costs, 101, 483,

491

External gate hopper feeders, 207
External manufacturing, 67
External variations, 28
Extraordinary costs, 703
Extrusion, 204

F

F.W. Dodge reports, 145
Fabrication process, 206
Factors, 380
ANOVA (analysis of variance) decomposition
of, 410411
choosing number of levels, 380382
decomposition of, 410
effects of,

424425

eight-level, 389
four-level, 389
nesting of, 392
nine-level, 390
test matrix for,

377

three-level, 385, 391,

392

in three-level arrays, 390391
two-level, 390
types of, 393
Facts, and change management, 127
Fail safe design, 208
Failure, 362
causes of, 246247, 264265, 272273
constant rate, 619
costs of, 483,

490491

cumulative function, 635
detecting, 267, 274275
effects of, 264, 271272
free time, 643
logs of, 282
methods of determining, 247249
occurrence rating, 249, 265, 273
probability of, 618620
severity of, 265, 273
user costs, 484

Failure Definition and Scoring Criterion (book), 288
Failure modes, 240
cause and occurrence, 246247
describing, 264, 270271
design controls for, 249250
detection rating, 250
determining, 66, 247249
effects of, 243245, 278
examples of, 242243
in FMEA (failure modes and effects analysis),
242244
in function diagrams, 66
in machinery FMEA, 277278
process controls for, 249250
severity rating, 244245
Failure modes and effects analysis. see FMEA
Failure rate, 633
conversion to MTBF (mean time between
failures), 361
in failure-truncated tests, 318
as measure of product reliability, 290
and product life, 293295
in R&M (reliability and maintainability), 355
and system failure, 629
Failure reporting, analysis, and corrective action
system (FRACAS), 341, 352, 362
Failure-truncated tests, 318319
FAST (functional analysis system technique), 567,
577580, 712
Fatigue, 294
Fault tree analysis. see FTA
Fax machines, in customer/supplier
communications, 13
FEA (nite element analysis), 175
analysis procedure in, 178179
common problems in, 182
denition of, 362
input to models in, 180
outputs from, 180181
procedures in, 178
solution procedure in, 179180
techniques in, 182
types of, 176177
Feasibility, 362
Feasible designs, 183184
Feature of size, 522
Features, 112, 522
Federal Database Finder, 147
Federal depository libraries, 147
Federal Reserve banks, 147
Feedback, 3031
descriptive, 31, 33
loops, 3031
negative, 35
positive, 35
systems, 31, 35
Feeders, 207
Feelings, and change management, 127
Fiber optical tests, 530
Fiber sensors, 218
Field performance data,

487

Field service reports, 279
FIFO (rst-in rst-out) method), 683
Finance, as measure of quality cost, 495
Financial analysis, 704709
breakeven analysis, 704705
contribution margin analysis, 706
EOQ (Economic Order Quantity) model, 707
IRR (internal rate of return) method, 709710
modied duPont system, 704705
NPV (net present value) method, 709
pricevolume variance analysis, 707
return on investment analysis, 708709
ROI (return on investment), 708-709

Financial assets; see also assets
Financial benchmarking, 157
Financial forecasting, 688
Financial leverage
in annual reports, 672
and earnings, 692
in nancial comparison, 131
in modied duPont formula, 704705
in rating bonds, 695
in rating stocks, 697
ratios, 692
Financial management rate of return (FMRR),
709710
Financial planning, 710712
Financial position, statement of. see balance sheets
Financial rating companies, 695
Financial rating systems, 695
bond rating companies, 695696
commercial credit ratings, 698
Financial ratios, 145
Financial reports, 146, 664
accountants' report, 671
annual reports, 671
audited, 671
balance sheets, 664665
Financial statement analysis, 688
Finished product inspection, 528
Finite element analysis. see FEA
Finite elements, 175176
Firm, theory of, 661662
First-in rst-out (FIFO) method, 683
Fishbone diagram, 134, 348,

373

Fixed assets, 665666
accumulated depreciation of, 684
in cash ow analysis, 701
in nancial comparison, 131
as noncurrent assets, 667
Fixed burden costs, 569
Fixed costs, 569
in breakeven analysis, 704705
in contribution margin analysis, 706
denition of, 703
Fixed-sample tests, 314
using binomial distribution, 315316
using exponential distribution, 318320
using hypergeometric distribution, 315
using Poisson distribution, 316
using Weibull and normal distributions, 320
Florida Power and Light, 143
Flow charts, 6163, 153154
Fluid mechanics, 177
FMEA (failure modes and effects analysis),
223224
action plans in, 253258
in benchmarking, 134
benets of, 226
common problems in, 260262
in core engineering process, 299
denition of, 224, 362
design FMEA. see design FMEA
design/process controls in, 250
in design reviews, 467
detection rating in, 250
in DFM/DFA (design for
manufacturability/assembly), 199
in DFSS (design for six sigma), 716, 721
failure mode analysis in, 240242
forms, 235238
vs. FTA (fault tree analysis),

469

function concepts in, 53, 6468
getting started with, 228235
history of, 226227
initiating, 227228
learning stages in,

262

machinery. see machinery FMEA
in problem solving, 225226
process. see process FMEA
in quality lever, 227
scopes of, 236
steps of, 469
transferring causes and occurrences to forms, 250
transferring current controls and detection to forms, 254
transferring RPN to forms, 256
transferring severity and classification to forms, 248
types of, 224–225
typical body, 237
typical header, 236
FMRR (nancial management rate of return),
709710
Focus groups, 131, 147
Footnotes, in balance sheets, 670
Force, 527
Force eld analysis, 128129
Ford Motor Co., 41, 169, 203, 296297
Forecasts
in expanded partnering, 17
nancial, 688
as measure of quality cost, 495
sales forecasting, 710
technology forecasting, 156157
Forgetfulness, 210
Forging, 204
Fork lifts, 207
Formal qualication review, 466
Four-level factors, 389
FRACAS (failure reporting, analysis, and
corrective action system),

341

, 352, 362

F ratio statistical test, 408
Freedom of Information Act, 147
Free-state conditions, 522
Freight costs, 570
Frequency distributions, 170
Friction, 179, 294
FTA (fault tree analysis), 299
denition of, 362
in design FMEA, 267
in determining causes of failures, 248
vs. FMEA (failure modes and effects analysis),

469

in QFD (quality function deployment), 725
in R&M (reliability and maintainability), 348,
355
seven-step approach to, 469470
Fuji-Xerox, 143
Functional analysis, 156
Functional analysis system technique (FAST), 567,
577580, 712
Functional area costs, 573
Functional benchmarking, 122, 123
Functional requirements, 541543
Function journals, 145
Functions, 5253, 574
alternatives to, 558559
analysis and evaluation, 567568, 575580
basic functions, 574575
and costs, 580
denition of, 52, 230
of designs, 264
determining, 567, 573574
developing, 238
diagrams, 5556
as dimension of product quality, 113
enhancing, 5859
essential, 5859
evaluating, 580582
failure modes, 6667
in FMEA (failure modes and effects analysis),
6468, 230
objective, 183
organizing, 239240
penalty, 185
in product ow diagrams, 5661
of products, 264
in QFD (quality function deployment), 6468
secondary, 575
task, 5859
terminus, 6667
tree structure, 239240
types of, 264
in VA (Value Analysis), 6468
in value analysis, 557
in value control, 567568, 573574
values, 581582
Function tree process, 239240
Functives, 5556
Funds
in annual reports, 671
in balance sheets, 669
discretionary, 700
sources of, 661, 669
use of, 670

G

G. Heilman Brewing,

99

GAAP (generally accepted accounting principles),
672
Gages, 527
accuracy, 533
blocks, 531533
in hierarchy of standards, 525
linearity, 533
repeatability, 533
reproducibility, 533
stability, 533
Galbraith, John Kenneth, 663, 681
Gale Research, 145
Gamma distribution, 625631
Gamma functions, 626631
Gamma ray tests, 530
Gap analysis, 158159
Gasoline fumes, 289
Gates, 219
GD&T (geometric dimensioning and tolerancing),
199, 518523
General design standards,

338

General Electric Co., 555
General journals, 675
General ledgers, 676
Generally accepted accounting principles (GAAP),
672
General Motors Corp., 296297
annual report (1986) of, 670
AQP (advanced quality planning) in, 41
ROE (return on equity),

99

General services, as measure of quality cost, 501
Generic benchmarking, 122
Generic products, 8994
Geometric analysis, 176
Geometric dimensioning and tolerancing (GD&T),
199, 518523
Geometry, 180
Goals, 159
characteristics of, 159
customer-oriented, 131
guiding principles, 160
interdepartmental, 161

philosophy in setting, 159160
in project management, 608
results vs. effort, 159
service/quality, 131
strategic, 19
structures of, 160161
tactical, 19
Goals downplans up (forecasting), 710
Goodwill,

491

, 666
Gorton Fish Co., 169
Government, 663
Government Printing Ofce Index, 147
Government regulations, 129
Graham, Benjamin, 697698
Graphical analysis, 405407
Graph transmissions, 535537
Grid point data recovery, 181
Gross assets, 695
Gross prot, 668
Gross prot margin, 130, 157, 668
Groups. see teams
Group technology (GT), 199
Growth period, of product life cycle, 699700
Growth rates, 663, 711
GT (group technology), 199
Guide rods/pins, 219
Guide to the Project Management Body of
Knowledge (book), 599
H
Habits, and productivity, 582
HALT (highly accelerated life test), 310
Handbooks, 145
Hardened tool steel, 531
Hardness testers, 527
HASS (highly accelerated stress screens), 310
Hazard rate, 294, 634635, 643
Heat transfer, 179, 357
Heavy equipment industry, 200
Help-seekers, in team systems, 25
Hidden costs, 36
Hidden factories, 36
Highly accelerated life test (HALT), 310
Highly accelerated stress screens (HASS), 310
Histograms, 484
Historical costs, 679, 703
Historic data, 133, 266
Homeostasis, 31
Hood buckling, 177
Hopper feeders, 207
Horizontal beam deection, 654655
House of Quality matrix, 73, 140141
Human body, in DFM/DFA (design for
manufacturability/assembly), 199200
Human mistakes, 210212
Human resources
conformance elements in, 506
in customer/supplier relationship, 22
in expanded partnering, 24
nonconformance elements in, 506
in partnering, 24
Humidity, 289, 526
Hypergeometric distribution, 315
I
IBM, 109110, 195
Identication mistakes, 210211
Image, in quality, 113
Implementation phase (job plans), 591592
attitude, 596
audit results, 597
goals, 591592
organization, 594595
plans, 592593
principles, 593594
system evaluation, 593
value council, 596597
Importance/feasibility matrix, 141
Importance/performance analysis, 131
Importance rating, 84
Improvement potential, 141
Imputed costs, 703
Inadvertent mistakes, 211
Incentives, in benchmarking, 104105
Inch dimensions, 520521
Income after taxes, 484
Income before extraordinary items, 668669
Income before nonrecurring items, 668669
Income before taxes, 668
Income from continuing business, 668
Income statements, 664, 667668
in annual reports, 671
ratio analysis of, 690
in summary of normal debit/credit balances,
674
Income taxes, 683685
Incoming material inspection, 528
Incremental costs, 569, 703
Independent quality rating, 487
Indexing mechanisms, 206
Indicators, 21, 527
Indirect costs, 36, 703
Indirect labor, 569
Indirect materials, 569
Industrial cleanser, 8994
Industrial engineering, 508
Industrial state, 663
Industry analysis, 129130
Infant mortality period, 293
Infeasible designs, 183184
Ination, 679
and inventories, 682683
and sales forecasting, 710
Information
collecting, 564
in customer/supplier relationship, 22
in expanded partnering, 24
Information brokers, 148
Information phase (job plans), 563564
cost visibility, 564565, 568
functions in, 567568, 573574
information collection, 564
project scope, 565567
Information systems and management
in business assessments, 139
conformance elements in, 508
nonconformance elements in, 509
in partnering, 22, 24
Information theory, 542
Informative inspection, 217
Inherent availability, 292
In-house reviews, 467
Injuries, costs of, 36
Inland Steel, 99
Inner arrays, 393
Innovations, levels of, 549
In-process inspection, 528
Input, 27, 61
Input output method, 577
Inspections
in classifying characteristics, 529
interpreting results of, 530
points, 528
process in, 206
purpose of, 528
stations, 487
techniques in, 217
types of, 528529
Instruments, 525
Intangible assets, 667
Integrator approach, 199
Intellectual property, 22
Intelligence Tracking Service, 146
Intentional mistakes, 212
Interdepartmental goals, 161
Interest income, 668, 685
Interface matrix, 258260, 279
Interference, 360
Interference testing, 321323
Interim design review, 466
Intermittent transfer manufacturing, 206
Internal assessments, 132
Internal benchmarking, 122
Internal best performers, 143
Internal failure costs, 101, 483, 490
Internal manufacturing, 56
Internal organizations, for partnering, 1415
Internal processes, 53
Internal rate of return (IRR) method, 611612,
709710
Internal standards and tests, 79
Internal variations, 29
International System of Units (SI), 520
Internet, in customer/supplier communications, 13
Intrinsic value, of assets, 680
Inventories, 682
cycles, 707
determining value of, 682683
Economic Order Quantity (EOQ) model,
707
as internal failure cost, 490
Inventory control systems, 159
Inventory prots, 458
Inverse power model, 307308
Inversion method, in problem-solving, 583
Investments, 458
assets as, 680
bonds, 695696
capital, 661
and depreciation, 685
rating systems for, 695696
stocks, 696698
Invoice, 675
IRR (internal rate of return) method,
611612, 709710
Ishikawa diagram, 134
ISO 9000 certication program, 42
ISO/TS 19469 certication program, 42
J
Jaguar, 296297
JIT (just-in-time) method, 199
Job plans, 559
creative phase, 582584
evaluation phase, 585591
implementation phase, 591597
information phase, 563568, 573574
steps in, 561562
vs. techniques, 562563
Job shops, 207
Johnson Controls, 54
Joint costs, 703
Joint stiffness evaluation, 177
Joint ventures, 196
Judgment inspection, 217
Juran, J., 480
Just-in-time (JIT) method, 199
K
Kaizen method, 142, 160
Kano model, 6871
basic quality depicted in, 70
of customer needs, 229
in DFSS (design for six sigma), 715, 718
excitement quality depicted in, 7071
performance quality depicted in, 70
and transformations, 53
Kepner Trago analysis, 248
Key life testing, 299
K factor, 4
Kolmogov-Smirnov test, 334
L
L.L. Bean, 124, 143
Labor, 569
Laboratories, 487
Laboratory errors, 526
Lack of standards mistakes, 211
Landlords, 147
Last-in rst-out (LIFO) method, 683684
Law of maldistribution, 585
Laws of mechanics, 542
LCC (life cycle costs), 348
denition of, 363
in R&M (reliability and maintainability),
3567
Leadership, in partnering, 2124
Lead facilitators, in trade-off studies, 472473
Lead times, 74
Leapfrog approach, 196
Learning curves, 155
Leasing agents, 147
Least cost strategy, 100101
Ledgers, 676
Legal department
conformance elements in, 509
as measure of quality cost, 495
nonconformance elements in, 509
Legislative summaries, 147
Leverage
in annual reports, 672
and earnings, 692
in nancial comparison, 131
in modied duPont formula, 704705
in rating bonds, 695
in rating stocks, 697
ratios, 692
Liabilities
accrued pension, 667
in balance sheet equation, 664, 674
current, 665666
and current assets, 697
increasing, 669
noncurrent, 667
pension, 667
Life cycles, 133
of companies, 133
definition of, 363
of products. see product life cycle
LIFO (last-in first-out) method, 683–684
Limits dimensioning, 523
Lindbergh, Charles, 554
Linear graphs, 382384, 386
Linear measurements, 527
Linear static analysis, 177
Line elements, 176
Line organizations, 15
Liquid assets, 691
Liquidation value, 679
Liquidity, 665, 671
in financial comparison, 131
ratio analysis of, 691692
Liquidity ratios, 691692
Loads, 180, 181
Long-term debts, 667
Long-term process variation, 45
Loops, 184, 538
Loop transmission, 538
Losses, 671
and cash flow, 702
controlling, 36
as part of transactions, 672
Loss function, 397398
calculating, 398402
for LTB (larger-the-better) situations, 402
vs. process performance (Cpk), 402–403
and signal-to-noise (S/N) ratio, 403405
for STB (smaller-the-better) situations, 401
Low safety factors, 294
LTB (larger-the-better), 402, 413
Lubricants, 33, 289
Luck, and earnings, 693
M
Machine condition signature analysis (MCSA),
363
Machine customers, 229
Machinery FMEA, 224225, 277;
see also FMEA (failure modes and effects
analysis)
classification in, 279
current controls, 282
detection ratings, 282
failure modes, 277278
and FTA (fault tree analysis), 348
identifying functions, 277
identifying scopes of, 277
occurrence ratings in, 282
potential causes in, 279
in R&M (reliability and maintainability),
351352, 361
recommended actions in, 283
RPN (risk priority number), 282283
severity rating, 279
Machining, characteristic matrix of, 6364
MacLaurin series, 646, 649
Magnetic disk feeders, 207
Magnetic elevating hopper feeders, 207
Magnetic fields, 289
Magnetic particle tests, 530
Maintainability, 292, 338, 363
Maintenance records, 282
Major parts standards, 338339
Malcolm Baldrige National Quality Award, 105
Maldistribution, law of, 585
Management
benchmarking in, 98, 103104, 124
and budgets, 662
in business assessments, 138
and earnings, 693
as measure of quality cost, 496
operational, 601
in partnering, 19
roles in customer/supplier relationship, 2122
and security, 663
systems concept in, 35-37
in theory of firm, 662
Management process benchmarking, 122123
Manpower, 483
Manuals, 145
Manufacturing-based view, 114
Manufacturing cells, 206
Manufacturing cost, 569, 571
Manufacturing engineering, 296
Manufacturing engineering sign-off approach, 199
Manufacturing engineers, 269
Manufacturing process, 206
approaches to, 207208
in business assessments, 136
categories of, 206, 206207
as cause of product failures, 291
conformance elements in, 506507
controls, 273274
costs, 569
costs of, 571
design-related factors in, 197
external, 67
factors affecting, 197198
functions, 269270
improving, 489
internal, 56
as measure in TOC (theory of constraints), 462
nonconformance elements in, 507
one in, one out, 207
product design-related factors affecting, 197
in R&M (reliability and maintainability), 350
schematic diagram, 206
secondary, 206
and theory of non-constraints, 464
Margin/fit problems, 177
Marketable securities, 671, 681
Marketing
advantages in, 74
conformance elements in, 505506
as measure of quality cost, 497
nonconformance elements in, 506
Market niches, 54
Market research, 148
in DFSS (design for six sigma), 715, 718
in product development, 295
as source of benchmarking information, 144
in surveys, 118
Market segmentation, 102
in benchmarking, 125
in DFSS (design for six sigma), 717
planning, 45
Market segments, 54
Market share, 696
Mass, 527
Massachusetts Institute of Technology, 541543
Mass production, 518, 726
Master Belt, in dealing with projects, 661
Master Black Belt, in dealing with projects,
661
Materials, 569
analysis of, 176
in business assessments, 136
direct, 569
errors in, 526
handling, 206207
indirect, 569
as input in team systems, 26
as measure of quality cost, 497499
properties of, 180
raw, 487, 523
as source of mistakes, 212
in TOC (theory of constraints), 458
Mathematical modeling, 266
Matrix analysis, 587589
Maturity period, of product life cycle, 699700
Maytag, 99
MCSA (machine condition signature analysis),
363
Mean deflection, 654
Means, difference between two, 655656
Mean time between failures. see MTBF
Mean time to repair. see MTTR
Measurement mistakes, 214
Measurement systems, 524525
interpreting results of inspection and testing in,
530
mechanical, 527
purpose of inspection in, 528529
roles of, 525527
sources of inaccuracy, 526
techniques and equipment, 527528
testing methods, 529530
Mechanical design margins, 359360
Mechanical loads, 178179
Mechanical measurements, 527
Mechanics, laws of, 542
Medical costs, 36
Mergers and acquisitions, asset values in, 680
Metal detectors, 218
Metric dimensions, 520
Metric tolerance, 520
Metrology, 524525
interpreting results of inspection and testing in,
530
purpose of inspection in, 528529
roles of, 525527
techniques and equipment, 527528
testing methods, 529530
Microeconomics, 662
Microinches, 531
Micrometers, 527
Microswitches, 219
Miles, L.D., 555
MIL-HDBK-727 method, 203
Milliken and Co., 143
Millimeters, 520
Minority interest, 667
Mirror image (accounting), 676
Mission statements, 160
Mistake proofing, 208–209
in avoiding workplace errors, 210
devices for, 216219
equation for success in, 218
inspection techniques in, 217
proactive system approach to, 216
in process FMEA, 274
reactive system approach to, 216
Mistakes, 213215
causes of, 213215
detecting, 216217
examples of, 213
human, 210212
preventing, 216217
signals that alert, 215
sources of, 212213
types of, 213215
Mistakes of misunderstanding, 210
Mitsubishi method, 200201
Models and modeling, 178
in engineering analysis, 266
in FEA (finite element analysis), 169
finite element, 180
redesigns of, 181182
as tool of quality cost, 485
Modified duPont system, 157, 704
Money. see funds
Monochrome monitors, 358
Monte Carlo method, 169
Moody's, 146, 696
Motion economy, principles of, 201
Motorola Inc., 1
benchmarking programs in, 109110, 157
six sigma quality programs, 101
Mounting mistakes, 214215
MSC/NASTRAN software, 180
MTBE (mean time between events), 354–355
MTBF (mean time between failures), 348
conversion to failure rate, 361
definition of, 363
in failure-truncated tests, 318
and inherent availability, 292293
machine history of, 349
as measure of product reliability, 290
and occurrence ratings, 282
in R&M (reliability and maintainability), 348,
355
in sequential tests, 321
in time-truncated tests, 319320
MTTF (mean time to failure), 363, 619
MTTR (mean time to repair), 348
definition of, 363
machine history of, 349
in R&M (reliability and maintainability),
292293, 355356
Musts and wants, 477
N
Nasser, Jacques, 297
National Electrical Manufacturing Association
(NEMA), 358
National Institute of Standards and Technology
(NIST), 526
National reference, 525
National standards, 525
National Technical Information Center, 147
Need sets, 53
Negative confirmation, in customer satisfaction,
49
Negative feedback, 31, 35
NEMA (National Electrical Manufacturing
Association), 358
Net assets, 695
Net income, 671
Net present value (NPV) method, 611, 709
Net profits, 459–460
New products, 710
Newsearch, 146
Newsletters, 145, 147
Newspapers, 145
Nine-level factors, 390
NIST (National Institute of Standards and
Technology), 526
Node absorption, 539
Noise factors, 393, 411, 721
Noises, 336337
Nominal dimension, 531
Nominal group process, 114, 132134
Nominal size, 522
Non-constraints, theory of, 463464
Noncontrollable costs, 703
Noncurrent assets, 667
Noncurrent liabilities, 667
Nondestructive testing, 530
Non-disclosure agreements, 17
Nonlinear analysis, 176
Nonlinear dynamic analysis, 177
Nonlinear static analysis, 177
Nonprofit organizations, 668
Nonrecurring expenses, 669
Non-rigid parts, 522
Non-statistical controls, 274
Normal density-like function, 647
Normal distribution, 320
in fixed-sample tests, 320
in sequential tests, 323
Normalizing constant, 308
Normal modes analysis, 177
Not invented here syndrome, 110
NPV (net present value) method, 611, 709
NTB (nominal-the-best), 413415
in loss function, 399
signal-to-noise (S/N) ratio for, 404405, 431,
439441
Nuclear radiation, 289
Numerical evaluation. see paired comparisons
O
Objective functions, 183
Object oriented analysis and design (OOAD),
515516
Observed frequency, 422
Occupational safety laws, 54
Occurrence rating, 249;
see also severity rating
in design FMEA, 249
and lowering risks, 267, 276
in machinery FMEA, 282
in process FMEA, 250
reducing, 253
Odd part out method, 219
OE (operating expense), 458, 460461
OEE (overall equipment effectiveness), 349, 356,
363
Office equipment, accounting of, 675
Omega method, 425
One Idea Club method, 124
One in, one out manufacturing process, 207
Ongoing program/project manager approach, 199
Online databases, 145
OOAD (object oriented analysis and design),
515516
Open systems, 35
Operating characteristic curve, 313
Operating expense (OE), 458, 460461
Operating hours, 525
Operating instructions, 73
Operating leverage, 682
Operating margin, 668
Operating profits, 484
Operational management, 601
Operational results, 23, 25
Operations mistakes, 214
Operations process benchmarking, 123
Operator errors, 526
Operator-paced free-transfer machines, 206
Operator to operator errors, 526
Opportunity cost, 195, 703
Optical measurements, 527
Optimal inventory cycle, 707
Optimization algorithms, 184185
Optimization loops, 184
Optimum design, 183
Organizational change, and benchmarking,
126129
Organizational suboptimization, 36
Organization expense, as noncurrent assets, 667
Orthogonal arrays, 383384, 386
OSHA (Occupational Safety and Health
Administration), 147
Outer arrays, 393
Outliers, 312
Out of pocket costs, 704
Output
in process flow diagrams, 61
of teams, 28
Overall equipment effectiveness (OEE), 349, 356,
363
Overarching customers, 54
Overhead costs, 36, 130
Oxidation, 294
P
Pace production line, 206, 208
Packaging, as cause of product failures, 291
Paired comparisons, 141, 586587
Paper pencil assembly, 60
Parallel reliability block diagrams, 323325
Parameter design, 371
in DFSS (design for six sigma), 715
in DOE (design of experiments), 441447
in improving reliability, 336337
Parameter Design approach, 3132
Parametric variations, 178
Pareto, Vilfredo, 585
Pareto analysis, 44, 132133, 484
Pareto voting, 585586
Partial derivatives, 649
Partnering, 911
buyer/supplier relationship in, 1112
checklists of, 2123
in DFSS (design for six sigma), 13
expanded, 1214, 2325
implementing, 1419
improving, 2021
principles of, 11
process managers, 1517
reevaluating, 17
success indicators, 21
typical questionnaire for, 18
Partnering for Total Quality assessment process, 21
Parts, 205
defective, 278
inclusion of wrong, 214
missing, 214
non-rigid, 522
in product design, 205
Parts/component feeding systems, 207
Part worths, 8994
PASS (production accelerated stress screen),
310312
PAT (profit after tax), 693–694
Patents, 147, 667
Path transmissions, 535, 538
Payback period method, 612, 708
PDGS-FAST system, 178
P diagrams, 299
in DFSS (design for six sigma), 715, 719
in FMEA (failure modes and effects analysis),
258260
and team systems, 26
PDS (product design specifications), 717
P/E (price/earnings) ratio, 698
Peacemakers, in team systems, 25
Penalty functions, 185
Penetrant dye tests, 530
Pension liabilities, 667
Perceived quality, 113
Perception, 113
Perfect products, 194196
Performance, 112
vs. importance, 131
index of, 558559
needs, 229
parameters, 292
product-based view, 113
quality of, 69, 70
reviews of, 19
Performance evaluation review technique (PERT),
604
Period costs, 704
Periodic actions, 549
Periodicals, 145
Perishable tooling, 363
Personal computers, 195
Personnel
in business assessments, 139
as measure of quality cost, 499
PERT (performance evaluation review technique),
604
PFIS (plant oor information system), 363
Philip Morris, 99
Phosphate-based liquid, 8994
Phosphate-free liquid, 8994
Phosphate-free powder, 8994
Physical assets, 681682
depreciation of, 684
inventories as, 682683
operating leverage, 682
PIMS (Profit Impact of Marketing Strategies), 157
in benchmarking, 119
objectives and benefits of, 312
par report, 130
Pin joint clearance, 181
Planning matrix, 725
Plans up form (forecasting), 710
Plant administration, 504
Plant and equipment, 136, 701
Plant engineering, as measure of quality cost,
494495
Plant floor information system (PFIS), 363
Plant reports, 480
Plasticity, 177, 181
Plug gages, 527
Plus-minus dimensioning, 523
Pneumatic gaging, 527
Point elements, 176
Poisson distribution, 316, 509510, 636640
Poisson process, 635636
Poka Yoke method, 208, 274, 275, 721
Portfolio analysis, 610
Positioning sensors, 218
Positive feedback, 35
Potential design verification tests, 258
Power supplies, 358
Practice gaps, 158
Predictive maintenance, 363
Pre-feasibility analysis, 38
Preference structure, 89
Preferred stocks, 669, 697
Preliminary design review, 466
Prentice Hall Almanac of Business and Industrial
Statistics, 157
Prepaid costs, 704
Pre-planning matrix, 65
Preservation of knowledge, 74
Pressure, 527
Preventers, 216217
Prevention costs, 482, 488
Preventive maintenance, 350, 363
Price/earnings (P/E) ratio, 698
Price-volume variance analysis, 707
Pricing, 119
and ROI (return on investment), 119
in theory of firm, 661
Primary reference standards, 525
Prime costs, 704
Principal, 685
Priorities, in FMEA (failure modes and effects
analysis), 230
Prioritization matrix, 139140
Proactive systems, in mistake proofing, 216
Probability density function, 618, 628
Probability distribution, 313, 636
Probability of configuration, 637–638
Probability of failure, 618620
Probability of reliability, 621
Probability paper, 485
Probability ratio sequential testing (PRST), 363
Probes, 219
Process average
shifts in, 12
short- vs. long-term standard variation in, 45
Process benchmarking, 122123
Process characteristics (CTP), 510
Process Control Methods (book), 6
Process controls, 249, 268, 276
Process customers, 229
Process engineers, 269
Processes, 363
costs of, 571572
dominance factors, 273
internal, 53
parameters of, 255
in partnering, 25
planning with project management, 607608
in project management, 601602
quality management in, 23
random vs. identiable causes in, 133
short- vs. long-term variation in, 45
and six sigma, 12
special characteristics for, 257
standard deviation, 45
static vs. dynamic, 12
Process facilitators, in trade-off studies, 473
Process flow diagrams, 61–64, 234, 259
Process FMEA, 224225;
see also FMEA (failure modes and effects
analysis)
calculating RPN (risk priority number) in, 275
describing failure causes in, 272
describing failure effects in, 271272
describing failure modes in, 270271
describing process functions in, 269270
detection table, 253
estimating detection of failure in, 274275
estimating frequency of occurrence of failure
in, 273
estimating severity of failures in, 273
failure modes, 242244
forming teams for, 269
identifying manufacturing process controls in,
273274
linkages to design FMEA and control plans,
258260
objectives of, 268
occurrence rating, 250
recommending corrective actions in,
275277
requirements for, 268269
severity rating, 247
special characteristics for, 257
timing, 268
Process functions, 269270
Process gaps, 158
Processing mistakes, 213-214
Processing omissions, 214
Process performance (Cpk), 2, 402–403
Process plans, 72, 201
Process quality, 23, 25
Process redesign, 511512
Procrustes (Greek mythology), 2930
Procurement, 10
Producers, 53
Producers' risk, 313
Product assurance, as measure of quality cost, 499
Product-based view, 113
Product characteristic deployment matrix, 72
Product control, as measure of quality cost,
499500
Product costs, 704
Product demand, and competition, 662
Product design and development, 194
basic vs. secondary processes in, 204
benefits of DFM/DFA (design for
manufacturability/assembly) on, 189
case studies, 195196
as cause of product failures, 291
and costs of engineering changes, 297298
crash program approach to, 195
and customer satisfaction, 113114
effects of DFM/DFA (design for
manufacturability/assembly) on, 204
factors affecting manufacturing process, 197
focus of, 205
forming and sizing operations in, 204
functions of, 264
fundamentals of, 204
map guide to, 197
as measure in TOC (theory of constraints),
462
minimum performance requirements in, 198
perfect product approach to, 196
primary process in, 204
and product life cycle, 297298
as product plan, 196198
QFD (quality function deployment) in, 7980,
8688
reducing cost of, 189
reducing risks in, 267
reducing time for, 158
reliability in, 296297
secondary process, 204
sequential approach to, 191
simultaneous approach to, 191
six sigma philosophy, 57
special characteristics for, 257
steps in, 295296
Taguchi's approach to, 371
TDP (technology deployment process) in,
298300
Product design specifications (PDS), 717
Product failures, 288, 290
Product flow diagrams, 56–61
Production, 364
costs of, 704
establishing conditions for, 725726
mass production, 726
as measure of quality cost, 500
requirements in, 8788
and team systems, 26
Production accelerated stress screen (PASS),
310312
Productivity, 459460
effects of customs and traditions on, 582
effects of habits on, 582
in theory of firm, 661
Product launching, 189
Product liability, 189, 491
Product life cycle, 133
and cost of engineering changes, 297298
as a factor in product design, 194
and failure rate, 293295
maturity period, 699700
and product design, 297298
stages of, 699700
Product plans, 194, 196198
Product quality, 112117
eight dimensions of, 112113
perception of, 117119
and return on investment, 119
Product quality deployment, 73
Product recall, 491
Product reliability. see reliability
Products, 364
characteristics of, 255
defects, 291
durability life, 289
environmental conditions profile,
289–290
expected customer life, 289
function diagrams for, 5661
functions of, 264
life cycles of, 133
minimum performance requirements,
198
with multiple characteristics, 23
nonconforming, 3
non-price reasons in buying, 114
reliability numbers, 290
reliability of, 288
and sales forecasting, 710
Professional associations, 145, 147
Profilometers, 527
Profitability ratios, 693–695
Profit after tax (PAT), 693–694
Profit and loss statements, 667–668
Profit before tax, 693
Profit/equity ratio, 708–709
Profit Impact of Marketing Strategies. see PIMS
Profit/investment ratio, 708–709
Profits, 570
analysis of, 704707
in annual reports, 671
and axiomatic design, 545
calculating, 668669
vs. cash, 678
in cash flow analysis, 701
direction of, 671
maximizing, 661662
as part of transactions, 672
planning, 710712
and productivity, 459460
rating, 695696
and ROI (return on investment), 459460
in theory of firm, 661–662
Program management, as measure of quality cost,
500
Progressive-stress testing, 306
Project decision analysis, 612613
Project management, 599601, 604
decision analysis, 612613
in DFSS (design for six sigma), 605609
generic seven-step approach to, 603605
goal setting in, 608
key integrative processes in, 602
processes in, 601602
and quality, 603
in six sigma, 605609
succeeding in, 613615
value in implementation process,
607608
Projects, 599601
completing, 605
describing, 603604
justification and prioritization of, 610–613
planning, 604
planning team for, 604
risk factors, 612613
scopes of, 565567
selecting, 597598
starting, 605
Proprietary information, in expanded partnering,
17
Prospectus, 146
Prototype programs, 296
Proximity detectors, 219
PRST (probability ratio sequential testing), 363
Publications, as measure of quality cost, 500
Public bids, 147
Pugh concept selection, 230231
in design FMEA, 267268
in DFSS (design for six sigma), 715
in process FMEA, 275276
Pulse echo tests, 530
Purchasing
conformance elements in, 505
as measure of quality cost, 501
nonconformance elements in, 505
non-price reasons in, 114
Purchasing agents, 54
Purchasing performance benchmarks, 157
Purchasing power, 54
Q
QAA (qualitative assembly analysis), 199
QFD (quality function deployment), 53, 7172
benefits of, 73–74
combining with Taylor's motion economy,
200201
denition of, 73
in design FMEA, 267
development of, 87
in DFM/DFA (design for
manufacturability/assembly), 199
in DFSS (design for six sigma), 715, 717–718
function concepts in, 6468
intangible benefits of, 727
issues with, 7576
key documents in, 7273
methodology, 8084
and planning, 8486
in prioritizing benchmarking alternatives,
140141
process management in, 727730
process overview, 76
in product development process, 7980, 8688
project plan, 7679
stages of, 725726
summary value, 727
tangible benefits of, 727
terms associated with, 73
total development process in, 75
QOS (quality operating systems), 345–346
QS-9000 certification program, 42, 345
Qualitative assembly analysis (QAA), 199
Quality, 112117
alternative definitions of, 112–113
basic, 6869
costs of. see quality costs
customer-driven, 107
definition of, 84–85
excitement, 6971
improving with quality cost, 492
manufacturing-based view of, 114
as measure in TOC (theory of constraints), 462
and operational results, 23, 25
perceived, 113
perception of, 117119
performance, 69
planning, 22, 24, 41, 102103
product-based view of, 113
and product reliability, 291295
and project management, 603
qualitative tool for measuring, 44
and return on investment, 119
and ROI (return on investment), 119
tables, 73
transcendent view of, 113
user-based view of, 114
value-based view of, 114
Quality characteristics (CTQ), 510, 719
Quality control, 206
charts, 72, 478
conformance elements in, 507508
as measure of quality cost, 501
nonconformance elements in, 508
in SQM (strategic quality management), 103
system, 478
Quality costs, 477478
analyzing, 484485
categories of, 481482
components of, 481483
concepts of, 480481
conformance elements in, 502509
data sources for, 487
and DFSS (design for six sigma), 509510
improving quality with, 492
inputs, 481–482
inspecting, 487
laws of, 485
measuring, 483484
nonconformance elements, 502509
non-manufacturing measurements for,
492502
optimizing, 483
outputs from, 482
presentation formats for, 485
product control as measure of, 499500
quantifying, 482483
tools of, 484
typical monthly report, 486
Quality defects, 291
Quality engineering
in DFSS (design for six sigma), 25–33
in measuring methods, 483484
Parameter Design approach in, 3132
Quality engineers, 269
Quality failures, costs of, 101
Quality function deployment. see QFD
Quality functions, 73
Quality operating systems (QOS), 345346
Quality ratings, 487
Quality Systems Requirements, Tooling &
Equipment (book), 345
Quantitative costs, 572–573
Quantum leap parallel programs, 196
Questionnaires, for evaluating partnering process,
1719
R
R&D (research and development), 137, 501
R&M (reliability and maintainability), 345
building and installing, 352353
concepts, 349350
bookshelf data stage of, 349
manufacturing process selection stage in,
350
preventive maintenance needs analysis
stage in, 350
conversion/decommission of, 353
Department of Defense standards, 337342
developing and designing, 350352
and DFSS (design for six sigma), 364365
implementing, 346347
key definitions in, 362–364
objectives of, 346
operations and support of, 353
phases in, 346
plans, 364
sequence and timing of, 348349
targets, 364
tools and measures, 347348, 354361
R/1000, 290
Radius, 522
Radius gages, 527
Random variable approach, 632
Random variables
constant raised to power of, 653
division of, 651652
exponential of, 652653
functions of, 651
logarithm of, 653654
powers of, 652
in systems failure analysis, 632
Taylor series of, 650
Random variations, 133
Random walk theory, 688
Ranking teams, 473474, 477
Rate of change of failure (ROCOF), 294295
Rate of growth, 663
Rate of occurrence, 425426
Ratings, for partnering process, 1719
Rating services, 147, 695
Ratio analysis, 688691
coverage ratios, 692
earning ratios, 693695
leverage ratios, 692
liquidity ratios, 691692
return ratios, 693695
Raw materials, 152153, 487
Rayleigh distribution, 641
RCA (root cause analysis), 255, 364
R control charts, 274
Reactive systems, in mistake proofing, 216
Recall system, 526
Receivables, 681
Reciprocating tube hopper feeders, 207
Redesign, 511512
Reengineering, 511
conference method, 513515
and DFSS (design for six sigma), 516517
OOAD (object oriented analysis and design),
515516
process redesign in, 511512
restructuring approach, 512513
Reference dimension, 522
Regression analysis, 711, 718
Regulatory requirements review, 259260
Relationship matrix, 76, 82, 201
Relays, 358
Reliability, 287
block diagrams, 323325
costs of unreliability, 296297
and customer satisfaction, 292293
definition of, 364
in design, 296297
design, 313
as a dimension of product quality, 112
DOE (design of experiments) in applications,
335336
environmental conditions profile, 289–290
of equipment, 292
exponential distribution in, 618
gamma distribution in, 627
growth, 364
growth plots, 361
and hazard functions, 634635
improving through parameter design, 336337
indicators of, 290
and maintainability. see R&M (reliability and
maintainability)
parallel, 323325
probability of, 287288, 621
and quality, 291295
reliability numbers, 290
series, 323325
specified, 312
specified conditions, 289
specified time period of, 288–289
system, 324
in TDP (technology deployment process),
298300
visions in, 323
Reliability Analysis Center, 339340
Reliability defects, 291
Reliability demonstration tests, 312313
attributes tests, 313314
fixed-sample tests, 314–316
operating characteristic curve, 313
sequential tests, 314, 317318, 321322
variables tests, 314, 318320
Reliability function, 632633
Reliability numbers, 290
Reliability point, 354
Reliability relationships, 632
Reliability standards, 338
Reliability tests, 300
accelerated testing, 305
objectives of, 301302
planning, 301
sudden-death testing, 302304
Rents, 668
Repair
active repair time, 293
as internal failure cost, 490
planning, 41, 238
Replacement cost, 704
in calculating depreciation, 687
vs. current value, 680
Requirement analysis, 38
Research and development (R&D), 137, 501
Research centers, 145
Resource requirements, 151
Result gaps, 158
Result goals, 159
Retail Scan Data, 146
Retained earnings, 670
Return on assets. see ROA
Return on assets managed (ROAM), 695
Return on equity. see ROE
Return on gross assets (ROGA), 695
Return on investment. see ROI
Return on net assets (RONA), 695
Return on net capital employed (RONCE), 694
Return on sales (ROS), 694
Return ratios, 693695
Revenues, 668
in annual reports, 671
and costs, 711
in price-volume analysis, 707
rating, 696
Reverse engineering, 148
Revised RPN (risk priority number), 283284
Revolving credit, 666
Revolving hook hopper feeders, 207
Rewards, 107
Rework, 41
Rigid links, 176
Rise time, 618
Risk priority number. see RPN
Risks, 696
and axiomatic design, 546
calculating, 251253
consumer's risk, 313
and earnings, 693
in manufacturing, 195
in product development, 195
in projects, 612613
rating, 696
reducing, 267, 275276
ROA (return on assets), 694
as measure in TOC (theory of constraints), 462
in modified duPont formula, 704–705
in project management, 610
ROAM (return on assets managed), 695
Robert Morris Associates Annual Statement
Summary, 157
Robust designs, 543
Robust teams. see teams
ROCOF (rate of change of failure), 294295
ROE (return on equity), 694
calculating, 708709
in modified duPont formula, 704–705
ROGA (return on gross assets), 695
ROI (return on investment), 693694
average rate of return, 708709
as measure in TOC (theory of constraints), 462
and net profits, 459–460
payback period method, 708
and pricing, 119
and productivity, 459460
in project management, 610611
and quality, 119
Roll forming, 204
Roman Catholic Church, 672
Rome Air Development Center (RADC), 469470
Rome Laboratory, 340
RONA (return on net assets), 695
RONCE (return on net capital employed), 694
Roof crush, 177
Roof matrix, 201
Root cause analysis (RCA), 255, 364
ROS (return on sales), 694
Rotary centerboard hopper feeders, 207
Rotary disk feeders, 207
Royalties, 668
RPN (risk priority number), 251253
calculating, 267, 275
and characteristic/root causes of failure, 276
in machinery FMEA, 282283
revised, 283284
Rust inhibitors/undercoatings, 289
Ryan Airlines, 554
S
Saab, 296297
Sabotage, 212
SAE J1739 standard, 245
Safety margins, 359360
Safety regulations, 54
Sales
in annual reports, 671
in balance sheets, 668
in benchmarking, 123
in breakeven analysis, 704705
in business assessments, 135
as cause of product failures, 291
costs of, 703
factors affecting, 710
in financial comparison, 131
forecasting, 710
maximizing, 662
as measure in TOC (theory of constraints), 462
promoting, 157
recording, 675
return on sales (ROS), 694
statistical forecasts of, 711
in surveys, 118
in theory of firm, 662
trend in, 487
Sales goals form (forecasting), 710
Salt spray, 289
Salvage value, 687
Sample data approach, 632
Sample difference, 656
Sample space, 622623, 632
Sampling, 170, 528
SAVE (Society of American Value Engineers), 556,
593
Savings potential, vs. time, 560
Scale parameter, 625, 641
Scales, 527
Scandinavian Airlines (SAS), benchmarking in,
125126
Scatter plots, 484
Scheduling, in project management, 603, 604
Schools and universities, 148
Scraps, 41, 278, 490
Screening methods, 585591
Seat belts, 177
Seating arrangements, in meetings, 33
Secondary functions, 557, 575
Secondary manufacturing process, 206
Securities, 671, 681
Security
and business management, 663
as measure of quality cost, 501
Segmentation, 102, 549
Self loops, 538539
Seminars, 147
Senior management
as executive customer/supplier partner, 14
in expanded partnering, 2324
Sensitivity analysis, 475476
Sensors, 216219
Sequence resistors, 219
Sequential tests, 314
for binomial distribution, 317318
graphical solutions, 318
using exponential distribution, 321323
using Weibull and normal distributions in, 323
Sequential unconstrained minimization technique
(SUMT), 185
Series reliability block diagrams, 323–325
Serviceability, 112, 293
Service FMEA, 225
Services
in business assessments, 136
and customer satisfaction, 4951
data on, 282
delivery of, 4951
hot lines for, 118
and non-price reasons in buying, 114116
Servo transformers, 358
Sets, 184, 624625
Setup mistakes, 214
Severity rating, 245247;
see also occurrence rating
components of, 279
in design FMEA, 246
estimating, 265
and lowering risks, 267, 276
in process FMEA, 247
reducing, 253
Shape parameter, 625, 641, 643
Shareholder's equity, 667, 670671
Shareholders, 663
Shewhart cycle, 111–112
Shingo method, 208
Shipbuilding industry, 200
Shipping, as cause of product failures, 291
Shock, 289
Shock spectra, 179
Shoguns, in dealing with projects, 661
Short-term process variation, 45
Should-cost/total-cost models, 17
Signal factors, 393, 431432
Signal flow graphs, 535–536
basic operations on, 538
effects of self loops on, 538539
node absorption in, 539
rules of definitions of, 538
Signals, 27
Signal-to-noise (S/N) ratio, 393
calculating, 404
and loss function, 403405
for LTB (larger-the-better) situations, 413
for NTB (nominal-the-best) situations,
413415, 431, 439441
for STB (smaller-the-better) situations, 412
in Taguchi approach, 370
Significant factors, effect of, 424
Simulated sampling, 170175
Simulation, 169170
in DFM/DFA (design for
manufacturability/assembly), 199
and DFSS (design for six sigma), 185, 715
in sampling, 170175
software for, 169170
statistical modeling in, 485
in system and design controls, 266
as tool of quality cost, 485
Simultaneous engineering, 199, 364
Sine plates, 527
Single station manufacturing, 207
Single-entry bookkeeping, 672
Site inspections, 148
Six sigma, 1;
see also DFSS (design for six sigma)
and benchmarking, 105107
equation for, 4
in external manufacturing, 67
in internal manufacturing, 56
philosophy, 1, 57
and product design, 57
and project management, 608
short- vs. long- term process variation, 45
Six Sigma Mechanical Design Tolerancing (book),
5
Skills development, in partnering, 21
Slow assets, 666, 670
Slowness mistakes, 211
SMEs (subject matter experts), 78
Smith, Adam, 661662
Social events, 147
Society of American Value Engineers (SAVE), 556,
593
Software, 6, 504505
Software FMEA, 225
Solid elements, 176
Solid mechanics, 177
Solutions, in partnering, 19
Solver (Excel), 182
Source inspection, 217
Spare parts use growth curves, 485
Special interest books, 145
Special purpose elements, 176
Specified dimensions, 523
Specified reliability (Rs), 312, 318
Spherical radius, 522
Spline gages, 527
Spoilage, 36
Spot weld forces, 177
Springs, 176
SQM (strategic quality management), 102105
Squared deviations, 9193
SS (sum of squares), 415416
SSO (strategic standardization organization), 77
Stainless steel, 531
Standard's optimal inventory cycle, 707
Standard and Poor's, 145, 696697
Standard cost, 478, 704
Standard deviation, 45, 9193
Standard normal distribution, 647
Standards, 45
hierarchy of, 525
lack of, 211
Startup costs, 74
Startup losses, 278
State corporate filings, 147
Statement of changes, 669
Statement of condition. see balance sheets
Statement of financial position. see balance sheets
State variables, 183185
Static process, vs. dynamic process, 12
Stationary hook hopper feeders, 207
Statistical analysis, 699, 711
Statistical modeling, 485
Statistical process control, 133
in monitoring team performance, 33
in process FMEA, 274
Statistical tolerancing, 721
Statistics for Experiments (book), 397
STB (smaller-the-better), 401, 412
Steel industry, 200
Steering wheels, 33
Step-stress testing, 306
Stockholders, 667
Stock markets, 688
Stocks, 670, 696698
Stock size, 522
Stoppage, 278
Stoppers, 219
Storage, as cause of product failures, 291
Straight line depreciation, 685686
Strain energy distribution, 177
Strain gages, 181
Strategic goals, 19
Strategic Planning Institute, 119
Strategic quality management (SQM), 102–105
Strategic quality planning, 22, 24
Strategic standardization organization (SSO), 77
Stratification charts, 484
Stress, 176
Stress contours, 177
Structural pressure, 128
Sub-customers, 54
Subject matter experts (SMEs), 78
Suboptimization, 3536
Subsidiaries, 667
Substructuring, 179
Subsystem view, 238
Success factors in business, 129130
Success testing, 316317
Sudden-death testing, 302304
Sumerian farmers, 672
Sum of squares (SS), 415416
Sum-of-the-years' digits (SYD), 686
SUMT (sequential unconstrained minimization
technique), 185
Sunk cost, 704
Supermarkets, 116117
Supervision, as measure of quality cost, 501502
Suppliers, 10
and benchmarking, 104
councils/teams for, 15
evaluating and selecting, 14
involvement in partnering, 15
partnering managers for, 1415
in process FMEA, 269
roles in customer/supplier relationship, 13
Supply, factors affecting, 129
Supporting functions, 264
Surface elements, 176
Surface plates, 527
Surplus in capital, 670
Surprise mistakes, 211
Surrogate machinery FMEA, 279, 282
Surveillance equipment, 148
Surveys, 118119
Survival function, 642
SYD (sum-of-the-years' digits), 686
System/concept FMEA, 262
System controls, 265266
System customers, 229
System design, 371
System failure, 627629
System feedback, 31
System FMEA, 224225, 262
System initial design review, 466
System/part FMEA, 238
System reliability, 324
System requirements review, 465466
Systems, 34–35
definition of, 34
in management, 35–37
Systems engineering, 34
definition of, 37
design synthesis in, 38–39
pre-feasibility analysis in, 38
requirement analysis in, 38
trade-off analysis in, 39
verification in, 39–40
System view, 238
T
TABInputs, 602
TABOutputs, 602
TABTools and techniques, 602
Tactical goals, 19
Taguchi, G., 481
Taguchi model, 259, 266
vs. axiomatic design, 543
in determining causes of failures, 249
in DFSS (design for six sigma), 716
in DOE (design of experiments), 370372
loss function in, 397398
in product design, 371
in QFD (quality function deployment), 725
signals in, 26
Tandy Computer, 195
Tap sensors, 218
Task benchmarking, 122
Task functions, 5859, 239, 264
Tasks, estimating, 604
Tax deductions, 685
Tax shelters, 702
Taylor's motion economy, 200, 203
Taylor series, 644649
partial derivatives, 649
of random variable functions, 650
in two-dimensions, 649650
variance and covariance, 650651
TDP (technology deployment process), 298300
Team champions, in trade-off studies, 476
Teams, 2627
aggressors in, 25
blockers in, 25
boundaries in, 29
conformance in, 2930
cross-functional, 472
dealing with variations in, 3031
in DFSS (design for six sigma), 2526
environment, 28
external variations, 28
feedback to, 31
help-seekers in, 25
input, 27
internal variations, 29
minimizing effects of variations on, 3132
monitoring performance of, 33
non-systems approaches to, 26
output/response, 28
peacemakers in, 25
ranking, 473474, 477
signals, 27
system interrelationships in, 33
system structure of, 2728
in trade-off studies, 472473
Technical abstracts, 145
Technical axis, 79
Technical buyers, 117
Technical system expectations (TSE), 78
Technical targets, 78
Technology deployment process (TDP), 298300
Technology forecasting, 156157
Telecommunications industry, 6
Temperature, 29, 289, 526
Templates, 219
Terminus functions, 6667
Test arrays, 387389
Testing
interpreting results of, 530
methods of, 529530
reasons for, 529
Testing firms, 148
Test procedure errors, 526
Textbooks, 145
Text databases, 146
TGR/TGW (things gone right/wrong), 349, 364
Theory of constraints. see TOC
Theory of firm, 661–662
Theory of non-constraints, 463464
Thermal analysis, 357359
Thermal conductivity, 358
Thermal expansion coefficient, 180
Thermal rise, 358
Thermal stresses, 177
Thermodynamics, 542
Things gone right/wrong (TGR/TGW),
349, 364
Thin plates, 176
Thomas Register, 146
Three-level factors, 385, 391, 392
Three-parameter Weibull distribution, 643
Throughput, 458
vs. costs, 461
obstacles to, 461463
in TOC (theory of constraints), 463
Time, 525
Time interest earned ratios, 692
Time to total system failure, 627628
Time-truncated tests, 319320
TOC (theory of constraints), 457
five-step framework of, 464–465
foundation elements of, 463
goals of, 457458
measurement focus in, 460461
strategic measures, 458
vs. theory of non-constraints, 463464
throughput vs. cost world in, 461
Tolerances, 523
bilateral, 523
cost of reducing, 448
in DOE (design of experiments), 371, 447454
impact of tightening, 449
Tolerance stack studies, 266
Tolerancing, 518522
conventional, 518
in DFSS (design for six sigma), 716
geometric, 518
and six sigma, 1
statistical, 721
Tooling, 364
Tooling engineers, 269
Tools and equipment
design of, 200
wrong and inadequate, 214
Toothpaste industry, 102
Torque, 527
Total cost, 570
Total development process, 75
Total productive maintenance (TPM), 362–363
Total quality management (TQM), 102105
TPM (total productive maintenance), 362363
TQM (total quality management), 102105
Traceability, 39
Tractors, 206
Trade and Industry Index, 146
Trade associations, 145
Trade journals, 145
Trade-off studies, 470471
checklist of, 476
conducting, 471475
hypothetical example of, 89
matrix, 477
ranking methods in, 473474
selection process in, 474
sensitivity analysis in, 475
standardized documentation in, 474
in systems engineering, 39
weighting rule, 475
Trade shows, 147
Traditional engineering, 468
Training, 110
Transactions, recording, 672, 675
Transcendent view, 113
Transfer functions, 5152, 719
Transformations, 5253, 396, 717718
Trend charting, 133
Trial balance (accounting), 676
Triggering events, 150
Trimetrons, 218
TRIZ theory, 230
in design FMEA, 267268
in DFSS (design for six sigma), 715
foundation of, 548
and innovation, 548
and levels of innovations, 549
principles associated with, 550
in process FMEA, 275276
tools, 549
Tryout period, of product life cycle,
699700
TSE (technical system expectations), 78
Tumbling barrel hopper feeders, 207
Tungsten carbide, 531
Two-level factors, 390391
Two-station assembly lines, 170175
U
U.S. Army, 203
U.S. Navy, 555
Ultrasonic tests, 530
U-MASS method, 202203
Uncoupled matrix, 657
Unemployment insurance, 702
Unequal bilateral tolerance, 523
Uniform Commercial Code filings, 147
Unilateral tolerance, 523
United Technologies, 54
University of Massachusetts, 202203
Unreliability, cost of, 294
Use value, 558
Useful life period, 293294
User-based view, 114
User groups, 147
V
Vacation pay, 702
Vacuum, 527
Valuation methods, 679681
current value, 680
historical cost, 680681
intrinsic value, 680
investment value, 680
liquidation value, 680
psychic value, 680
replacement cost, 680
Value, 557558
and historical costs, 678
in quality, 114
types of, 558
Value analysis, 555
in DFM/DFA (design for
manufacturability/assembly), 199
in DFSS (design for six sigma), 715
function concepts in, 6468
and transformational activities, 53
Value-based view, 114
Value chains, 54
Value concept, 556
Value control, 553555
developing alternatives in, 558559
function analysis in, 573
functions in, 557
history of, 555
implementing, 559
job plans, 559562
managing, 560
planned approach to, 556
techniques, 562563
Value engineering, 555
attitudes in, 596
definition of, 556
developing plans in, 592593
in DFM/DFA (design for
manufacturability/assembly),
199
evaluating, 593
goals in, 592
in lowering costs, 581
project selection in, 597598
selection methods in, 586
setting up organization in, 594595
understanding principles of, 593594
value council in, 596597
Value Line Investment Survey, 697
Values
and change management, 127
in goal setting, 160
Variable burden costs, 569
Variable costs, 704705
Variables, 183185
Variables tests, 314, 318320
Variance, 480, 650651
Variance of deflection, 654–655
Variations
compensating for, 3031
controlling, 30
external, 28
external variations, 28
internal, 29
minimizing effects of, 3132
random, 133
system feedback, 31
Velcro, 101
Vendors, 10
Verification, in systems engineering, 39–40
Vertical integration, 10
Vibrations, 177, 289
Vibration sensors, 218
Vibratory bowl feeders, 207
Visual inspection, 529
VOC (voice of customer), 73, 83, 201
Voice mail, in customer/supplier communications,
13
Volkswagen, 169
Volvo, 296297
W
Wal-Mart Corp., 117
Warehouse operations, 153154, 157
Warranties, 289
costs of, 101, 294, 297
data, 279
as external failure cost, 491
as measure in TOC (theory of constraints),
462
periods, 289
reducing, 74
Wealth of Nations (book), 661662
Wear out period, 294
Weather, 289
Web sites, 13, 111112
Weibull distribution, 640643
in fixed-sample tests, 320
in plotting and analyzing failure data,
323333
in sequential tests, 323
three-parameter, 643
using, 334335
Weibull failure distribution, 642643
Weibull hazard rate function, 643
Weibull probability density function, 640
Weibull reliability function, 642
Weibull scale parameter, 307
Weibull shape parameter, 307
Weight, 527
Weighted average method, 684
Weightings, 474475, 477
Welding point indicators, 218
Westinghouse Electric Co., 203
Where to Find Business Information (book), 145
Willful mistakes, 211
Work
breakdown structure, 604
defining in projects, 604
improving efficiency in, 199–200
stoppage of, 278
Working capital, 661
and cash flow, 702
format, 666
net changes in, 670
Working standards, 525
Work place, 200, 210
Writing, earliest evidence of, 672
X
X-bar charts, 274
Xerox, benchmarking programs in, 97, 108109,
122, 124, 143
X-moving range charts, 274
Y
Yearbooks, 145
Yellow pages, 145
Young's modulus, 180
Z
Zero-based budgeting,
712
Zero defects, 483
Zero-growth budgeting,
712
Z score, 699, 721
Z traps, 159