
LEAN SIX SIGMA | GREEN BELT

BOOK OF KNOWLEDGE
THIRD EDITION

Table of Contents

Part I: Introduction to Lean Six Sigma.............................. 1

Chapter 1: Evolution of Lean Six Sigma (LSS)................................................................... 3
1.1 Industrial Quality in the 18th and 19th Centuries...........................................................................3
1.2 Industrial Quality in the 20th Century.................................................................................................4
1.3 Early 20th Century Quality Pioneers.....................................................................................................4
1.3.1 Walter A. Shewhart (1891–1967)............................................................................................... 4
1.3.2 Henry Ford (1863–1947)............................................................................................................... 5
1.3.3 Frederick Winslow Taylor (1856–1915).................................................................................... 6
1.4 Americans Taking Methods to Japan...................................................................................................6
1.4.1 W. Edwards Deming (1900–1993)............................................................................................. 6
1.4.2 Joseph M. Juran (1904–2008)..................................................................................................... 8
1.5 Quality Revolution in Japan................................................................................................................. 11
1.5.1 Kaoru Ishikawa (1915–1989).....................................................................................................11
1.5.2 Genichi Taguchi (1924–2012)...................................................................................................12
1.5.3 Shigeo Shingo (1909–1990)......................................................................................................13
1.5.4 Taiichi Ohno (1912–1990)...........................................................................................................14
1.5.5 Eiji Toyoda (1913–2013)..............................................................................................................14
1.6 Moving Towards Total Quality ............................................................................................................ 15
1.6.1 Philip B. Crosby (1926–2001).....................................................................................................15
1.6.2 James P. Womack and Daniel T. Jones....................................................................................16
1.6.3 Armand V. Feigenbaum (1922–2014)....................................................................................17
1.6.4 Malcolm Baldrige (1922–1987)................................................................................................18
Chapter 2: Integration of Lean and Six Sigma................................................................ 19
2.1 Six Sigma Methodology......................................................................................................................... 19
2.1.1 The Six Sigma Culture..................................................................................................................22
2.1.2 Define-Measure-Analyze-Improve-Control (DMAIC).......................................................22
2.1.3 Design for Six Sigma (DFSS)......................................................................................................23
2.2 Lean Methodology.................................................................................................................................. 25
2.2.1 Toyota Production System.........................................................................................................26
2.2.2 Lean Thinking.................................................................................................................................27
2.2.3 Muda..................................................................................................................................................28
2.2.4 Transitioning to Lean...................................................................................................................31
2.3 Comparison of the Methodologies.................................................................................................... 32
2.4 Lean Six Sigma (LSS)................................................................................................................................ 32
Chapter 3: Value of Lean Six Sigma (LSS)........................................................................ 35
3.1 Creating and Delivering Value............................................................................................................. 35
3.1.1 Defining Value................................................................................................................................36
3.1.2 Value-Added vs. Non-Value-Added Activities.....................................................................36

3.1.3 Tools to Specify Value..................................................................................................37
3.2 Advantages of Lean Six Sigma (LSS).................................................................................................. 37
3.3 Application across Various Industries............................................................................................... 38
3.4 Real-Life Success Stories........................................................................................................................ 40
Chapter 4: Lean Six Sigma (LSS) and Organizational Goals.......................................... 43
4.1 Organizational Strategic Goals and Lean Six Sigma (LSS) Projects........................................ 43
4.1.1 Processes and Systems Thinking.............................................................................................44
4.1.2 Avoiding Project Failure..............................................................................................................45
4.1.3 Transfer Function of y=f(x).........................................................................................................46
4.2 Organizational Drivers............................................................................................................................ 47
4.3 Organizational Metrics........................................................................................................................... 48
4.3.1 Developing Performance Metrics...........................................................................................49
4.3.2 Balanced Scorecard......................................................................................................................50

Part II: Project Management Basics................................. 53


Chapter 5: Seven Quality Control (7QC) Tools................................................................ 55
5.1 Check Sheets.............................................................................................................................................. 56
5.2 Pareto Charts.............................................................................................................................................. 57
5.3 Histograms.................................................................................................................................................. 58
5.4 Scatter Diagrams...................................................................................................................................... 59
5.5 Flow Charts................................................................................................................................................. 59
5.6 Control Charts........................................................................................................................................... 60
5.7 Cause and Effect Diagrams................................................................................................................... 61
Chapter 6: Seven Management and Planning Tools...................................................... 63
6.1 Affinity Diagrams...................................................................................................................................... 64
6.2 Tree Diagrams............................................................................................................................................ 65
6.3 Interrelationship Digraphs.................................................................................................................... 66
6.4 Matrix Diagrams........................................................................................................................................ 69
6.5 Prioritization Matrices............................................................................................................................. 70
6.6 Process Decision Program Charts (PDPC)........................................................................................ 71
6.7 Activity Network Diagrams................................................................................................................... 72
Chapter 7: Project Tracking............................................................................................. 77
7.1 Planning and Completing Project Work........................................................................................... 77
7.2 Project Planning and Monitoring Tools............................................................................................ 78
7.2.1 Gantt Charts....................................................................................................................................78
7.2.2 Milestone Schedule......................................................................................................................78
7.2.3 Deliverables Schedule.................................................................................................................80
7.2.4 The Critical Path Method (CPM)...............................................................................................81
7.2.5 PERT Charts.....................................................................................................................................82
Chapter 8: Project Teams................................................................................................. 85
8.1 Leading Project Teams............................................................................................................................ 85

8.2 Stages of Team Development.............................................................................................. 86
8.2.1 Forming............................................................................................................................................86
8.2.2 Storming...........................................................................................................................................86
8.2.3 Norming...........................................................................................................................................87

8.2.4 Performing.......................................................................................................................................87
8.2.5 Adjourning......................................................................................................................................87
8.3 Rewards and Recognition..................................................................................................................... 87
8.4 Resolving Negative Team Dynamics................................................................................................. 88
8.5 Team Roles and Responsibilities......................................................................................................... 90
8.5.1 Lean Six Sigma (LSS) Roles and Responsibilities................................................................90
8.5.2 General Team Roles and Responsibilities ............................................................................91
8.6 Team Tools and Techniques.................................................................................................................. 92
8.6.1 Brainstorming.................................................................................................................................92
8.6.2 Nominal Group Technique.........................................................................................................93
8.6.3 Multi‑Voting....................................................................................................................................94
Chapter 9: Project Communication................................................................................. 97
9.1 Building Effective Team Communications...................................................................................... 97
9.2 Communication Tools and Techniques............................................................................................ 98
9.2.1 Active Listening.............................................................................................................................98
9.2.2 Speaking Clearly and Purposefully ........................................................................................99
9.2.3 Developing Effective Team Communication Skills ..........................................................99
9.2.4 The A3 One-Page Report......................................................................................................... 100
9.2.5 Communications Plan.............................................................................................................. 102
9.3 Project Documentation........................................................................................................................103
9.3.1 Project Reports............................................................................................................................ 104
9.3.2 Project Records Management............................................................................................... 107
9.4 Project Presentations............................................................................................................................108
9.4.1 Creating and Designing Project Presentations............................................................... 108

Part III: Define Phase of DMAIC..................................... 111


Chapter 10: Voice of the Customer (VOC)..................................................................... 113
10.1 Identifying Your Customer................................................................................................................114
10.2 Collecting Customer Data.................................................................................................................114
10.2.2 Sorting and Grouping Customer Data............................................................................. 116
10.3 Identifying Customer Needs and Requirements......................................................................117
10.3.1 Kano Model................................................................................................................................ 117
10.4 Developing CTx Measures................................................................................................................118
10.4.1 Critical to Quality (CTQ) Metrics......................................................................................... 118
10.4.2 Critical to Schedule (CTS) Metrics...................................................................................... 120
10.4.3 Critical to Cost (CTC) Metrics............................................................................................... 121
10.4.4 Refining Requirements.......................................................................................................... 123
10.5 Linking Customer Requirements to Business Objectives......................................................124

10.5.1 Operational Definitions......................................................................................... 124
10.5.2 Quality Function Deployment............................................................................................ 125
Chapter 11: Identifying and Selecting a Project.......................................................... 129
11.1 Identifying a Project............................................................................................................................129
11.2 Identifying Process Owners and Project Stakeholders..........................................130
11.2.1 Stakeholder Analysis.............................................................................................................. 130
11.3 Project Selection Process..................................................................................................................131
11.3.1 Using a Prioritization Matrix................................................................................................ 132
11.3.2 Tiered Approach....................................................................................................................... 132
11.4 Benchmarking.......................................................................................................................................133
Chapter 12: Defining and Documenting the Process.................................................. 135
12.1 Top-Level Process Definition............................................................................................................135
12.2 Process Inputs and Outputs.............................................................................................................136
12.3 SIPOC Diagram......................................................................................................................................136
12.4 Process Mapping..................................................................................................................................137
12.4.1 Steps for Creating a Process Map....................................................................................... 138
12.5 Spaghetti Diagram..............................................................................................................................139
Chapter 13: Project Charter........................................................................................... 141
13.1 Business Case........................................................................................................................................143
13.2 Problem and Opportunity Statements........................................................................................143
13.3 Project Goals and Objectives...........................................................................................................143
13.4 Project Scope, Constraints, and Assumptions...........................................................................144
13.4.1 Scope............................................................................................................................................ 144
13.4.2 Constraints................................................................................................................................. 145
13.4.3 Assumptions.............................................................................................................................. 145
13.5 Expected Benefits................................................................................................................................145
13.6 Project Resources.................................................................................................................................146
13.7 Baseline Measures and Results.......................................................................................................146
13.7.1 Measuring a Process....................................................................................................................... 147
13.8 Preliminary Project Plan....................................................................................................................149
13.8.1 Deliverables vs. Activities...................................................................................................... 149
13.8.2 Final and Interim Deliverables............................................................................................ 149

Part IV: Lean Manufacturing and Lean Office............... 151


Chapter 14: Value Stream Mapping.............................................................................. 153
14.1 Comparing VSM and Process Maps...............................................................................................153
14.2 Current-State VSM...............................................................................................................................154
14.3 Procedure for Drawing a Current State VSM..............................................................................155
Chapter 15: Lean Methods and Tools............................................................................ 159
15.1 5S (Sort, Set, Shine, Standardize, and Sustain)............................................................... 160
15.1.1 5S Work Instruction................................................................................................................. 160

15.2 Constraint Management...................................................................................................161
15.2.1 Drum-Buffer-Rope................................................................................................................... 161
15.2.2 Constraint Improvement...................................................................................................... 162
15.3 Continuous Flow..................................................................................................................................162

15.4 Cycle Time Reduction.........................................................................................................................162
15.4.1 Examples of Cycle Time Reduction................................................................................... 162
15.5 Kanban.....................................................................................................................................................163
15.5.1 Two-bin System........................................................................................................................ 163
15.5.2 Other Kanban Examples........................................................................................................ 163
15.6 Level Loading (Heijunka)..................................................................................................................163
15.7 Lot Size Reduction...............................................................................................................................164
15.7.1 Example of Small Lot Size............................................................................................................. 164
15.8 Mistake-proofing.................................................................................................................................164
15.8.1 Mistake-proofing Principles................................................................................................. 164
15.8.2 Mistake-proofing Example................................................................................................... 165
15.9 Plant Layout...........................................................................................................................................166
15.10 Point of Use Storage (POUS)..........................................................................................................166
15.11 Pull Systems.........................................................................................................................................166
15.12 Quality at the Source........................................................................................................................166
15.13 Quick Changeover............................................................................................................................166
15.14 Standard Work....................................................................................................................................167
15.15 Total Productive Maintenance (TPM).........................................................................................167
15.15.1 TPM Subgroups...................................................................................................................... 168
15.15.2 Overall Equipment Effectiveness (OEE) ........................................................................ 168
15.15.3 OEE Example........................................................................................................................... 168
15.16 Visual Factory......................................................................................................................................169
Chapter 16: Value Stream Analysis............................................................................... 171
16.1 The Eight Wastes in the Value Stream...........................................................................................171
16.2 Lean Improvement Methods and Tools to Reduce Waste and Increase Flow................172
16.3 Current State Value Stream Map (VSM).......................................................................................172
16.4 Future State Value Stream Map......................................................................................................173
16.4.1 Procedure for drawing a Future State Map.................................................................... 173
16.4.2 Questions to Ask When Creating a Future State VSM................................................. 173
16.5 Kaizen.......................................................................................................................................................174
16.5.1 Kaizen Event Work Instructions.......................................................................................... 175
16.5.2 Kaizen Example........................................................................................................................ 175

Part V: Measure Phase of DMAIC................................... 181


Chapter 17: Probability and Statistics.......................................................................... 183
17.1 Basic Probability Concepts...............................................................................................................183
17.1.1 Probability Definitions........................................................................................................... 184
17.1.2 Probability Rules...................................................................................................................... 185

17.2 Basic Statistics.......................................................................................186
17.2.1 Central Tendency..................................................................................................................... 188
17.2.2 Variation...................................................................................................................................... 188
17.2.3 Inferential Statistics................................................................................................................. 191
Chapter 18: Measurement System Analysis (MSA)...................................................... 193
18.1 MSA for Attribute Data......................................................................................................................194
18.2 Gage Repeatability and Reproducibility (R&R) Studies..........................................................196
18.2.1 Types of Gage R&R Studies................................................................................................... 196
18.2.2 Using Software to Analyze Gage R&R Results - QI Macros......................................... 196
18.2.3 Using Software to Analyze Gage R&R Results - Minitab............................................ 200
Chapter 19: Collecting and Summarizing Data............................................................ 203
19.1 Types of Data and Measurement Scales......................................................................................204
19.1.1 What Needs to be Measured?............................................................................................. 205
19.1.2 What Type of Data Are Collected?..................................................................................... 207
19.1.3 Stratifying Data......................................................................................................................... 208
19.2 Sampling and Data Collection Methods.....................................................................................208
19.2.1 Factors in Sample Selection................................................................................................. 209
19.2.2 Understanding Sampling Bias............................................................................................ 209
19.2.3 Worst Ways to Choose Samples.......................................................................................... 210
19.2.4 Sampling Strategies................................................................................................................ 210
19.2.5 Confidence Level or Interval................................................................................................ 210
19.2.6 Determining Sample Size.................................................................................................... 210
19.2.7 Data Collection Planning...................................................................................................... 215
19.2.8 Data Collection Tools.............................................................................................................. 216
19.3 Graphical Methods of Displaying Data........................................................................................217
19.3.1 Displaying Data Using Histograms.................................................................................... 217
19.3.2 Displaying Data Using Pareto Charts................................................................................ 225
19.3.3 Displaying Data Using Runs Charts................................................................................... 227
19.3.4 Scatter Diagram (Scatterplot).............................................................................................. 228
19.4 Using Existing Data.............................................................................................................................230

Part VI: Principles of Statistical Process Control.......... 231


Chapter 20: Statistical Process Control......................................................................... 233
20.1 Common and Special Causes of Variation..................................................................................233
20.2 Data Collection for SPC .....................................................................................................................234
20.3 Rational Subgrouping .......................................................................................................................234
20.4 Central Limit Theorem.......................................................................................................................235
Chapter 21: Probability Distributions........................................................................... 237
21.1 Probability Distributions: Discrete vs. Continuous..................................................................237
21.2 Discrete Probability Distributions..................................................................................................238
21.2.1 Binomial Distribution............................................................................................................. 238
21.2.2 Poisson Distribution .............................................................................................................. 238

21.2.3 Hypergeometric Distribution.............................................................................. 239
21.3 Continuous Probability Distributions...........................................................................................239
21.3.1 Normal Distribution ............................................................................................................... 239
21.3.2 Exponential Distribution ...................................................................................................... 240

21.3.3 Weibull Distribution ............................................................................................................... 240
21.4 Choosing the Right Probability Distribution.............................................................................241
Chapter 22: Control Charts............................................................................................. 243
22.1 Control Chart Overview.....................................................................................................................243
22.2 Basic Control Charts Procedure......................................................................................................244
22.3 Control Charts for Variable Data.....................................................................................................245
22.3.1 IMR (Individual and Moving Range) Chart..................................................................... 245
22.3.2 X-bar R (Subgroup Average and Range) Chart............................................. 246
22.4 Control Charts for Attribute Data...................................................................................................248
22.4.1 P Chart for Proportion Defective........................................................................................ 248
22.4.2 NP Chart for Count of Defectives....................................................................................... 248
22.4.3 U Chart......................................................................................................................................... 248
22.4.4 C Chart......................................................................................................................................... 249
22.5 Selecting the Correct Control Chart..............................................................................................249
22.6 Control Chart Analysis........................................................................................................................249
22.6.1 Basic Guidelines....................................................................................................................... 249
22.6.2 Commonly Used Rules to Detect Out of Control Conditions (Special Causes). 249
22.7 Examples of Control Chart Applications.....................................................................................253
22.7.1 Example One............................................................................................................................. 253
22.7.2 Example Two............................................................................................................................. 253
22.8 Control Chart Formulas.....................................................................................................................254
Chapter 23: Process Capability and Performance........................................................ 257
23.1 Process Capability Indices ...............................................................................................................258
23.1.1 Cp.................................................................................................................................................. 258
23.1.2 Cpk................................................................................................................................................ 259
23.1.3 Difference between Cp and Cpk........................................................................................ 260
23.2 Process Performance Indices...........................................................................................................260
23.2.1 Pp................................................................................................................................................... 260
23.2.2 Ppk................................................................................................................................................ 261
23.2.3 Difference between Pp and Ppk......................................................................................... 262
23.3 Process Capability for Variable Data Example:..........................................................................262
23.4 Process Capability and Process Performance Summary........................................................265
23.4.1 Process Capability for Attribute Data Example............................................................. 266
23.5 Process Performance Metrics .........................................................................................................266

Part VII: Analyze Phase of DMAIC.................................. 269


Chapter 24: Root Cause and Variation Analysis........................................................... 271

24.1 Identify Potential Causes...................................................................................272
24.2 Screen Potential Causes....................................................................................................................273
24.3 Determine/Validate the Critical Inputs........................................................................................275
24.4 Example of the Root Cause Analysis Process.............................................................................275
Chapter 25: Correlation Analysis and Regression........................................................ 277
25.1 Scatterplots............................................................................................................................................278
25.2 Pearson Correlation Coefficient......................................................................................................278
25.3 Regression Analysis.............................................................................................................................279
25.4 Correlation Analysis Example..........................................................................................................280
Chapter 26: Hypothesis Testing..................................................................................... 285
26.1 Terms Associated with Hypothesis Testing.................................................................................286
26.2 Types of Hypothesis Tests.................................................................................................................287
26.3 Basic Hypothesis Testing Procedure.............................................................................................287
26.4 Analyzing the Results.........................................................................................................................288
26.5 Examples of Hypothesis Tests.........................................................................................................288
26.5.1 2 Sample t Test for Variable Data........................................................................................ 288
26.5.2 1 Proportion Test for Attribute Data................................................................................. 289
26.5.3 Other Examples........................................................................................................................ 290
Chapter 27: Design of Experiment (DOE)..................................................................... 291
27.1 Terms Associated with Design of Experiments.........................................................................292
27.2 Types of Design of Experiments.....................................................................................................292
27.3 Basic Design of Experiments Testing Procedures.....................................................................293
27.4 Analyzing the Results.........................................................................................................................293
27.5 Example...................................................................................................................................................293

Part VIII: Improve Phase of DMAIC................................ 297


Chapter 28: Selecting a Solution................................................................................... 299
28.1 Generating Solutions and Reducing Waste................................................................................299
28.2 Re-evaluate the Measuring Systems.............................................................................................301
28.3 Performing a Final Capability Study..............................................................................................301
28.3.1 Steps to Execute a Pilot Study............................................................................................. 301
28.3.2 Critical Issues in Planning a Pilot Study........................................................................... 301
28.3.3 Evaluating the Results of a Pilot Study............................................................................. 301
Chapter 29: Risk Analysis and Mitigation..................................................................... 303
29.1 Expected Profit.....................................................................................................................................303
29.2 SWOT Analysis.......................................................................................................................................304
29.3 Feasibility Study...................................................................................................................................304
29.4 Unintended Consequences..............................................................................................................305
29.5 Failure Mode and Effects Analysis (FMEA)..................................................................................305
29.5.1 FMEA Work Instructions........................................................................................................ 306
29.5.2 FMEA Key Rating Terms......................................................................................................... 307

29.5.3 Rating Criteria Example......................................................................................... 307
29.5.4 FMEA Examples........................................................................................................................ 307

Part IX: Control Phase of DMAIC.................................... 309

Chapter 30: Process Control Planning.......................................................................... 311
30.1 Statistical Process Control (SPC).....................................................................................................311
30.2 Control Plans.........................................................................................................................................311
30.3 Process Audits.......................................................................................................................................313
30.3.1 LSS Project Audit Work Instruction .................................................................................. 313
30.3.2 Process Audits Interviews..................................................................................................... 313
30.4 Process Metrics.....................................................................................................................................314
Chapter 31: Project Closure........................................................................................... 315
31.1 Lessons Learned...................................................................................................................................315
31.2 Training Plan Deployment ...............................................................................................................315
31.3 Documentation....................................................................................................................................316
31.4 After Project Closure...........................................................................................................................316

Part I: Introduction to Lean Six Sigma

Based on the Lean and Six Sigma methodologies, Lean Six Sigma (LSS) is a continuous
improvement methodology that focuses on eliminating waste and reducing variation
in manufacturing, service, or design processes. Pioneered by Toyota, the Lean methodology
aims to reduce non-value-added activities and cycle times while creating value for customers.
Six Sigma focuses on identifying and reducing variability and improving overall quality. LSS
can therefore help organizations meet or exceed their customers' needs and requirements
while improving their own performance and effectiveness and managing their quality.

Benefits of Lean Six Sigma (LSS):

◆ Increased customer and employee satisfaction

◆ Reduced costs

◆ Retained business

◆ Enhanced reputation

◆ Increased competitive advantage

◆ Improved staff morale and collaboration

Chapter 1: Evolution of Lean Six Sigma (LSS)

Key Terms
Plan-Do-Check-Act (PDCA) Cycle
Plan-Do-Study-Act (PDSA) Cycle
quality

Key People
Armand V. Feigenbaum
Daniel T. Jones
Eiji Toyoda
Frederick Winslow Taylor
Genichi Taguchi
Henry Ford
James P. Womack
Joseph M. Juran
Kaoru Ishikawa
Malcolm Baldrige
Philip B. Crosby
Shigeo Shingo
Taiichi Ohno
Walter A. Shewhart
W. Edwards Deming

Body of Knowledge
1. Explain the historical perspective and the evolution of LSS from quality leaders such as Juran,
Taylor, Deming, Shewhart, Ishikawa, Ohno, and others.

2. List the three obstacles to diffusing Lean production throughout various industries.

We have the benefit of more than 100 years of study, trial and error, and proven success with
various principles, methodologies, and tools to leverage as we tackle today’s difficult product
and process improvement projects. The driving force behind this evolution of quality has been
the companies that are constantly striving for ever-increasing levels of efficiency, effectiveness, and
high-quality products and services. These challenges inspire the individuals who discover new tools,
techniques, and principles to make improvements possible.

1.1 Industrial Quality in the 18th and 19th Centuries


The era known as the Industrial Revolution was a period in which fundamental changes occurred
in the agriculture, textile and metal manufacturing, and transportation industries. These changes
occurred between 1760 and 1850 and brought about increases in production, food supplies, and
raw materials. Nineteenth-century craftsmen had to minimize wasted time, actions, and materials
in order to make money. To stay in business, they needed to figure out how to create every product
or service they offered at the highest standard of quality the first time, each time, and every time.

1.2 Industrial Quality in the 20th Century


In the early 1900s, industrial quality was limited to inspecting finished products and removing
defective items. This was obviously a very costly method of delivering quality products. Therefore,
during that time period, companies were constantly looking for ways to improve quality and reduce
variability. In 1908, W.S. Gosset developed the t-distribution concept to help analyze the quality data
at the Guinness factory (an Irish brewery). Around the same time, A.K. Erlang, who worked for
Copenhagen Telephone Company, was likely one of the first to apply probability theory in an effort
to increase the reliability of telephone service (which inherently had a great deal of randomness). His
efforts led to modern queuing and reliability theory. Earlier, in 1855, Florence Nightingale had demonstrated
that statistics provide an organized way of learning, and her accomplishments led to improvements in
medical and surgical practices.

Quality Movements in the 20th Century


During the past 115 years, there have been thousands of people who have made contributions to the
quality body of knowledge. Some of the most significant movements were:

1900–1945 Early 20th century quality pioneers

Early 1950s Americans taking methods to Japan

Late 1950s Quality revolution in Japan

1970s–1980s Moving towards total quality

1.3 Early 20th Century Quality Pioneers


1.3.1 Walter A. Shewhart (1891–1967)
American Physicist, Engineer, Statistician, Father of Statistical Quality Control

When Walter A. Shewhart joined the Inspection Engineering Department at Western Electric
Company in 1918, quality control consisted of inspecting finished products and removing the
ones with defects. That practice changed when Shewhart created a simple diagram in 1924 that is
commonly recognizable today as a “control chart” (also known as a process behavior chart). His work
established the essential principles of what later became known as process quality control (reduction
of variation in manufacturing processes). Shewhart understood that when manufacturers reacted to
nonconformance by continually adjusting their processes, variation and quality degradation actually
increased.

Shewhart characterized the issue of variation as assignable-cause and chance-cause variation, and
using his control chart tool, he was able to distinguish between the two. He emphasized that in order
to predict future output and manage processes economically, a production process must be brought
into a state of statistical control, in which only chance-cause variation exists.
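
The mechanics behind Shewhart's distinction can be shown with a short calculation. The sketch below is an illustration added for this discussion, not taken from Shewhart's work: it computes the center line and 3-sigma limits of an individuals chart from the average moving range (using the standard d2 constant of 1.128 for moving ranges of two) and flags points outside the limits as candidates for assignable-cause investigation. The sample data are hypothetical.

import statistics

def individuals_control_limits(measurements):
    """Center line and 3-sigma limits for an individuals (I) chart.
    Short-term variation is estimated from the average moving range."""
    center = statistics.mean(measurements)
    moving_ranges = [abs(b - a) for a, b in zip(measurements, measurements[1:])]
    sigma = statistics.mean(moving_ranges) / 1.128  # d2 constant for moving ranges of size 2
    return center, center - 3 * sigma, center + 3 * sigma

def assignable_cause_points(measurements):
    """Indices of points outside the control limits, i.e., signals of
    assignable-cause variation that warrant investigation."""
    center, lcl, ucl = individuals_control_limits(measurements)
    return [i for i, x in enumerate(measurements) if x < lcl or x > ucl]

# Hypothetical data: a stable process with one shifted measurement.
data = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 12.5, 10.0, 9.9, 10.1]
print(assignable_cause_points(data))  # -> [6]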

Shewhart, who worked at Bell Laboratories from its founding in 1925 to his retirement in 1956,
improved voice clarity in the carbon transmitters that were then part of telephone handsets, and used his
statistical methods to improve the installation of switching systems and factory production.

Shewhart published a series of papers in the Bell System Technical Journal while working for Bell
Laboratories. He compiled his work in his seminal book, Economic Control of Quality of Manufactured
Product (1931), which is a comprehensive presentation of the basic principles of quality control.

Shewhart also created the plan-do-check-act (PDCA) cycle (also known as the Shewhart cycle),
which is a four-step model for implementing change. Shewhart depicts the cycle as a circle without
end to emphasize that continuous improvement requires continuous repetition of the cycle.

PDCA Cycle
1. Plan - Establish a plan for achieving a goal.

2. Do - Enact the plan.

3. Check - Measure and analyze the results.

4. Act - Implement the necessary reforms when the results are not as expected.
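
The cycle is inherently iterative; one pass sets up the next. As a rough, hypothetical sketch (the function names and structure below are illustrative, not prescribed by Shewhart), the loop repeats the four steps until the measured results meet the goal:

def pdca(plan, do, check, goal_met, act, max_cycles=10):
    """Run the Plan-Do-Check-Act cycle until goal_met(results) is true.
    Each argument is a caller-supplied function representing one step."""
    results = None
    for _ in range(max_cycles):
        target = plan()            # Plan: establish a plan for achieving a goal
        outcome = do(target)       # Do: enact the plan
        results = check(outcome)   # Check: measure and analyze the results
        if goal_met(results):
            break
        act(results)               # Act: implement reforms when results fall short
    return results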

1.3.2 Henry Ford (1863–1947)


Founder of Ford Motor Company, Father of Modern Assembly Lines

Henry Ford was born and raised on a farm near Dearborn, Michigan. He had several jobs (apprentice
machinist, sawmill operator, and steam engine repairman) before becoming an engineer with the
Edison Illuminating Company in Detroit. In 1893, he became chief engineer at Edison, a promotion
that would give him the time and capital to work on his personal experiments with the internal
combustion engine.

In 1903 Henry Ford founded the Ford Motor Company, and in 1908 he introduced the Model
T, which ushered in a new era that made the automobile an essential form of transportation
for the common man. To support the growing demand for the Model T, Ford opened
a factory in Highland Park, Michigan, and it was at this factory that his contributions to mass
production became evident.

In 1910 Ford was already using efficient techniques in the Michigan factory, e.g., interchangeable parts,
division of labor, and precision manufacturing. However, it was Ford’s introduction of the assembly
line that revolutionized the manufacturing process. The assembly line reduced the construction of a
chassis from 12 hours to less than two hours.

On Ford’s assembly line, workers remained in place and added parts to the automobile as it passed by
them on the line. Required parts were delivered to the workers via conveyor belts on a carefully timed
schedule to ensure continuity on the line. The use of the assembly line reduced production time and
lowered costs, allowing sales of the Model T to flourish and making Ford Motor Company the largest
automobile manufacturer in the world.

Ford also took interest in the cost of raw materials and how they affected the cost and productivity of
the manufacturing process, which he addressed with vertical integration in the design of the massive
Ford Rouge Factory near Dearborn, Michigan. The end product was a facility in which all the steps
of the manufacturing process (from the refinement of raw materials to the final assembly of the
automobile) could take place.


1.3.3 Frederick Winslow Taylor (1856–1915)


Engineer, Efficiency Expert, Father of Scientific Management

As a mechanical engineer working at Midvale Steel Works, Frederick Winslow Taylor was stunned by the amount of worker inefficiency he witnessed. He found there was no standard for work and that workers were placed in jobs without any regard for their aptitude for the task. Furthermore, when workers were forced to perform repetitive tasks or were all paid the same amount, most would work at the rate of their slowest co-worker (referred to as "soldiering").

In 1911, Taylor published a book titled The Principles of Scientific Management, in which he
explained how the application of science to management (scientific management) could improve
the productivity of workers. Scientific management attempted to deal with process improvement
and management as a scientific problem by transferring control of the work from the workers to
management. Taylor felt there should be a greater distinction between the planning of work (mental labor) and the execution of work (manual labor). He further advocated that management should create plans stating how the job should be done and then communicate those plans to the workers.

Workers were taught the “one best way” to complete their tasks. This was a drastic departure from
a system that relied on skilled craftsmen who completed the work on their own terms. In Taylor’s
Principles of Scientific Management (1911), the skills of an expert were converted to a series of easily
repeatable steps that could be accomplished by any unskilled worker. Taylor also emphasized that the
system must be beneficial for both the employer and the employee, i.e., it is possible to have higher
wages and lower production costs simultaneously. He believed that when compensation is linked to
output, productivity goes up.

Frederick W. Taylor was the first man in recorded history who deemed work
deserving of systematic observation and study. On Taylor’s “scientific management”
rests, above all, the tremendous surge of affluence…which has lifted the working
masses in the developed countries well above any level recorded before.1

–Peter Drucker, Management Expert and Author

Taylor’s Four Principles of Scientific Management


1. Develop a science for each element of an individual’s work.

2. Scientifically select and then train, teach, and develop the worker.

3. Cooperate with the workers to ensure that all work is done in accordance with the developed
principles of the science.

4. Divide work and responsibility almost equally between management and workers. Management takes over all work for which it is better fitted than the worker.

1.4 Americans Taking Methods to Japan


1.4.1 W. Edwards Deming (1900–1993)
American Statistician, Professor, Author, Statistical Quality Control Expert, and Central Figure in Japan's Post-WWII Transformation.

1  Peter F. Drucker, Management (New York: Harper & Row, 1974).


In 1938, W. Edwards Deming was working at the U.S. Department of Agriculture and was responsible for its Graduate School courses in mathematics and statistics. His work studying the physical properties of materials drew him to the application of statistics. Deming was introduced to Shewhart and invited Shewhart to lecture at the school. Shewhart became a critical influence on Deming's work. Shewhart's concepts led to Deming's theory of management - the application of quality control to the processes by which companies are managed.

Using Shewhart’s principles, Deming applied statistical quality control principles to the clerical
operations of the 1940 U.S. Census. During World War II (WWII), Deming taught basic applied
statistics to workers engaged in production in support of the war effort. In 1943, W. Allen Wallis of
Stanford University asked Deming to begin a statistics training program at Stanford, through which he
trained almost 2,000 people over the course of two years, using the Shewhart Cycle for Learning and
Improvement (the PDCA cycle) and the plan-do-study-act (PDSA) cycle, which was developed by
Deming building off the original PDCA cycle introduced by Shewhart.

PDSA Cycle
1. Plan - Identify a goal and define how success will be measured.

2. Do - Implement the plan.

3. Study - Monitor the outcomes; look for problems or successes.

4. Act - Integrate what you have learned.

In 1947, Deming was asked by the U.S. Occupation authorities to assist with assessing the problems
of nutrition and housing after WWII and the planning of the 1951 census in Japan. During his visits,
he worked with Japanese statisticians and became involved in Japanese society which, combined with
his expertise in quality control techniques, led to an invitation from the Union of Japanese Scientists
and Engineers (JUSE) to teach statistical methods to Japanese industries. During the summer of 1950
and his five return trips, Deming trained hundreds of managers, engineers, and scholars in the SPC
techniques as well as quality concepts through his eight-day course on quality control. He taught the
chief executives of Japanese industries that improving quality can increase productivity and market
share while reducing expenses.

As a result of his lectures, several Japanese manufacturers applied his techniques and realized
considerably higher levels of quality and productivity, which, combined with lowered costs, created
international demand for Japanese products. JUSE’s Board of Directors established the Deming Prize
in 1951, which is awarded each year in Japan to a statistician for contributions to statistical theory, to
thank Deming for his friendship and contributions to Japan’s statistical quality control after WWII. He
had more impact on Japanese manufacturing and business than any other individual not of Japanese
heritage.

An American television episode from the NBC White Paper series — “If Japan Can...Why Can’t
We?”— introduced Deming’s methods to American managers in 1980. The episode, which
discussed Japan’s capturing of the world’s automotive and electronics markets, explained that Japan
was realizing their success because of Deming’s advice to practice continual improvement and to
think of manufacturing as a system. The United States was by now facing increased industrial and
manufacturing competition from Japan; and as a result, there was an increase in the demand for
Deming’s consulting services in his home country. Deming continued to offer his consulting services
to various industries across the world until his death in December 1993.


In his book, Out of the Crisis (1986), Deming outlined 14 points for transforming American industry.
Deming understood that improving quality hinged on top management being a part of the solution by
actively participating in a quality control program. He felt that by adopting his 14 points, management
was stating their intention to not only stay in business, but to protect investors and jobs.

Deming’s 14 Points for Management2


1. Create constancy of purpose toward improvement of products and services.

2. Adopt the new philosophy.

3. Cease dependence on inspection to achieve quality.

4. End the practice of awarding business on the basis of price tag. Instead, minimize total cost.

5. Improve constantly and forever the system of production and service, to improve quality and
productivity, and thus constantly decrease costs.

6. Institute training on the job.

7. Institute leadership. The aim of supervision should be to help people, machines, and gadgets
to do a better job.

8. Drive out fear, so that everyone may work effectively for the company.

9. Break down barriers between departments.

10. Eliminate slogans, exhortations, and targets for the workforce asking for zero defects and new
levels of productivity.

11. a) Eliminate work standards (quotas) on the factory floor. Substitute leadership.
b) Eliminate management by objective. Eliminate management by numbers, numerical goals.

12. a) Remove barriers that rob the hourly worker of his right to pride of workmanship.
b) Remove barriers that rob people in management and in engineering of their right to pride
of workmanship.

13. Institute a vigorous program of education and self-improvement.

14. Put everybody in the company to work to accomplish the transformation.

1.4.2 Joseph M. Juran (1904–2008)


Twentieth Century Management Consultant, Quality Guru, Evangelist for Quality and
Quality Management.

Joseph M. Juran was the first member of his family to attend college, graduating in 1924 with a
bachelor’s degree in electrical engineering from the University of Minnesota. After serving a year in
the U.S. Army Signal Corps, he took a job at Western Electric’s Hawthorne Works. After his initial
training, he was assigned to the inspection branch at the plant to work with a small group of engineers
charged with applying and disseminating the statistical sampling and control chart techniques of Bell Laboratories (Western Electric's partner). He was promoted to a managerial position two years later

2  W. Edwards Deming, Out of the Crisis (Cambridge, Mass: MIT, Center for Advanced Engineering Study, 1986).


(chief of quality inspections) and then, at the age of 24, was promoted to chief of Western Electric’s
inspection results division where he oversaw five departments.

In 1937, after earning a law degree, Juran moved to New York to work for Western Electric as a
corporate industrial engineer. In 1941, during WWII, he was loaned to the U.S. government to work in the Lend-Lease Administration, where he used his skills in statistical analysis and engineering to
improve budgeting and purchasing processes.

Just before the end of WWII, Juran resigned from Western Electric and Lend-Lease and joined the
Department of Industrial Engineering of New York University as department chair, where he taught quality control and conducted seminars for business executives. In 1946, he became one of the
founding members of the American Society for Quality Control (ASQC) and served on the editorial
board for the Society’s publication, Industrial Quality Control.

Working as an independent consultant (he would later found the Juran Institute), Juran published the first edition of Juran's Quality Control Handbook in 1951, which attracted the attention of JUSE. He traveled to Japan in
1954 to focus on managing for quality, which expanded quality from its statistical beginnings. Juran
emphasized to the middle and top-level managers with whom he worked in Japan that in order for a
company to become a quality leader, it must adopt revolutionary rates of improvement in quality and
make continual quality improvements by the thousands, year after year.

Through his concept of quality by design (QbD), Juran outlined how the highest levels of leadership
must be involved in quality in order to be successful. Their responsibilities include the following actions3:

◆◆ Establish a quality council

◆◆ Serve on the quality council

◆◆ Establish quality policies

◆◆ Deploy the goals

◆◆ Provide the resources


◆◆ Provide problem-oriented training

◆◆ Serve on quality improvement teams

◆◆ Review progress

◆◆ Stimulate improvement

◆◆ Give recognition

◆◆ Revise the reward system

Juran’s Contributions to the Quality Field:


1. Juran observed that human relations problems all had one root cause: people were resistant to
change (cultural resistance). His discovery came while reading Margaret Mead’s book, Cultural
Patterns and Technical Change (1953), in which Mead described the resistance encountered by United Nations teams while trying to improve conditions in developing countries as clashes between cultures. Juran realized that these same clashes were occurring between management and employees as well as in situations where clients were rejecting changes for no logical reason. In 1964, he published Managerial Breakthrough, which laid the foundation for the science of managing for quality (the human element).

Juran applied the Pareto principle (also known as the 80-20 rule, which was based on the work of 19th century engineer and economist Vilfredo Pareto) to quality, stating that 80 percent of the problems come from 20 percent of the causes and that management should focus on that 20 percent "vital few."

3  Joseph M. Juran, Juran on Planning for Quality (New York: The Free Press, 1988).

2. Juran’s process for managing quality, the Juran Trilogy®, includes the concepts of quality
planning, quality control, and quality improvement.

Quality planning. The activity of developing the products and processes required to meet
customers’ needs. The steps of the quality planning exercise are:

Step 1. Establish quality goals.


Step 2. Identify the customers.
Step 3. Determine the needs of the customers.
Step 4. Develop product features that respond to the needs of the customers.
Step 5. Develop processes that are able to produce those product features.
Step 6. Establish process controls; transfer the plans to the operating forces.

Quality control. The operating forces use this process as an aid to meeting the product and
process goals. It is based on the feedback loop and consists of the following steps:

Step 1. Evaluate actual performance.

Step 2. Compare actual performance to quality goals.

Step 3. Act on the difference.

Quality improvement. This third member of the Juran Trilogy® aims to attain
unprecedented levels of performance, levels that are significantly better than any past
performance. The methodology consists of a process that is an unvarying series of steps:

Step 1. Prove the need for improvement.


Step 2. Establish the infrastructure.
Step 3. Identify the improvement projects.
Step 4. Establish project teams.
Step 5. Provide the teams with resources, training, and motivation to:
•• Diagnose the causes.
•• Stimulate remedies.
Step 6. Establish controls to hold the gains.


1.5 Quality Revolution in Japan


1.5.1 Kaoru Ishikawa (1915–1989)
University Professor, Influential Quality Management Innovator, Creator of the Ishikawa (Fishbone)
Diagram, Developer of a Specifically Japanese Quality Strategy.

As an active promoter of quality in Japan, Dr. Kaoru Ishikawa began several Japanese quality
programs and ensured the translation of Deming’s and Juran’s lectures into a uniquely Japanese
perspective on quality improvement. Ishikawa had a total quality viewpoint for company-wide quality
control and an emphasis on the human side of quality. He believed in quality through leadership and
that quality could do more than just transform manufacturing - it could improve our lives.

Six of Ishikawa’s principles helped create an integrated Japanese quality model and redefined the way
Japan viewed manufacturing:4

1. All employees should clearly understand the objectives and business reasons behind the
introduction and promotion of company-wide quality control.

2. The features of the quality system should be clarified at all levels of the organization and
communicated in such a way that the people have confidence in these features.

3. The continuous improvement cycle should be unremittingly applied throughout the whole
company for at least three to five years to develop standardized work. Both statistical quality
control and process analysis should be used, and upstream control for suppliers should be
developed and effectively applied.

4. The company should define a long-term quality plan and carry it out systematically.

5. The walls between departments or functions should be broken down, and cross functional
management should be applied.

6. Everyone should act with confidence, believing his or her work will bear fruit.

Ishikawa was the first one to emphasize the seven basic tools of quality:

1. Pareto analysis - What are the big problems?

2. Cause-and-effect diagrams - What is causing the problem?

3. Stratification - How is the data made up?

4. Check sheets - How often does it occur?

5. Histograms - What is the overall variation?

6. Scatter charts - What are the relationships between factors?

7. Process control charts - Which variations are controllable and how?

He is most widely recognized for developing the Ishikawa diagram (cause and effect diagram), which
is often referred to as a fishbone diagram. Ishikawa believed these tools should be taught to everyone
in the organization and used to analyze problems and develop improvements.

4  Kaoru Ishikawa, What is Total Quality Control? The Japanese Way, trans. by David J. Lu (New Jersey: Prentice-Hall, 1985).


In 1993 ASQ established the Ishikawa Medal, which recognizes leadership in improving the human aspects of quality and is awarded annually to a team or an individual.

1.5.2 Genichi Taguchi (1924–2012)


Consultant, Quality Guru, Father of Quality Engineering.

In 1942, Genichi Taguchi was drafted to serve in the Navigation Institute of the Imperial Japanese
Navy. After WWII, he worked for the Institute of Statistical Mathematics of the Ministry of Education,
where he studied with renowned statistician Matasaburo Masuyama. In 1950, Taguchi went to work
for the Electrical Communication Laboratory (ECL) of Nippon Telephone and Telegraph Company, a
Bell Laboratories competitor. ECL and Bell Laboratories were both developing cross bar and telephone
switching systems. Both companies completed their work at about the same time; but thanks in part to
the work done by Taguchi to improve production, Nippon awarded its contract to ECL.

Taguchi remained at ECL for six years, developing telephone switching systems and gaining
experience with data analysis and experimental design. Through his work with ECL, Taguchi won his
first Deming Prize in 1960 for his contributions to the field of quality engineering. He also received the
Deming Literature Award three times for his books on quality control methodologies and industrial
design.

After working with industrial statisticians in the United States at Bell Laboratories, Taguchi worked
as a consultant for ECL, then worked for the Japanese Standards Association, founded the Quality
Research Group, and spent 17 years developing his methods as a professor at Aoyama Gakuin
University in Japan.

In the 1950s, Taguchi developed methods for modern quality control and low-cost engineering, which
became known as the Taguchi methods. The Taguchi methods seek to improve product quality at
the design stage by integrating quality control into product design using experiments and statistical
analysis. Taguchi developed quality engineering techniques that enabled engineers to develop products
and processes in a fraction of the time required by conventional engineering practices.

Taguchi's philosophy is that products should be designed to be robust and insensitive to variations in the manufacturing process. While other quality experts focused solely on reducing or eliminating variation, Taguchi focused on creating product designs that could handle the variation. His work was based on the principles of experimental design, and his methods were
concerned with the routine optimization of products and processes prior to manufacturing, i.e.,
quality and reliability are to be pushed back to the design phase (off-line quality control). He separated
the focus into three areas:

1. System design: process of applying scientific and engineering knowledge to produce a basic
functional prototype design.

2. Parameter design: investigation conducted to identify settings that minimize (or reduce) the
performance variation.

3. Tolerance design: method for determining tolerances that minimize the sum of the product
manufacturing and lifetime costs.


Taguchi made a very influential contribution to industrial statistics. The key elements of his quality
philosophy include the following tenets:

1. The loss function: an equation that quantifies the decline of a customer's perceived value of a product as that product's quality declines (a common form is illustrated in the brief example after this list).

2. Orthogonal arrays and linear graphs: the philosophy of off-line quality control; designing products and processes so they are insensitive to parameters outside the design engineer's control.

3. Robustness: a prototyping method that enables product designers to identify the optimal
settings to produce a robust product. His definition of robust meant that a product could
survive manufacturing time after time, piece after piece, and provide what the customer
wanted.
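As an added illustration (not taken from this text), the quadratic form commonly associated with Taguchi's loss function is L(y) = k(y - T)^2, where y is the measured characteristic, T is its target value, and k is a cost constant. A minimal Python sketch, using purely hypothetical values for k and T:

    # Taguchi's quadratic loss function: L(y) = k * (y - T)^2
    # The cost constant k and target T below are hypothetical illustration values.
    def quality_loss(y, target, k):
        """Estimated loss for a unit whose measured value is y."""
        return k * (y - target) ** 2

    print(quality_loss(10.2, target=10.0, k=0.5))  # small deviation -> small loss (0.02)
    print(quality_loss(11.0, target=10.0, k=0.5))  # loss grows quadratically with deviation (0.5)

The point of the quadratic shape is that loss begins to accrue as soon as the characteristic drifts from its target, not only when it crosses a specification limit.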

1.5.3 Shigeo Shingo (1909–1990)


Consultant, Quality Guru, Just-In-Time-Manufacturing Expert, Inventor of Single Minute
Exchange of Die.

After earning a degree in mechanical engineering in 1930, Shigeo Shingo worked as a fusions
specialist for the Taipei Railway Factory in Taiwan, where he became interested in scientific
management and process improvement. During WWII, Shingo worked for the Ministry of Munitions
as the manufacturing section chief at the Amano Manufacturing Plant in Yokohama, Japan, where he
increased productivity by 100 percent.

In 1946 Shingo became a member of and a consultant for the Japan Management Association (JMA)
in Tokyo, focusing on improving factory management and the problems associated with how factories
were laid out. Shingo realized that processes and operations were inseparable and needed to be
addressed simultaneously in order to increase productivity.

Shingo’s contributions to quality include Poka Yoke (mistake-proofing), source inspection, single
minute exchange of die (SMED), and just-in-time (JIT) production.

Poka Yoke works on eliminating the causes of defects and detecting them before they reach the production line through source inspection. Shingo's devices were simple, yet they made it so that parts could not fit incorrectly and made missing parts immediately obvious.

SMED techniques were developed by Shingo in order to facilitate quick changeovers on production
lines. Shingo found that by simplifying materials, machinery, processes, and skills, changeover times
were reduced from hours to just minutes. SMED techniques also facilitate smaller batch production.

JIT production addresses supplying what the customer wants exactly when the customer wants it.
Traditional manufacturing tends to enlarge batch production as orders are pushed through the system.
The aim of JIT production is to minimize inventories by only producing what is required when it is
required; and production is triggered by a customer purchase order that is pulled through the system,
thereby reducing costs and waste throughout the production process.


Shingo was recognized in 1988 by the Jon M. Huntsman School of Business at Utah State University for his lifetime accomplishments. The school also created the Shingo Prize, which recognizes world-class, Lean organizations and operational excellence.

1.5.4 Taiichi Ohno (1912–1990)


Automotive Executive, Father of the Toyota Production System.

Taiichi Ohno joined Toyota in 1932 and spent 20 years working his way up to executive vice president.
In the 1940s and 1950s, while Toyota was on the verge of bankruptcy, Ohno worked as an assembly
manager and developed several improvements in order to avoid buying new equipment or keeping
large amounts of inventory on hand. During this time, Ohno went to the United States to spend a few
months at Ford's Rouge Factory observing how Ford managed its business. Seeing that Ford focused on total elimination of non-value-added wastes, Ohno returned to Japan and updated Ford's work, reducing changeover times from days and hours to minutes and seconds, with the help of Shingo. Ohno also eliminated job classifications, both to give workers more flexibility and to support his belief that there should be respect for humanity in the manufacturing process.

Ohno’s contributions include identification of the seven wastes, developing Kanban, designing a
pull system, implementing JIT, and ultimately the Toyota Production System, which became “Lean
manufacturing” in the United States. His work has been one of the most influential models for the
quality improvement community.

1.5.5 Eiji Toyoda (1913–2013)


Engineer, Automotive Executive, Automotive Production Visionary.

After graduating from Tokyo Imperial University with a degree in mechanical engineering, Eiji Toyoda
went to work with his cousin, Kiichiro Toyoda (President of Toyota Motor Company) in 1936. He started
his work in the Toyota research laboratory in Tokyo, where he studied engines and car repair along with
a team of engineers. He was briefly drafted into the Army during WWII, but was released to make trucks
in the automotive industry. Unfortunately, after the war, the company encountered some problems
during the rebuilding effort and a massive labor strike led to the resignation of Kiichiro Toyoda.

Meanwhile, Toyoda was named the managing director of the manufacturing arm of Toyota Motor
Company. In 1950, he visited the United States to tour automotive manufacturing facilities. He
returned confident that Toyota could be competitive in the automotive industry; however, he knew
that Toyota would not be able to employ the same mass production approach. His focus was to
efficiently produce cars in small batches. Using the principles of JIT, kaizen, kanban, and jidoka,
Toyoda and Taiichi Ohno built what is now known as the Toyota Production System.

Eiji Toyoda went on to become the President of Toyota Motor Company from 1967 to 1982. After the
merger of Toyota Motor Company and Toyota Motor Sales, he was Board Chairman for 12 years until
1994.


1.6 Moving Towards Total Quality


1.6.1 Philip B. Crosby (1926–2001)
Businessman, Author, and Originator of Zero Defects/Do It Right the First Time, Cost of Poor Quality.

Philip B. Crosby’s quality improvement process is based on his Four Absolutes of Quality
Management:

1. Quality is defined as conformance to the requirements, not as “goodness” or “elegance.”

2. The system for producing quality is prevention, not appraisal.

3. The performance standard must be zero defects, not “that’s close enough.”
4. The measurement of quality is the price of nonconformance, not indices.

Crosby focused on zero defects; his ideas were developed from his assembly line experience. According to Crosby, in order to create zero defects in a manufacturing process, the idea must originate from upper management. The benefits of zero defects include decreases in wasted resources, including time spent on creating products customers are not interested in buying.

According to Crosby, quality must conform to specifications that management sets according to
customer needs and wants. To implement his quality improvement process, Crosby introduced a 14-
step approach containing activities that fall under the responsibility of upper management. The steps
represent his techniques for managing quality improvement and his four absolutes of quality.

Crosby's 14 Steps to Quality Improvement


Step 1. Establish management commitment.
Step 2. Create quality improvement teams.
Step 3. Measure processes to determine current and potential quality issues.
Step 4. Calculate the cost of (poor) quality.
Step 5. Raise quality awareness of all employees.
Step 6. Take actions to correct quality issues.
Step 7. Monitor the progress of quality improvement.
Step 8. Train supervisors in quality improvement.
Step 9. Hold zero defects days.
Step 10. Encourage employees to create their own quality improvement goals.
Step 11. Encourage employee communication with management about obstacles to quality
(error-cause removal).
Step 12. Recognize participants’ efforts.
Step 13. Create quality councils.
Step 14. “Do it all over again” (quality improvement does not end).


1.6.2 James P. Womack and Daniel T. Jones


Influential Authors and Promoters of Lean Production to the Western World.

James P. Womack and Daniel T. Jones have been researching the automotive industry since 1979.
In a study published in 1984 titled “The Future of the Automobile,” they discovered that Japanese
automakers were surpassing the productivity of their Western competitors. This discovery led
to a more comprehensive five-year, $5 million study supported by the Massachusetts Institute of
Technology (MIT). Out of this study came their book, The Machine That Changed the World: The
Story of Lean Production (Womack, Jones, & Roos, 1990), and the concept of Lean production, a
manufacturing system that yields higher productivity and more cost-efficient products. The findings
related in their book were stunning - automobiles with fewer defects were being built in a smaller factory with fewer man-hours.

The Machine that Changed the World


No book on the topic of Lean Production was more thoroughly researched than The Machine That Changed the World: The Story of Lean Production. Special research assistants focused on subjects such as supply chain, production, and product development.

The book deals with the issue of diffusion of Lean Production beyond Toyota and throughout the
industry and addresses the following three obstacles:

1. The existing stronghold of mass production on existing companies:

a. Get a Lean competitor, and change will be forced.

b. Get a better financial measurement system, where the cost of quality and waste can be made more visible, and the visibility will drive a change.

c. An economic crisis will drive the change.

2. Outdated thinking about the world economy and globalization.

3. Inward focus and selective implementation of the methodology.

Lean Thinking
Another pivotal work of Womack and Jones that further formalized and simplified teaching the basic
principles of Lean Production is Lean Thinking: Banish Waste and Create Wealth in Your Corporation
(1996). This work filled a gap in The Machine That Changed the World in that it explained how Lean
Production can actually be applied in any industry and in any area of an organization.

Womack and Jones also co-authored Lean Solutions: How Companies and Customers Can Create Value
and Wealth Together (2005) and Seeing the Whole: Mapping the Extended Value Stream (2002), which
was the 2003 Shingo Prize winner. Womack went on to be the founder and president of the Lean
Enterprise Institute, which is a non-profit education and research organization based in Massachusetts.
Jones founded the Lean Enterprise Academy in the United Kingdom. Both organizations are affiliated
with and dedicated to promoting Lean Thinking.


1.6.3 Armand V. Feigenbaum (1922–2014)


Quality Control Expert, Businessman, Deviser of the Concepts of Total Quality Management.

Armand V. Feigenbaum earned his master’s and doctorate degrees from MIT, publishing his first
book on total quality control while a doctoral student. His ideas on total quality control originated from his work at General Electric (GE), where he began his career in 1937. After earning his doctorate,
Feigenbaum was transferred to Ohio as the assistant general manager for GE’s aircraft engine business
and later became the director of manufacturing operations at GE (1958-1968).

Feigenbaum wrote several books, including Total Quality Control (1951), and served as president of ASQ from 1961 to 1963. He also co-founded the International Academy for Quality (IAQ) with Kaoru Ishikawa of Japan and Walter Masing of Germany.

Feigenbaum saw modern quality control as a fundamental way of managing and made the following
recommendations:

1. Increase operator efficiency by educating them on quality in order to enhance overall quality.

2. Aim to increase quality awareness throughout the organization.

3. Involve the entire organization in each and every quality initiative undertaken.

His ideas on total quality control, known today as Total Quality Management (TQM), grew from the view that quality is more than a philosophy; it should be based on economics, industrial engineering, management science, and the existing statistical and management methods.

Feigenbaum’s Crucial Benchmarks for Total Quality Success


1. Quality is a company-wide process.

2. Quality is what the customer says it is.

3. Quality and cost are a sum, not a difference.

4. Quality requires both individual and team zealotry.

5. Quality is a way of managing.

6. Quality and innovation are mutually dependent.

7. Quality is an ethic.

8. Quality requires continuous improvement.

9. Quality is the most cost-effective, least capital-intensive route to productivity.

10. Quality is implemented with a total system connected to customers and suppliers.


1.6.4 Malcolm Baldrige (1922–1987)


26th U.S. Secretary of Commerce.

Malcolm Baldrige served as the 26th U.S. Secretary of Commerce from 1981 until his death in 1987. His managerial excellence contributed to long-term improvement in the economy and to greater efficiency and effectiveness in government. Within the Commerce Department, Baldrige was able to reduce the budget by more than 30 percent and administrative personnel by 25 percent.

The economic liberty and strong competition that are indispensable to economic
progress were principles that ‘Mac’ Baldrige stressed.5

–Ronald Reagan, 40th President of the United States

After Baldrige’s death, Ronald Reagan decided to create a quality program in his name. The National
Productivity Advisory Committee established the Malcolm Baldrige National Quality Improvement
Act of 1987, Public Law 100-107. The act included the establishment of the Malcolm Baldrige
National Quality Award Program “with the objective of encouraging American business and other
organizations to practice effective quality control in the provision of their goods and services.”6 The
first awards were presented to companies in 1988.

The Baldrige Criteria for Performance Excellence


1. Leadership: How upper management leads the organization and how the organization leads
within the community.

2. Strategic Planning: How the organization establishes and plans to implement strategic
directions.

3. Customer Focus: How the organization builds and maintains strong, lasting relationships with
customers.

4. Measurement, Analysis, and Knowledge Management: How the organization uses data to
support key processes and manage performance.

5. Workforce Focus: How the organization empowers and involves its workforce.

6. Operations Focus: How the organization designs, manages, and improves key processes.

7. Results: How the organization performs in terms of customer satisfaction, finances, human resources, supplier and partner performance, operations, governance, and social responsibility, and how the organization compares to its competitors.

5  White House Ceremony speech to launch the Baldrige Program (1988).


6  Public Law 100-107: “The Malcolm Baldrige Quality Improvement Act of 1987” (August 20, 1987).


Chapter 2: Integration of Lean and Six Sigma

Key Terms
customer requirements
Design for Six Sigma (DFSS)
DMAIC
DMADV
Lean methodology
Lean thinking
muda
perfection
Six Sigma methodology
value

Body of Knowledge
1. Compare and contrast the Lean and Six Sigma methodologies.

2. Identify the problem-solving tools used in the Define-Measure-Analyze-Improve-Control (DMAIC) framework.

3. Identify the Womack and Jones Five Guiding Principles for Lean.

4. Explain how muda is the enemy of a Lean organization.

5. Identify the seven types of waste outlined by Taiichi Ohno.

6. Define and describe Lean concepts, such as the theory of constraints, flow, and perfection.

7. Describe the role of the Design for Six Sigma (DFSS) methodology.

8. Distinguish between DMADV, IDOV, and DMEDI and how these methodologies are used for
improving the end product or process during the design phase.

9. Define and describe the Lean Six Sigma (LSS) methodology.

Lean and Six Sigma were both developed in order to improve manufacturing processes, but their integration and application across all types of business processes also make them valuable across every industry.

2.1 Six Sigma Methodology


The Six Sigma methodology was developed at Motorola in 1986. By the end of 2006, Six Sigma was
being practiced by 53 percent of Fortune 500 companies (82 percent of Fortune 100 companies),
saving them an estimated $427 billion.1

1  Michael Marx, “Six Sigma Saves a Fortune,” iSixSigma Magazine (January/February 2007).


While Six Sigma was built upon some existing methods, it is the first improvement methodology to be
directly linked to real, measurable business results. This is one of the main reasons so many companies
have embraced the methodology.

Six Sigma is a systematic approach that delivers high quality products and services. It examines the
span (range) of performance of a process. This examination gives us more insight into the capabilities
of that process than looking at averages since customers don’t feel averages - they feel each actual
performance.

The statistical term sigma (σ) refers to the standard deviation of a process, which describes the variation of the process. The standard deviation is a measure of the spread of process performance from the "best" case to the "worst" case.
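For example, sigma can be computed directly from process data. The following minimal Python sketch (illustrative only; the measurements are hypothetical and not taken from this text) calculates the mean and standard deviation of a set of cycle-time measurements:

    import statistics

    # Hypothetical cycle-time measurements for a process, in minutes
    measurements = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7]

    mu = statistics.mean(measurements)       # process average
    sigma = statistics.stdev(measurements)   # sample standard deviation (spread)

    print(f"mean = {mu:.2f} minutes, sigma = {sigma:.2f} minutes")

A smaller sigma means the individual results cluster more tightly around the mean, which is exactly what the customer experiences as consistency.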

The following table helps explain the difference between a good process and a great process. Before the measurement of Six Sigma came around, we measured everything in percentages and thought that was perfectly acceptable. When you look at Table 2.1 below, notice how drastically the levels of performance differ from good to great.

Table 2.1 Good vs. Great Levels of Quality

GOOD | GREAT

If these various processes operated at 99% or 3.8 Sigma, the measures of their performance would be: | If these various processes operated at 99.9999998% or 6 Sigma, the measures of their performance would be:

20,000 lost articles of mail per hour | 7 lost articles of mail per hour
15 minutes per day of unsafe drinking water | 1 minute every 7 months of unsafe drinking water
5,000 incorrect surgical procedures per week | 1.7 incorrect surgical procedures per week
2 short or long landings at major airports each day | 1 short or long landing at a major airport every 5 years
200,000 wrong drug prescriptions each year | 68 wrong drug prescriptions each year
No electricity for almost 7 hours each month | 1 hour without power every 34 years

As shown in the examples displayed in the table above, “Good / 99 percent / 3.8 Sigma” is just
not good enough. Before Six Sigma came along, many organizations measured quality by whole
percentages only, rather than calculating them out to seven decimal places.

Sigma is a measure of a process’s variation or spread around its mean. The process is improved by
making the spread smaller, which produces outputs that are more consistent and have fewer defects
or errors. Under traditional quality standards, variation is reduced until the specification limit is three
standard deviations from the process mean. See Figure 2.1.


Normal Standard Distribution


• 68.26% of the observations will fall within 1σ of µ
• 95.44% of the observations will fall within 2σ of µ
• 99.73% of the observations will fall within 3σ of µ

Figure 2.1 Normal Standard Distribution
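The coverage percentages listed above Figure 2.1 can be checked numerically. The following minimal Python sketch (an added illustration, not part of the original figure) uses the standard normal distribution to compute the probability of falling within k sigma of the mean:

    from statistics import NormalDist

    Z = NormalDist()  # standard normal distribution (mean 0, sigma 1)

    for k in (1, 2, 3):
        within = Z.cdf(k) - Z.cdf(-k)  # P(-k sigma < X < +k sigma)
        print(f"within {k} sigma: {within:.4%}")
    # Prints approximately 68.27%, 95.45%, and 99.73%, matching the bullets above.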

With six sigma quality, the process variation is reduced even more - until the specification limits are six standard deviations from the process mean. This defines a six sigma process. See Figure 2.2. Additionally, an underlying assumption of Six Sigma was added, which states that a process will shift or drift +/- 1.5 sigma over the long term.

When six standard deviations fit on each side of the process average without exceeding the specification limits, 99.99966% of our "opportunities" will meet customer requirements (3.4 ppm).

Figure 2.2 Six Sigma Process Distribution

Historically, the standard normal distribution table was used to calculate the percent in specification
and parts per million defects, which assumed that the process was stable and centered. No
considerations were given for the long term. Until Six Sigma became popular, all quality calculations
were based on this distribution without any “adjustments.” See Table 2.2 under “Standard Normal
Distribution.”

With the advent of Six Sigma, a new conversion table was built which incorporated the 1.5 sigma shift.
With these conditions, the defect rate would be 3.4 ppm for a Six Sigma process, as opposed to 0.002
ppm where there was no adjustment for the long term. See Table 2.2 under “Six Sigma Distribution.”


Table 2.2 Equivalent Six Sigma Levels, Percent in Specification, and PPM Defects

Standard Normal Distribution (process is centered and stable - no 1.5 sigma shift):

Sigma level | % in spec | PPM defects
1 | 68.27 | 317311
2 | 95.45 | 45500
3 | 99.73 | 2700
4 | 99.99 | 63.3
5 | 99.99 | 0.6
6 | 99.9999998 | 0.002

Six Sigma Distribution (process with 1.5 sigma shift):

Sigma level | % in spec | PPM defects
1 | 30.23 | 697672
2 | 69.12 | 308770
3 | 93.32 | 66811
4 | 99.38 | 6210
5 | 99.98 | 233
6 | 99.99966 | 3.4
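To show where figures like the 3.4 ppm in Table 2.2 come from, the following minimal Python sketch (an added illustration, not part of the original text) converts a sigma level into defective parts per million using the standard normal distribution, both for a centered process and for one with the conventional 1.5 sigma long-term shift:

    from statistics import NormalDist

    Z = NormalDist()  # standard normal distribution

    def ppm_centered(sigma_level):
        """PPM defects for a centered, stable process (both tails, no shift)."""
        return 2 * (1 - Z.cdf(sigma_level)) * 1_000_000

    def ppm_shifted(sigma_level, shift=1.5):
        """PPM defects when the process mean shifts by 1.5 sigma over the long term."""
        upper_tail = 1 - Z.cdf(sigma_level - shift)
        lower_tail = Z.cdf(-sigma_level - shift)
        return (upper_tail + lower_tail) * 1_000_000

    for k in range(1, 7):
        print(f"{k} sigma: centered = {ppm_centered(k):.3f} ppm, "
              f"shifted = {ppm_shifted(k):.1f} ppm")
    # At 6 sigma the shifted value is approximately 3.4 ppm, matching Table 2.2.

The percent in specification for each row is simply (1,000,000 minus the PPM defects) divided by 1,000,000, expressed as a percentage.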

2.1.1 The Six Sigma Culture


For years, organizations have incorporated LSS in order to build a quality culture and generate real
business results.

A Six Sigma culture focuses on the “big picture,” requiring those within the organization to
communicate and collaborate on projects, and to remain dedicated to their customers. Six Sigma
cultures are characterized by a focus on processes and the customer, data and fact-driven management,
boundary-less collaboration, and a drive for perfection (continuous improvement, adapting to change,
etc.).

Companies will usually incorporate Six Sigma into their mission, vision, and value statements as a way
to define their commitment to exceeding customer expectations through their products and services.
Six Sigma principles are a disciplined approach to achieving operational excellence and should be
incorporated into every aspect of the business.

2.1.2 Define-Measure-Analyze-Improve-Control (DMAIC)


The following is taken from "The Six Sigma Memory Jogger™ II" and is reprinted with the permission of GOAL/QPC.2

2  Michael Brassard et al., The Six Sigma Memory Jogger II (Salem, NH: GOAL/QPC, 2002), 8-10.

The DMAIC (pronounced "duh-MAY-ick") method includes five steps: Define, Measure, Analyze, Improve, and Control.

This method is used to improve the current capabilities of an existing process. This is by far the most commonly used methodology of Six Sigma improvement teams. It is suitable for all types and sizes of projects in any organization.

The five steps of the DMAIC method are outlined below.

Step 1. Define the problem and scope of work required for the project.
•• Describe the problem and impact on business.


•• Collect background information on the process and your customers’ needs and
requirements.

Step 2. Measure the current process or performance.


•• Identify and gather data to provide a clearer focus for your improvement effort.

Step 3. Analyze the current process or performance to identify the problem.
•• Identify the root cause(s).

•• Confirm them with data.

Step 4. Improve the problem by selecting the solution.


•• Develop, try out, and implement solutions that address the root causes.

•• Use data to evaluate the results for the solutions and the plans used to carry them
out.

Step 5. Control the improved process or performance to ensure that target(s) are met.
•• Maintain the gains that you have achieved by standardizing your work methods or
processes.

•• Anticipate future improvements and make plans to preserve the lessons learned
from this improvement effort.

Basic Problem Solving Tools


The basic DMAIC framework is suitable for all types and sizes of projects. However, rigorously
following all the steps takes some time. For basic, simple, straightforward problems, it makes sense to
use some very basic problem-solving tools:

1. Brainstorming

2. Affinity diagram

3. Tree diagram

4. Cause and effect/Fishbone diagrams

5. Prioritization matrix

6. Process mapping

These tools can be very effective and can be used to quickly identify the problem and implement
solutions. That being said, it is important to differentiate between simple and complex projects. It is
human nature to want to skip the rigor and go straight to a solution.

2.1.3 Design for Six Sigma (DFSS)


When applying the DMAIC methodology, the focus is on improving processes that already exist. When using the Design for Six Sigma (DFSS) methodology, the objective is to determine the company's and the customer's needs; that information is then used to create a new product, design, or solution.


While DMAIC focuses on continually improving existing processes, DFSS creates a new process and/
or design by using systems engineering techniques, with a greater focus in the design phase. These
techniques predict, model, and simulate the new product, helping to ensure customer satisfaction.
DFSS is similar, not only to systems engineering, but to operations research, concurrent engineering,
and systems architecting as well.

DFSS was created to strengthen an organization’s competitive advantage in innovation. Using these
methodologies will help managers encourage growth and creativity, which in turn produces better
ideas and happier employees.

DFSS requires specialized tools such as quality function deployment (QFD), axiomatic design, TRIZ,
design of experiments (DOE), Taguchi methods / robust engineering, tolerance design, and response
surface methodology.

The use of DFSS methodologies vs. DMAIC should be decided based on an evaluation of the project
and the wants and needs of the client and the stakeholders.

Define-Measure-Analyze-Design-Verify (DMADV)
DMADV is the most popular Six Sigma framework used within DFSS projects. It is an acronym for
the following actions:

1. Define the customer's needs and the metrics to measure success.


2. Measure the processes involved in creating the new product or service.
3. Analyze the results of those processes to determine if they are achieving the desired results.
4. Design the new product or service, incorporating the results of the internal analysis and
customer feedback.
5. Verify on a continuous basis that the final product or service meets the customer's needs.

Identify-Design-Optimize-Verify or Validate (IDOV)


IDOV is a DFSS methodology for designing products and services to meet Six Sigma standards and
can help reduce the development time normally associated with a DFSS project. IDOV consists of the
following four-phase process.

1. Identify the customer and the product’s technical requirements – Critical to Quality (CTQ).
2. Design a concept and alternatives using the CTQs and functional requirements.
3. Optimize the performance, reliability, sigma, and cost using advanced statistical tools and
modeling.
4. Verify/Validate the design by assessing performance, failure modes, reliability, and risks.

Define-Measure-Explore-Develop-Implement (DMEDI)
The DMEDI redesign methodology was developed to incorporate elements from an LSS approach. While DMEDI is similar to DMADV, Lean tools have been added to ensure efficiency and speed. The phases are:


1. Define the problem or new requirements.

2. Measure the process and gather data.

3. Explore the data to identify a cause-and-effect relationship between key variables.

4. Develop a new process so that the problem is eliminated and the measured results meet the
new requirements.

5. Implement the new process under a control plan.

Implementing DFSS
At first glance, DMADV appears to be very similar to DMAIC, but DMADV actually requires a higher
level of knowledge and effort to implement.

Companies do not usually include DFSS when they initially implement LSS; and when it is included, they normally train fewer people to use its tools and techniques. Here are a few things to consider when deciding whether or not an organization is ready to implement DFSS:

◆◆ Teams have already been trained on the DMAIC methodology and they have produced
successful projects (yielded earnings).

◆◆ A lot of the current processes have been documented/mapped.

◆◆ Many lower-priority improvements have been implemented.

◆◆ The Sigma levels across many processes in the organization are steadily rising.

◆◆ A structured project selection process is already in place, and there are projects accumulating.

◆◆ The organization is ready to work on difficult and complex projects that require significant
process redesign.

One last consideration when deciding whether or not an organization is ready to implement DFSS is the state of its current development process, which can be assessed by asking the following questions:

◆◆ Are the processes documented?

◆◆ Are the processes acceptable across the organization?

◆◆ Are standardized templates in place?

It is important to assess and understand the starting point of the process. If the current development
process is strong but needs the enhancements that the DFSS tools can provide, the organization is
likely ready for DFSS. If not, there is a lot of work that needs to be completed within the organization
before DFSS may be considered.

2.2 Lean Methodology


As with Six Sigma, Lean methodology was originally developed as a set of practices to improve
manufacturing processes and eliminate waste; however, its application has also extended to other types
of business processes. Waste and inefficiency are the enemies of Lean.


2.2.1 Toyota Production System


Many good American companies have respect for individuals, and practice kaizen
and other TPS tools. But what is important is having all elements together as a
system. It must be practiced every day in a very consistent manner - not in spurts -
in a concrete way on the shop floor.3

–Fujio Cho, Honorary Chairman, Toyota Motor Corporation

In the 1950s Eiji Toyoda and Taiichi Ohno visited Ford’s Rouge Factory in Dearborn, Michigan as part
of their tour of American automotive manufacturing facilities. Toyota was experiencing a financial
crisis and the company needed to change how they manufactured cars in order to remain competitive.
At the time of their visit, the Toyota Motor Company was producing 2,500 cars per year, while the
Rouge Factory, the largest in the world, was producing 8,000 cars per day.

After visiting the plant, Ohno realized that the mass production system would not work for the small
and diversified Japanese car market; and because of their financial situation, Toyota would never be
able to purchase the equipment or build the facilities needed to re-create Ford’s factory in Japan. Ohno
and Toyoda built a new method of automotive production instead. Their new methods were developed
and perfected over 40 years and became the Toyota Production System (TPS).

TPS has two major supporting sub-systems (primary pillars; see Figure 2.3): jidoka and JIT. Jidoka
roughly translates to “automation with a human touch,” meaning that when a problem occurs on
the line, the process stops immediately and the production of defective materials is prevented. Through the principles of jidoka, quality is built into the process. With JIT, each process produces only what is needed by the next process in a continuous flow.

3  Jeffrey K. Liker, The Toyota Way (New York: McGraw-Hill, 2004).


Figure 2.3 shows the Toyota Production System as a "house": the roof states the goals (Best Quality - Lowest Cost - Shortest Lead Time - Best Safety - High Morale); the two pillars are Just-in-Time (right part, right amount, right time) and Jidoka (in-station quality, making problems visible); People & Teamwork and Continuous Improvement through Waste Reduction occupy the center; and the foundation consists of Leveled Production (heijunka), Stable and Standardized Processes, Visual Management, and the Toyota Way Philosophy.

Figure 2.3 Toyota Production System: House Diagram with all the Elements
Based on graphic from: Jeffrey Liker, The Toyota Way: 14 Management Principles from the World's Greatest Manufacturer [New York: McGraw-Hill Education, 2004], 33. Used with permission.

The Toyota Production System has four basic aims:4

1. Provide world class quality and service to the customer.

2. Develop each employee’s potential based on mutual respect, trust, and cooperation.

3. Reduce cost through the elimination of waste and maximize profit.

4. Develop flexible production standards based on market demand.

2.2.2 Lean Thinking


As explained by Womack and Jones (2003), Lean Thinking starts with a conscious attempt to
precisely define value in terms of specific products with specific capabilities offered at specific
prices through a dialogue with specific customers. The way to do this is to ignore existing assets and
technologies and rethink the company on a product-line basis with strong, dedicated product teams.5
4  “Toyota Production System Basic Handbook,” www.artoflean.com, accessed July 15, 2015.
5  Reprinted with the permission of Free Press, a Division of Simon & Schuster, Inc., from Lean Thinking: Banish Waste and Create Wealth in Your Corporation by James P. Womack and Daniel T. Jones. Copyright © 1996, 2003 by James P. Womack and Daniel T. Jones. All rights reserved.


The customer defines value. Value is what the customer is willing to pay for something, which is a
specific product with specific capabilities at a specific price and at a specific time. This concept is so
important that it is the first step in the five-step process for implementing Lean principles:

1. Specify/Identify Value: Define value from the customer’s perspective and express value in
terms of a specific product or service. (See Chapter 3: Value of Lean Six Sigma).
2. Map the Value Stream: Map all of the value added and non-value added steps that bring a
product or service to the customer. (See Chapter 14: Value Stream Mapping).
3. Create/Establish Flow: Create the continuous flow of products, services, and information
from start to finish in the process. (See Chapter 15: Lean Tools for Optimizing Flow). It was
Taiichi Ohno who taught that one-piece flow, or continuous flow, is ideal. Products that move
continuously through the processing steps with minimal wait time in between and the shortest
distance traveled will be produced with the highest efficiency. Flow reduces throughput time,
which shortens the cost to cash cycle and can lead to quality improvements.
4. Establish/Implement Pull: Customers signal the need and demand pulls the product or
service through the value stream. (See Chapter 15: Lean Tools for Optimizing Flow). The
pull system makes JIT possible. Pull is a concept that dictates when material is moved and
who determines that it is moved. Taiichi Ohno found inspiration for this pull system while
studying American supermarkets, where items are not replenished until the product on the
shelf has been used.
5. Seek Perfection: All activities create value for the customer through the elimination of waste
and continuous improvement. The key is incremental improvement by constantly examining
the process for areas of waste and inefficiency. LSS is not a phase; rather, it is a journey to
perfection.

Womack and Jones explained that organizations can work towards becoming Lean organizations
when they clearly understand the principles and fully integrate Lean techniques. The organization
will then continue to improve its processes every day, eliminating more and more waste, and making
incremental improvements while striving toward perfection.

2.2.3 Muda
To become a Lean organization, one must first understand the enemy of a Lean organization: muda,
which is a Japanese word that means waste. Womack and Jones define waste as any human activity that
absorbs resources but creates no value, such as the following examples:

◆◆ Mistakes that need to be fixed


◆◆ Production of items no one wants
◆◆ Processing steps that are not actually needed
◆◆ Moving of employees and transporting of goods from one place to another for no reason
◆◆ Groups of people remaining idle because an upstream activity has not delivered on time
◆◆ Products and services that do not meet the needs of the customer


Lean thinking is the answer to finding and eliminating waste. It provides a way to specify value, line
up value-creating actions in the most effective and efficient sequence, conduct these activities without
interruption when the customer requests them, and perform this process more effectively each day.
It means doing more with less (less human effort, time, machinery, and space) while at the same time
moving closer to meeting the customer's needs.

Lean thinking also can help employees feel more satisfied with their work when the organization
provides immediate feedback on efforts to quickly convert waste into value. It provides a way to
recreate the way work is done rather than destroy or eliminate jobs in the name of efficiency.

Categories of Waste
The seven types of waste were originally identified by Taiichi Ohno during the development of TPS.
In 1996, Womack and Jones added an eighth waste: the under-utilization of employee creativity
and intellect6. Employees comprise the largest percentage of overhead costs, and it is essential that
organizations maximize the value of their employees. An example of this waste is an engineer
inputting data or making copies when the engineer’s highly compensated time could be devoted to
design activities. The original seven types of wastes are listed in Table 2.3 and possible causes for each
of the wastes are listed in Table 2.4.

Table 2.3: Seven Types of Waste

Overproduction
◆◆ Manufacturing: Sub-assemblies and components between feeder and main lines
◆◆ Service: Processing before next operation is ready; excess capacity (server or storage)

Inventory
◆◆ Manufacturing: Inventory stored in warehouses; buffer and safety stock
◆◆ Service: Multiple applications waiting for approval; under-utilized equipment

Extra Processing
◆◆ Manufacturing: Planned re-work; handwork (polishing, deburring)
◆◆ Service: Multiple ways of completing the same tasks; printing; more data than is required

Motion
◆◆ Manufacturing: Operators bending, turning, walking
◆◆ Service: Navigating through multiple screens to input/extract data; searching for data

Defects
◆◆ Manufacturing: Poor quality; equipment failures; missing on-time targets
◆◆ Service: Data inputs are incorrect; not meeting standards; missed deadlines

6  Reprinted with the permission of Free Press, a Division of Simon & Schuster, Inc., from Lean Thinking: Banish Waste and
Create Wealth in Your Corporation by James P. Womack and Daniel T. Jones. Copyright © 1996, 2003 by James P. Womack
and Daniel T. Jones. All rights reserved.

Transportation
◆◆ Manufacturing: Conveyance of materials
◆◆ Service: Delivering or shipping hard copies

Waiting
◆◆ Manufacturing: Operators waiting; machines waiting
◆◆ Service: Waiting for approval

Table 2.4 Possible Causes of Waste


Waste Type Possible Causes of Waste
Overproduction ◆◆ Just-in-case logic
◆◆ Misuse of automation
◆◆ Long process setup
◆◆ Unleveled scheduling
◆◆ Unbalanced work load
◆◆ Overengineering
◆◆ Redundant inspections
Inventory ◆◆ Protecting the company from inefficiencies and unexpected problems
◆◆ Product complexity
◆◆ Poor market forecast
◆◆ Unbalanced workload
◆◆ Unreliable shipments by suppliers
◆◆ Misunderstood communications
◆◆ Reward systems
Extra Processing ◆◆ Overengineered for the real customer requirement
◆◆ Excessively tight tolerancing
◆◆ Inflexible equipment
◆◆ Inappropriate processing or too many process steps
Motion ◆◆ Excess movements like bending, stretching, walking, lifting, or
reaching
◆◆ Poorly designed work areas

Defects ◆◆ Weak process control


◆◆ Poor quality
◆◆ Unbalanced inventory level
◆◆ Lack of planned maintenance
◆◆ Inadequate education/training/work instructions
◆◆ Product design
◆◆ Customer needs not being understood



Transportation ◆◆ Poor plant layout
◆◆ Poor understanding of the process flow for production
◆◆ Large batch sizes, storage areas, and long lead times

Waiting ◆◆ Unbalanced work load
◆◆ Unplanned maintenance
◆◆ Long process setup times
◆◆ Misuses of automation
◆◆ Upstream quality problems
◆◆ Unleveled scheduling
Non-Utilized Talent ◆◆ Old guard thinking, politics, business culture
◆◆ Poor hiring practices
◆◆ Low or no investment in training
◆◆ Low pay, high turnover strategy

2.2.4 Transitioning to Lean


Lean thinking is characterized by the following attributes:

◆◆ Focusing relentlessly on the customer and providing customer value

◆◆ Operating on the philosophy of continuous and incremental improvement

◆◆ Providing exactly what is needed at the right time based on customer demand

◆◆ Keeping things moving

◆◆ Respecting people

◆◆ Taking a long-term view

When the Lean methodology is implemented, the resulting process changes will often be a radical
departure from the way things are currently done in the organization, causing some level of
controversy in the organization. The type of transformation that Lean requires cannot be done without
strong management involvement.

When transforming any organization into a Lean organization, it is not uncommon to see results that
realize the following benefits:

◆◆ Labor productivity: 100% increase

◆◆ Throughput time: 90% reduction

◆◆ Inventories: 90% reduction

◆◆ Customer errors: 50% reduction


◆◆ In-house scrap: 50% reduction

◆◆ Injuries: 50% reduction

◆◆ Product development time: 50% reduction


2.3 Comparison of the Methodologies


Six Sigma analyzes problems statistically and looks for sources of variation, while Lean focuses
on value (eliminating waste) and flow (improving process speed). Six Sigma focuses on improving
effectiveness, while Lean focuses on improving efficiency.7

Table 2.5 Comparison of Improvement Programs


Theory
  Six Sigma: Reduce variation
  Lean Thinking: Remove waste

Application guidelines
  Six Sigma: Define, Measure, Analyze, Improve, Control
  Lean Thinking: Identify value, identify the value stream, flow, pull, perfection

Focus
  Six Sigma: Problem-focused
  Lean Thinking: Flow-focused

Assumptions
  Six Sigma: A problem exists; figures and numbers are valued; system output improves if variation in all processes is reduced
  Lean Thinking: Waste removal will improve business performance; many small improvements are better than systems analysis

Primary effect
  Six Sigma: Uniform process output
  Lean Thinking: Reduced flow time

Secondary effects
  Six Sigma: Less waste; fast throughput; less inventory; fluctuation as a performance measure for managers; improved quality
  Lean Thinking: Less variation; uniform output; less inventory; new accounting system; flow as a performance measure for managers; improved quality

2.4 Lean Six Sigma (LSS)


LSS, when properly implemented, spreads into every aspect of an organization. It combines two
complementary methodologies into one, resulting in improved quality (Six Sigma) and waste
reduction (Lean). Combining the two methods will give an organization a comprehensive tool set
to improve processes, resulting in increased revenue and collaboration and reduced costs. LSS is a
management philosophy, a culture.

LSS is not a methodology an organization should use just to save money. To be effective, LSS must
produce results that can be validated. Successful implementation requires dedication and patience.
Following is a list of requirements an organization must meet in order to be successful using LSS:

7  Reprinted with permission from Quality Progress ©2002 ASQ, http://asq.org. No further distribution allowed without permission.


◆◆ Organizations must have a reason for implementing LSS.

◆◆ Upper management must be invested in and committed to achieving success with LSS.

◆◆ Organizations must be willing to invest in suitable, qualified resources for the initiative,
whether those resources are employees, materials, or technologies.

◆◆ Stakeholders and team members must work together to implement LSS.

◆◆ Team members must be empowered to carry out initiatives without the need for constant
evaluation and approval.

◆◆ Organizations must commit sufficient time and resources to training employees in the LSS
methodology.

The benefits of using LSS include:

◆◆ Expanded knowledge of products and processes through characterization and optimization

◆◆ Decreased defects and cycle times through improved processes

◆◆ Improved customer satisfaction due to improved quality and service

◆◆ Improved profitability and growth of business

◆◆ Improved communication and teamwork through sharing of ideas, problems, successes, and
failures

◆◆ A well-developed common set of tools and techniques with a methodology that can be applied
by anyone in the organization

◆◆ The language a business lives by, “the way we work”


Chapter 3: Value of Lean Six Sigma (LSS)


Key Terms
cycle time
non-value-added activities
required non-value-added activities
value-added activities

Body of Knowledge
1. Recognize why organizations use LSS and how they apply its philosophy and goals.

2. Differentiate between value-added and non-value-added steps.

3. Explain the importance of creating value for the customer.

4. Identify the tools an organization can use to analyze the needs of its customers.

5. Explain the advantages of an organization using LSS.

6. List examples of how various industries can apply the LSS methodology.

The use of Lean and Six Sigma methods continues to grow because of the widespread publication of
the successes of multiple companies across a broad range of industry sectors. Organizations have
finally come to understand that process control, combined with continuous improvement, is the only
answer to real long-term success.

3.1 Creating and Delivering Value


We want to be not just better in quality, but a company 10,000 times better than
its competitors. We want to change the competitive landscape by being not just
better than other competitors, but by taking quality to a whole new level. We want
to make our quality so special, so valuable to our customers, so important to their
success that our products become the only real choice.1

–Jack Welch, Former Chairman and CEO, General Electric

“First time right” is becoming a basic requirement. For example, almost everyone has placed an order
at a restaurant and had to repeat the order several times in an attempt to have it recorded correctly,
only to find that something is wrong or missing upon receiving the meal. Often a restaurant (or any
business) gets one opportunity to “get it right,” and if it does not, the customer goes elsewhere. It is the
same with speed of service: it is not uncommon for customers to start calculating process cycle time as
soon as they begin to interact with a process. For example, customers might ask themselves how long
they have been waiting on a meal to arrive; how long they have been on hold waiting for a customer

1  Mark A. Nash, Sheila R. Poling, and Sophronia Ward, Using Lean for Faster Six Sigma Results: A Synchronized Approach
(New York: Productivity Press, 2006).


relations representative; how long they have been waiting in the checkout line; how long it takes to
complete a credit card application; etc.

3.1.1 Defining Value


Accurately specifying value is a critical first step in LSS. Value is the benefit the customer receives
weighed against the cost they must pay to acquire that benefit (Benefits - Cost = Customer
Value). In other words, customer value is the difference between what a customer gets from a product
and/or service and what they have to give in order to get it. The value the individual customer places
on a product or service becomes the customer value. Value should always be defined from the
customer’s perspective.
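
The relationship above (Benefits - Cost = Customer Value) can be expressed as a short calculation. The sketch below is purely illustrative; the function name and the dollar figures are assumptions, not values from the text.

# Minimal sketch of Benefits - Cost = Customer Value (illustrative figures only).
def customer_value(perceived_benefits: float, total_cost: float) -> float:
    """Customer value: what the customer gets minus what they give to get it."""
    return perceived_benefits - total_cost

# A customer who perceives $120 of benefit in a service priced at $90:
print(customer_value(perceived_benefits=120.0, total_cost=90.0))   # 30.0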

Most organizations have a difficult time defining value because they think they already know
what their customer needs. Others create products that are too expensive, or even irrelevant to the
customer. Not understanding value, as specified by the customer, can mean the beginning of the end
for an organization.

3.1.2 Value-Added vs. Non-Value-Added Activities


A value-added vs. non-value-added analysis is a method of looking at process steps from the
customer’s perspective. By performing this analysis, organizations can identify hidden costs, reduce
process lead times, and increase the overall capacity of resources.

To analyze the steps in any process, there are three basic questions to determine if a step is adding
value from the customer’s perspective:

1. Does the customer care?

•• Would the customer be willing to pay for that step to be done?

2. Was this step done right the first time?

•• Testing, reviewing, checking, revising, etc., are examples of rework because the work
was not done correctly the first time.

3. Was there a physical change?

•• Is the item, as it flows through the process, actually physically changed? Is it different in
some way?

By asking these questions for every step of the process, organizations can classify process steps as
value-added, non-value-added, or required non-value-added, as defined below (a brief illustrative sketch follows the definitions):

1. Value-added activities: Essential processes that are necessary to deliver the product or service
to the customer.

2. Required non-value-added: Business processes that may not be meaningful to the customer
but are an essential part of conducting business.

3. Non-value-added activities: Also known as waste, these processes add no value from the
customer’s perspective and serve no critical business function.
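
As a brief illustration of the three questions and three categories above, the following sketch classifies a few hypothetical process steps. The rule that a step must answer "yes" to all three questions to count as value-added, as well as the example step names, are assumptions for illustration only.

def classify_step(customer_cares: bool, right_first_time: bool,
                  physical_change: bool, business_essential: bool) -> str:
    """Classify a process step from the customer's perspective."""
    # Assumed convention: a step is value-added only if all three questions are "yes".
    if customer_cares and right_first_time and physical_change:
        return "value-added"
    if business_essential:
        return "required non-value-added"
    return "non-value-added (waste)"

# Hypothetical steps from a loan-application process:
steps = {
    # step name: (customer cares?, right first time?, physical change?, essential to business?)
    "Enter loan application":      (True,  True,  True,  True),
    "Re-key rejected data":        (False, False, False, False),
    "Regulatory compliance check": (False, True,  False, True),
}

for name, answers in steps.items():
    print(f"{name}: {classify_step(*answers)}")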


3.1.3 Tools to Specify Value


Specifying value is the essential first step in following LSS principles. Organizations must stay close
to their customers by continually communicating in order to understand what their customers truly
value. Although many organizations might believe otherwise, they do not set the selling price for a product;
rather, the market (i.e., the customer) establishes the price based on how the product is valued. To keep
in touch with and analyze the needs of their customers, companies can use the following tools (for
more information about the majority of these tools, refer to the Table of Contents):

1. Voice of the customer (VOC)

2. CTQ trees (detailing requirements)

3. Customer segmentation

4. Identifying and analyzing sources of customer data

5. Quality function deployment (QFD)

6. Supplier-Input-Process-Output-Customer (SIPOC)

7. Customer metrics tables

8. Kano analysis

9. Shadowing: becoming a customer for a day

10. Point of product/service use observation

11. Various market research techniques: interviews, surveys, and focus groups

Value, as defined by the customer, encompasses the entire process from the moment the item has
been ordered to the moment it is received by the customer, which means the entire process must
be examined from end-to-end to remove inefficiencies. Also, awareness of the differences between
external customers and internal customers is important; for example, internal customers may require
inputs to complete the next step in the process.

Organizations must continually challenge themselves to provide customers with a completely hassle-
free experience. They must step outside of their traditional boundaries and ask the following question
over and over: “Why can’t that be done?” The delivery of a product or service also must be scrutinized
from the customer’s perspective, even if other suppliers are on the front-end, back-end, or both ends of
the experience. The customer wants a seamless, fast delivery of their product or service.

3.2 Advantages of Lean Six Sigma (LSS)


Clearly, implementing Lean and Six Sigma together has value. Lean reduces waste while improving
process speed, and Six Sigma reduces defects and variation in a process. If an organization’s goals include
processes that are fast and efficient (without waste or defects), then LSS is the answer.


LSS can maximize shareholder value through cost reductions, productivity improvements, increased
throughput, defect reduction, market growth, and customer satisfaction and retention.

LSS helps an organization to focus on what is really important, i.e., customers and their critical-to-
quality factors. LSS process improvement keeps everyone focused on what will have the greatest impact
on the customer, ensuring that the organization makes improvements that benefit the customer.

LSS helps an organization see waste in its processes more readily than it ever did before. When using
LSS to analyze any process within an organization, as many as 95 percent of the steps in that process
may be found to be non-value-added. Customers are not willing to pay for non-value-added steps.
They are only willing to pay for steps that add value to the product produced by the process for them.
Given that the market, i.e., the customer, sets the selling price for a product, the cost of a product
produced by a process with multiple non-value-added steps may be too high to achieve an attractive
profit margin.

One of the most powerful things that LSS provides is that it allows everyone involved in the process
to understand how that process operates and how each process improvement project directly impacts
the bottom line. This shared understanding helps create an action-oriented culture.

LSS is far more than just a set of tools and techniques; it provides value because it helps create a culture
that possesses the following positive attributes:

1. Customer-centric: the voice of the customer (VOC) rules the organization.

2. Focused on financial results: every improvement project is evaluated and prioritized based on
its financial bottom-line impact to the organization.

3. Passionately involved: the CEO and managers at all levels of the organization are directly
involved and visibly committed to improvement.

4. Committed: adequate resources are dedicated to LSS efforts; employees regularly participate in
projects.
5. Disciplined: the specific roles (such as Black Belts and Master Black Belts) provide a
framework for rolling out, mentoring, and sustaining LSS efforts.

3.3 Application across Various Industries


While LSS is rooted in the manufacturing industry, it has been adopted as a business improvement
methodology by service industries as well, such as healthcare, utilities, financial services organizations,
and human resources. Table 3.1 offers examples of applying LSS in various industries.


Table 3.1 Examples of Applying Lean Six Sigma (LSS) across Various Industries

Industry Examples of Applying Lean Six Sigma


Automotive ◆◆ Optimizing inventory levels for all major parts
◆◆ Reducing supplier lead time

◆◆ Improving safety and reliability of finished vehicles
◆◆ Reducing manufacturing defects at each stage
◆◆ Improving first-time yield and efficiency of each step on the
manufacturing line
Continuous Process ◆◆ Improving operator productivity
Manufacturing Plants ◆◆ Improving overall yield per shift
◆◆ Reducing lost time accidents
◆◆ Reducing scrap or spilled materials
◆◆ Increasing the utilization of plant capacity
Engineering/Manufacturing ◆◆ Reducing or optimizing inventory levels
Parts ◆◆ Reducing manufacturing cycle time
◆◆ Reducing rejections due to design errors
◆◆ Reducing the number of environmental incidents
◆◆ Reducing cost of poor quality
Information Technology/ ◆◆ Reducing customer complaints
Software Development ◆◆ Improving the estimation process to reduce time and cost
overruns
◆◆ Creating a system to detect defects early in the process
◆◆ Improving the requirements-gathering process
◆◆ Improving the existing process by automating a standard
validation process
R&D/Product Design ◆◆ Improving quality of design reviews by reducing errors
◆◆ Reducing time to market
◆◆ Reducing defects in final product and saving on warranty costs
◆◆ Improving the overall performance and quality of product
◆◆ Improving quality of research process through multivariate
studies
Healthcare ◆◆ Reducing medication error percentage
◆◆ Reducing number of patient falls
◆◆ Reducing percentage of patient readmissions
◆◆ Reducing room turnover time
◆◆ Reducing error percentage for the billing process
◆◆ Reducing patient telephone wait time



Clinical Research ◆◆ Facilitating the successful adoption of research findings into
practice
◆◆ Tracking laboratory quality, establishing benchmarks, and
measuring changes in laboratory performance over time


◆◆ Reducing auto-verification errors in a laboratory information
system
◆◆ Assuring the repeatability and reproducibility of testing
among different laboratories

3.4 Real-Life Success Stories


Kodak’s LSS journey began in the 1990s with the introduction of the Kodak Operating System (KOS).
KOS was implemented at the Kodak GCG factory in Leeds, England in 2002 based on the principles of
LSS, but with mixed results. After mapping out their processes, they realized their biggest challenge at
the factory was cycle time: 23 days was the shortest lead-time, but 100 days was common.

They began their Four Day Factory program, using Lean and Six Sigma methodologies, and reduced
the cycle time for about 60 percent of their production volume to 10 to 12 days. The savings realized
from one project alone were approximately $2 million.2

LSS provides breakthrough bottom-line financial results for those organizations (large or small) that
invest in the cultural transformation. In a 2006 article for ASQ’s “Making the Case for Quality,” Janet
Jacobsen wrote:

When Cummins Inc. took a leap of faith…in labeling Six Sigma as the process
improvement methodology for the company, top leadership meant the entire
company, not just the engineering departments and the shop floors where their
renowned diesel engines are produced. …[I]t branches from the legal department
to manufacturing to human resources and even to the treasury department, where
innovative employees are saving the company millions of dollars by conducting Six
Sigma projects to reduce earnings volatility and to lower interest rate expenses.3

Jacobsen continues by noting that at the time her article was written in 2006, Cummins had enjoyed
its most profitable year in 2005 by earning $550 million on nearly $10 billion in sales thanks to its Six
Sigma initiatives. By 2006, Cummins had achieved the following:

◆◆ Completed more than 5,000 Six Sigma projects, resulting in nearly $1 billion in savings

◆◆ 3,700 employees had received Six Sigma training, including 500 Black Belts and 65 Master
Black Belts

Following are a few examples of the bottom-line financial results Six Sigma has produced in other
companies:

2  Matthew Moore, “The Kodak Operating System: Successfully Integrating Lean and Six Sigma,” www.onesixsigma.com
(June 2008).
3  Janet Jacobsen, “Cummins Capitalizes on Six Sigma to Minimize Long-Term Interest Rate Risk,” Making the Case for
Quality, www.asq.org (September 2006).


◆◆ Motorola saved $17 billion from 1986 to 2004 as a result of their Six Sigma efforts.4

◆◆ General Electric saved $750 million by the end of 1998.5

◆◆ Allied Signal Honeywell initiated Six Sigma efforts in 1992 and saved more than $600 million
a year by 1999.6

◆◆ Ford added approximately $52 million to their bottom line in 2000 and approximately $300
million in 2001 while seeing a waste elimination savings of more than $350 million in 2002.7

◆◆ American Standard doubled their production capacity on one assembly line; reduced energy
costs by more than $300,000 at one plant; cut faucet casting losses by $2.1 million; and saved
$35 million in 2001 through increased quality and efficiency.8

◆◆ From 1987 until 2007, Six Sigma saved Fortune 500 companies an estimated $427 billion - an
average of two percent of total revenue per year when Six Sigma was deployed company-wide.9

◆◆ With a corporate-wide commitment to the Six Sigma quality approach, GENCO realized
$22.7 million in cost savings for the first quarter of 2013 and over $104 million in cost savings
in 2012.10

There is no denying that these numbers are impressive. Some CEOs and senior leaders have been and
continue to be very vocal about the value Six Sigma has provided their organizations.

The financial returns from Six Sigma have exceeded expectations. In 1998, we
achieved three quarters of a billion dollars in Six Sigma-related savings over and
above our investment, and this year [1999] that number will go to a billion and a
half, with billions more to be captured from increased volume and market share as
customers increasingly ‘feel’ the benefits of GE Six Sigma in their own businesses ...
Six Sigma has forever changed GE. Everyone ... is a true believer in Six Sigma, the
way this company now works.11
Jack Welch, Former Chairman and CEO, General Electric
[In May 2002,] we are just beginning to measure the outcome from projects. I have
in mind … one unit that is delivering 160,000 euros per Green Belt project and
another business that is reporting at least $350,000 savings per project ... After one
year of active deployment, we now have 80 to 90 percent of the company moving
forward with Six Sigma.12

François Zinger, Former Vice President of Quality & Six Sigma, ALSTOM

4  “About Motorola University,” www.muelearn.com (Archived from the original on December 22, 2005. Accessed on August
25, 2014).
5  Ibid.
6  Ibid.
7  Ibid.
8  Ibid.
9  “Six Sigma Saves a Fortune,” Research Report, www.iSixSigma.com (2006)
10  “GENCO – Information at a Glance,” www.genco.com (Accessed August 26, 2014).
11  “A Company To Be Proud Of,” Address to stockholders at the 1999 General Electric Annual Meeting (Cleveland, Ohio,
April 21, 1999).
12  Thomas Bertels, Ed. Rath & Strong’s Six Sigma Leadership Handbook (New Jersey: John Wiley & Sons, 2003).


Chapter 4: Lean Six Sigma (LSS) and Organizational Goals

Key Terms
balanced scorecard
organizational driver
performance metrics
process
system
y = f(x)

Body of Knowledge
1. Identify the linkages and supports that need to be established between a selected LSS project and
the organization’s goals.

2. Describe how process inputs, outputs, and feedback at all levels can influence the organization as a
whole.

3. Recognize the key business drivers for all types of organizations.

4. Understand how the key metrics and scorecards are developed and how they impact the entire
organization.

5. Identify the four perspectives of a balanced scorecard.

Successful LSS deployment and sustainability are directly linked to the degree to which LSS goals
are aligned with the organization’s long-term strategic plan and business goals. This alignment is essential in
identifying opportunities and stumbling blocks, strengthening the organization’s performance, and
selecting and managing LSS projects effectively.

If an organization fails to do this, it could lead to a situation where the LSS implementation team may
be able to achieve its individual targets, but the main goals and objectives of the organization may be
neglected. Most importantly, without this link to business performance, top management may lose
interest and support will fade away.

What does this mean to LSS practitioners? They must build the process of project selection around the
most immediate organizational objectives. Successful LSS projects must demonstrate that they have
contributed to the organization’s overall objectives, as well as the immediate cost savings. Furthermore,
it is up to the LSS practitioners to communicate this fact to all levels of their organization.

4.1 Organizational Strategic Goals and Lean Six Sigma (LSS) Projects
Strategic planning is the continuous process of making present entrepreneurial (risk-
taking) decisions systematically and with the greatest knowledge of their futurity;
organizing systematically the efforts needed to carry out these decisions; and


measuring the results of these decisions against the expectations through organized,
systematic feedback.1
Peter Drucker, Management Expert and Author

Integrating LSS into an organization’s long-term strategic plan is essential in identifying opportunities
and stumbling blocks, strengthening the organization’s performance, and selecting and managing LSS
projects effectively.

LSS projects must be aligned2 with and tied to an organization’s strategy for improvement as well as its
strategic goals. When this linkage is missing, the organization will be unable to create a portfolio that
helps meet the strategic goals and objectives of the organization or to select projects that are essential
to meeting the needs of the company and its customers.

Strategic goals are the objectives that help achieve long-term organizational goals and translate
the organization’s vision into specific projects. Strategic goals and objectives are broken down into
operational-level performance and process improvement metrics. Using Six Sigma terminology,
the “Big Y’s” (Y) are broken down into “smaller y’s” (y). The “smaller y’s” are then addressed at the
operational level.

4.1.1 Processes and Systems Thinking


A process is a series of actions, steps, or functions/operations that bring about a result; and for the
purposes of an organization, processes create products and/or services. Being able to understand and
improve processes is crucial to every LSS project undertaken. The process for any product or service
involves the following: 1) the inputs (labor, knowledge, skills, technology, materials, etc.) an
organization needs to produce the output (the final product or service); and 2) the process of
transforming and adding value to the inputs and delivering the outputs so that they meet the needs of
current and future customers (see Figure 4.1).

[Figure 4.1: Input (capital, labor, raw materials, knowledge, technology, etc.) feeds the Process (transforming and adding value to inputs), which produces the Output (end product, customer and employee satisfaction); a Feedback loop (information, new ideas, expertise, customer feedback) returns to the inputs.]

Figure 4.1 Sample Input, Process, Output, and Feedback Loop

1  Peter F. Drucker, Management (New York: Harper & Row, 1974), 125.
2  When LSS projects are aligned to an organization’s strategic goals, it means the requirements of the strategic goals and
objectives have been successfully translated into project solutions.


An organization can be viewed as a system with inputs, throughputs (the amount of material or items
passing through a system or process), and outputs, all of which are connected by feedback loops. The
feedback loop illustrates the idea that systems, like processes, can be influenced by inputs, as well as by
outputs, e.g., products and services. The system ensures each process has the required resources when
needed and collects and analyzes data in an effort to continually improve the outputs.
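
A minimal sketch of this input-process-output-feedback idea is shown below. The process rule (quality rises with training hours) and the adjustment rule (low customer feedback adds training the next cycle) are invented for illustration only; they are not part of the text.

from dataclasses import dataclass

@dataclass
class ProcessRun:
    inputs: dict           # e.g., labor hours, training hours
    output_quality: float  # a single output measure, 0.0 to 1.0

def run_process(inputs: dict) -> ProcessRun:
    # Hypothetical transformation: quality rises with training hours invested.
    quality = min(1.0, 0.6 + 0.01 * inputs.get("training_hours", 0))
    return ProcessRun(inputs, quality)

def feedback_adjust(inputs: dict, customer_feedback: float) -> dict:
    # Feedback loop: low customer satisfaction triggers more training next cycle.
    adjusted = dict(inputs)
    if customer_feedback < 0.8:
        adjusted["training_hours"] = adjusted.get("training_hours", 0) + 5
    return adjusted

inputs = {"labor_hours": 40, "training_hours": 10}
for cycle in range(3):
    run = run_process(inputs)
    print(f"cycle {cycle}: quality = {run.output_quality:.2f}")
    inputs = feedback_adjust(inputs, customer_feedback=run.output_quality)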

Management of a system... requires knowledge of the interrelationships between all
the components within the system and of the people that work in it.3

W. Edwards Deming, Engineer, Statistician, Professor, and Consultant

A system is a group of interacting, interrelated, or interdependent elements (parts) that forms a
complex whole, each of which can affect the behaviors or properties of the other parts. In other words,
performance of the system is determined by how the various parts interrelate. For example, how an
organization’s sales, manufacturing, procurement, and marketing units relate to one another is what
actually drives the organization’s performance. Individually, the parts are unable to make a significant,
lasting impact. At the organization level, all the processes and resources (people, technologies,
materials, etc.) need to work together in order to create a product and/or service.

If one input, process, or output is changed, it can influence or have an impact on the rest of the
system. For example, a major cell phone manufacturer is considering adding a built-in projector
to an upcoming model; but before they can add the feature, they must first talk to their marketing
department to see what their competitors are doing and to assess the needs and wants of their
customers by gathering and analyzing customer data. Marketing will also provide the advertising costs
associated with launching a new feature. The finance department will need to provide information
on obtaining resources, and accounting will provide information on the cost of training, labor, and
overhead to implement the new feature. Purchasing will need to identify suppliers and procure the
materials, supplies, services, and equipment after product design has engineered the new design and
feature. Human resources will need to recruit and train any new personnel, and operations will need
to update production scheduling and procedures, establish quality standards, and update the user
instructions for the phone.

LSS uses systems thinking by considering all of the process interactions, not just the parts. For
example, the Toyota Production System (TPS) is a systems thinking model. The strength of systems
thinking is that it focuses on the whole as well as the parts of the system for problem solving and
solutions, rather than decomposing the whole into smaller parts and studying them in isolation.
Within the system, it is important to understand linkages - main processes are linked internally, and
supporting processes are linked to the main processes.

4.1.2 Avoiding Project Failure


LSS projects commonly fail due to lack of executive/upper management and/or process owner support
or involvement. They also fail when there is a general lack of leadership, resources, or a rewards and
recognition program. Use the following questions to help increase the chances for success on an LSS
project:

3  W. Edwards Deming, The New Economics: For Industry, Government, Education (Cambridge, MA: Massachusetts Institute
of Technology, Center for Advanced Educational Services, 1994), 50.


◆◆ Is there a direct link between the focus of the project and real business impact?

◆◆ Is there executive support in the form of a project sponsor and funding sources?

◆◆ Is the process owner engaged and involved?


◆◆ Have LSS improvement efforts spread across the entire organization?

◆◆ Have sufficient resources been dedicated and are the “best and brightest” being selected to lead
LSS initiatives in your organization?

◆◆ Is the project supported by the right data and metrics and aligned with the organization’s
strategic objectives?

◆◆ Is there a process for celebrating success and are rewards linked to the key metrics for the
project or overall process improvement?

◆◆ Is there a detailed plan in place (who, what, when) to provide clear and consistent
communication at all levels of the organization?

◆◆ Are there sufficient software programs or IT solutions in place for project management,
financial linkage, and monitoring results?

4.1.3 Transfer Function of y=f(x)


A transfer function is a mathematical expression of the relationship between the inputs and outputs of
a process. The transfer function, y = f(x), or y = f(x1, x2, x3, …xN), illustrates the causal relationship
among the key business measures (designated as Y), the process outputs directly affecting the big
Y’s (designated as y), and the factors directly affecting the process outputs (designated as x). The y is
usually the primary metric or the measure of process performance for the improvement. The observed
output is a function of the inputs, or in simple terms, “y is a function of x.”

Example
y = a person’s body weight

Things (the x’s) that influence or control the movement of an individual’s body weight in either a
positive or a negative direction:

x1 = number of calories consumed per day

x2 = minutes of exercise performed each day

x3 = grams of sugar consumed per day

x4 = grams of fat consumed per day

Identifying the exact levels to maintain for x1, x2, x3, and x4, and being disciplined
enough to keep those levels there day after day, will maintain the desired outcome (y
= body weight).
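
The body-weight example can be written as a small function. The linear form and the coefficients below are purely illustrative assumptions; in a real LSS project the relationship between y and the x's would be established from data during the Measure and Analyze phases.

def estimated_body_weight(calories_per_day: float, exercise_min_per_day: float,
                          sugar_g_per_day: float, fat_g_per_day: float) -> float:
    """Return y (estimated body weight, in pounds) as a function of the four x's."""
    baseline = 150.0                              # assumed starting weight
    return (baseline
            + 0.02 * (calories_per_day - 2000)    # more calories push y up
            - 0.10 * exercise_min_per_day         # more exercise pushes y down
            + 0.05 * sugar_g_per_day              # more sugar pushes y up
            + 0.04 * fat_g_per_day)               # more fat pushes y up

# Adjusting the critical x's and observing the effect on y:
print(estimated_body_weight(2200, 0, 80, 70))    # sedentary, high sugar and fat
print(estimated_body_weight(1900, 45, 30, 40))   # improved inputs lower y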

The goal of any LSS project is to identify the critical x’s, the ones that have the most influence on the
output, y, and adjust them so that y improves. This process helps determine all potential x’s that might
influence y and then determine through measurements and analysis which inputs do influence the
output, y. When critical x’s are addressed and corrected, greater improvement to the overall process is


possible, which is why the y and/or the x is the focus of each DMAIC phase, as follows:

◆◆ Define: Understand the process that produces y.

◆◆ Measure: Understand how to measure and develop a baseline for y.

◆◆ Analyze: Perform root cause analysis to find the critical x’s.

◆◆ Improve: Modify the critical x’s so that y is improved.

◆◆ Control: Control the critical x’s and monitor y to sustain the gains.

4.2 Organizational Drivers


A business driver, or organizational driver, is a resource, process, or condition that is vital for the continued
success and growth of a business. A company must identify its business drivers and attempt to
maximize any that are under its control. There are always outside business drivers that a company
cannot influence, such as economic conditions or trade relations with other nations.4

For most companies, the key business drivers are related to profit, market share, customer satisfaction,
efficiency, and product differentiation. These drivers can often change with business circumstances
and time due to growing or evolving business, changing markets, and changing technology. Business
drivers will also vary based on the industry, e.g., business drivers in healthcare will be different from
those in the software industry (see Table 4.1).

Table 4.1 Key Business Drivers


Key Business Driver Examples
Profit ◆◆ Stockholder value
◆◆ Return on investment
◆◆ Sales dollars
◆◆ Profit margin on sales
Market Share ◆◆ Market-share growth
◆◆ Market surveys to customers
◆◆ Analysis of returns
◆◆ New product development
Customer Satisfaction ◆◆ Customer retention
◆◆ Courtesy ratings
◆◆ Customer relations improvements
◆◆ Product and service improvements
Efficiency ◆◆ Defect reductions
◆◆ Productivity improvements
◆◆ Cycle-time reductions
◆◆ Existing cycle times

4  “Definition: Business Driver,” www.techopedia.com (2010-2015). Accessed August 24, 2015.



Product Differentiation ◆◆ Activities of competitors
◆◆ Contrasting qualities with competition
◆◆ Brand loyalty
◆◆ Advertising campaigns

4.3 Organizational Metrics


When you can measure what you are speaking about, and express it in numbers,
you know something about it; but when you cannot express it in numbers, your
knowledge is of a meager and unsatisfactory kind.5

Sir William Thomson, 1st Baron Kelvin (Lord Kelvin)

Being able to measure key metrics is vital to an organization. Measuring performance will let an
organization know how well they are doing in the following areas: if they are meeting goals, if
their customers are satisfied, if processes are in statistical control, and if and where improvements
are necessary. Collecting data and measuring process and product performance also enable an
organization to implement a standardized control system. In order to initiate and sustain change, the
performance metrics at every level of the organization must meet the following criteria6:

1. A metric must have a scale, such as the frequency or rate of occurrence, the units produced
correctly over time, and the number of defects or dollars. To be effective, the measurement
scale must be meaningful, valid, and reliable.

2. The metric must have a standard or goal.

3. Compensation and other forms of recognition must be related to the performance goal for
the metric. While many companies have a scale of measure and a goal, they do not reward or
recognize those who contribute to achieving this goal.
4. A metric should be reviewed on a regular basis throughout the organization. An organization
should distribute performance data to all executives, managers, and employees who can
impact the metric.

5. A metric should have meaning and impact across various functions and levels of the
organization.

6. A metric must be highly correlated with one or more of the following criteria for performance
metrics at the business, operations, and/or process level of the organization:

•• Aligned: Performance metrics must always align with corporate strategies and objectives.

5  Sir William Thomson, Nature Series: Popular Lectures and Addresses, Volume 1, Constitution of Matter (London: Macmillan
and Co., 1889), 73.
6  Mikel J. Harry, et. al., Practitioner’s Guide to Statistics and Lean Six Sigma for Process Improvements (Hoboken, New Jersey:
John Wiley & Sons, Inc., 2010), 32-34.


•• Owned: Performance metrics must be owned by those who are accountable for their
outcome.

•• Predictive: Performance metrics must be a leading indicator of business value.

•• Actionable: Performance metrics must reflect timely, actionable data so users can
meaningfully and effectively intervene.

•• Minimal/Few in Number: Performance metrics must focus users on high-value tasks and
not scatter their attention.

•• Simple/Easy to Understand: Performance metrics must be straightforward, not based on
complex indices.

•• Correlated/Balanced and Linked: Performance metrics must be vertically correlated and
reinforce each other and not compete and confuse.

•• Transformative: Performance metrics must trigger a chain reaction of positive changes in
the organization.

•• Standardized: Performance metrics must be based on standard definitions, rules, and
calculations.

•• Contextual/Context Driven: Performance metrics must be contextually dependent so as to
ensure their relevance.

•• Reinforced: Performance metrics must be tied to the reward and recognition system.

•• Validated/Relevant: Performance metrics must be periodically reviewed to ensure
relevance and validity.

4.3.1 Developing Performance Metrics


The following section is taken from the U.S. Department of Energy’s “How To Measure Performance:
A Handbook of Techniques and Tools” and is reprinted with the permission of the Performance-Based
Management Special Interest Group.7

Performance metrics should be constructed to encourage performance improvement, effectiveness,
efficiency, and appropriate levels of internal controls. They should incorporate best practices related to
the performance being measured and cost/risk/benefit analysis, where appropriate.

The first step in developing performance metrics is to involve the people who are responsible for the
work to be measured because they are the most knowledgeable about the work. Once these people are
identified and involved, it is necessary to do the following:

◆◆ Identify critical work processes and customer requirements.

◆◆ Identify critical results desired and align them to customer requirements.

◆◆ Develop measurements for the critical work processes or critical results.

7  Performance-Based Management Special Interest Group (PBM SIG), “How to Measure Performance: A Handbook of
Techniques and Tools,” www.orau.gov (October 1995).


◆◆ Establish performance goals, standards, or benchmarks.

The SMART test is frequently used to determine the quality of a particular metric:

◆◆ Specific: clear and focused to avoid misinterpretation; include assumptions and definitions

◆◆ Measurable: can be quantified and compared to other data; allow for meaningful statistical
analysis

◆◆ Attainable: achievable, reasonable, and credible

◆◆ Realistic: fits into the organization’s constraints and is cost-effective

◆◆ Time-Bound: can be completed within a given time frame

Most performance metrics can be grouped into one of the following six general categories. However,
organizations may develop their own categories as appropriate depending on the organization’s
mission:

1. Effectiveness: A process characteristic indicating the degree to which the process output (work
product) conforms to the requirements.

2. Efficiency: A process characteristic indicating the degree to which the process produces the
required output at minimum resource costs.

3. Quality: The degree to which a product or service meets customer requirements and
expectations.

4. Timeliness: Measures whether a unit of work was done correctly and on time. Criteria must
be established to define what constitutes timeliness for a given unit of work. The criterion is
usually based on customer requirements.

5. Productivity: The value added by the process divided by the value of the labor and capital
consumed.

6. Safety: Measures the overall health of the organization and the working environment of its
employees.

4.3.2 Balanced Scorecard


One of the tools that organizations use to manage metrics is the balanced scorecard. Balanced
scorecards are based on strategy and provide a summary of the performance metrics that can help an
organization maintain a balanced perspective to ensure that the metrics within an organization are
not just financial but also include items such as customer satisfaction, employee satisfaction, or even
creativity.

A balanced scorecard allows an organization to view its performance from four perspectives (see
Figure 4.2).


[Figure 4.2: the balanced scorecard template places the organization's Vision and Strategy at the center, surrounded by four perspectives (Financial, Customer, Internal Business Process, and Learning and Growth), each with its own Objectives, Measures, Targets, and Initiatives.]

Figure 4.2 Balanced Scorecard Template

1. Financial: how an organization is viewed by its shareholders. Examples are as follows:

a. Inventory levels

b. Cost per unit

c. Activity-based costing

d. Cost of poor quality

e. Overall project savings

2. Internal business process: internal processes that are critical to shareholder and customer
goals. Examples are as follows:

a. Defects, inspection data, DPMO, and sigma level

b. Rolled throughput yield

c. Supplier quality

d. Cycle time

e. Volume shipped

f. Rework hours

3. Learning and growth: determining if the organization can continue to improve and create
value and where innovation is required. Examples are as follows:

a. LSS tool utilization


b. Quality of training

c. Meeting effectiveness

d. Lessons learned

e. Number of projects completed

f. Total savings to date

4. Customers: understanding how the organization is viewed by the customer (quality,
timeliness, performance and service, and value). Examples are as follows:

a. On-time delivery

b. Final product quality

c. Safety communications

d. Technical support

Using a balanced scorecard, organizations can develop metrics based on the four perspectives listed
above, and then collect and analyze the data to measure performance. Metrics are used to track
progress, reward the organization, and continually drive additional improvements. Whatever metrics
an organization decides to track, it is important to continually communicate progress and keep the
metrics visible.
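
As one possible way to organize such metrics, the sketch below models a balanced scorecard as a simple data structure with the four perspectives described above. The metric names, targets, actual values, and the "higher is better" rule are illustrative assumptions only.

from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    target: float
    actual: float

    def on_track(self) -> bool:
        # Simplified rule for this sketch: treat every metric as "higher is better".
        return self.actual >= self.target

@dataclass
class BalancedScorecard:
    perspectives: dict = field(default_factory=dict)   # perspective -> list of Metric

    def add(self, perspective: str, metric: Metric) -> None:
        self.perspectives.setdefault(perspective, []).append(metric)

    def report(self) -> None:
        for perspective, metrics in self.perspectives.items():
            for m in metrics:
                status = "on track" if m.on_track() else "needs attention"
                print(f"{perspective}: {m.name} (target {m.target}, actual {m.actual}) - {status}")

card = BalancedScorecard()
card.add("Financial", Metric("Overall project savings ($K)", 500, 620))
card.add("Internal Business Process", Metric("Rolled throughput yield (%)", 95, 91))
card.add("Learning and Growth", Metric("Projects completed", 12, 14))
card.add("Customer", Metric("On-time delivery (%)", 98, 97))
card.report()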


Part II: Project Management Basics

The following chapters introduce the basic elements of project management that are necessary
for the successful completion of LSS projects. Successful projects follow a structured project
management methodology. The LSS methodology, as it is defined here, incorporates the basic elements
of the Project Management Institute, A Guide to the Project Management Body of Knowledge, (PMBOK®
Guide) – Fifth Edition, Project Management Institute, Inc., 2013.1

Basic Project Management Concepts:

◆◆ Project management is the application of knowledge, skills, tools, and techniques for project
activities to meet project requirements.2

◆◆ Project management is the discipline of planning, organizing, and managing resources to
bring about the successful completion of specific project goals and objectives.

◆◆ A project is any temporary, organized effort that creates a unique product, service, process, or
plan.3

◆◆ A project has a definite beginning and end.4

Within the framework of DMAIC, a project typically begins as an idea during a strategic planning
session or a standard process review meeting. The details of these projects can be refined in several
different ways. A management or leadership team might actually develop a project charter (see
Chapter 13), or a team is formed and given the direction to develop a project charter based on certain
details provided by the champions of the projects. The project champions are typically looking for the
following in order to deem a project successful:

◆◆ The customer is satisfied with the final deliverable.

◆◆ The project has met all of its stated goals and objectives.

◆◆ The deliverable is given to the customer on time.

◆◆ The project has stayed within the budget and staffing limits.

1  PMI and PMBOK are registered marks of the Project Management Institute, Inc.
2  Definitions are taken from the Project Management Institute, A Guide to the Project Management Body of Knowledge
(PMBOK® Guide) – Fifth Edition, Project Management Institute, Inc., 2013
3  Ibid.
4  Ibid.


Chapter 5: Seven Quality Control (7QC) Tools

Key Terms
cause and effect diagram
control chart
flow charts
histogram
Pareto chart
scatter diagram
seven quality control (7QC) tools

Body of Knowledge
1. Define, select, and apply the quality control tools.

As a project manager, you should be aware of all the tools and techniques used to control quality,
which tools should be used and when, and by whom and how. Project managers can fail to control
quality by choosing the incorrect tools or techniques for the current project’s needs.

Reference books and articles on quality frequently mention the seven basic quality control (7QC)
tools, which were first emphasized by Kaoru Ishikawa in the 1960s. Ishikawa claimed that 95 percent of
the problems a company faces could be resolved using the 7QC tools. These tools are a set of
graphical techniques identified as being helpful in troubleshooting issues related to quality. These
seven are called "basic" because they can be used easily by anyone to solve the vast majority of quality-
related issues.

The 7QC tools can assist quality management decision-making by referring to the factual data
displayed by each tool. The tools are fundamental to improving the quality of the product or service
and are used to analyze the production process, identify major problems, control fluctuations of
product quality, and provide solutions to avoid future defects.

While statistical literacy in accumulating and analyzing data is necessary to use control charts
effectively, the rest are simple to use. The 7QC tools were designed to organize collected data so that it
is easy to analyze and understand. They are not mandatory for every project, but rather should be used
based on the needs of each individual project.

The 7QC tools include the check sheet, Pareto chart, histogram, scatter diagram, flow chart (some lists
replace the flow chart with stratification), control chart (also known as a process behavior chart), and
the cause and effect diagram as shown in Table 5.1. Since most of the tools are more applicable to the
Measure, Analyze, and Control Phases of DMAIC, these tools will be presented in depth in later chapters.


Table 5.1 Seven Quality Control Tools

Check Sheet. Used to:
• Easily collect data
• Make decisions and take actions that are based on the data collected

Pareto Chart. Used to:
• Define problems and establish their priorities
• Illustrate the problems detected during data collection
• Illustrate the frequency of the problems occurring in the process

Histogram. Used to:
• Show a bar chart of accumulated data
• Provide the easiest way to evaluate the distribution of data

Scatter Diagram. Used to:
• Graphically represent the data points collected
• Show a pattern of correlation between two variables

Flow Chart. Used to:
• Show a process step-by-step
• Graphically understand the process
• Identify an unnecessary procedure

Control Chart. Used to:
• Provide control limits
• Show whether or not the process is in control
• Graphically depict variation over time

Cause and Effect Diagram. Used to:
• Identify many possible causes for an effect or problem
• Sort ideas into useful categories

5.1 Check Sheets


The check sheet is an organized way to quickly and easily collect the counts of defects, locations,
products, occurrences, or events. The form can be simple or formal and is adapted to the data needs of
a given project.

Creating a Check Sheet:

Step 1. Determine the categories of defects or locations being tracked, and any further
stratifying information such as shift or month.
Step 2. Create a table with categories in the first column and stratifying information across the
top.


Step 3. Distribute the forms to data collectors with clear instructions, and pick up the sheets to
tabulate when complete.
The check sheet below (see Figure 5.1) was created to capture how many defects occur in a wire cutting
operation by shift.



Rework Categories for Wire Cut Operation

Category          1st Shift    2nd Shift
Wrong Length
Wrong Terminal
Marking Error
Wire Damaged
Wrong Color

Figure 5.1 Check Sheet
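For teams that record check sheet data electronically, the tally in Figure 5.1 can be reproduced in a few lines of code. The sketch below is a minimal illustration in Python, using hypothetical defect observations recorded as (category, shift) pairs; it is not part of any software package referenced in this chapter.

```python
from collections import defaultdict

# Hypothetical observations for the wire cut operation: (rework category, shift).
observations = [
    ("Wrong Length", "1st"), ("Wrong Length", "1st"), ("Marking Error", "2nd"),
    ("Wire Damaged", "1st"), ("Wrong Terminal", "2nd"), ("Wrong Length", "2nd"),
]

# Tally counts by (category, shift), mirroring the cells of the check sheet.
check_sheet = defaultdict(int)
for category, shift in observations:
    check_sheet[(category, shift)] += 1

for (category, shift), count in sorted(check_sheet.items()):
    print(f"{category:15s} {shift} shift: {count}")
```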

5.2 Pareto Charts


A Pareto chart is a tool to visualize the frequency of defects or occurrences. Pareto charts help
prioritize action by emphasizing higher counts in a bar chart format. It is a very good tool to analyze
frequency data collected with check sheets.

The chart plots the rework counts in descending order (Marking Error 5, Wrong Length 3, Wrong
Terminal 3, Wire Damaged 1, Wrong Color 1) as bars, with a cumulative-percentage line on the
secondary axis rising from 38.5% to 100%.

Figure 5.2 Pareto Chart


Chart produced using QI Macros™ software. KnowWare International, Inc.
www.qimacros.com


Creating a Pareto Chart:

Step 1. Collect count or frequency of occurrence data for a set of variables or categories.
Step 2. Tabulate this data in descending order of frequency or use a program like QI Macros or
MiniTab® to automatically do this.

Step 3. Create a bar chart of this descending order data or use a statistical analysis package as
mentioned above.
Step 4. Pareto charts can be “nested,” e.g., a Pareto chart may be used to drill down further on
the highest impact variable.
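As a minimal sketch of Steps 2 and 3 in Python, the snippet below sorts the rework counts charted in Figure 5.2 in descending order and computes the cumulative percentage line; a charting package such as QI Macros or MiniTab would then draw the bars and the secondary axis.

```python
# Rework counts by category, as charted in Figure 5.2.
counts = {"Marking Error": 5, "Wrong Length": 3, "Wrong Terminal": 3,
          "Wire Damaged": 1, "Wrong Color": 1}

# Sort in descending order of frequency (Step 2).
ordered = sorted(counts.items(), key=lambda item: item[1], reverse=True)

# Cumulative percentage values plotted on the secondary axis of a Pareto chart.
total = sum(counts.values())
running = 0
for category, count in ordered:
    running += count
    print(f"{category:15s} {count:3d}   cumulative {running / total:6.1%}")
```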

5.3 Histograms
A histogram is a bar chart representing the distribution of a data set. It is a quick and useful way to
evaluate whether the data are centered or skewed.

Creating a Histogram:

Step 1. Collect and tabulate data by the counts of occurrences in a given range; for example,
1,000 data points measuring the time it takes an emergency room to respond to a code
blue situation may have very few identical data points, but there may be 20-50 columns
of data if it is grouped into 10-minute increments (any data in the 10-minute window
counts in the frequency of that column).
Step 2. Plot the counts grouped by range on a bar chart, showing the target and specification
limits or use a statistical tool to automatically generate the histogram.
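A minimal sketch of the binning described in Step 1, written in Python with hypothetical response times; a statistical package would normally draw the bar chart itself.

```python
from collections import Counter

# Hypothetical emergency response times in minutes (a real study might have ~1,000 points).
times = [4.2, 7.8, 12.5, 15.1, 18.9, 22.3, 9.7, 11.0, 31.4, 27.6, 14.2, 8.8]

# Step 1: group each observation into a 10-minute bin (0-9, 10-19, 20-29, ...).
bins = Counter(int(t // 10) * 10 for t in times)

# Step 2: plot counts per bin; a simple text bar chart stands in for the graphic here.
for start in sorted(bins):
    print(f"{start:3d}-{start + 9:3d} min | {'#' * bins[start]}")
```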
The example histogram (see Figure 5.3) was created using QI Macros, which can also calculate
many other statistical values on the data when the histogram is created. This histogram represents a
well-centered, nearly normal distribution that sits within the specification limits.

The chart shows wait time in hours for 24 observations (LSL 0.3, USL 1.5, mean 0.856, Cp 1.49,
Cpk 1.38), with the bars falling well inside the specification limits.

Figure 5.3 Histogram


Chart produced using QI Macros™ software. KnowWare International, Inc.
www.qimacros.com


5.4 Scatter Diagrams


A scatter diagram, also known as a scatter plot, plots two variables for a given data set as a collection
of points. From this plot a visual display of correlation between the two variables can be seen.



Creating a Scatter Diagram:

Step 1. Collect two variables for an event, such as temperature and pressure to extrude plastic,
or time and amount of starch required for pudding to set.
Step 2. Plot a data point for each pair of data, one on the x-axis and one on the y-axis. If
one variable is being controlled experimentally, it is the independent variable and is
typically plotted on the x-axis.
The example below (see Figure 5.4) plots imaginary hot chocolate sales at a high school football game
vs. the temperature. As expected, a correlation exists between low temperatures and high sales; this is
a negative correlation, since one variable goes down as the other goes up. Note that correlation does not
prove causation, even though it is clear that cold weather drives hot chocolate sales in this particular
example.
The chart ("Hot Chocolate Sales vs. Temperature") plots temperature on the x-axis (roughly 0 to 80
degrees) against sales on the y-axis, with the points trending downward as temperature rises.

Figure 5.4 Scatter Diagram


Chart produced using QI Macros™ software. KnowWare International, Inc.
www.qimacros.com
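To quantify the pattern seen in a scatter diagram, the correlation coefficient can be computed directly from its definition. The Python sketch below uses hypothetical temperature and sales pairs in the spirit of Figure 5.4; a value near -1 confirms a strong negative correlation.

```python
# Hypothetical game-day data: temperature (F) and cups of hot chocolate sold.
temperature = [20, 28, 35, 41, 48, 55, 62, 70]
sales       = [45, 40, 36, 30, 27, 22, 18, 12]

# Pearson correlation coefficient r computed from its definition.
n = len(temperature)
mean_x = sum(temperature) / n
mean_y = sum(sales) / n
cov   = sum((x - mean_x) * (y - mean_y) for x, y in zip(temperature, sales))
var_x = sum((x - mean_x) ** 2 for x in temperature)
var_y = sum((y - mean_y) ** 2 for y in sales)
r = cov / (var_x ** 0.5 * var_y ** 0.5)

print(f"r = {r:.3f}")  # close to -1: sales fall as temperature rises
```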

5.5 Flow Charts


Flow charts are typically used in LSS projects to document process steps. They can be used at an
overview level or at very specific levels of detail. Different shapes represent start/stop points (ovals or
rounded shapes), process steps (rectangles), decision points (diamonds), etc. These three shapes can
document a process quite well. Additionally, other sources, such as the SME or the ASQ, can provide
more details about the many other shapes and options available.

Creating a Flow Chart:

Step 1. Create a list of actions with a start point, a list of process steps or decision points, and an
end point.


Step 2. Enter the actions into appropriate shapes as described above and lay them out in
sequential order.
Step 3. Connect the shapes in order with arrows.
The flow chart example below (see Figure 5.5) shows an order to delivery process for a baking
company. Note the following:

◆◆ The steps are further broken into “swim lanes” to show what department owns the item.

◆◆ Most boxes start with verbs as they represent actions.

The chart ("Order to Delivery Overview") uses swim lanes for the Customer, Customer Service Rep.,
Fulfillment Team, Delivery Driver, and Receiving Customer, tracing the process from the customer
order and order logging, through fulfillment and count verification against the order log, to loading
the truck, delivering in LIFO order, and the driver and customer verifying and receiving the order.

Figure 5.5 Flow Chart


Chart produced using QI Macros™ software. KnowWare International, Inc.
www.qimacros.com

5.6 Control Charts


A control chart is a graphical representation of data variation over time, which
makes it possible to observe the normal and non-normal behavior of a process.
In the Control Phase, control charts help the team monitor the process behavior
for change through the mean, range, and standard deviation statistics.

There are many types of control charts that are applied based on the type of data collected. More
information about selecting the correct control chart and control limits, and how to apply them, can
be found in Part VI: Principles of Statistical Process Control.

Creating a Control Chart:

Step 1. Create a horizontal scale representing time or run order.


Step 2. Create a vertical axis representing the scale of measure for the characteristic.
Step 3. Plot each observation as a dot, using its order and measurement.
Step 4. Connect the dots by drawing a line between each point, in sequential order, to
emphasize the change that has occurred.
Or, you can use a statistical software package to create the chart. The control chart shown (see Figure
5.6) is an XmR type of chart that plots the variable (X) over time in the top half and the moving range
(R) between consecutive data points in the bottom half. This example represents a bushing diameter
with a nominal value of 148 mm and a process that is in control.


The X chart plots 25 individual diameter readings around a center line of 148.000 mm, with
UCL 148.078 and LCL 147.922; the R chart plots the moving ranges around a center line of 0.029
with UCL 0.095.

Figure 5.6 Control Chart


Chart produced using QI Macros™ software. KnowWare International, Inc.
www.qimacros.com
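The XmR control limits can be computed from the individual values and their moving ranges. The Python sketch below uses hypothetical bushing diameters and the standard XmR constants (2.66 for the X chart, 3.267 for the mR chart); applied to the data behind Figure 5.6, the same arithmetic is consistent with limits of about 148.078 and 147.922 around a center line of 148.000 and an average moving range of about 0.029.

```python
# Hypothetical bushing diameters (mm) measured in time order.
diameters = [148.01, 147.98, 148.03, 147.99, 148.02, 147.97, 148.00, 148.04,
             147.96, 148.01, 148.00, 147.99]

# Moving ranges between consecutive individual values.
moving_ranges = [abs(b - a) for a, b in zip(diameters, diameters[1:])]
x_bar = sum(diameters) / len(diameters)
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Standard XmR limits: X chart uses 2.66 * mR-bar; the mR chart UCL uses 3.267 * mR-bar.
x_ucl, x_lcl = x_bar + 2.66 * mr_bar, x_bar - 2.66 * mr_bar
mr_ucl = 3.267 * mr_bar  # the moving range chart has no lower control limit

print(f"X chart:  CL={x_bar:.3f}  UCL={x_ucl:.3f}  LCL={x_lcl:.3f}")
print(f"mR chart: CL={mr_bar:.3f}  UCL={mr_ucl:.3f}")
```

Part VI: Principles of Statistical Process Control covers how to select the correct chart and limits for other data types.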

5.7 Cause and Effect Diagrams


A cause and effect diagram is also known as a fishbone diagram (due to its shape) or Ishikawa
Diagram (after the man who popularized its use). This diagram is used to drill down to the root cause
for a given problem or defect. Generic categories, or "bones," include Materials, Methods,
Measurement, People, Machines, and Environment (see Figure 5.7). Process steps may also be used as
"bones" in lieu of general categories.

The example fishbone addresses the problem statement "Unable to control inventory for additives,"
with potential causes grouped under Materials, Methods, Measurement, People, Machines, and
Environment (for example, over-consumption of additive materials, different procedures between
Production and Logistics, material shared among operators without tracking, and uncontrolled
summer humidity).

Figure 5.7 Cause and Effect Diagram


Creating a Cause and Effect Diagram:

Step 1. Draw a “fishbone” template.


Step 2. Write the problem statement (effect) at the head of the "fish"—be clear and specific.

Step 3. Write the major categories of potential causes of the problem on the "bones" of the
body.
Step 4. Brainstorm potential causes for possible errors in each category and add them to the
fishbone diagram.
Step 5. Ask “why?” for each potential cause and keep asking/answering “why?” until reaching
the potential root cause.
When drawing a cause and effect diagram, enough space should be left between major categories so
more details can be added later. The purpose of this tool is to keep the project team focused on the
causes of the problem, not the symptoms.
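When a team captures its fishbone electronically, a simple nested structure is enough to record categories, causes, and the answers to repeated "why?" questions. The Python sketch below is illustrative only, echoing a few of the causes from Figure 5.7.

```python
# Fishbone captured as: category -> cause -> list of deeper "why?" answers.
fishbone = {
    "Materials": {
        "Additive materials are over-consumed":
            ["Production pulls full bags", "Bags are not divisible"],
    },
    "Methods": {
        "Consumption differs between Production and Logistics":
            ["Different procedures between departments", "Manual vs. ERP system"],
    },
    "Environment": {
        "Extra bag-bottom scrap in summer":
            ["Moisture clumps additives", "Summer humidity not controlled"],
    },
}

problem = "Unable to control inventory for additives"
print(problem)
for category, causes in fishbone.items():
    print(f"  {category}")
    for cause, whys in causes.items():
        print(f"    - {cause}")
        for why in whys:
            print(f"        why? {why}")
```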


Chapter 6: Seven Management and Planning Tools



Key Terms
activity network diagram
affinity diagram
interrelationship digraph
matrix diagram
prioritization matrices
process decision program chart (PDPC)
seven management and planning (MP) tools
tree diagram

Body of Knowledge
1. Define, select, and apply the management and planning tools.

Since the 7QC tools focus solely on product and process improvement, practitioners saw a need
to develop tools that promote innovation, better communication of information, and successful
planning of major projects. As a result, the Union of Japanese Scientists and Engineers (JUSE)
developed the seven management and planning (MP) tools. The Japanese effort was conducted by a
committee of the Society for QC Technique Development; between 1972 and 1979, this committee
refined and tested these individual tools and the overall cycle.

The seven management and planning (MP) tools include the affinity diagram, tree diagram,
interrelationship digraph, matrix diagram, prioritization matrices, process decision program chart,
and activity network diagram, as shown in Table 6.1 (see next page).

The MP tools allow for more effective planning and decision-making when working with project
teams by ensuring that everyone is actively involved in solving the problem. Organizations also use
them to implement those decisions with greater success. The purpose of these tools is to convert
apparent chaos into a workable action plan that can be implemented. Individually, they organize
thinking and decision-making, but collectively they provide a way for teams to respond to problems
effectively by strengthening creativity and originality.

The MP tools are far more powerful when they are combined into a cycle, or a logical progression
from one tool to the next, in which the output of one tool becomes the input for the next. For example,
information from either an affinity diagram (creative thinking) or an interrelationship digraph (logical
thinking) becomes the input for a tree diagram, which then progressively flows into a prioritization
matrix, a matrix diagram, and finally either into a process decision program chart or an activity
network diagram.


Table 6.1 Seven Management and Planning Tools

Affinity Diagram. Used to:
• Organize a large set of ideas
• Help a team after a brainstorming session
• Analyze customer requirements

Tree Diagram. Used to:
• Break a broad goal into increasing levels of detail
• Create a detailed action plan
• Graphically communicate information

Interrelationship Digraph. Used to:
• Look for drivers and outcomes
• Identify, analyze, and classify cause-and-effect relationships
• Identify causes that are key drivers

Matrix Diagram. Used to:
• Identify and rate the strength of relationships between two or more sets of information

Prioritization Matrices. Used to:
• Narrow down options through a systematic approach to prioritize
• Compare choices by selecting, weighing, and applying criteria

Process Decision Program Chart (PDPC). Used to:
• Improve implementation through contingency planning

Activity Network Diagram. Used to:
• Schedule sequential and simultaneous tasks
• Find the most efficient path and realistic schedule for the completion of a project

6.1 Affinity Diagrams


An affinity diagram is used to organize facts, opinions, or issues into groups to help diagnose a
complex situation or develop a theme. It can be used in any phase of the DMAIC methodology to
organize ideas from a brainstorming session, but an affinity diagram is most often used during the
planning stages of a problem. It can be used to organize the voice of the customer and the voice of the
business data gathered from customer statements, interviews, surveys, or focus groups. An affinity
diagram helps to eliminate duplicate items and flush out potential missing items.

The example affinity diagram groups brainstormed items for a publication project under headers
such as Work Environment, Hardware and Software, Timeline, Professional Development, and
Writer/Designer Requirements.
Figure 6.1 Affinity Diagram

Creating an Affinity Diagram:

Step 1. Define the focus of the affinity diagram, for example:


•• Analyzing a problem;

•• Organizing ideas for a solution to a problem, product, or service; and/or

•• Organizing collected data, i.e., voice of the customer, voice of the business.

Step 2. Write the ideas/data on cards or sticky notes (only one idea per card and stay as close
to the original language as possible).
Step 3. Place the sticky notes or cards on a wall or conference table (in random order).
Step 4. Team members should silently move the sticky notes around to form groups. The
silence is critical in order to not have the individuals’ thought patterns influenced.
Step 5. Arrange the groups into similar thought patterns or categories.
Step 6. Develop a main category or idea for each group, which then becomes the header card.
Step 7. Once all of the cards have been placed under a header card, draw borders around the
groups.

6.2 Tree Diagrams


A tree diagram is an ordered structure, similar to an organization chart or family tree, and is used to
outline the activities and details for completing an objective. The tree diagram can be used to:

◆◆ Develop the elements for a new product;

◆◆ Show the relationships of a production process;


◆◆ Create new ideas in problem solving; and/or

◆◆ Outline the steps to implement a project.

Creating a Tree Diagram:



Step 1. Determine the overall objective of the tree diagram. Write this objective on a sticky
note and place it to the far left on a wall (first level).
Step 2. Determine the means that would achieve the objective. Write the means on sticky notes
and line them up just to the right of the overall objective (second level).
Step 3. Determine all the details for each of the means necessary to solve the overall objective.
Write these details on sticky notes and line them up just to the right of each of the
means (third level).
Step 4. Continue this process until adequate detail is reached.
Step 5. After finishing the diagram, review it and confirm that each step is expected to lead to
successfully meeting the objective. If it appears there is a clear line of sight for meeting
the objective, the tree is complete.
Note: If a team cannot meet in person, this process can be conducted online using a MS Excel® template.
Figure 6.2 is an example of a tree diagram using this software. The example breaks a balanced scorecard
"Financial Growth" vision into short-term objectives (increase customers, order size, frequency of sale,
customer satisfaction, and referrals), each tied to a measure and a target (% increase).
Figure 6.2 Tree Diagram

6.3 Interrelationship Digraphs


An interrelationship digraph allows a team to systematically identify, analyze, and classify cause-
and-effect relationships that exist among all the critical issues so that the key drivers or outcomes
can become the heart of an effective solution.5 A digraph is best used for more complex problems for
which the exact cause-and-effect relationship is difficult to determine. An interrelationship digraph
allows a team to uncover all of the problems or issues, even the most controversial, as it encourages
team members to think in multiple directions rather than unilaterally.
5  Michael Brassard and Diane Ritter, The Memory Jogger 2: Tools for Continuous Improvement and Effective Planning, Second
Edition [Salem, NH: GOAL/QPC, 2010], 101. Used with permission. www.goalqpc.com


Creating an Interrelationship Digraph:6

Step 1. Agree on the issue/problem statement and write it on a card or a sticky note.
•• If using an original statement that did not originate from a previous tool or
discussion, create a complete sentence that is clearly understood and meets the
approval of team members.

•• If using input from other tools (such as an Affinity Diagram), make sure that the
goal under discussion is still the same and clearly understood.

Step 2. Assemble the right team.


•• The interrelationship digraph requires more in-depth knowledge of the subject
under discussion than is needed for the affinity diagram, which is important if the
final cause-and-effect patterns are to be credible.

•• The ideal team size is generally four to six people. However, this number can be
increased as long as the issues are still visible and the meeting is well facilitated to
encourage participation and maintain focus.

Step 3. Lay out all of the ideas/issues that have either been established from other tools or were
previously brainstormed.
•• Arrange 5-25 cards or sticky notes in a large circular pattern, leaving as much space
as possible for drawing arrows. Use large, bold printing and include a large number
or letter, e.g., 1 or A-Z, on each idea card or note for quick reference later in the
process.

Step 4. Look for cause/influence relationships between all of the ideas, and draw relationship
arrows.
•• Choose any of the ideas as a starting point. If all of the ideas are numbered or
lettered, work through them in sequence.

•• An outgoing arrow from an idea indicates that it is the stronger cause or influence.
Ask the following questions:

•• Is there a cause/influence relationship between these two items?

•• Which direction of cause/influence is the strongest?

•• Note: Draw only one-way relationship arrows in the direction of the stronger cause
or influence. Make a decision on the stronger direction. Do not draw two-headed
arrows.

Step 5. Review and revise the first round of the interrelationship digraph.
•• Get additional input from people who are not on the team to confirm or modify the
team’s work.

Step 6. Record and mark the number of outgoing and incoming arrows and select key items
for further planning.

6  Michael Brassard and Diane Ritter, The Memory Jogger 2: Tools for Continuous Improvement and Effective Planning, Second
Edition [Salem, NH: GOAL/QPC, 2010], 101-105. Used with permission. www.goalqpc.com


•• Find the item(s) with the most outgoing arrows and the item(s) with the most
incoming arrows.

•• Outgoing arrows. A high number of outgoing arrows indicates an item that is a


root cause or driver. This is generally the issue that teams tackle first.

•• Incoming arrows. A high number of incoming arrows indicates an item that is


a key outcome. This can become a focus for planning, either as a meaningful
measure of overall success or as a redefinition of the original issue under
discussion.

•• Note: Use common sense when you select the most critical issues on which to focus.
Issues with very similar numbers of arrows must be reviewed carefully; but in the end,
it is a judgment call, not science.

Step 7. Draw the final interrelationship digraph (see Figure 6.3).


•• Identify visually both the key drivers (most outgoing arrows) and the key outcomes
(most incoming arrows). Typical methods are double boxes or bold boxes.

The example digraph asks, "What are the issues related to reducing plastic bottles in waste
receptacles?" The key driver is "Lack of awareness of impact on environment" (five outgoing arrows)
and the key outcome is "Lack of examples by adults" (four incoming arrows); other issues include
unnecessary bottling of water, lack of respect for the environment, inadequate consequences, and not
enough recycling receptacles.

Figure 6.3 Interrelationship Digraph


Based on graphic from: Michael Brassard and Diane Ritter,
The Memory Jogger 2: Tools for Continuous Improvement and Effective Planning, Second Edition
[Salem, NH: GOAL/QPC, 2010], 105. Used with permission. www.goalqpc.com
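Once the relationship arrows are drawn, counting outgoing and incoming arrows (Step 6) is straightforward to automate. The Python sketch below uses a hypothetical set of directed relationships built from the issue labels in Figure 6.3; the actual arrows are, of course, decided by the team.

```python
from collections import Counter

# Hypothetical cause -> effect arrows among the issues from Figure 6.3.
edges = [
    ("Lack of awareness of impact", "Unnecessary bottling of water"),
    ("Lack of awareness of impact", "Lack of respect for environment"),
    ("Lack of awareness of impact", "Lack of examples by adults"),
    ("Lack of awareness of impact", "Inadequate consequences"),
    ("Lack of awareness of impact", "Not enough recycling receptacles"),
    ("Inadequate consequences", "Lack of examples by adults"),
    ("Unnecessary bottling of water", "Lack of examples by adults"),
    ("Not enough recycling receptacles", "Lack of respect for environment"),
]

outgoing = Counter(cause for cause, _ in edges)    # arrows leaving each issue
incoming = Counter(effect for _, effect in edges)  # arrows arriving at each issue

issues = set(outgoing) | set(incoming)
driver  = max(issues, key=lambda i: outgoing[i])   # most outgoing arrows -> key driver
outcome = max(issues, key=lambda i: incoming[i])   # most incoming arrows -> key outcome
print(f"Key driver:  {driver} (out={outgoing[driver]}, in={incoming[driver]})")
print(f"Key outcome: {outcome} (out={outgoing[outcome]}, in={incoming[outcome]})")
```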


6.4 Matrix Diagrams


Matrix diagrams are used to identify, analyze, and rate the strength of the relationship between two or
more sets of information in order to show the relationship between objectives and methods, tasks and
people, results and causes, and customer specifications and requirements. The strength of the
relationship is determined at the intersection of each row and column.

The example matrix in Figure 6.4 relates marketing tasks (hire a marketing manager with DB
experience, develop a product presentation to demo the software, create screen shots comparing the
product to other products, and develop ads) to the departments involved (Quality Assurance,
Marketing, Sales, Legal, and Human Resources).

Figure 6.4 Matrix Diagram

Depending on its application, a matrix diagram can help a team do the following:

◆◆ Identify patterns in the workloads of the tasks assigned to people in order to distribute work
efficiently and evenly.
◆◆ Reach a consensus on a decision.
◆◆ Develop a disciplined approach to systematically incorporate a large number of factors into
decision-making.

There are several basic types of matrices:

◆◆ L-type: elements on both the y-axis and x-axis.
◆◆ T-type: two sets of elements on the y-axis, split by a set of elements on the x-axis.
◆◆ X-type: two sets of elements on both the y-axis and x-axis.
◆◆ Y-type: two L-type matrices joined at the y-axis to produce a matrix design in three planes.
◆◆ C-type: 3D matrices joined at the y-axis but with only one set of relationships indicated in 3D
space (use of a computer software package is recommended for this type).


Variations on utilizing the above matrices can be made to obtain additional types; for example, the
results of a tree diagram, or even two tree diagrams, can be meshed into a single matrix. Outlined next
are the steps to create the most common matrix diagram, the L-shaped matrix.

Creating a Matrix Diagram:



Step 1. Determine the basic problem to be solved. Create a complete sentence that is clearly
understood. Example: What are the most critical factors driving low scores on
employee opinion surveys?
Step 2. Brainstorm the list of issues that best represent the problem to be solved. List these
items in the left-hand column of the matrix.
Step 3. Brainstorm the list of reasons why these issues are occurring. List these items as the
row across the top of the matrix.
Step 4. Begin evaluating the relationship in each cell by comparing the item in each row to
every item in each column. Use the following symbols to represent the strength of the
relationships.
Double circle is a strong relationship = 9
Open circle is a medium-strength relationship = 3
Triangle is a weak relationship = 1
Empty cell = no relationship

6.5 Prioritization Matrices


Prioritization matrices are used to rank order and ultimately to select the best of several options
using a systematic approach. This tool helps a team to do the following:

◆◆ Quickly understand any basic disagreements and see where more data are needed to
completely understand the relationship being reviewed.

◆◆ Focus on the top priorities for selection or implementation. For example, a matrix comparing
customer requirements against the features and functions to be developed into a software system
would help determine which features and functions should be in the first release of the
software vs. the second or third.

◆◆ Eliminate any hidden agendas; all the information must be on the matrix for evaluation.

Creating a Prioritization Matrix:

Step 1. Identify the overall objective.


Step 2. Create and agree upon the criteria with which to judge how well each item on the list
meets the objective. To create the appropriate criteria:
•• Identify the key components for meeting the objective.

•• Identify any constraints for meeting the objective.

•• Create measurable criteria.

Step 3. Identify items to prioritize.


Step 4. Identify the criteria you will use to determine how well each item meets the objective.


Step 5. Score each item against each criterion using the method shown in Figure 6.5. (How
likely is it to improve the objective?)
Step 6. Select the criteria that will be used to prioritize each item.
Step 7. Define the scoring method, including the voting system, limited set of values, negative
scores for negative effects and positive scores for positive effects, percentage scale, etc.
Step 8. Score each item using the identified criteria.
Step 9. Add the weighted scores to determine the final score for each item.

In the worked example in Figure 6.5, the criteria provide a common method of judging the items to be
prioritized, and the criteria themselves are prioritized by weighting values (e.g., a weight of 4 means
"twice as important as" a weight of 2). Each item is scored against each criterion, the weighted score is
the raw score multiplied by the criterion weight (e.g., 4 x 2 = 8), and the weighted scores are added to
give the final score (e.g., 6 + 8 = 14).

Items to prioritize              Low cost of implementation (Weight = 2)   High increase in sales (Weight = 4)   Final score
Add larger bandages              4 (weighted 8)                            1 (weighted 4)                        12
Remove outdated antiseptic       3 (weighted 6)                            2 (weighted 8)                        14
Use container with tighter lid   5 (weighted 10)                           4 (weighted 16)                       26

Figure 6.5 Calculating Weighted and Final Scores on a Prioritization Matrix
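The arithmetic in Figure 6.5 is simple enough to automate for longer option lists. The Python sketch below reproduces the weighted and final scores for the three example items; the criteria, weights, and raw scores are taken from the figure.

```python
# Criterion weights and raw scores from Figure 6.5.
criteria_weights = {"Low cost of implementation": 2, "High increase in sales": 4}

items = {
    "Add larger bandages":            {"Low cost of implementation": 4, "High increase in sales": 1},
    "Remove outdated antiseptic":     {"Low cost of implementation": 3, "High increase in sales": 2},
    "Use container with tighter lid": {"Low cost of implementation": 5, "High increase in sales": 4},
}

# Final score = sum over criteria of (raw score x criterion weight).
for item, scores in items.items():
    final = sum(scores[criterion] * weight for criterion, weight in criteria_weights.items())
    print(f"{item}: {final}")

# Prints 12, 14, and 26; "Use container with tighter lid" is the top priority.
```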

6.6 Process Decision Program Charts (PDPC)


The process decision program chart (PDPC) uses a tree diagram as its base and adds the steps to
assess risk and to perform contingency planning to counter any possible problems or obstacles that
might keep the team from achieving its goal.

Creating a Process Decision Program Chart:

Step 1. Identify the objective for using a PDPC, e.g., to identify the risks in a specific area of a
plan or to identify countermeasures that will reduce risk and cost.
Step 2. Identify the highest-risk areas of the plan that may cause the plan to not meet its
objectives.
Step 3. Determine which risk areas should be included in the scope of the PDPC effort.
Step 4. For each risk included in the scope, identify possible countermeasures for eliminating
the risk or reducing the impact of the risk.


The example PDPC expands the goal "Go on family vacation" into branches such as plan vacation,
pack, get time off, prepare the house, take care of pets, do car maintenance, and budget money, and
then adds risk branches (e.g., clothing and shoes don't fit, pet sitter not available, not enough money)
with countermeasures (go shopping, use a kennel, get credit).

Figure 6.6 Process Decision Program Chart


Based on graphic from: Kerry Donelan, CQM-OE, CSSBB, Meegan Dowling, CQM-OE, CSSBB, and Owen
Ramsay, BSChE, MSEE, CQE, CQM-OE, CSSBB, Quality Management & Planning (7M or 7MP) Tools,
Seminars/2009_04_307M_Presentation.pdf. Used with permission. www.asqlongisland.org .

6.7 Activity Network Diagrams


The activity network diagram (arrow diagram) is a tool for scheduling sequential and simultaneous
tasks. This tool helps a team identify the best path for completing a project. It provides a graphical
representation of the total time necessary to complete a project as well as the individual tasks that must
be completed. The diagram shows which tasks must be completed sequentially and which in parallel.

This tool offers the following benefits:

◆◆ Allows each of the team members to realistically explain each of the tasks for which they are
responsible in the plan.

◆◆ Helps team members to see how critical the on-time delivery of tasks is to the successful
completion of the project.


◆◆ Visually expands the team’s thinking to allow for more creative solutions to arranging tasks to
optimize the outcome.

Creating an Activity Network Diagram:



Step 1. Brainstorm all the tasks required to complete a project. Record each task on a sticky
note.
Step 2. Identify the task that must be completed first. Place it to the far left of the other tasks
(see Figure 6.7).

Figure 6.7 Activity Network Diagram - Identify First Task


Based on graphic from: Michael Brassard and Diane Ritter,
The Memory Jogger 2: Tools for Continuous Improvement and Effective Planning, Second Edition
[Salem, NH: GOAL/QPC, 2010], 4. Used with permission. www.goalqpc.com

Step 3. Review the remaining tasks to determine if any can be completed at the same time as
task #1.
Step 4. Place tasks that can be done simultaneously with task #1 directly above or below task
#1 (see Figure 6.8).
Step 5. Identify the task that must be completed next. Place it to the right of the first column of
the tasks.



Figure 6.8 Activity Network Diagram - Identify Simultaneous Tasks


Based on graphic from: Michael Brassard and Diane Ritter,
The Memory Jogger 2: Tools for Continuous Improvement and Effective Planning, Second Edition
[Salem, NH: GOAL/QPC, 2010], 4. Used with permission. www.goalqpc.com

Step 6. Repeat this cycle until all of the tasks have been placed in sequential order.
Step 7. Identify and agree on the completion time required for each task and record it on each
of the appropriate sticky notes (see Figure 6.9).
Step 8. Beginning with task #1, number each task.

Then, complete the calculations for determining the project's critical path. The critical path is the path
on which a delay of any of the tasks leads to a delay in the project's completion (Figure 6.9).

◆◆ ES = Earliest start (the largest EF of any previous connected task)

◆◆ EF = Earliest finish (ES + the time to complete this task)

◆◆ LS = Latest start (LF - the time to complete this task)

◆◆ LF = Latest finish (the smallest LS of any connected following task)

◆◆ When ES = LS and EF = LF, this task is on the critical path


The example shows three connected tasks, each annotated with ES, EF, LS, and LF: task 1, "First,
determine target audience for new topic" (T = 14 days, ES/LS = 0, EF/LF = 14); task 2, "Review
feedback from similar programs" (T = 7 days, ES = 14, EF = 21, LS = 28, LF = 35); and task 3,
"Assess competitor's offerings" (T = 21 days, ES/LS = 14, EF/LF = 35). Tasks 1 and 3 are on the
critical path, while task 2 has float.

Figure 6.9 Activity Network Diagram - Advanced Stage


Based on graphic from: Michael Brassard and Diane Ritter,
The Memory Jogger 2: Tools for Continuous Improvement and Effective Planning, Second Edition
[Salem, NH: GOAL/QPC, 2010], 5. Used with permission. www.goalqpc.com.


Chapter 7: Project Tracking



Key Terms
critical activity
deliverables schedule
Gantt chart
milestone schedule
PERT chart
project plan

Body of Knowledge
1. Systematically plan and complete project work activities.

2. Use Gantt charts and program evaluation and review technique (PERT) charts to plan projects and
monitor their progress.

For a project to be successful, it is very important to constantly monitor its progress in delivering


the expected product and/or service on schedule and within budget. Based on the knowledge
gained by properly planning, executing, and monitoring a project, the team can work towards
completion while ensuring that it is on track.

7.1 Planning and Completing Project Work


The key to a successful project is proper planning, and LSS projects are no exception. The participation
of the entire project team is required as well as extensive preparation and knowledge of the work
required to successfully complete each project. Following is a list of items needed to complete and/or
consider when creating a LSS project plan:

1. Fully document the entire scope of work needed to complete the project.

2. Create a work breakdown structure (WBS) for planning and communication.

3. Let the project and the team determine the tools that will be used on a specific project.

4. Create and include plans to handle project communication, quality control, resource staffing,
reporting, etc.

5. Start communications when the project launches, and communicate often.

6. Allow for changes to the project plan as the project progresses.

7. Keep an eye on opportunities and threats.

A project plan is much more than a timeline since it provides the team with a roadmap to guide them
through the project. The project plan for a LSS project should be used like any other project tool, the only
difference being that it is used throughout the project instead of just during one or more of the phases.


7.2 Project Planning and Monitoring Tools


Using the project plan as a roadmap also allows the team to monitor the project's status once work has
begun. Monitoring a project is important to its success because it provides accountability, a way to
compare actual progress against what was planned, and a way to document lessons learned. It also
helps to keep the project team engaged in the project.

To help plan and monitor a project's work activities, a project-tracking system can be used to
visualize the project activities and their progress for team members. Depending on the system
chosen, it could also be used as a reporting tool to communicate the status of the project and its results
to stakeholders. Some of the ways projects can be monitored include:

1. Spreadsheets: A spreadsheet can include timelines with acceptable delays, projected budgets
and resource hours with expected increases, and contact information for project resources in
case of any emergency.

2. Software: When working on a large or complex project, a software program will handle
tracking and reports better than a spreadsheet.

7.2.1 Gantt Charts


A Gantt chart, developed by American engineer and social scientist Henry Gantt in 1917, is a horizontal
bar chart used for scheduling, which displays what has to be done (activities) and when (timeline/
schedule). On the left side of a Gantt chart (see Figure 7.1) there is a column listing all of the project
activities for a specific project, and across the top is the time scale. Each of the activities is represented by
a bar, the length and position of which reflect the duration, start date, and finish dates. The graphical
representation of a schedule helps the team plan, organize, and track specific project activities.
The example chart lists the DMAIC phases and key activities (project charter, process mapping,
hypothesis testing, FMEA) down the left side against a monthly time scale running from Sep '15
through Apr '16, with a horizontal bar for each activity.
Figure 7.1 Gantt Chart

Project activities and schedule information can be entered into a project management software
program, which provides the team with a way to update and monitor the schedule as the project
progresses. A software program will also track and display which activities are behind schedule or
will require more time to complete than originally estimated.
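When no project management software is available, even a rough text-based Gantt view can make the schedule visible. The Python sketch below uses hypothetical DMAIC phase dates in the spirit of Figure 7.1 and prints one bar (in weeks) per phase.

```python
from datetime import date

# Hypothetical DMAIC phase schedule: (phase, start date, finish date).
tasks = [
    ("Define",  date(2015, 9, 1),  date(2015, 10, 15)),
    ("Measure", date(2015, 10, 1), date(2015, 12, 15)),
    ("Analyze", date(2015, 12, 1), date(2016, 2, 1)),
    ("Improve", date(2016, 1, 15), date(2016, 3, 15)),
    ("Control", date(2016, 3, 1),  date(2016, 4, 30)),
]

origin = min(start for _, start, _ in tasks)
for name, start, end in tasks:
    offset = (start - origin).days // 7       # weeks of lead time before the bar starts
    length = max(1, (end - start).days // 7)  # bar length in weeks
    print(f"{name:8s}|{' ' * offset}{'=' * length}")
```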

7.2.2 Milestone Schedule


A milestone schedule (see Figure 7.2) allows the team to take the goal of the project (to create a final


deliverable), divide it into major sub-goals, and assign deadlines to each sub-goal. A milestone is a task
of zero duration that represents an event or time when one or more project activities are completed. A
milestone can be:

◆◆ An important task/activity or event



◆◆ A project phase tollgate

◆◆ The completion of one or more planned project deliverables

◆◆ A specific amount of time

◆◆ Any significant situation unique to the project

The example milestone schedule for the project "Transition of Traditional Pharmacology Course to
Online Delivery" spans one year, with milestones such as project start (9/5), project plan complete
(10/1), online delivery course model finalized (10/25), textbook and ancillary materials selected
(11/13), course schedule finalized (12/18), course media, materials, and resources developed (3/9) and
finalized (5/1), technical development completed (5/30), testing finished (7/18), final edits (8/1), and
launch of the online course (8/23).

Figure 7.2 Milestone Schedule


Based on graphic from: Karen Tate and Paula Martin,
The Project Management Memory Jogger, Second Edition
[Salem, NH: GOAL/QPC, 2010], 104. Used with permission. www.goalqpc.com.

Milestones are identified and defined when the project charter is being written and are then used in
the milestone schedule to manage project work and monitor the results of the activities. A milestone
schedule can also be used to communicate the status of the project to stakeholders and to set
expectations for the work activities being completed by the project team.


Criteria for Selecting Milestones


◆◆ Question: How important is this task, decision, or event to the execution of the overall project?

Answer: highly important = milestone



◆◆ Question: What is the likely impact if this task, decision, or event is not met on time or as
needed?

Answer: serious impact = milestone

◆◆ Question: Can this task, decision, or event be used as an indicator of project success?

Answer: yes = milestone

7.2.3 Deliverables Schedule


A deliverables schedule (see Figure 7.3) shows the sequence of deliverables to be created, from first to
last, and who is accountable for meeting the delivery date for each deliverable. This schedule provides
the team with a way to keep the production of the final deliverables on track.

The partial deliverables schedule (again for the "Transition of Traditional Pharmacology Course to
Online Delivery" project) shows a subproject tree with deliverables such as the online delivery course
model, textbook selection, ancillary material selection, course schedule, course media, other course
materials, online resources, and quality assurance, each assigned to the instructor/content specialist or
the course design team and placed against the milestone schedule from project start (9/5) through
launch of the online course (8/23).

Figure 7.3 Deliverables Schedule


Based on graphic from: Karen Tate and Paula Martin,
The Project Management Memory Jogger, Second Edition
[Salem, NH: GOAL/QPC, 2010], 111. Used with permission. www.goalqpc.com.

A deliverable is a measurable and verifiable outcome or object that a project team must create and
deliver according to the terms of an agreement. Deliverables can be tangible (material or substantial
object) or intangible (an outcome without a physical existence). Deliverables can also be something
that contributes to the completion of the project or the final results of the project.


7.2.4 The Critical Path Method (CPM)


Originally developed in the 1950s by the U.S. Navy, the critical path method (CPM) is an analytical
method for scheduling interdependent project activities. Using CPM, a project team can create a
model of a project incorporating a work breakdown structure (WBS) that lists all the activities
required to complete the project, the duration of each activity (how much time it will take to



complete), and any dependencies.

Using the precedence diagramming method called activity-on-node (AON), as shown in Figure 7.4
below, and an activity network diagram (see Section 6.7 Activity Network Diagrams), CPM calculates
the earliest date and latest date each planned activity can start and finish without causing schedule
delays on the project. Using total estimated durations for each path in the schedule, the longest path,
called the critical path, can be identified. Any delay of an activity on the critical path directly impacts
the planned project completion date.

ES | D  | EF
   Activity
LS | TF | LF

Figure 7.4 Activity on Node

CPM Terms and Definitions


◆◆ The duration (D) of an activity is the amount of time it will take to complete that activity,
which can be displayed as minutes, hours, days, weeks, etc.
◆◆ Float (slack) is the amount of time that a task can be delayed without causing a delay to
subsequent tasks or the project completion date. There is no float on the critical path.
◆◆ Total float (TF) is the amount of time an activity can be delayed or extended from its early
start date without delaying the project finish date, which is calculated by subtracting the EF
from the LF of each activity (LF–EF).
◆◆ A critical activity is an activity with zero float.
◆◆ The early start date (ES) is the earliest possible date an activity can begin (the time at which
all predecessor activities are completed).
◆◆ The early finish date (EF) is the earliest possible date an activity can finish if it starts on the
ES.
◆◆ The late start date (LS) is the latest possible date an activity can start without delaying the
project's completion.
◆◆ The late finish date (LF) is the latest possible date an activity can finish if it starts at the LS.
◆◆ A forward pass calculates the ES and EF for each activity by adding the duration to the ES to
calculate each EF (ES+D=EF). Each activity that does not have a predecessor starts on time
zero.
◆◆ A backward pass calculates the LF and LS for each activity by subtracting the duration from
LF to get each LS (LF–D=LS).
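The forward and backward passes defined above can be expressed in a few lines of code. The Python sketch below works through a small hypothetical network (durations loosely modeled on Figure 6.9, with an added finishing task); activities are listed in an order in which every predecessor appears before its successors.

```python
from collections import defaultdict

# Hypothetical activities: name -> (duration in days, list of predecessors).
activities = {
    "A": (14, []),          # determine target audience
    "B": (7,  ["A"]),       # review feedback from similar programs
    "C": (21, ["A"]),       # assess competitor's offerings
    "D": (5,  ["B", "C"]),  # finalize plan (added for illustration)
}

# Forward pass: ES = largest EF of predecessors (0 if none); EF = ES + D.
es, ef = {}, {}
for name, (dur, preds) in activities.items():
    es[name] = max((ef[p] for p in preds), default=0)
    ef[name] = es[name] + dur

# Backward pass: LF = smallest LS of successors (project end if none); LS = LF - D.
successors = defaultdict(list)
for name, (_, preds) in activities.items():
    for p in preds:
        successors[p].append(name)

project_end = max(ef.values())
ls, lf = {}, {}
for name in reversed(list(activities)):
    dur, _ = activities[name]
    lf[name] = min((ls[s] for s in successors[name]), default=project_end)
    ls[name] = lf[name] - dur

# Total float TF = LF - EF; activities with zero float are on the critical path.
for name in activities:
    tf = lf[name] - ef[name]
    tag = " (critical)" if tf == 0 else ""
    print(f"{name}: ES={es[name]:2d} EF={ef[name]:2d} LS={ls[name]:2d} LF={lf[name]:2d} TF={tf}{tag}")
```

For this hypothetical network the critical path is A, C, D; activity B carries 14 days of float.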


7.2.5 PERT Charts


A PERT chart (see Figure 7.5) is a visual representation of a project's schedule which shows the
sequence of tasks, the tasks that can be performed in parallel, and the critical path of tasks that must
be completed on time in order for the project to complete on time. A PERT chart, which can
document an entire project or focus on a project phase, allows a team to avoid unrealistic time

estimates, identify bottlenecks, and focus on the most critical tasks. A PERT chart is a variation on
CPM that focuses on time estimates for individual tasks rather than a complete set of interdependent
activities. It also uses an activity network diagram to display the sequence of activities involved in a
project.

The example chart shows a start node and seven tasks, each labeled with its task number, duration,
and start and end dates, running from 09/02/2015 to a Task 7 finish on 12/16/2015; some tasks run in
parallel before converging on the final tasks.

Figure 7.5 Sample PERT Chart

PERT is an acronym for Program Evaluation and Review Technique, a methodology developed in the
1950s by the U.S. Navy and some of its contractors to manage the Polaris submarine missile program.
The Navy used PERT to coordinate over 3,000 contractors working on the project and credited PERT
with shortening the project's duration by two years.

To calculate the expected time of an individual task, estimate the shortest possible time each activity
will take (O for optimistic), the most likely length of time (M for most likely), and the longest time that
might be taken if the activity takes longer than expected (P for pessimistic). Use the formula shown in
Figure 7.6 below to complete the calculations for expected time:

Expected Time = (O + 4M + P) / 6

Figure 7.6 PERT formula
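A minimal sketch of the expected-time formula in Python, using hypothetical optimistic, most likely, and pessimistic estimates:

```python
def pert_expected_time(optimistic, most_likely, pessimistic):
    """PERT expected time: (O + 4M + P) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Hypothetical estimates for a single task, in days: O = 10, M = 14, P = 24.
print(pert_expected_time(10, 14, 24))  # (10 + 56 + 24) / 6 = 15.0 days
```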


Steps for Creating a PERT Chart:


Step 1. Identify all the tasks or components of the project.
Step 2. Identify the first task that must be completed.
Step 3. Identify any other tasks that can be started in parallel to the first task.



Step 4. Identify the second task that must be completed in the sequence.
Step 5. Identify any other tasks that can be started in parallel to the second task.
Step 6. Continue this process until all tasks have been sequenced.
Step 7. Identify the duration of each task.
Step 8. Construct the PERT chart by numbering each task, drawing connecting arrows, and
documenting the duration, start date, and end date for each task.
Step 9. Determine the critical path.


Chapter 8: Project Teams



Key Terms
Black Belt
brainstorming
executive
facilitator
Green Belt
Master Black Belt
multi-voting
nominal group technique
process owner
project champion
recorder
timekeeper

Body of Knowledge
1. Provide positive leadership energy to accomplish project goals through people: communicate,
convince, coordinate, and compel.

2. Define and describe the stages of team evolution.

3. Identify and help resolve negative team dynamics.

4. Define the LSS and general team member roles and responsibilities.

5. Define and apply team tools.

6. Facilitate effective brainstorming.

7. Describe the steps of the nominal group technique.

8. Employ multi-voting to prioritize actions.

Project teams serve as the basic building blocks of any LSS project. Once the project's scope has
been determined, the project team members should be selected based on their level of influence
and knowledge of the process as well as their skills and abilities. It is also important to ensure team
members are properly trained. The project team can resolve negative team dynamics and perform
cohesively when they understand team building processes, tools, and team roles and responsibilities.

8.1 Leading Project Teams


While it is easier to manage a project than to lead people, a good project leader can balance the two
and focus on the project and the people. As a general rule of thumb, you should manage tasks, events,
and processes, and lead people.


Managers typically tell people what to do; leaders motivate team members on an individual level.
Leaders inspire their team members to contribute to the organization while recognizing their strengths
and helping them to think of their work as more than just a job. Leaders listen to people and empower
them instead of just telling them what to do.

In The Wisdom of Teams,7 Katzenbach and Smith note six key skills for team leaders to be successful:

1. Keep the purpose, goals, and approach relevant and meaningful.

2. Build commitment and confidence.

3. Strengthen the mix and level of skills.

4. Manage relationships with outsiders, including removing obstacles.

5. Create opportunities for others.

6. Do real work.

8.2 Stages of Team Development


Psychologist Bruce W. Tuckman first identified the four stages of “forming, storming, norming, and
performing” as a developmental sequence for groups. These development stages outline the path that
most teams follow as they work towards becoming a high-performance team. In later years, Tuckman
added a fifth stage, “adjourning.”

8.2.1 Forming
When a team first comes together during the forming stage, its team members are filled with
excitement and optimism about the new opportunity. This stage is often referred to as the
“honeymoon” period. As team members work through this phase, there is a natural tendency for
members to be on their best behavior in order to be accepted within the group. Team members
are also highly dependent on the team leader during this stage. It is the team leader’s responsibility
to provide guidance and a clear structure by using a facilitative approach. If this stage is handled
effectively, the team will have a good foundation for success.

8.2.2 Storming
As the honeymoon period wears off, the team enters the second stage: storming. During this stage,
team members are comfortable enough to reveal their true selves and to challenge the status quo.
This stage is usually the most difficult for teams as they realize the amount of work left and feel overwhelmed. They are not yet experts in team improvement skills, but they do want the project to move forward. Team members can cling to their own opinions and personal experience and subsequently may resist seeking the opinions of others, which can lead to hurt feelings and unnecessary disputes. Disciplined use of the quality improvement process, the proper tools, and good communication skills can help team members express their various theories, lower their anxiety levels, and reduce the urge to assign blame.

7  Jon R. Katzenbach and Douglas K. Smith. The Wisdom of Teams: Creating the High-Performance Organization [Boston,
MA: Harvard Business School Press, 1993].


There are healthy and unhealthy types of storming. Conflicts often occur due to authority issues, vision
and value disagreements, and personality and culture differences. However, if dealt with appropriately,
these stumbling blocks can later be turned into performance. As a leader, it is important to remember
that storming is a normal phase in the group life cycle. The best strategy is to stay calm and face the
issues head on.



8.2.3 Norming
During the third stage of norming, a sense of group cohesion is developed. Team members accept
each other and develop norms for making decisions, completing assignments, and resolving conflicts.
Norming takes place in three ways:

1. As storming is overcome, the team becomes more relaxed and steady. Conflicts are less
frequent and no longer throw the team off course.

2. Norming occurs when the team develops a routine. Scheduled team meetings give a sense of
predictability and orientation.

3. Norming is cultivated through team-building events and activities.

Norming is a necessary transition phase; a team cannot perform if it does not norm.

8.2.4 Performing
Performing is the payoff stage. The group has developed its relationships, structures, and purpose and begins to tackle the tasks at hand, working effectively and cohesively. Because of the synergy
within the group, the leader can take a less directive approach and relinquish some of the leadership
tasks to other members of the team. Be aware that even during this highly productive stage, however,
the team may still have its ups and downs. Feelings that occasionally surfaced during the storming
stage may recur.

8.2.5 Adjourning
Team members may be concerned if the project team is being dissolved, and this could lead to anxiety
about their future roles and responsibilities. Because they have spent significant time with their fellow
project team members, they may feel sadness about the changes in team relationships; but at the same
time, they are hopefully feeling a sense of accomplishment for the team’s work. Team morale can either
rise or fall as team members go through this ending stage of the project.

Also, during this stage, some of the team members may become less focused on their tasks, creating
a drop in productivity. Others may increase in productivity as they lose themselves in focusing on
their work rather than on the end of the project. During the adjourning stage, the focus should be
on ensuring the deliverables are completed, evaluating the team, documenting lessons learned, and
acknowledging individual contributions and team accomplishments.

8.3 Rewards and Recognition


The purpose of a recognition program is to recognize and reward work and behaviors that support and
further the mission, goals, and values of the organization. Giving recognition helps your employees to:

◆◆ Take pride in their work and in their job responsibilities.


◆◆ Feel appreciated for their contributions.

◆◆ “Go the extra mile”.

◆◆ Increase their level of commitment to the organization.



◆◆ Improve relationships with their co-workers.

◆◆ Be more open to constructive feedback.

◆◆ Strive to meet and/or exceed performance expectations.

◆◆ Support and promote a positive atmosphere in which praise prevails.

◆◆ Get more enjoyment out of the work they do.

Realizing these benefits does not have to cost the organization a lot of money or time. Most programs
can be set up using low or no-cost job recognition and intangible rewards.

No-Cost Job Recognition/Intangible Rewards:

◆◆ Interesting Work: Even people with inherently boring jobs become more productive when they
are given at least one stimulating task or project.

◆◆ Involvement: The people who are closest to a situation have the best insight on how to
improve it, yet are rarely asked for it. Their involvement enhances commitment and eases
implementation of changes.

◆◆ Increased Visibility: For some workers, getting company visibility is highly rewarding. Send
a letter of praise, acknowledge their work at a meeting, or hang photos on a “bravo wall” as
motivation.

◆◆ Information: Employees crave knowledge about how they are doing and how the company is
doing. Send monthly e-newsletters to keep employees informed about the company and give
them regular feedback on their job performance.

◆◆ Independence: Employees appreciate flexibility in their jobs, which is known to contribute to more desirable performance. Provide assignments in a way that tells them what needs to be done without dictating exactly how to do it.

8.4 Resolving Negative Team Dynamics


Team leaders on the lookout for regression into the storming stage should be aware of the symptoms of
negative team dynamics. The following list identifies common indicators that these negative dynamics
may exist:

◆◆ Unquestioned acceptance of opinions vs. facts: the group readily accepts the opinions of a
subject matter expert.

◆◆ Groupthink: high avoidance of conflict whereby group members go along with the majority.

◆◆ Rush to accomplishment: the team just wants to get through the project.


◆◆ “Yeah-but-ing”: automatically discrediting the ideas of others.

◆◆ Withdrawal of team members: team members cease contributing to the conversation.

◆◆ Tangents: off-topic discussions derail the progress of the group.



◆◆ Emotional arguments: taking a difference in opinion to a personal level is a non-productive
path, and if unchecked can lead to a division within the group.

Let’s take a look at some possible solutions to these problems in Table 8.1.

Table 8.1 Possible Solutions to Negative Team Dynamics

Problem: Unquestioned acceptance of opinions
Resolution:
◆◆ Subject matter experts should be discouraged from using technical jargon.
◆◆ All members of the team should be well-informed, and it should be understood that non-experts can contribute a fresh viewpoint.

Problem: Groupthink
Resolution:
◆◆ Appoint a “devil’s advocate” to raise objections.
◆◆ Suggest that the team brainstorm several ideas before coming to a conclusion.

Problem: Rush to accomplishment
Resolution:
◆◆ Remind the group of the vision.
◆◆ Facilitate a meeting with the group to pinpoint frustrations.
◆◆ Provide support, resources, etc.

Problem: “Yeah-but-ing”
Resolution:
◆◆ Revisit the list of team rules.

Problem: Withdrawal
Resolution:
◆◆ Re-establish that it is important to receive input from each group member.
◆◆ Include everyone in the process by going around the room or having each person write down his or her thoughts.

Problem: Tangents
Resolution:
◆◆ Make note of off-topic items on the “parking lot.”
◆◆ Use agendas to keep a clear directive of the meeting objectives.
◆◆ Re-focus the team by purposely directing the conversation to the project.

Problem: Emotional arguments
Resolution:
◆◆ Encourage parties to resolve the issue.
◆◆ Facilitate a meeting between the team members with the conflict.
◆◆ Remind involved parties of the established team norms.


8.5 Team Roles and Responsibilities


8.5.1 Lean Six Sigma (LSS) Roles and Responsibilities
Executive

The executive provides the direction and alignment for the success of LSS throughout the organization
and also links LSS to corporate strategy and projects the long-term contributions of the initiatives.
The executive clearly understands that LSS initiatives will falter without public executive support. By
becoming the consummate champion for LSS in the organization, the executive makes clear that “LSS
is the way we do business.”

Master Black Belt


A Master Black Belt is the keeper of the LSS process and serves as an advisor to senior leadership
and also trains and mentors Black Belts and Green Belts. Master Black Belts have substantial
experience and have typically conducted more than ten LSS projects. Master Black Belts also strive
to innovate the organization’s LSS process and ensure that projects are in line with the organization’s
strategic objectives. Master Black Belts are generally found in larger organizations and frequently are
responsible for multiple locations.

Black Belt
A Black Belt is the quality professional and change agent who leads project teams and handles the
detailed analysis that is required by the DMAIC and DMADV methodologies. In many organizations,
Black Belts perform duties that are normally attributed to Master Black Belts. Black Belts do not have
to be experts in the processes that are under review, but candidates should possess the following
qualities: understanding of statistical tools, ability to effectively lead a team, and the ability to remain
strong under pressure from upper management.

Green Belt
Green Belts work on projects on a part-time basis and generally work on projects that are compatible
with their skills and knowledge. Green Belts have a clear understanding of the DMAIC methodology
and can apply the tools to the project at hand. Green Belts can serve as a team member for complex
projects and as team leader for simpler projects.

Project Champion (Sponsor)


The project champion is the process owner that provides the business focus for LSS projects. They
coordinate with Black Belts to identify projects that are critical to the organizational objectives. They
support the project team by removing barriers to success and providing the necessary resources.

Process Owners
The process owner owns the processes that are being improved by LSS projects. They must be
educated on basic LSS concepts and provide support to the Black Belts that are running the projects.
Most importantly, they must sustain the changes that have been implemented in their areas.


8.5.2 General Team Roles and Responsibilities


Team Leader
Each team leader has the following responsibilities:



◆◆ Manage the progress of the team.

◆◆ Inform the team about project requirements, scope, etc.

◆◆ Develop the skills of team members.

◆◆ Communicate with management about the progress and needs of the team.

◆◆ Remove barriers to success.

◆◆ Resolve conflicts within the team.

◆◆ Share in team responsibilities.

Team Member
Each team member has the following responsibilities:

◆◆ Participate in training to become an effective team member.

◆◆ Attend team meetings as required.

◆◆ Complete assignments between meetings.

◆◆ Participate actively during meetings by contributing information and ideas.

◆◆ Encourage active participation by other team members.

◆◆ Benefit from the experience, expertise, and perspectives of others.

◆◆ Apply the steps of the improvement process.

Coach
The project coach assists in selecting the project team members and ensures they all have the skills
required for the project. They answer questions and coach the project team members regarding the
LSS methodology and its principles. Coaches document lessons learned and award certificates to those
completing belt-specific training.

Facilitator
The facilitator is a person who is an expert in group dynamics and meeting facilitation. They are
generally more concerned with how the work gets done rather than the subject of the project. A
facilitator can maximize group participation and draw out the less vocal participants. They help the
team deal with conflict and are especially effective when a controversial topic is being discussed.


Recorder
The team recorder, also considered the secretary, is normally a full-fledged team member. The recorder maintains the team’s minutes and agendas. Often selected by team members, the recorder also coordinates the preparation of letters, reports, and other documents and distributes relevant materials to team members. This duty often is rotated among the team members.

Timekeeper
The timekeeper role is an optional responsibility. This function sometimes becomes the responsibility
of the facilitator when a facilitator is assigned to a team. The timekeeper advises the team of the
remaining time to review a project and enforces the norms of the team.

8.6 Team Tools and Techniques


Teams use a variety of tools to carry out their work, arrive at a decision, or resolve a problem. This
section contains common yet effective tools that should be mastered by project team members.

8.6.1 Brainstorming
Brainstorming is an efficient way to allow a team to be creative when generating a large number of ideas
on one topic by pulling from the collective team knowledge. It is a great tool to use when solutions are
not always obvious. In order for brainstorming to work effectively, the session must be free of criticism
and judgment so that team members can feel free to share any idea that comes to mind.

While brainstorming does not necessarily solve problems, it can be an effective tool for generating ideas
and, when used with other techniques such as multi-voting, can help the team arrive at a consensus.
Effective brainstorming encourages open (creative) thinking when a team is stuck in “same old way”
thinking; gets all team members enthusiastically involved; prevents individuals from dominating the
team; and allows team members to build on each other’s creativity.

The following steps are crucial for effective brainstorming:

Step 1. Decide on the type of brainstorming to be used:


•• Structured: In a specific order, each team member takes a turn sharing (around the table).
•• Unstructured: Each team member spontaneously shares their ideas as they come to mind, in any order.
Step 2. Decide whether the brainstorming activity will be done silently or out loud.
Step 3. Develop the agreed-upon brainstorming question and post it where everyone can see it.
Step 4. Each team member shares his or her ideas. No idea is criticized. Sharing can be
structured or unstructured. A structured rotation process encourages full participation;
however, it may also create some anxiety for inexperienced or shy team members.
Step 5. Write each idea in large, visible letters on a flip chart or wall.
Step 6. Make sure every idea is recorded in the speaker’s own words. Do not interpret, edit, or abbreviate ideas. To ensure this, the person writing it on the chart should always ask the speaker if the idea has been worded accurately.
Step 7. Continue until all ideas have been recorded.
Step 8. Review the written list of ideas for clarity and discard any duplicates.



Step 9. Discard only ideas that are virtually identical. It is often important to preserve subtle
differences that are revealed in slightly different wordings.
The following tools help stimulate creativity:8

◆◆ Visual Brainstorming: Individuals (or the team) produce a picture of how they see a situation
or problem.

◆◆ Analogies/Free-Word Association: Unusual connections are made by comparing the problem to seemingly unrelated objects, creatures, or words. For example, “If the problem was an animal, what kind would it be?”

◆◆ 6-3-5 Method: This powerful, silent method was proposed by Helmut Schlicksupp in his
book, Creativity Workshop and is conducted as follows:

•• Based on a single brainstorming issue, each person on the team (usually six people) has
five minutes to write down three ideas on a sheet of paper.

•• Each person then passes his or her sheet of paper to the next person, who has five more
minutes to add three more ideas that build on the first three ideas.
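As a rough illustration of the method's capacity (assuming, as is typical of 6-3-5 sessions, that the sheets keep rotating until each one has passed through all six participants): 6 participants × 3 ideas per pass × 6 passes = up to 108 ideas in about 30 minutes of silent writing.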

8.6.2 Nominal Group Technique


The nominal group technique is a tool that allows a team to quickly come to a consensus on
the relative importance of issues, problems, or solutions. This technique builds consensus and
commitment to the team’s choices through equal participation; prevents team members from
being pressured or influenced by others; and balances the participation of quiet team members and
dominant ones. The following steps are crucial when using the nominal group technique:

Step 1. Assign a facilitator to lead the discussion.


Step 2. All members create ideas silently and individually on a sheet of paper for 5 to 10
minutes.
Step 3. The facilitator then requests an idea from each member in sequence. Each idea is
recorded until ideas are exhausted. No discussion is allowed at this point.
Step 4. Discussion is opened to clarify and evaluate ideas.
Step 5. Label each idea with a letter (A to Z).
Step 6. Rank the ideas, with 5 as the best/most important ranking and 1 as the worst/least
important ranking.
Step 7. Each team member records the corresponding letter of each idea on a piece of paper
and ranks the ideas.

8  Michael Brassard, et al., The Six Sigma Memory Jogger II: A Desktop Guide of Tools for Six Sigma Improvement Teams
[Salem, NH: GOAL/QPC, 2002], 47. Used with permission. www.goalqpc.com


Example: Team member’s sheet of paper looks like this:


A 4
B 5
C 3
D 1
E 2
Step 8. Combine the rankings of all team members.

Example:
A 4 3 1 2 5 = 15
B 5 1 4 4 3 = 17
C 3 4 3 5 1 = 16
D 1 2 5 3 2 = 13
E 2 5 2 1 4 = 14
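For teams that capture these rankings electronically, Step 8 is a simple summation. The following minimal Python sketch is purely illustrative (it is not part of the technique itself) and reproduces the five-member example above:

    # Illustrative only: combine nominal group technique rankings (values taken from the example above).
    rankings = {
        "A": [4, 3, 1, 2, 5],
        "B": [5, 1, 4, 4, 3],
        "C": [3, 4, 3, 5, 1],
        "D": [1, 2, 5, 3, 2],
        "E": [2, 5, 2, 1, 4],
    }

    # Sum each idea's rankings, then sort from highest (most important) to lowest.
    totals = {idea: sum(scores) for idea, scores in rankings.items()}
    for idea, total in sorted(totals.items(), key=lambda item: item[1], reverse=True):
        print(idea, total)    # B 17, C 16, A 15, E 14, D 13

With these rankings, idea B (17 points) would be the team's top priority, matching the combined totals shown above.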

8.6.3 Multi‑Voting
There are many different forms of multi‑voting. It is a quick and easy tool for prioritizing a list of
items.

Steps for using the Multi‑Voting Technique:

Step 1. Generate and number a list of items.


Step 2. Create an affinity diagram of the items.
Step 3. Calculate the number of votes that will be distributed to each team member. (Do not
include header cards within the affinity in your count of the items to be prioritized.
Also, do not vote at the header card level).
Step 4. The number of votes to be distributed to each team member is equal to the number of items in the list (affinity) divided by 3 (see the sketch after these steps).

Example: If n (number of items on the list) = 25,


25/3 = 8 votes (rounding to the nearest whole number)

One of the quickest and easiest ways to distribute the votes to each team member is to
use sticky dots (see Figure 8.1).

Figure 8.1 Sticky Dots to Distribute Votes


Step 5. Allow each team member to distribute his or her votes onto the affinity diagram (see
Figure 8.2).



Figure 8.2 Affinity Diagram with Votes

Step 6. Tally the votes and record the number next to each individual item on the list.
Step 7. List the top five items (from largest number to smallest number).
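As a minimal, hypothetical sketch of the arithmetic in Steps 4 through 7 (the item numbers and dot placements below are illustrative, not from the text):

    from collections import Counter

    # Step 4: each member receives (number of items) / 3 votes, rounded to the nearest whole number.
    number_of_items = 25
    votes_per_member = round(number_of_items / 3)    # 25 / 3 = 8.33..., so 8 votes per member

    # Steps 5-7: tally the dots placed on the affinity items and list the top items.
    # Hypothetical dots from two members (2 x 8 = 16 votes); values are affinity item numbers.
    dots_placed = [3, 3, 3, 3, 3, 7, 7, 7, 7, 12, 12, 12, 20, 20, 5, 5]
    tally = Counter(dots_placed)
    print(votes_per_member)        # 8
    print(tally.most_common(5))    # [(3, 5), (7, 4), (12, 3), (20, 2), (5, 2)]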


Chapter 9: Project Communication

Key Terms



A3 one-page report
active listening
communications plan
project closeout report
project communication
project documentation
project status report
records management

Body of Knowledge
1. Explain how to develop a communications plan and build effective team communications.

2. Understand how to use an A3 one-page report to organize and communicate project activities.

3. Identify and use appropriate communication methods to report progress, conduct reviews, close
out a project, and support the overall success of the project.

4. Describe the types of information (data) needed to properly document a project.

5. Identify various presentation tools and help develop appropriate presentations for phase/tollgate
reviews and management updates.

Project communication is the exchange of project-specific information. Effective communication is important to a project's success. Project team members use various communication methods to deliver project information, including meetings, telephone calls, email, voicemail, and websites.

Meetings are often the most effective way to distribute information to project stakeholders. Before
planning a meeting, the project manager should ensure that the delivery and content will meet
the objectives of the specific communication. The project manager needs to provide accurate
information to the stakeholders, which is information that comes from the project team and project
documentation.

Project managers use project communication management to:

◆◆ Develop a communication plan for the project.

◆◆ Distribute information using various methods.

◆◆ Store data/records.

◆◆ Archive records.

9.1 Building Effective Team Communications


Having effective team communication skills is a critical element of building successful LSS teams.
Team members must be able to communicate effectively within and outside the team. Having such
skills will enable the team to do the following:


◆◆ Establish and maintain a healthy, harmonious team environment.

◆◆ Build relationships based on trust and respect.

◆◆ More effectively share their ideas and opinions with each other and with stakeholders.

◆◆ Stay focused on the overall goals and objectives of the project.

◆◆ Ensure that the messages are effective and understood.

9.2 Communication Tools and Techniques


Communication is critical to a project’s success and is used to keep team members updated on the
project and for winning support of the project’s key stakeholders. The communication tools and
techniques used on a project can vary based on the project complexity. Smaller, simpler projects will
not require the constant formal communications needed on a larger, complex project.

Here are five tools and techniques that can help improve team communication, regardless of the
project size.

9.2.1 Active Listening


Active listening is a very important part of the communication process. However, people are not
always effectively listening to what is being communicated, which can lead to a breakdown in the
communication process.

Factors that keep us from being good listeners include the following:
◆◆ Daydreaming.

◆◆ Thinking about something else, or being preoccupied with other responsibilities.

◆◆ Thinking about what you are going to say next instead of listening attentively to the speaker.

◆◆ Beginning to speak before the person has finished talking.

Tips for good listening include the following:


◆◆ Put the message sender at ease.

◆◆ Show that you want to listen.

◆◆ Empathize with the person.

◆◆ Be patient with your response.

◆◆ Hold your own temper.

◆◆ Avoid arguing and criticism.

◆◆ Ask questions.

◆◆ Stop talking.


Use your team ground rules as a mechanism to reinforce good listening skills. Discuss with your team
what it means to have good listening skills. If you witness poor listening behaviors, remind the team of
this operating ground rule.



9.2.2 Speaking Clearly and Purposefully
Learning to convey your message effectively is a critical project management skill. A project
manager will typically spend 75 to 90 percent of his or her working hours engaged in some form of
communication, e.g., conferences, meetings, writing memos, reading reports, and talking to team
members, top management, customers, and suppliers.

An individual can inspire and influence others to meet project goals, even when difficult situations
arise, if they employ effective communication skills. Here are a few tips for improving your
communication skills:

◆◆ Get straight to the point.

◆◆ Think before you speak.

◆◆ Speak slowly, clearly, and loud enough to be heard.

◆◆ Watch for listener feedback as you speak. Are they understanding the message?

◆◆ Speak with data and facts; show proof or statistics to validate your points.

◆◆ Be open-minded when challenged and be willing to make a change.

◆◆ Control your body language; look confident yet not defensive.

◆◆ Never criticize in public.

◆◆ Use storytelling to keep it interesting.

While it is not possible to cover everything a good LSS project leader should know about
communicating, this is a fundamental list that should be mastered.

9.2.3 Developing Effective Team Communication Skills


Developing effective communication skills within the team is important for maximizing team
performance.

Here are a few basic tips to remember:

◆◆ Always respect your team members.

◆◆ Ensure all team members thoroughly understand the project requirements.

◆◆ Have regular team meetings to keep the project team informed and encourage members to
share issues and concerns.

◆◆ Clearly define the roles and responsibilities of every team member.

◆◆ Let each person have an opportunity to speak.


◆◆ Ask for feedback and suggestions from your team members, listen to others’ opinions, and use their suggestions when applicable and appropriate.

◆◆ Repeat others’ words to acknowledge their points of view.



◆◆ Give sound and logical reasoning with your opinions.

◆◆ Always be polite in your way of speaking and behavior.

◆◆ Avoid giving out sensitive and confidential information.

◆◆ Display acknowledgement and appreciation through face-to-face interaction or electronic


modes when a team member performs well.

◆◆ Always have a friendly attitude toward each other.

◆◆ Deal with tense situations in a calm manner; do not become emotional.

◆◆ Avoid blaming others.

9.2.4 The A3 One-Page Report


The A3 process standardizes a methodology for innovating, planning, problem-
solving, and building foundational structures. [The goal is] a broader and deeper
form of thinking that produces organizational learning that is deeply rooted in the
work itself. ...[The A3 process is] the key to Toyota’s entire system of developing
talent and continually deepening its knowledge and capabilities.1

John Shook, President of TWI Network, Inc., and Toyota veteran

The A3 one-page report is a process pioneered by Toyota, which they use to identify, outline, and act on problems within the organization. It is a simple way to document a problem, an analysis, a corrective action, or an action plan, often using graphics, on a single sheet of large paper. The term “A3” derives from the international paper size on which the report fits (approximately 11 inches x 17 inches). An A3 one-page report is also used to identify and communicate critical project information and to facilitate the decision-making process by visually telling the project’s “story” in a concise format that satisfies the needs of the reader and aligns with company values.

A3 reports do not all follow the same outline; most organizations revise the basic design to meet their own requirements. Some reports follow the DMAIC outline, while others use the PDCA model. While A3 reports should follow a basic template, the exact wording and format are flexible.

1  Austin Weber. Assembly Magazine (www.assemblymag.com). “Lean Manufacturing: The ABCs of A3 Reports” (BNP
Media, 2015). Posted February 24, 2010. Accessed July 31, 2015.


In general, A3 one-page reports may include the following components (see Table 9.1 for a sample
template):

◆◆ Background: A brief description of the problem, highlighting the importance to the


organization and the measures used.



◆◆ Current Situation/Conditions and Problem: Visual depictions of the problem under
consideration.

◆◆ Goals/Targets: A visual depiction of what the situation would need to be so that the problem
did not occur.

◆◆ Analysis: The analysis performed to determine the root cause(s).

◆◆ Recommendations/Proposed Countermeasures: The solution or proposed countermeasure that


will be (or has been) implemented.

◆◆ Implementation Plan: Tasks, start dates, duration, responsibilities, and completion status.

◆◆ Follow-Up: Post-implementation tasks to ensure solution benefits are maintained.


Table 9.1 A3 One-Page Report Template

I. Background
II. Current Conditions (use a current-state map to visually depict current conditions)
III. Goals/Targets
IV. Analysis (use a tree diagram to break down the analysis and list the root causes)
V. Proposed Countermeasures (columns: Cause, Countermeasure Description, Benefit, Responsible/Support; use a target-state map to visually depict the goals/targets of the countermeasures)
VI. Implementation Plan (columns: Deliverables, Timeline, Responsible, Support, Review)
VII. Follow-Up


9.2.5 Communications Plan


It is not uncommon for teams to view project communication as someone else’s job rather than the
responsibility of every team member. Developing a written communications plan will help the team
select the appropriate communication strategies and techniques to ensure the team’s success.

A communications plan is a written document that describes what the plan intends to accomplish
and how those objectives will be met (see Table 9.2). The plan should outline the following items:

◆◆ To whom the communications will be addressed (audience)

◆◆ How the objectives will be accomplished (tools and timetable)

◆◆ How the results of the program will be measured (evaluation)

While all projects share the need to communicate project information, the specific information needs
and the methods of distribution may vary. The types of communication include all written, spoken,
and electronic interaction. Tools for communication include:

◆◆ Print publications/written materials

◆◆ Bulletin boards

◆◆ Newsletters

◆◆ Videos

◆◆ Telephone

◆◆ Face-to-face

◆◆ Formal meetings

◆◆ Online communications
◆◆ Electronic displays

◆◆ Signs, posters, banners, stickers

Steps for Creating a Communication Plan:

Step 1. Brainstorm the specific messages that will be delivered throughout the course of the
project. Typical messages would include announcing the launch of the team, clarifying
the mission of the team, etc.
Step 2. Brainstorm the specific people (or groups of people) who will need to receive each
message.
Step 3. Select the appropriate media to best communicate the message. A single message often must be delivered through multiple media, and multiple times, to be communicated effectively.
Step 4. Assign a member of the team to be responsible for completing each communication
task. (Note: It is sometimes helpful to identify key individuals who must be involved in
the actual delivery of the message).

102 © 2009, 2014, 2017 Purdue University All Rights Reserved | Third Edition 2017
Lean Six Sigma | Green Belt Book of Knowledge

Step 5. Identify when and where the messages will be delivered.

Table 9.2 Communications Plan Template

Communications Plan (columns): Message (Perform, Persuade, Empower) | Audience | Media | Who | Where/When
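Teams that track the plan electronically can treat each row of the template above as a simple record. The following Python sketch is purely illustrative; every message, audience, and owner named here is hypothetical:

    # Hypothetical communications plan rows, mirroring the template columns above.
    communications_plan = [
        {"message": "Announce the launch of the team", "audience": "All department staff",
         "media": "Email and staff meeting", "who": "Project champion", "where_when": "Kickoff week"},
        {"message": "Clarify the mission of the team", "audience": "Project team members",
         "media": "Face-to-face meeting", "who": "Team leader", "where_when": "First team meeting"},
    ]

    # Quick completeness check: every message should have an owner and a delivery time/place.
    for row in communications_plan:
        assert row["who"] and row["where_when"], row["message"]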

9.3 Project Documentation


Project documentation is a form of communication. The documentation that will be useful to a
particular project depends on the type of project, its size and complexity, and where the project
currently is in the schedule. Project documentation is best developed with input from team members,
key stakeholders, and the project sponsor, i.e., anyone who will be required to sign off on or agree to a
document as well as anyone responsible for the actual work.

Project documentation can be used to:

◆◆ Obtain resources and approval for the project.

◆◆ Define the project and its scope and timeframe.

◆◆ Define the project risks.

◆◆ Track the project’s progress.

◆◆ Measure the project’s success.

◆◆ Manage stakeholders’ expectations.

◆◆ Coordinate resources, roles, and responsibilities.

◆◆ Communicate project information.

◆◆ Brief a new project manager or team member.

◆◆ Analyze reasons for project success or failure (lessons learned).


Types of Project Documentation


On a large project, project managers can create up to 50 different types of documents to help plan,
track, and report on a project, which can include a project plan, business case, project charter, product
requirements, contracts, communication plan, project status reports, and a project closeout report.
In order to successfully manage project documentation and manage the project work, experienced project managers and leaders will create templates to simplify the documentation process. The recommended starting point is the set of documents needed for every project, such as the following:

◆◆ Project charters (covered in depth in Chapter 13), which answer the essential questions:

•• What are the project’s objectives?

•• What is the project intended to produce/deliver?


•• What is the business reason for the project?

◆◆ Project plans, or project management plans, listing the work activities that need to be
completed (the scope of the project), the resources needed to complete the work, the cost and
time required for the project, and a risk assessment.

9.3.1 Project Reports


Reporting on a project's performance is vital in communicating with stakeholders, who need to
be constantly updated on the project's progress, including the resources used. Performance, or
status, reports need to provide information appropriate for the audience so it is important that the
information is provided at the level of detail the project stakeholders need.

Information about the project's progress should be gathered throughout the duration by
communicating regularly with team members. The information should then be communicated to
stakeholders in a timely manner through regular status reports. All of the information gathered can
also be reviewed at the end of the project to assist with documenting lessons learned and in creating
the project closeout report.

Project Status Reports


A project status report is a basic communication tool used to inform project team members and
stakeholders about the current overall status of the project. Project status reports can be created and
distributed weekly or monthly, depending on the needs of the project and stakeholders. They should
be brief and clearly written and should communicate what the project team has achieved and the work
that remains.

While the report's purpose is to keep the stakeholders and project team members informed on the
project, a status report also can be used to convey or assist in making decisions and authorizing
changes. Conducting regular status updates will help ensure the project is staying on track. The same template is also appropriate for tollgate reviews, or a presentation software application can be used to create slides and graphs.

Most project planning and scheduling software packages generate project status reports, but the user
can also customize the reports using a basic template. The following sample template (see Table 9.3)
uses a simple project status dashboard to indicate the current status of each phase in the DMAIC methodology. The colors green, yellow, and red are used to identify if each phase is on track (green),
has possible delays/risk identified (yellow), or is delayed/high risk (red). The phases can be changed
to specific project milestones, to track the overall scope, budget, and schedule of the project, or to key
project areas such as change management, functional/technical areas, etc.



Table 9.3 Project Status Report Template
The symbols on the left side of the status column are just one example of how to ensure people with red-green color blindness can differentiate between phases or milestones that are on track and ones that are delayed.

Project Name | Reporting Period

Project Scope Summary

Phase progress (columns): Phase | Progress (%) | Status | Finish (date or TBA)
Phases: Define, Measure, Analyze, Improve, Control
Status key (each value paired with a distinct symbol as well as a color): On Track, Possible Delays, Delayed

Tasks Completed | Tasks Delayed | Tasks Planned

Project Budget (columns): Description | Forecast ($) | Actual ($), with a Total row
Key Project Risks and Issues (columns): Type | Description

Staying up-to-date on a project's progress and regularly creating project status reports not only
conveys progress to stakeholders but also creates a written record with valuable information that
can be used to make decisions on future projects. Key decisions, changes made, and the reasons why
changes were necessary should all be included when documenting lessons learned at the end of the
project.

Project Closeout Report


The project closeout report summarizes the outcome of the project by documenting the completion
of closeout tasks and project performance. It captures the original goals and objectives of the project
and compares them to the outcomes. The project closeout report identifies variances in the project,
describes lessons learned, and notes where and when the project’s resources were redistributed.

Table 9.4 illustrates one of the many ways to create a project closeout report. The report can be customized, and the category headings and information will vary based on the specifics of the project.


Table 9.4 Project Closeout Report Template


Section 1. General Project Information
Project Name | Date
Organizational Unit
Project Manager | Phone | Email | Fax

Section 2. Goals / Objectives / Expectations of Project


(Describe specific goals, objectives, and expectations for the project; for each,
mark whether or not the goal/objective/expectation was met)
Item | Project Goal / Objective / Expectation | Met?
2.1 |  | Yes [ ]  No [ ]
Section 3. Project Risks and Issues
(Note each risk from the Risk Management Plan and indicate whether or not it occurred,
turned into issues, and whether the issue was resolved, and how, or if it is still open)
Item | Risk | Open Issue?
3.1 |  | Yes [ ]  No [ ]
Section 4. Project Quality
(List the project’s major work products / deliverables;
indicate the reviewer’s name and whether or not the item was approved)
Item | Work Product / Deliverable | Reviewer Name | Approved?
4.1 |  |  | Yes [ ]  No [ ]
Section 5. Project Costs and Schedule
(Use Earned Value Management to determine the final project performance; see the sketch following this template)
EVM Parameter | Value | Comments
Actual Cost (AC)
Planned Value (PV)
Earned Value (EV)
Cost Performance Index (CPI)
Schedule Performance Index (SPI)
Section 6. Redistribution of Resources
(List each resource used for project, and indicate whether they have been transferred,
terminated, reassigned, or other status; include an effective date for each resource)
Resource / Title | Release Status and Location | Effective Date


Section 7. Project Files and Artifacts


(List the location of work products, deliverables, minutes, and all other project documents)
Artifact / File Name | Location | Contact



Section 8. Lessons Learned
(State lessons learned from the perspective of the problem / issue it is related to; describe the problem;
include references to project documentation; identify recommended changes to avoid problem in future)
Problem or Issue | References | Recommended Changes

Section 9. Post-Implementation Plan


(Note the post-implementation activities and the plan for completing each activity)
Activity | Planned Date | Assigned To | Frequency

Section 10. Open Issues


(List open issues from Section 3 and the planned resolution for each)
Open Issue | Planned Resolution
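Section 5 of the template above refers to Earned Value Management (EVM). As a brief sketch using the standard EVM definitions, where the Cost Performance Index is CPI = EV / AC and the Schedule Performance Index is SPI = EV / PV (the dollar figures below are hypothetical):

    # Minimal sketch of the EVM indices referenced in Section 5 (hypothetical values).
    actual_cost = 120_000.0      # AC: what the completed work actually cost
    planned_value = 150_000.0    # PV: budgeted cost of the work scheduled to date
    earned_value = 135_000.0     # EV: budgeted cost of the work actually completed

    cpi = earned_value / actual_cost       # CPI = EV / AC -> 1.125 (under budget)
    spi = earned_value / planned_value     # SPI = EV / PV -> 0.9   (behind schedule)
    print(round(cpi, 3), round(spi, 3))

Values below 1.0 indicate a cost overrun (CPI) or a schedule slip (SPI).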

9.3.2 Project Records Management


An important part of project management is maintaining the records related to the project's activities, stakeholders, team members, risks/issues, and communications. Records management includes organizing, planning, tracking, storing the various versions, and retrieving project-related documents during and after a project.

Each organization should have a system or process in place for managing project records, which
should include information such as the following:

◆◆ What should be recorded?


◆◆ What is the process for recording information?

◆◆ What type of records should be kept for various types of projects?

•• Project charter/proposal

•• Project meeting notes/handouts

•• Status and closeout reports

•• Procurement documents

•• Stakeholder information, including contact information

•• Change requests/log

•• Issue/risk logs


•• Project team information

•• Official mail/e-mail, including attachments

◆◆ How will records be handled and collected?



◆◆ How long will records be retained?

◆◆ How will record disposal be handled?

9.4 Project Presentations


Project managers are required to give presentations at various meetings for varying audiences for the
duration of a project. They hold regular project status meetings to discuss and display the current
status of a project and tollgate review meetings when a phase ends and authorization is needed to
continue to the next phase of the project. Project managers also need to be prepared to give senior
management an overview of the project as well as provide detail when required.

To handle a project's presentation needs, project managers rely on software and tools that track a
project's performance as well as tools that present information, such as data in a spreadsheet for analysis
purposes, reports that include tables and charts, or quality images of a project's performance data.

9.4.1 Creating and Designing Project Presentations


Following are the actions (steps) involved in creating and designing a presentation:

Step 1. Determine the major participants. (Who is the audience?)


Step 2. Determine the objective. (What is the intended use of the data?)
Step 3. Gather background information.
Step 4. Determine the content and structure. (What is the message to be communicated? What presentation format should be used?)
Step 5. Create the visual aids.
Step 6. Create the presentation.
To help ensure your project presentation is engaging, clear, and concise, the following best practices
will be helpful when creating and designing one:

◆◆ Focus: Have an objective in mind when you create and design your presentation. Do not add
information that does not pertain to the objective.

◆◆ Engagement: Ensure the content is easily understood by the intended audience. Project
presentations need to engage the audience by using carefully selected and relevant images,
graphs instead of tables of data, and stories to explain numbers, as well as asking thought-
provoking questions.

◆◆ Effects: Avoid using too many effects. They can distract the audience from the objective.

◆◆ Bullet Points: Bulleted lists should be kept to five items if possible but never more than seven. Details need not be included on the slide or on the handout; instead, verbalize minor or extended pieces of information during the presentation.


◆◆ Visual Appeal: Use visual stimuli to present information. Colors and fonts need to be readable
(avoid using a true red or green as red-green color blindness is common). If you are using a
projector, test your presentation to ensure everything is readable on the screen.

Chart Design: A Few Hints and Tips



This section is taken from the U.S. Department of Energy's "The Performance-Based Management
Handbook, Volume 5" and is reprinted with the permission of the Performance-Based Management
Special Interest Group.2

Graphics and charts are essential to presenting the data. The charting area is the focal point of the
chart or graphic. The graphical, dramatic representation of numbers as bars, risers, lines, pies, and the like is what makes a chart so powerful. Therefore, make your charting area as prominent as possible
without squeezing other chart elements off the page. If you can still get your point across without
footnotes, axis titles, or legends, do so to make the charting area more prominent. However, remember
that the document needs to communicate enough information to be a stand-alone document. Keep
the following tips in mind when designing your chart.

◆◆ Less is more.

◆◆ Group bars to show relationships.

◆◆ Avoid three-dimensional graphics.

◆◆ Use grids in moderation.

◆◆ Choose colors carefully or avoid them altogether.

◆◆ Limit use of typefaces.

◆◆ Choose legible typefaces.

◆◆ Set type against an appropriate background.

◆◆ Use pattern fills with moderation.

2  Performance-Based Management Special Interest Group (PBM SIG), "The Performance-Based Management Handbook,
Volume 5: Analyzing, Reviewing, and Reporting Performance Data," www.orau.gov (September 2001) 41-43.


Part III: Define Phase of DMAIC



The first phase of the LSS methodology is the Define Phase. You will recall from earlier chapters that the overall methodology includes five phases: 1) Define, 2) Measure, 3) Analyze, 4) Improve, and
5) Control. The purpose of the Define Phase is to determine the objectives, scope, and schedule of the
project and to select and train the project team. During this phase it is necessary to collect information
about the customers and the process involved and also to determine how project success will be
measured.

This first phase is intended to help the team get organized, determine the roles and responsibilities of
each team member, and establish team goals and milestones. By the end of this phase, the team should
be able to answer the following questions:

1. What is the problem?

2. Who are the customers and how are they impacted by the problem?

3. What are the goal and the deliverables?

4. What is the timeline for achieving the goal?

5. What factors are critical to the customers?

6. What processes are involved?

7. What is the scope of the project?


Chapter 10: Voice of the Customer (VOC)



Key Terms
Critical to x (CTx)
Critical to Quality (CTQ) tree
Kano analysis
Operational definitions
Quality Function Deployment (QFD)
House of Quality
Voice of the Customer (VOC)

Body of Knowledge
1. Identify the internal and external customers of a project and the effect the project will have on them.

2. Collect feedback from customers using surveys, focus groups, interviews, and various forms of
observation.

3. Use affinity diagrams to sort and group customer data.

4. Apply Kano analysis to identify opportunities to satisfy customers.

5. Develop a CTQ tree to refine the general customer requirements into the CTQ requirements.

6. Craft operational definitions to express customer requirements in clear and objective terms.

7. Use quality function deployment (QFD) to translate customer requirements statements into
product features, performance measures, or opportunities for improvement.

U nderstanding their customers’ needs is critical to a company's success. Soliciting and collecting
customer needs and perceptions can be described as listening to the voice of the customer. The
voice of the customer (VOC) is a term used to describe the in-depth process of capturing a customer’s
expectations, preferences, and aversions. VOC enables the organization to:

◆◆ Make decisions on products and services.

◆◆ Identify product features and specifications.

◆◆ Focus on making improvements.

◆◆ Develop baseline metrics regarding customer satisfaction.

◆◆ Identify customer satisfaction drivers.

In the Define Phase, critical customer requirements are collected, measured, and translated into
actionable goals using a number of tools, such as surveys, interviews, focus groups, warranty data, field
reports, complaint logs, the Kano model, CTQ analysis, and QFD.


When developing a VOC strategy, the following questions should be answered:

◆◆ How will the customers be identified?

◆◆ How can the customers' needs be gathered and identified?



◆◆ Which of the customers’ needs are currently being fulfilled?

◆◆ Which of the customers’ needs are currently not being fulfilled?

10.1 Identifying Your Customer


As a reminder, the first principle of Lean thinking states that an organization must know who their
customers are and how they define value. Without understanding what the customer wants and what
the customer values, an organization runs the risk of producing a wasteful quantity of goods and
services that the customer does not want or need.

Customer-perceived quality is the leading driver of business success; therefore, it is important for an
organization to identify and understand their customers. A customer is any person or organization
that receives a product or service (output) from the work process or any person or organization that
regulates the product or service. Customers can be any of the following:

◆◆ External: Individuals or organizations outside of your company that pay for the product or
service.

◆◆ Internal: Colleagues or departments that receive products, services, support, or information


from the organization's processes, e.g., Engineering, Manufacturing, Quality, Marketing.

◆◆ Regulatory: Any government agency that has standards to which the process, product, or
service must conform.

Methods used to identify customers include brainstorming, supplier-input-process-output-customer (SIPOC), marketing analysis data, and tracking a product or service to delivery.

10.2 Collecting Customer Data


There are two types of VOC data:

1. Reactive: Customer complaints, compliments, feedback, audits, contract cancellations, technical support calls, product returns/recalls, and/or warranty claims that can lead to significant improvement opportunities.

2. Proactive: Customer interviews, surveys, focus groups, market research, and observations that
can help identify improvement opportunities.

Reactive data are always being sent whether they are requested or not, but proactive data must be collected through efforts initiated by the organization. Some of the methods for capturing the VOC are listed in Table 10.1.


Table 10.1 Methods for Capturing the Voice of the Customer (VOC)

Method | Strength | Weakness
Interviews | One-on-one | Small sample size
Surveys | Reach many customers at once | Low response rate
Focus Groups | One-on-few | Groupthink
Quality Function Deployment (QFD) | Identification, prioritization, and implementation | Complicated process
Lead Users | Leaders in knowledge of future products | Available resources for deployment

Other sources of customer data include:

◆◆ Complaints

◆◆ Service issues

◆◆ Quality issues

◆◆ Delivery issues

◆◆ Customer scorecards

◆◆ Marketing research

◆◆ Data studies of patterns and trends

◆◆ Audits

◆◆ Past decision behavior and tendencies

◆◆ Technology research

◆◆ Gemba: go to the process and observe

Customer Surveys
Surveys are used to measure the performance of a product or service within a specific group of
customers or across an entire customer segment. Surveys can be completed in various ways: a paper
form, an online form, or a phone call.

When developing the survey, determine the measurement scale for answers, ensure that the individual questions fulfill the objectives of the survey, and then validate the questions with a test group.

Interviews
Interviews provide information about how a customer sees an organization’s product or service, such
as issues, characteristics, performance, etc. Interviews can be performed one-on-one or with a group of customers and can be conducted over the phone, through U.S. mail, email, or the Internet. While
conducting in-person interviews with one customer at a time can be costly, they do have the benefit of
building customer relationships through personal interaction. They also have the best completion rate
of all types of interviews.


Focus Groups
Another way to collect VOC data is through a focus group, where a dozen or so potential or current
customers meet together and are asked to share their perceptions and opinions about a product/
service. Focus groups can be used to gain insight into a customer's needs, to test designs, and/or obtain
feedback and can also provide clarification of information gathered during interviews or through a
survey. Focus groups work well when the participants are allowed to talk freely and openly with one
another regarding the product or service.

10.2.2 Sorting and Grouping Customer Data


An organization that is collecting customer data may be able to compile a very large list of unsorted
data, and then use an affinity diagram to organize them into groupings based on relationships (see
Figure 10.1). The affinity diagram is created by sorting customer data into logical, related groups. Then,
brief statements are written that capture the customer data on cards, and category headings are
created that represent each group and how the data are linked. Superheaders should be used when two
or more groups of data contain a relationship.
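As a small, purely hypothetical illustration of the resulting structure (the group names, superheading, and comments below are invented for the example):

    # Hypothetical affinity grouping of customer comments into related groups.
    affinity_groups = {
        "Delivery": ["Orders arrive late", "No shipment tracking updates"],
        "Product quality": ["Finish scratches easily", "Parts feel flimsy"],
        "Billing": ["Invoices are hard to read"],
    }

    # A superheader links two or more related groups.
    superheaders = {"Customer experience": ["Delivery", "Product quality"]}

    for header, groups in superheaders.items():
        print(header, "->", groups)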

[Figure: schematic affinity diagram showing individual data items sorted into Groups 1 through 6, with a superheading spanning two related groups.]

Figure 10.1 Affinity Diagramming: Grouping Customer Data
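To make the grouping concrete, the following is a minimal sketch in Python with hypothetical group names and customer comments (none of which come from the source); it simply represents an affinity diagram's groups and a superheading as a small data structure.

```python
# Minimal sketch with hypothetical VOC data: an affinity diagram represented as
# groups of customer statements, plus a superheading that spans related groups.
affinity_groups = {
    "Delivery": ["Orders arrive late", "No tracking updates"],
    "Packaging": ["Boxes arrive damaged", "Too much filler material"],
    "Support": ["Hold times are too long", "Agents lack product knowledge"],
}

# A superheading is used when two or more groups are related.
superheadings = {"Order Fulfillment": ["Delivery", "Packaging"]}

for heading, group_names in superheadings.items():
    print(heading)
    for name in group_names:
        print(f"  {name}: {affinity_groups[name]}")
```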


10.3 Identifying Customer Needs and Requirements


The customer's needs and requirements establish the link between the organization and the customer.
Customers need to feel that their needs are being met and that they are important to the organization.
Identifying the customer's needs will lead the organization to the critical product requirements and technical specifications.

10.3.1 Kano Model


The Kano model (named for its inventor, Dr. Noriaki Kano) classifies product attributes based on how
they are perceived by customers and their effect on customer satisfaction. The figure below (see Figure
10.2) portrays the three levels of need:

◆◆ Expected Attributes/Must Haves: Basic requirements (when absent, customers are dissatisfied); offer no opportunity for product differentiation.

◆◆ Normal Attributes/Satisfiers: Variable requirements (more is better); will improve customer satisfaction.

◆◆ Exciting Attributes/Delighters: Unspoken and unexpected by customers; can result in high levels of customer satisfaction; their absence does not lead to dissatisfaction.

[Figure: Kano model graph. Horizontal axis: Absent to Fulfilled; vertical axis: Dissatisfaction to Delight. Curves are shown for Expected Attributes/Must Haves, Normal Attributes/Satisfiers, and Exciting Attributes/Delighters.]

Figure 10.2 Kano Model

The model also contains two dimensions:

◆◆ Achievement (the horizontal axis): Ranges from the attribute or need being absent to being
fulfilled.


◆◆ Satisfaction (the vertical axis): Ranges from dissatisfaction with the product or service to
delight.

Because customer expectations can change over time, a date and comments describing the decisions
made should be noted on each Kano model as well as each subsequent revision.

10.4 Developing CTx Measures


Collecting the voice of the customer allows you to translate customer comments into measurable
statements called critical to x (CTx) metrics. A simple tool called the VOC Translation Matrix can be
used to facilitate this process (see Table 10.2).

Table 10.2 Voice of the Customer (VOC) Translation Matrix Template

Customer | Voice of the Customer | Key Customer Issue(s) | Critical Customer Requirement
Who is the customer? | What does the customer want? | Identify the issue(s) that prevent satisfying customers. | Summarize key issues and translate them into specific and measurable requirements.

Steps for Completing the VOC Translation Matrix:

Step 1. Identify the organization's customers: “Who is the customer?”


Step 2. Collect and analyze reactive data, and consider proactive approaches, to identify the
customers’ needs: “What does the customer want from the organization?”
Step 3. Identify the key issue(s) that prevent the organization from satisfying customers.
Step 4. Create critical customer requirements by summarizing key issues and translating them
into specific and measurable requirements.
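As an illustration only (the customer, issue, and requirement below are hypothetical and not from the source), a single row of the VOC Translation Matrix could be captured as a small record that mirrors Steps 1 through 4.

```python
from dataclasses import dataclass

# Minimal sketch of one VOC Translation Matrix row (hypothetical example data).
@dataclass
class VOCTranslationRow:
    customer: str              # Step 1: who is the customer?
    voice_of_customer: str     # Step 2: what does the customer want?
    key_issue: str             # Step 3: what prevents satisfying the customer?
    critical_requirement: str  # Step 4: specific, measurable requirement

row = VOCTranslationRow(
    customer="Online shoppers",
    voice_of_customer="My order takes too long to arrive.",
    key_issue="Warehouse picking delays add days to delivery",
    critical_requirement="Ship 95% of orders within 24 hours of order placement",
)
print(row)
```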

10.4.1 Critical to Quality (CTQ) Metrics


The CTQ metrics are the specific and measurable product or process requirements with applicable
performance standards and specification limits, as defined by the customer. They are the key
characteristics by which customers evaluate the quality of a product or service.

A defect is any event that does not meet a CTx metric. Defects can cost a project time and effort. In
order to adequately define, measure, and evaluate defects, the following terms first must be understood.

◆◆ Unit (U): an individual product or service delivered.

◆◆ Opportunity (O): any chance for a defect to occur on a unit, keeping in mind that there might be multiple opportunities per unit (product or service).

Total opportunities: TO = U x O


◆◆ First pass yield (FPY): percentage of units that complete a process and meet quality guidelines without being scrapped, rerun, retested, or removed from processing.

FPY = (Total Units Entering Process - Defective Units)/Total Units Entering Process



◆◆ Defect (D): any event that occurs that does not meet customer expectations.

Probability of a defect: P(d) = 1 – FPY

◆◆ Defects per unit (DPU): the average number of defects observed per unit.

DPU = Total Defects/Total Units

◆◆ Defects per opportunity (DPO): total defects divided by total opportunities.

DPO = Total Defects/Total Number of Opportunities

◆◆ Defects per million opportunities (DPMO): the number of defects observed per opportunity, scaled to one million opportunities.

DPMO = DPO x 1,000,000
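The formulas above translate directly into code. The following is a minimal sketch in Python (the function names are ours, introduced only for illustration).

```python
# Minimal sketch of the CTQ defect metrics defined above.
def first_pass_yield(units_entering: int, defective_units: int) -> float:
    """FPY = (units entering - defective units) / units entering."""
    return (units_entering - defective_units) / units_entering

def dpu(total_defects: int, total_units: int) -> float:
    """Defects per unit = total defects / total units."""
    return total_defects / total_units

def dpo(total_defects: int, total_units: int, opportunities_per_unit: int) -> float:
    """Defects per opportunity = total defects / (total units x opportunities per unit)."""
    return total_defects / (total_units * opportunities_per_unit)

def dpmo(total_defects: int, total_units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities = DPO x 1,000,000."""
    return dpo(total_defects, total_units, opportunities_per_unit) * 1_000_000
```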

Example
When human body weight is out of control, it is likely because a few of the individual's
critical X metrics are running out of specification. To get their body weight back under
control, the individual decides to purchase a new diet program sold on television, which
includes a sample package of 25 different food items.

The CTQs in this example can be defined as follows:

CTQ 1—The food must taste relatively good.

To rate the items, a simple scale of 1 – 3 is created:

1—I disliked the taste and cannot eat it.


2—The taste is bearable if it will help me lose weight.

3—The food tastes pretty good.

CTQ 2—The food cannot leave me feeling hungry after I eat it.

To measure this CTQ, another simple scale of 1-3 is used to measure hunger 15 minutes
after eating:

1 – I am still very hungry.

2 – I think I might be able to stay on this diet.

3 – I really feel satisfied and full.

Since there are two CTQs, there are two opportunities for a defect on each unit (package of
food). The number of units is 25 (number of packages of food), and the decision is made to
classify each of the CTQs as a defect if the rating is less than 2.


After trying all 25 packages of food, the following results were calculated for this product:

D = Number of defects = 15

O = Opportunities per unit = 2

N = Number of units = 25

DPO = D / (N x O) = 15 / (25 x 2) = 0.30

Yield = (1 - DPO) x 100 = 70 percent

Sigma = 2.02 (this value must be determined from a process-sigma conversion table; sigma levels may be obtained from texts, online converters, or software applications)

DPMO = DPO x 1,000,000 = 300,000

Clearly, the critical Xs that impact both CTQ 1 and CTQ 2 will need some adjusting in
order for this organization to achieve an improved sigma level for this product.
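As a quick check of the arithmetic, the example's figures can be reproduced in a few lines (a minimal sketch; the sigma level itself still comes from a conversion table or software).

```python
# Diet-program example: D = 15 defects, O = 2 opportunities per unit, N = 25 units.
D, O, N = 15, 2, 25

dpo_value = D / (N * O)              # 15 / (25 * 2) = 0.30
yield_pct = (1 - dpo_value) * 100    # 70 percent
dpmo_value = dpo_value * 1_000_000   # 300,000

print(dpo_value, yield_pct, dpmo_value)
```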

10.4.2 Critical to Schedule (CTS) Metrics


CTS metrics capture cycle time and scheduling efficiencies or inefficiencies, e.g., process cycle efficiency, process lead time, process velocity, and overall equipment effectiveness, and are used to measure how a process is performing.

Process Performance
With LSS, there are two very important measures for understanding, at a high level, how the process is
performing:

1. Lead time is the total elapsed time for one item to make it through the system from initial step
to customer delivery. When evaluating the lead time for a process, the first question to ask
is "how short does the customer want this lead time to be?" A secondary question would be
"what is the competitor’s lead time, and how does the organization's time compare?"

2. Process efficiency involves distinguishing between value-added and non-value-added steps.


Imagine completing a value stream map for the process and labeling each of the steps in the
process as either value-added or non-value-added. The time spent doing value-added work
and the time spent doing non-value-added work are each totaled. Then, the efficiency of the
process is calculated using the formula below.

Process efficiency = Value-added time / (Value-added time + Non-value-added time)

Example
Process efficiency = 12 hours / 275 hours = 0.0436, or about 4 percent

The efficiency of this process is 4 percent, meaning the inefficiency in this process is 96
percent. While this number might sound rather shocking, leading one to think this may be
a poor example, be assured that these numbers are actually quite typical.
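A minimal sketch of the same calculation follows (the 263-hour non-value-added figure is implied by the 275-hour total in the example above).

```python
# Process efficiency = value-added time / (value-added + non-value-added time).
value_added_hours = 12.0
non_value_added_hours = 263.0  # 275 total hours minus 12 value-added hours

process_efficiency = value_added_hours / (value_added_hours + non_value_added_hours)
print(f"Process efficiency: {process_efficiency:.1%}")        # about 4.4%
print(f"Process inefficiency: {1 - process_efficiency:.1%}")  # about 95.6%
```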

Both Lean and Six Sigma are firmly focused on the customer and what is valuable to the customer. For Lean methods, the process always begins by looking at the full value stream from the customer’s point of view and measuring value-added vs. non-value-added activities in the process based on what is important to the customer.

The combination of the Lean metrics and the Six Sigma metrics provides the capability of measuring and comparing many different types of processes. Whether one works in finance, information technology, marketing, or engineering, the DPMO and sigma levels can be used to measure process performance. Everyone across the organization can then universally understand exactly what a certain level of performance looks like. In addition, Lean’s cycle time and process efficiency metrics offer a solid understanding of how a particular process is impacting the success of a business, which is an incredibly powerful tool.

10.4.3 Critical to Cost (CTC) Metrics


CTQ and CTS metrics are often Critical to Cost (CTC) metrics as well because of the costs associated
with and affected by process or service issues or delays in production or delivery. CTC metrics include
the internal rate of return (IRR), net present value (NPV), and cost of poor quality.

Cost of Poor Quality


The cost of poor quality (COPQ) is the cost associated with producing products and services that are
poor quality. There are four categories of costs:

1. Appraisal: expenses involved in the inspection process

2. Prevention: cost of all activities whose purpose is to prevent failures

3. Internal failure: cost incurred when a failure occurs in-house

4. External failure: cost incurred when failure occurs after the customer owns the product

Many organizations track the costs associated with these COPQ categories but may not link those
costs to quality defects. Some research and data gathering therefore are necessary to make these links.
Since the focus of LSS is using data to provide financial benefits, it is important for an organization to
invest the time and effort in creating an effective cost-of-quality tracking system.

These costs can be linked with LSS projects. Once the cost buckets that the process improvements will
impact have been identified, the potential dollar impact can be systematically analyzed and a cost-out
performed.
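The sketch below is a hypothetical illustration (the cost items and dollar figures are invented, not from the source) of rolling tracked costs up into the four COPQ categories so the largest buckets can be targeted first.

```python
# Minimal sketch: summing tracked costs by COPQ category (hypothetical data).
copq_items = [
    ("Receiving inspection", "Appraisal", 18_000),
    ("Operator training", "Prevention", 9_500),
    ("Scrap", "Internal failure", 42_000),
    ("Warranty returns", "External failure", 67_000),
]

totals: dict[str, int] = {}
for _item, category, cost in copq_items:
    totals[category] = totals.get(category, 0) + cost

# Report the buckets largest-first.
for category, cost in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{category}: ${cost:,}")
```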

The following are COPQ examples:

Prevention Costs
◆◆ Applicant screening ◆◆ Market analysis
◆◆ Capability studies ◆◆ Personnel reviews
◆◆ Controlled storage ◆◆ Pilot projects
◆◆ Design reviews ◆◆ Planning
◆◆ Equipment maintenance ◆◆ Procedure writing
◆◆ Equipment repair ◆◆ Prototype testing


◆◆ Field testing ◆◆ Quality design


◆◆ Fixture design and fabrication ◆◆ Safety reviews
◆◆ Forecasting ◆◆ Time and motion studies
◆◆ Housekeeping ◆◆ Training

Appraisal Costs
◆◆ Audits ◆◆ Laboratory testing
◆◆ Equipment calibration ◆◆ Procedure checking
◆◆ Final inspection ◆◆ Prototype inspection
◆◆ In-process inspection ◆◆ Receiving inspection
◆◆ Inspection and testing ◆◆ Shipping inspection
◆◆ Inspection and test reporting ◆◆ Test equipment maintenance

Internal Failure Costs


◆◆ Accidents ◆◆ Late time cards
◆◆ Accounting error corrections ◆◆ Obsolescence
◆◆ Design changes ◆◆ Premium freight
◆◆ Employee turnover ◆◆ Redesign
◆◆ Engineering changes ◆◆ Re-inspection
◆◆ Equipment downtime ◆◆ Repair and retesting
◆◆ Excess interest expense ◆◆ Retyping letters
◆◆ Excess inventory ◆◆ Rework
◆◆ Sorting ◆◆ Scrap

External Failure Costs


◆◆ Bad debts ◆◆ Overpayments
◆◆ Customer complaint visits ◆◆ Penalties
◆◆ Customer dissatisfaction ◆◆ Premium freight
◆◆ Engineering change notices ◆◆ Price concessions
◆◆ Equipment downtime ◆◆ Pricing errors
◆◆ Excess installation costs ◆◆ Recalls
◆◆ Excess interest costs ◆◆ Redesign
◆◆ Excess inventory ◆◆ Re-inspection
◆◆ Excess material handling ◆◆ Repair costs
◆◆ Excess travel expense ◆◆ Restocking costs
◆◆ Failure reviews ◆◆ Retesting
◆◆ Field service training costs ◆◆ Returns
◆◆ Liability suits ◆◆ Rework


◆◆ Liability ◆◆ Scrap
◆◆ Loss of market share ◆◆ Sorting



10.4.4 Refining Requirements
Critical to Quality (CTQ) Tree
CTQ tree diagrams can help translate customer needs and requirements into product characteristics
by linking the requirements to the key quality drivers and specific measurements.

First, the major customer needs collected during the customer data collection process are documented, with the intent of seeing each need from the customer’s point of view. Typically, the VOC is a qualitative opinion, so this process is about transforming the VOC into a quantitative specification. Translation can be achieved by asking the following question: How do we know when we have it?

Example
A high school is going to convert a traditional course to a flipped classroom model. Figure
10.3 is an example CTQ tree for that effort.

[Figure: CTQ tree. Need (general, hard to measure): Convert Traditional High School Class to Flipped Classroom. Drivers and their CTQ requirements (specific, easy to measure):

◆◆ Increased Learning Outcomes: more high school graduates; fewer high school dropouts per school year; students learn at their own pace.

◆◆ Increased Learning through Active Engagement: students communicate with peers and teachers via online discussions; concept engagement takes place in the classroom; students receive instant feedback.

◆◆ Increased Use of Educational Technology: immediate review of concepts; increased use of prerecorded lecture videos and course podcasting.

◆◆ Increased Support of Students in the Classroom: less frustration as students work on problems during class instead of at home; review of concepts with individual students as needed; increase in collaborative work and concept mastery exercises in the classroom.]

Figure 10.3 CTQ Tree Diagram


10.5 Linking Customer Requirements to Business Objectives


Product and service requirements must be linked with business objectives. If a customer requirement cannot be linked to a measurable business objective, then it should not be included in the product/service and is referred to as being out of scope.

10.5.1 Operational Definitions


An operational definition is a simple, straightforward description of what should be observed and measured to ensure that anyone taking or interpreting the data will do so consistently; it translates the voice of the customer into words, metrics, and technical requirements that can then be used by the organization. Operational definitions should also provide specific instructions on how to take each measurement (see Table 10.3).

Table 10.3 Operational Definition

Elements | Examples
What you are trying to measure | Satisfaction of customers in the Southeast region with computer support services; number of surface defects; on-time delivery for Product X
What the measure is not | Are “customer comments” included under “complaints”? Does “surface defects” include only scratches and dents?
Basic definition of the measure | Satisfaction is X% of customers that give a score of 80 or above; surface defect = any dent or scratch visible from a distance of 2 feet under normal light
How to take the measure (in detail) | Start the stopwatch when the customer steps into the line, and stop it when the customer leaves the front desk; use the standard calipers placed at the X-junction to measure the width in centimeters

Translating VOC into Operational Definitions


The operational definitions are crafted as the VOC data are being refined and translated into
the product/service requirements. Table 10.4 is a sample template for translating VOC data into
operational definitions.


Table 10.4 Translating VOC into Operational Definitions Matrix Template

Gather Feedback | Affinity Grouping | CTQs | Operational Definitions
Customer 1 | Group 1 | CTQ 1 | CTQ 1 Operational Definition
Customer 2 | Group 2 | CTQ 2 | CTQ 2 Operational Definition
Customer 3 | Group 3 | CTQ 3 | CTQ 3 Operational Definition

10.5.2 Quality Function Deployment


Quality function deployment (QFD) links the needs of the customer with various business functions
and organizational processes, such as marketing, design, quality, production, manufacturing, and
sales. Using the seven management and planning tools, QFD identifies opportunities and needs and
translates them into actions and designs.

The QFD methodology can be used for both tangible products and non-tangible services, including
manufactured goods, services, software products, IT projects, business process development,
government, health care, environmental initiatives, and many other applications.

QFD provides a comprehensive development process for the following:

◆◆ Understanding customer needs (basic, unspoken, performance, and excitement level).

◆◆ Understanding what value means from the customer’s perspective.

◆◆ Understanding how customers find, select, and evaluate a product or service.

◆◆ Deciding what features or functions to include in the product or service design based on the
customer’s needs.

◆◆ Determining the level of performance that must be delivered to gain a competitive advantage
in the market.

◆◆ Intelligently linking the needs of the customer with design, development, engineering,
manufacturing, and service functions.

◆◆ Intelligently linking Design for Six Sigma (DFSS) with the front-end voice of the customer
analysis and the entire design system.

House of Quality
This section, excluding the House of Quality graphic, is taken from DRM Associates, “Customer-Focused
Development with QFD” and is reprinted with the permission of the author.1

QFD is a structured approach to defining customer needs or requirements and translating them into
specific plans to produce products to meet those needs. This understanding of the customer needs

1  Kenneth Crow, "Customer-Focused Development with QFD," www.npd-solutions.com (DRM Associates, 2014).

© 2009, 2014, 2017 Purdue University All Rights Reserved | Third Edition 2017 125
Chapter 10: Voice of the Customer (VOC)

is then summarized in a product planning matrix, or House of Quality.2 These matrices are used
to translate higher level “whats” or needs into lower level “hows” as the product requirements or
technical characteristics to satisfy those needs.

Once the customer needs are identified, preparation of the House of Quality can begin. The sequence
of preparing the product planning matrix is as follows:

Step 1. Customer needs or requirements are stated on the left side of the matrix (see Figure
10.4). These needs are organized by category based on the affinity diagrams. For each
need or requirement, the customer priorities are stated using a 1 to 5 rating. Use
ranking techniques and paired comparisons to develop priorities.
Step 2. Evaluate prior generation products against competitive products. Use surveys,
customer meetings, or focus groups/clinics to obtain feedback. Include a competitor’s
customers to get a balanced perspective. Identify price points and market segments
for products under evaluation. Identify warranty, service, reliability, and customer
complaint problems to identify areas of improvement. Based on the results, develop
a product strategy. Consider the current strengths and weaknesses relative to the
competition. Identify opportunities for breakthroughs to exceed the competitor’s
capabilities, areas for improvement to equal the competitor's capabilities, and areas
where no improvements will be made. This strategy is important in order to focus
development efforts where they will have the greatest payoff.
Step 3. Establish product requirements or technical characteristics to respond to customer
requirements and organize that information into related categories. The characteristics
should be meaningful, measurable, global, and should be stated in a way to avoid
suggesting a particular technical solution so as not to constrain designers.
Step 4. Develop relationships between customer requirements and product requirements or
technical characteristics. Use symbols for strong, medium, and weak relationships. Be
sparing with the strong relationship symbol.
Step 5. Develop a technical evaluation of prior generation products and competitive products.
Step 6. Develop preliminary target values for product requirements or technical
characteristics.
Step 7. Determine potential positive and negative interactions between product requirements
or technical characteristics using symbols for strong or medium, positive or negative
relationships. Too many positive interactions suggest potential redundancy in “the
critical few” product requirements or technical characteristics. Focus on negative
interactions; consider product concepts or technology to overcome the potential
tradeoffs or the tradeoffs in establishing target values.
Step 8. Calculate the importance ratings. Assign a weighting factor to the relationship symbols (9-3-1, 4-2-1, or 5-3-1). Multiply the customer importance rating by the weighting factor in each box of the matrix and add the resulting products in each column (a computational sketch follows these steps).
Step 9. Develop a difficulty rating (1 to 5 point scale, with 5 being very difficult and risky) for
each product requirement or technical characteristic. Consider technology maturity,
personnel technical qualifications, business risk, manufacturing capability, supplier/
subcontractor capability, cost, and schedule. Avoid too many difficult/high risk items, as this will likely delay development and exceed budgets. Assess whether the difficult items can be accomplished within the project budget and schedule.

2  Called “House of Quality” because the correlation matrix that sits on top of the main body of the matrix is shaped like a roof.
Step 10. Analyze the matrix and finalize the product development strategy and product plans. Determine the required actions and areas of focus. Finalize the target values. Determine the items for further QFD deployment. To maintain the focus on “the critical few,” less significant items may be ignored in the subsequent QFD matrices. Maintain the product planning matrix as customer requirements or conditions change.
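The sketch below illustrates the Step 8 calculation only, using hypothetical customer needs, technical characteristics, and ratings (a 9-3-1 weighting is assumed); it is not taken from the matrix in Figure 10.4.

```python
# Minimal sketch of Step 8: technical importance = sum over customer needs of
# (customer importance rating x relationship weight), computed per column.
weights = {"strong": 9, "medium": 3, "weak": 1}

customer_importance = {"Easy to install": 5, "Reliable": 4, "Low cost": 3}

# relationships[customer need][technical characteristic] -> relationship strength
relationships = {
    "Easy to install": {"Setup steps": "strong", "Part count": "medium"},
    "Reliable":        {"MTBF": "strong", "Part count": "weak"},
    "Low cost":        {"Part count": "strong", "Setup steps": "weak"},
}

absolute: dict[str, int] = {}
for need, row in relationships.items():
    for characteristic, strength in row.items():
        absolute[characteristic] = (
            absolute.get(characteristic, 0)
            + customer_importance[need] * weights[strength]
        )

total = sum(absolute.values())
relative = {name: score / total for name, score in absolute.items()}
print(absolute)  # {'Setup steps': 48, 'Part count': 46, 'MTBF': 36}
print(relative)
```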

[Figure: House of Quality matrix. Customer needs and their details are listed as rows with 1-5 importance ratings; engineering/IT requirements form the columns. The body of the matrix records strong (9), some (3), or weak (1) relationships, and the correlation "roof" records positive and negative interactions between requirements. Additional sections show a competitive evaluation of our product or service versus Competitor A and Competitor B, objective target measures for each requirement (sample units include ft-lb, lb, %, dB, and psi), technical difficulty, and absolute and relative technical importance.]

Figure 10.4 House of Quality


Chapter 11: Identifying and Selecting a Project



Key Terms
benchmarking, process owners, project selection, stakeholders

Body of Knowledge
1. Describe the project selection process and what factors should be considered in deciding whether
to use DMAIC or another problem-solving process.

2. Identify the process owners and other stakeholders in a project.

3. Recognize stakeholders, their needs, possible conflicts or resistance, and plan and communicate
accordingly.

4. Identify each tier in successful project selection.

5. Understand the purpose of benchmarking.

Selecting the right project is a critical component of project success. If LSS practitioners do not put enough effort into selecting the right opportunity for improvement, a project can end in disaster or create unnecessary work and complexity for the project team. Poor results will shake the faith of management in the worth of LSS and may lead to its demise.

Practitioners need a robust and reliable approach to determine if the project is a good LSS project
and to prioritize projects to ensure resources are allocated appropriately. Projects are the key to
organization improvement in Six Sigma. Since projects are the most visible and quantifiable part
of this effort, you will be judged by their quality so they must have a large enough impact for the
organization to care about them. The impact may relate to profit, the environment, safety, or
anything else that management deems important.

11.1 Identifying a Project


A project can be identified by any of the following groups/individuals:

◆◆ Executives/Upper Management: looks at possible projects based on their impact on the organization and also may resurrect failed past projects.

◆◆ Department Level Management: looks at possible projects based on their impact on the
department’s ability to meet its organizational goals and objectives.

◆◆ Employees: identify projects that can improve their ability to meet customer needs on a daily
basis or make their jobs easier.


◆◆ Black or Green Belts: identify projects based on their experience with a previous or ongoing
project.

◆◆ Process Owners: identify projects based on their experiences executing and implementing
specific processes.

◆◆ Customers: identify projects through VOC data collection to meet their needs and add value
to their experience.

◆◆ Suppliers: can impact projects based on changes that occur within the supplier’s organization.

When identifying potential projects, it is also important to identify which problem-solving process to
use. As a reminder, use DMAIC if you are trying to increase performance for an existing process or
service, find and eliminate defects and their root causes, or have measured specific opportunities for
improvement. Use a DFSS methodology to create a new product, design, solution, or process.

11.2 Identifying Process Owners and Project Stakeholders


It is important to identify project stakeholders and process owners while identifying and selecting
projects. Process owners execute and implement specific processes and are responsible for managing
the process and ensuring it is followed by process users. Process owners are usually subject matter
experts with an aptitude for process thinking and are interested in systems and sub-processes.
Because of their expertise, they should be involved in the decision-making process and in tollgate
reviews. Organizations usually document the role of the process owner through job descriptions and
organization charts.

Stakeholders are any individuals, groups, or organizations, external or internal, that are involved
with or are impacted by a process and/or its products and outputs. Stakeholders include employees,
functional areas or departments, management, investors, suppliers, and the community. Their
involvement with an organization can change over time, so the list of stakeholders for one project may
be different for another. A project’s stakeholders are important because their support is necessary for
implementing improvements.

11.2.1 Stakeholder Analysis


A fundamental tool for analyzing and managing change is the stakeholder analysis as shown in Table
11.1.

Table 11.1 Stakeholder Analysis

Stakeholder Name | Strongly Against | Moderately Against | Neutral | Supportive | Strongly Supportive
Person A | (row marked with a "C" at the current level of support and a "D" at the desired level)
Person B | (row marked with a "C" at the current level of support and a "D" at the desired level)

The influence strategy planning tool, as shown in Table 11.2, can help the team assess the issues and
concerns of each stakeholder. It helps the team to understand who must be “moved” to a higher level
of support and to identify a strategy for doing so.


Table 11.2 Influence Strategy Planning Tool

Stakeholder Name | Issues/Concerns | Identify "Wins" | Influence Strategy
Stakeholder A | Fears; changes; power issues | Positive impact; "What's in it for me?" | Sell key points

Steps for Using the Influence Strategy Planning Tool:

Step 1. Identify the key stakeholders. Write out the names of individuals or groups.
Step 2. Identify what level of support each stakeholder currently has regarding this change.
Facilitate a discussion with the team based on their evidence of what they have seen or
heard at a behavioral level.
Step 3. Record the current level of support of each group/stakeholder on the chart with the
letter ‘C’. It is important to be clear about what each category means (strongly against,
neutral, supportive, etc.).
Step 4. Discuss the importance of each stakeholder and determine the level of support to
which each stakeholder needs to be moved. Record on the chart using the letter "D"
and connect the two points with an arrow C → D to display any gaps.
Step 5. Next, an influence strategy needs to be developed for those who need to move in terms
of support. One approach is to analyze the lines of influence in the organization.
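A minimal sketch (with hypothetical stakeholders and levels, not drawn from the source) of how the C-to-D gaps from the steps above can be recorded and flagged for influence planning:

```python
# Current (C) and desired (D) support levels per stakeholder (hypothetical data).
levels = ["strongly against", "moderately against", "neutral",
          "supportive", "strongly supportive"]

stakeholders = {
    "Person A": {"current": "neutral", "desired": "supportive"},
    "Person B": {"current": "moderately against", "desired": "neutral"},
}

for name, support in stakeholders.items():
    gap = levels.index(support["desired"]) - levels.index(support["current"])
    if gap > 0:
        print(f"{name}: move from {support['current']} to {support['desired']} "
              f"({gap} level(s)); develop an influence strategy.")
```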

11.3 Project Selection Process


Project selection is a critical part of the LSS methodology. When it is performed properly, an organization can successfully achieve an ideal balance between strategic and tactical projects. In selecting a portfolio of Six Sigma projects, the organization must strike the appropriate balance between these two types of projects based on its individual business conditions, its strengths, weaknesses, opportunities, and threats (SWOT), and its overall strategic direction. Effective project selection is a key factor in determining the success of any LSS effort.

Key Elements of a Project Selection Process:

◆◆ Commitment of senior management

◆◆ Project selection based on realistic, available metrics

◆◆ VOC/business/process data collection

◆◆ Clear linkage to organizational goals

◆◆ Specific, detailed plans

◆◆ Properly selected process owners that have organizational support

Projects should be selected that will have the greatest impact on driving the organization’s key
performance indicators (KPI), strategy, and CTx measurements.


11.3.1 Using a Prioritization Matrix


A prioritization matrix is a tool that can significantly help an organization stay focused on all of its
critical decision-making factors and keep the project selection process as data-based and bias-free as
possible. For more information, see Section 6.5.

11.3.2 Tiered Approach


There is a structured tiered approach (see Figure 11.1) to ensuring that the appropriate projects are
selected to best support the organization’s overall business goals.

[Figure: tiered approach, cascading from Company Strategic Goals to Operational Goals to Projects.]

Figure 11.1 Tiered Approach

Tier One: Strategic business-level planning (top management)

1. Define the strategic business goals and metrics.

2. Link business goals and objectives to the core business processes.

3. Define the strategic level initiatives—top-level/big Y's.

4. Create a business-level dashboard.

Tier Two: Operational business level (mid-level managers, process owners, and project sponsors):

1. Identify the specific drilldown of subprocesses to be improved.

2. Translate the strategic-level goals and objectives into process goals and metrics.

3. Create process-level dashboards that trace back to the business-level dashboard.

4. Identify the specific problems that need to be addressed.

5. Define the project-level Y's.

6. Draft initial project charters.

Tier Three: Project level (Black Belts and Green Belts):

1. Finalize the project charter.

2. Validate Y's, scope, and feasibility.


11.4 Benchmarking
Benchmarking is the process of comparing your organization’s processes, services, products, and performance metrics to best practices from other industries or even from within your own organization. Internal benchmarking allows one department to assess another department’s processes and then take the best of each process to improve its own. External benchmarking is usually done within the same industry, but benchmarking against an organization in a different industry removes feelings of competition, making it easier to analyze that organization’s practices and improve your own performance.

Benchmarking can provide insight into how your organization compares with the competition or
with similar organizations that have different customer segments. Benchmarking can also help an
organization identify products, services, or process systems in need of improvement.

Basic Benchmarking Process Steps:

Step 1. Create a flowchart for the current process.


Step 2. Identify the areas to be improved.
Step 3. Brainstorm ideas.
Step 4. Investigate how others (internal and external) perform similar processes.
Step 5. Develop plans for application of ideas.
Step 6. Pilot test ideas.
Step 7. Initiate the new process.
Step 8. Evaluate the new process.


Chapter 12: Defining and Documenting the Process



Key Terms
process mapping, SIPOC diagram, spaghetti diagram

Body of Knowledge
1. Develop process maps and review written procedures, work instructions, and flowcharts to
identify any gaps or areas of the process that are misaligned.

2. Define the process under investigation.

3. Define and describe the process components and boundaries.

4. Recognize how processes cross various functional areas and the challenges that result for process
improvement efforts.

5. Describe the role of a process owner.

6. Identify process input and output variables and evaluate their relationships using the supplier,
inputs, process, output, and customer (SIPOC) model.

7. Understand alternate forms of process mapping and apply criteria to select the appropriate type of
map for the situation.

In order to define a process, an organization needs to ensure that it understands what the process will accomplish, the metrics that will be used to measure the process, the risks involved, and the appearance of the structure. All processes should add value to an organization, so it is important to understand the objectives. While defining the process, describe the tasks in simple, explicit terms.

Processes normally affect more than one department or organization, which can create challenges for
any process improvement project. It is important to define the process owner as there may be more
than one area or individual that may consider themselves the owner with decision authority. When
processes cross functional areas, there may also be problems with sharing of information, including
process knowledge. And different areas use different metrics to measure efficiency and effectiveness,
e.g., the accounting and financial departments might measure dollars while production may measure
productivity and defects.

12.1 Top-Level Process Definition


Within any business system, a top-level review of the organization is needed first to define the core
processes. Then, each top-level process is broken down into as many levels as needed to describe the
process structure through a process called decomposition. This hierarchy of processes graphically
represents the high-level processes within the system broken down to the detailed, low-level processes.
Top-level processes represent what the organization wants to accomplish, and the low-level processes
give detailed instructions for accomplishing each associated task.


Top-level processes include items found in an organization’s vision and/or mission statement and its
stated objectives and goals. Once the processes are defined, the question to answer is “What needs to
be done to accomplish the stated goal?”

12.2 Process Inputs and Outputs


When defining a process, it is important to identify its boundaries (scope), where it begins and ends
and what it includes. One of the tools used to identify process boundaries is a high-level process map.
High-level process maps can help clarify objectives and define basic inputs, processes, and outputs and
should also identify major tasks and activities as well as who the customers are and what they require
from the outputs of the process.

Process maps provide a diagram that shows how the various components of the process are
interconnected in a graphical format that is easy to understand. A graphical view of a process can
help management and employees visualize what needs to be done. Different views with various levels
of detail can be created since a process owner or employee will need more detail in order to perform
effectively and efficiently.

A supplier-input-process-output-customer (SIPOC) diagram is another useful tool used to identify the process boundaries. SIPOC diagrams can help translate customer requirements into specifications while focusing on key process inputs and outputs.

12.3 SIPOC Diagram


A SIPOC diagram displays a cross-functional set of activities in a single, simple diagram. It provides
a standard framework for reviewing processes of all sizes and helps maintain a big-picture perspective.
The SIPOC diagram is a fundamental LSS tool and includes these five major basic elements:

◆◆ Supplier: individual or organization providing inputs, e.g., information, materials, or services, to the process

◆◆ Input: information, materials, or services provided


◆◆ Process: set of action steps that transform inputs into outputs

◆◆ Output: final product or service resulting from the process

◆◆ Customer: person or organization that receives the output


A SIPOC diagram (see Table 12.1) is used to document a process at a high level. This is an excellent
tool because it stays above the detailed level of a regular process map and value stream map.

Table 12.1 SIPOC Diagram

Suppliers | Where do the inputs to the process come from?
Defects (inputs) | What can be wrong with the inputs going into the process?
Input | What goes into the process to generate the output?
Process | Steps or tasks that utilize inputs and generate an output that is of value to the customer
Process Owner | Who is responsible for the process?
Output | What does the process generate?
Defects (outputs) | What can be wrong with the output?
Customer | Who receives the output of the process?
CTQ | What is important about the output to the customer?
Current Measures | What measures are currently in place?

Steps for Creating a SIPOC Diagram:

Step 1. Start by providing a description of what the process does. (It is often helpful to think of
it as a “black box” or some type of function or operation.)
Step 2. Identify the first and last steps of the process and fill in the middle steps as needed.
Ideally, there should only be five steps for this high-level map.
Step 3. List the outputs of the process.
Step 4. List the customers of each output.
Step 5. List the process inputs.
Step 6. List the suppliers of the process.
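As an illustration only (the order-fulfillment process and its entries below are hypothetical), a high-level SIPOC can be captured as a simple mapping that follows the six steps above, with about five process steps.

```python
# Minimal sketch: a SIPOC captured as a dictionary (hypothetical example).
sipoc = {
    "Suppliers": ["Customer", "Warehouse", "Carrier"],
    "Inputs": ["Customer order", "Inventory", "Shipping labels"],
    "Process": ["Receive order", "Pick items", "Pack", "Ship", "Confirm delivery"],
    "Outputs": ["Delivered package", "Shipment confirmation"],
    "Customers": ["End customer", "Customer service team"],
}

for element, items in sipoc.items():
    print(f"{element}: {', '.join(items)}")
```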

If any of the information in the SIPOC diagram is unclear or missing, the next step for a project
team or process owner is to gather that information. In many cases, a SIPOC mapping exercise will
highlight the need to put some metrics in place to measure the inputs and outputs of the process.
When scoping an improvement project, the scope can be narrowed by not including all the elements
of a SIPOC. Only particular inputs, suppliers, outputs, customers, or process steps should be included.
These should be based on what is considered a manageable scope or where the problem is presenting
itself. Fit the project scope to the resources (people, time, money) that are available for the project.

12.4 Process Mapping


The purpose of process mapping is to visually document the process as a communication tool to help
everyone understand how the process actually works.

A process typically has a “thing” that is moving through the process. Depending on the process, the
“thing” might be a product, service, patient, invoice, insurance policy, etc. The process uses resources
as the “thing” moves through each of the steps on its way to completion. This “thing” is sometimes referred to as an entity, and it is very important to clearly identify and track every movement of the entity through a process. The entity might be any of the following:

◆◆ Human: employees, customers, and patients



◆◆ Objects: documents, parts, units, and molecules

◆◆ Abstract items: email, telephone calls, orders, and needs

This may sound easy until the entity starts changing forms as it goes through the process. For example, a doctor’s written order for a patient to have a test appears to be a rather simple process. The “thing” is followed below as it changes into different forms throughout the process:

◆◆ Form 1: Doctor’s written order for a test (piece of paper)

◆◆ Form 2: Electronic order entered into an order system

◆◆ Form 3: Printed order form waiting on a printer in the nuclear area

◆◆ Form 4: Nuclear isotope traveling to be injected into the patient

◆◆ Form 5: Injected patient ready to be scanned

◆◆ Form 6: Scanned pictures of the patient (in a physical or electronic file folder moving to
Radiology)

◆◆ Form 7: Patient being tested on a treadmill and results being collected

◆◆ Form 8: Results (electronic) and paper waiting for a radiologist to interpret

◆◆ Form 9: Interpreted results waiting to be sent to patient’s nurse

◆◆ Form 10: Final results in the patient’s record for doctor to read

12.4.1 Steps for Creating a Process Map


Step 1. Define the scope of the process:
•• Clearly identify the scope of the process mapping activity.

•• Identify the first and last steps of the process. Fill in the middle steps as needed. The
number of steps will be relative to the level of the map.

•• Agree on the level of the process map to be completed (high-level, operational-level, sub-task level, etc.).

Step 2. Document all the steps in the process:


•• Walk through the process by pretending to be the “thing” going through the process.

•• It is best to have people who do the process every day involved in the process mapping activity. Make sure all areas are covered. Do not guess at how something works. If possible, observe the process to see firsthand how the process is currently operating.


•• Document all the steps of the process as they are actually being done today rather than the way they should be done.

•• Starting points and ending points are shown as ovals (see Figure 12.1).



Figure 12.1 Oval - Starting and Ending Point

•• Activities (steps) are shown as rectangles, as shown in Figure 12.2. (If it is possible for the team to meet together in the same room while mapping the process, it is helpful to use sticky notes and banner paper to record the process steps. If meeting by teleconference, sharing a MS Excel® spreadsheet online can provide one giant sheet of banner paper that extends infinitely.)

Figure 12.2 Rectangle - Activities or Steps


Step 3. Document the decision points. A decision point is a question with answers that require
the process to branch off in different directions. Decision points are shown as a
diamond (Figure 12.3) on a process map.

Figure 12.3 Diamond - Decision Point

Try to maintain a consistent level of detail at each step of the process map. Concentrate more on
getting the process captured than on the mapping symbols. The three basic mapping symbols shown
above can easily get the job done.

12.5 Spaghetti Diagram


The spaghetti diagram, also known as a physical process map or work-flow diagram, visually depicts
the continuous flow of items (product, documents, etc.) or people through a process. The continuous
flow lines of a spaghetti diagram enable process teams to do the following:

◆◆ Determine the physical flow and distance that the product, information, or people travel in
order to process the work.

◆◆ Highlight major intersection points within the work space, which are areas where many paths
overlap and cause delay.


◆◆ Identify inefficiencies and waste (redundancies in the work flow, wasted motion, etc.).

◆◆ Identify opportunities to expedite process flow.

◆◆ Identify opportunities for better workforce communication and safety improvements.



Unlike detailed process maps or value stream maps, spaghetti diagrams are created for specific work areas and their layouts and do not require sequential process steps. Instead, they highlight the wasted motion in the work area being mapped.

Steps for Creating a Spaghetti Diagram:

Step 1. Sketch the current work area in detail, including the process locations.
Step 2. Draw a line to describe every trip each person and/or item makes from one point to
another.
•• Use different colors to distinguish between different people and/or items.

•• Walk the area as if you were the person and/or item. As more trips are made, more
lines are added. The more wasteful/redundant the trips are, the thicker the chart is
with lines.

Step 3. Measure the distance traveled.


Step 4. Look for potential problems (long or confusing routes, back tracking, crossing tracks,
etc.).
Step 5. Revise the layout to minimize unnecessary motion and/or conveyance time.
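A minimal sketch of the distance measurement described in the steps above, using made-up work-area coordinates in meters (the locations and trips are hypothetical):

```python
from math import dist

# Hypothetical work-area coordinates (in meters) and recorded trips.
locations = {"Desk A": (0, 0), "Desk C": (6, 0), "Printer": (3, 5)}
trips = [("Desk A", "Printer"), ("Printer", "Desk A"),
         ("Desk C", "Printer"), ("Printer", "Desk C")]

# Total straight-line distance traveled across all recorded trips.
total_distance = sum(dist(locations[a], locations[b]) for a, b in trips)
print(f"Total distance traveled: {total_distance:.1f} m")
```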

Figure 12.4 shows the daily use of a centrally located office printer. This spaghetti diagram was used to
determine whether or not personal printers should be located at some or each of the desks instead.

[Figure: office floor plan showing desks A through E, a meeting area, and a centrally located printer, with the walking paths to and from the printer drawn as overlapping lines.]

Figure 12.4 Spaghetti Diagram of Daily Use of Central Office Printer


Chapter 13: Project Charter



Key Terms
activity, assumptions, business case, constraints, deliverables, final deliverables, interim deliverables, opportunity statement, problem statement, project charter, scope

Body of Knowledge
1. Create a project charter with a compelling business case, clear objectives, and appropriate scope of
action.

2. Help define the scope of the project using Pareto charts and other quality tools.

3. Develop a problem statement that includes baseline data or the current status to be improved and
the project’s goals.

4. Help develop primary metrics (reduce defect levels by x-amount) and consequential metrics (the
negative effects that making the planned improvement might cause).

5. Differentiate between deliverables and activities.

6. Differentiate between final deliverables and interim deliverables.

The major purpose of a project charter is to introduce the project to the organization in order to gain acceptance and support of the project; the charter serves as both the project plan and the project record. A project charter also serves as an informal contract that helps the project team to stay focused
on the organization’s goals and objectives for a particular project. A good project charter sets clear
expectations and the initial boundaries for the project; obtains buy-in from the key stakeholders; and
identifies the resources that will be needed to complete the project. The charter is a living document
that may be updated and modified as needed throughout the project.

Each charter should contain the following points (see Table 13.1 for a sample project charter template):

◆◆ Problem Statement: Explains what needs to be improved.

◆◆ Purpose: Establishes the goals and objectives of the team.

◆◆ Benefits: States how the enterprise will fare better when the project reaches its goals.


◆◆ Scope: Provides project limitations in terms of budget, time, and other resources.

◆◆ Results: Defines the criteria and metrics for project success, including the baseline measures
and improvement expectations.

Table 13.1 Project Charter Template

Project Charter
Project Name:
Business Case:
Problem/Opportunity: | Goals/Objectives:
Scope, Constraints, Assumptions: | Expected Benefits:
Project Resources: | Baseline Measures & Results:

Preliminary Project Plan | Target Date | Actual Date
Define Tollgate Review | |
Measure Tollgate Review | |
Analyze Tollgate Review | |
Improve Tollgate Review | |
Control Tollgate Review | |

Prepared by: | Approved by:


13.1 Business Case


The business case is a short summary of the strategic reasons (the justification) to complete the improvement project. It also links the opportunity to the organization’s business objectives and describes the impact to the customers and/or stakeholders. The business case provides an overview of why the organization should approve the project and how long the problem has been affecting the organization and its customers. The business case describes why the project is important. In LSS terms, the business case defines the outcome measures (y).

13.2 Problem and Opportunity Statements


The problem and opportunity are also defined in the project charter. The problem statement identifies
the problem that the project will address, and the opportunity statement provides the vision for the
outcome of the project after a process or product is improved.

The problem statement only documents the current performance. It makes no assumptions about the
“y”, and includes no possible solutions. The problem statement addresses the questions of what, when,
where, how many, and how it is known.

Example
Recruiting time for network engineers for the shared services area is negatively impacting
the team’s performance. The average time to fill a request has been 155 days over the past
15 months, which is adding costs of $145,000 per month in overtime, contractor labor, and
rework.

13.3 Project Goals and Objectives


The goals statement describes the expected improvement, usually long-term. The objectives define
strategies or steps that will be taken to achieve the identified goals. Goals and objectives should be
clearly stated and linked to the evaluation measures identified in the project charter. Objectives should
follow the SMART criteria as follows:

◆◆ Specific: Clearly describe the goals and objectives. Avoid using confusing or vague language.

◆◆ Measurable: Define in terms of percentage, monetary gains, throughput, productivity, etc. This
establishes an objective for the team and a basis for comparison when the project is completed.

◆◆ Attainable: Avoid setting goals that are too high.

◆◆ Relevant: The team’s goal should correspond to the problem at hand, the business objectives,
and the identified CTQ requirements.

◆◆ Time-Bound: Note when the team expects to achieve the goal.

Example
Reduce the network engineer recruiting time from an average of 155 days to 51 days with
an upper specification limit of 65 days by November 1. Achieving this goal will support the
Employer of Choice goal and will achieve an annualized savings of $145,000 per month.


13.4 Project Scope, Constraints, and Assumptions


13.4.1 Scope
LSS projects must clearly define the process boundaries (scope) to set expectations that are in line with the project charter, eliminate scope creep, and minimize risk. It is important to set boundaries large enough to solve the problem while keeping the project small and focused enough to achieve results in a timely fashion. Beware of scope creep; it can ruin your project by overextending available resources.

The project scope section describes the business opportunity or problem the project is designed to
address, as well as the project’s deliverables, its customers, and the customers’ requirements for the
final deliverables. If team members have a clear understanding of the project scope, they will be better
able to satisfy the customer. The scope section also identifies the key stakeholders and describes any
organizational deliverables.1

Pareto Charts
The following section is taken from “How Pareto Chart Analysis Can Improve Your Project” and is
reprinted with the permission of Michael Martinez, Project-Management-Skills.com.2

There are usually only a few inputs (x) that generate most of the outputs (y). A Pareto chart can help
identify these vital few inputs (critical x’s).

A Pareto chart has several key benefits:

◆◆ Helps the project team focus on the inputs that will have the greatest impact.

[Figure: Pareto chart of project issues. Bars in descending order: Installation, Software Fault, Shipping, Configuration, Hardware Fault, Connectivity, Other; the cumulative-percentage line reads 42%, 64%, 79%, 86%, 91%, 94%, and 100%.]

Figure 13.1 Pareto Chart of Project Issues

1  Karen Tate and Paula Martin, The Project Management Memory Jogger, Second Edition [Salem, NH: GOAL/QPC, 2010], 19.
Used with permission. www.goalqpc.com
2  Michael Martinez, “How Pareto Chart Analysis Can Improve Your Project,” www.project-management-skills.com (2010-
2015). Accessed on October 20, 2015.


◆◆ Displays in order of importance the inputs that matter in a simple visual format.

◆◆ Provides an easy way to compare before and after snapshots to verify that any of the process
changes had the desired result.

In the sample Pareto diagram shown in Figure 13.1, there are seven categories of project issues,
and most of the issues (42%, to be exact) are related to installation. It is also easy to see that three
categories account for 79% of the issues: installation, software faults, and shipping.

Based on this Pareto analysis, focusing improvement efforts on just the installation issues could
potentially cut the total number of issues by more than 40%.
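
For readers who want to reproduce this kind of analysis, the short Python sketch below uses
hypothetical issue counts (chosen only to roughly match the percentages in the figure, not the actual
data behind it) to rank the categories and compute the cumulative percentages that a Pareto chart plots.

# Hypothetical issue counts; not the exact data behind Figure 13.1.
issue_counts = {
    "Installation": 38,
    "Software Fault": 20,
    "Shipping": 13,
    "Hardware Fault": 7,
    "Configuration": 5,
    "Connectivity": 4,
    "Other": 3,
}

total = sum(issue_counts.values())
cumulative = 0
# Sort categories from largest to smallest, then accumulate the percentages.
for category, count in sorted(issue_counts.items(), key=lambda item: item[1], reverse=True):
    cumulative += count
    print(f"{category:<15} {count:>3}  {count / total:6.1%}  cumulative {cumulative / total:6.1%}")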

In addition to the basic Pareto chart, there are other variations:

◆◆ Major Cause Breakdown: The “tallest bar” can be broken down into subcauses using a second
Pareto diagram.

◆◆ Before and After: After a change has been made, create a second chart to be shown in a
side-by-side comparison with the original chart.

◆◆ Change the Data Source: Analyze the same problem from a different perspective; for example,
from different departments, locations, equipment, etc.

◆◆ Change the Measurement Scale: Use the same inputs, but measure the outputs differently. For
example, one chart can measure frequency and another chart can measure cost.

Using a Pareto chart to analyze problems in a project will allow the team to focus their efforts on the
ones that offer the greatest potential for improvement.

13.4.2 Constraints
Constraints are the limitations placed on the project that can affect the project’s outcome. Constraints
can be internal (level of funding, resources, equipment, etc.) or external (economic, environmental,
legalities, etc.). Constraints must be identified and incorporated into the project plan to ensure that
the plan is realistic. Also, identifying a project’s constraints can help the project sponsor remove them,
allowing the project team to accomplish the required work activities.

13.4.3 Assumptions
Few projects begin with absolute certainty. The project team does not know for certain what problems
they will encounter during the project, which is why it is important to identify critical assumptions
for the project. Assumptions are factors that are considered to be true, but without underlying proof.
Assumptions must be analyzed and monitored to ensure their validity and relevancy as the project
proceeds.

13.5 Expected Benefits


A project charter is a communication instrument that explains to an organization and its key
stakeholders the business benefits expected from the successful completion of a project. It describes
the financial analysis of its expected benefits, such as budget impacts and estimated cost savings vs.
actual cost savings. The benefits section identifies how internal and/or external customers will benefit
from the project, as well as the potential impact on the business or the opportunities it can create.

The relevance of the project should be linked to the organization’s strategies and objectives:

◆◆ Increased profit margin by...

◆◆ Reduced costs by...

◆◆ Increased sales and market share by...

◆◆ Reduced customer complaints by...

◆◆ Improved customer satisfaction by...

13.6 Project Resources


The charter documents the project’s team. The team must include a project leader and project sponsor/
champion along with all relevant parties required to successfully complete a project. It is important
that all the areas of an organization most affected by a project are represented on the project team.

To ensure that the project team includes the right people with the best blend of skills, influence, and
knowledge and that it is led by a capable leader, consider the types of skills, knowledge, and expertise
that are important for the project. Choosing the right team makes it easier for the project team to meet
its objectives.

The project leader should be a key stakeholder who has a strong interest in making the project succeed
because he or she (or the area he or she represents) is affected by the activities or deliverables of the
project. A project leader should be skilled in the following areas:

◆◆ Leadership

◆◆ Facilitation

◆◆ Coordinating tasks
◆◆ Communication

◆◆ Project management knowledge

13.7 Baseline Measures and Results


Baselines are measurements that indicate the level at which a process currently performs or the
number of defects or variations from the recommended range. Before any improvements can be
measured, a baseline measurement must first be established, which acts as a benchmark for comparing
actual process performance against expected process performance and helps the team focus on the
gap between the current and the targeted performance. These measurements include defects/errors,
process capability, yield, or sigma levels.

The process baseline is the average long-term performance of an output characteristic or a process (y)
when the input variables (x) are allowed to vary as they normally do (not constrained). The primary metric of interest (the metric to
improve) is the output, y.


13.7.1 Measuring a Process


Following is a list of metrics (see Table 13.2) frequently used in LSS projects to measure the outcomes
of a process, identify opportunities for improvement, and monitor changes over time. These metrics
will help pinpoint sources of waste, variability, or customer dissatisfaction. The metrics selected will
depend on the goals of a project and may require multiple iterations as more information is discovered
about the process being improved.

Table 13.2 Lean Six Sigma (LSS) Metrics by Category


Adapted from U.S. Environmental Protection Agency, “Lean Government Metrics Guide”3
Lean Six Sigma Metrics

Time Metrics: How long does it take to produce a product or service? How long does it take to deliver
it to the customer? How much of that time is spent adding value?

◆◆ Lead Time: The total time from start to finish to develop a service/product and deliver it to the
customer, including waiting time (expressed in days; a lower number is better)

◆◆ Processing Time: “Touch time” or the number of working hours spent on process steps, not
including waiting time (a lower number is better)

◆◆ Response/Wait Time: The number of working hours it takes to react to a customer request for a
service or product (a lower number is better)

◆◆ Activity Ratio: Processing time divided by lead time (expressed as a percentage; a higher number
is better)

◆◆ Best and Worst Completion Time: The range of variation in lead time or processing time (a smaller
range is better)

◆◆ Percent On-Time Delivery: How often the lead time meets the target (a higher number is better)

◆◆ Value Added (VA) Time: Amount of processing time spent adding value to the service/product
(a higher proportion of VA time is better)

◆◆ Non-Value Added (NVA) Time: Amount of time not spent adding value to the service/product
(a lower proportion of NVA time is better)

◆◆ Essential Non-Value Added (ENVA) Time: Non-value added steps that cannot be eliminated (goal
varies by service/product)

3  United States Environmental Protection Agency, “Lean Government Metrics Guide,” (July 2009), www2.epa.gov. Accessed
October 22, 2015. http://www2.epa.gov/sites/production/files/2014-04/documents/metrics_guide.pdf.


Cost Metrics: How much does it cost to complete the process and produce a service or product? What
are the operational costs relative to production levels?

◆◆ Total Process Cost: Total costs, including labor, material, and overhead, to produce the
service/product (a lower number is better, given the same level of production)

◆◆ Cost per Transaction: Total process cost divided by the number of services/products produced
(a lower number is better)

◆◆ Cost Savings: Dollar or percentage of reduction in total process cost or cost per transaction
(a higher number is better)

◆◆ Cost Avoidance: Dollar or percentage reduction in planned spending that would otherwise have
occurred (a higher number is better)

◆◆ Labor Savings: Reduction in labor hours needed to perform a process (expressed in hours, FTEs,
or percentage reduction; a higher number is better)
Quality Metrics: Was value created for the customer? Do services meet customer satisfaction criteria?
How often does the process generate mistakes that require rework?

◆◆ Customer Satisfaction: Qualitative or quantitative data derived from surveys, number of
complaints, thank-you notes, or other feedback mechanisms (goal varies by measurement technique)

◆◆ Defect Rate: Percent of services/products that are defective (a lower number is better)

◆◆ Rework Steps/Time: Amount of a process spent correcting mistakes or going back for missing
information (a lower number is better)

◆◆ Percent Complete and Accurate: Percent of occurrences where a process step is completed without
needing corrections or requesting missing information (a higher number is better)

◆◆ Rolling First Pass Yield: Percent of occurrences where the entire process is completed without
rework, or the product of all the steps’ percent complete and accurate ratings (a higher number
is better)
Output Metrics: How many services or products are completed or produced every month or year? How
many are in the pipeline? Were more produced than the customer needed?

◆◆ Production: Total number of services or products completed or produced in a given amount of
time (goal varies by service/product; the optimal level should align with customer demand to
minimize backlogs and excess inventory)

◆◆ Work-in-process: Number of services or products currently being processed (goal varies by
service/product)

◆◆ Backlog: Number of services or products that are waiting to start the process (a lower number
is better)

◆◆ Inventory: A supply of raw materials, finished products, or unfinished products in excess of
customer demand (a lower number is better)
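
To make two of the definitions in Table 13.2 concrete, the short sketch below (hypothetical numbers,
not taken from the EPA guide) computes an activity ratio and a rolling first pass yield as defined above.

# Hypothetical values used only to illustrate the metric definitions in Table 13.2.
processing_time_hours = 6.0     # "touch time" actually spent working the request
lead_time_hours = 120.0         # total elapsed time from request to delivery

activity_ratio = processing_time_hours / lead_time_hours
print(f"Activity ratio: {activity_ratio:.1%}")          # 5.0%

# Rolling first pass yield: the product of each step's percent complete and accurate.
percent_complete_and_accurate = [0.95, 0.99, 0.98]
rolling_first_pass_yield = 1.0
for step_rate in percent_complete_and_accurate:
    rolling_first_pass_yield *= step_rate
print(f"Rolling first pass yield: {rolling_first_pass_yield:.1%}")   # about 92.2%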


13.8 Preliminary Project Plan


The project charter should identify the project's milestones, as well as their target and actual
completion dates. For projects using the DMAIC methodology, the milestones noted on the project
charter can be noted using the tollgate reviews for each phase. The target dates are the dates that you
expect to complete milestones, and the actual dates are the dates the milestones were actually achieved.
The project team will use the preliminary project plan to develop the full project schedule.

13.8.1 Deliverables vs. Activities


A deliverable is produced as a result of an activity. Examples of deliverables include:

◆◆ Report
◆◆ Design

◆◆ Trained workers

◆◆ Patient test results

◆◆ Software documentation

An activity is a set of steps that creates a deliverable. Examples of activities include:

◆◆ Writing a report

◆◆ Creating a design

◆◆ Training workers

◆◆ Performing tests on a patient

◆◆ Writing software documentation

13.8.2 Final and Interim Deliverables


A final deliverable is a product, service, process, or plan; it must satisfy the customer needs and
requirements and is delivered to the customers of the project.

A LSS project usually has only one major final deliverable: the completed project charter, which
contains the following:

◆◆ Final process documentation, including process maps, SIPOCs, FMEAs, etc.

◆◆ Final process control plan that includes all the roles and responsibilities for maintaining the
gains, ongoing measures required, and metric reporting requirements.

It is important to determine if the customer is looking for specific features in the final deliverable(s) or
has defined the specifications for the final deliverable(s). For example, the final measurement system
must be automated and must not require any additional personnel to run the reports.

An interim deliverable is produced before the final deliverable. LSS projects require identification
of significant accomplishments to show progress and maintain support from the organization. The
interim deliverables of a LSS project should meet the acceptance criteria of each DMAIC stage before
proceeding.

What is the purpose of defining the interim deliverables?


◆◆ To determine what, if any, deliverables will be produced before the final deliverables are
completed

◆◆ To assign accountability for the production of each interim deliverable

◆◆ To break down the production of the final deliverable into more manageable and tangible steps

◆◆ To define key accomplishments of the project

The purpose of interim deliverables is to divide up the work of the project and to assign that work to
subprojects. A tree diagram (see Figure 13.2) shows at a glance the subprojects that will be carried out
and who will be held accountable for making sure the work assignments are done. Subproject team
members then convert the work assignments into their own project plans.

[Figure: tree diagram for the project “Transition of Traditional Pharmacology Course to Online Delivery,”
showing each subproject, the person accountable, and the subproject’s deliverables.]

◆◆ Project Management (Rebecca Mayberry): Process Evaluation Report; Course Transition Plan

◆◆ Instructional Design (Cynthia Jones): Course Organization/Lecture Design; Learner Interactions;
Self-Directed Learning; Student Assessment Survey

◆◆ Instructor/Content Specialist (Dr. William Smith): Online Delivery Course Model; Textbook Selection;
Ancillary Materials Selection; Course Schedule

◆◆ Instructional Technology (Peter Moore): Canvas LMS; Respondus 4.0; Respondus Lockdown Browser;
Services/Technical Support

◆◆ Course Design (Jason Harris): Develop Course Media; Develop/Select Other Course Materials;
Develop Online Resources; Provide Quality Assurance

Figure 13.2 Tree Diagram of Subprojects and Work Assignments


Based on graphic from: Karen Tate and Paula Martin,
The Project Management Memory Jogger, Second Edition
[Salem, NH: GOAL/QPC, 2010], 80. Used with permission. www.goalqpc.com.


Part IV: Lean Manufacturing and Lean Office

According to Womack and Jones, there are five key Lean principles (see Section 2.2 Lean
Methodology):

1. Identify what creates value from the customer’s perspective.

2. Map the value stream to identify waste and flow issues. Use the appropriate Lean methods
and tools to eliminate the waste.

3. Make the processes flow by using the appropriate Lean methods and tools.

4. Manufacture only what is pulled by the customer.

5. Strive for perfection by continually improving the systems.

The following chapters discuss how to discover waste and flow issues in an organization's value
stream and the appropriate Lean tools with which to combat them.

1. Map the value stream by drawing a current state value stream map (Chapter 14).

2. Learn about Lean methods and tools that will help eliminate waste and improve flow in the
value stream (Chapter 15).

3. Using the Lean methods and tools discussed in Chapter 15, analyze the current state value
stream map and then build an ideal value stream map that has no waste and where product
flows seamlessly, the future state (Chapter 16).


Chapter 14: Value Stream Mapping


Key Terms
process map, value stream map

Body of Knowledge
1. Develop a value stream map (VSM).

2. Differentiate between a VSM and a process map.

A value stream contains all the activities an organization must perform to design, order, produce,
and deliver its products or services. A VSM captures those activities, including the flow of work
and the flow of information and materials required to complete the steps of the process.

A VSM visually depicts the flow of the manufacturing and production process, the information that
controls the flow of materials through the process, and where improvements are needed. For example,
a VSM can help identify waste and flow issues, point out where a process needs to be standardized,
show which loads need leveling, and indicate if and where resources need to be allocated to handle
production demands. Creating a VSM has the added benefit of requiring the team to walk and watch
the process, observing and scrutinizing it from end to end.

There are two separate aspects to VSM: 1) current state value stream mapping and 2) future state value
stream mapping. The current state map looks at what happens now, i.e., it is the “as is” drawing. The
future state map, which looks at how things should be carried out, represents the ultimate goal of the
improvement process and provides the team with an objective to work towards.

14.1 Comparing VSM and Process Maps


There are similarities and differences between the process map and the value stream (see Table 14.1 for
specific comparisons):

◆◆ A VSM is similar to a process map but has a broader range of information than a process map.

◆◆ A VSM is focused on identifying cycle time reduction.

◆◆ A VSM is especially useful when wasted time is difficult to spot.

◆◆ A VSM cuts across functional boundaries and across multiple departments.

◆◆ A VSM uses simple graphics or icons to show the sequence and movement of information,
materials, and actions in the value stream.


Table 14.1 Process Maps vs. Value Stream Maps


Process Maps | Value Stream Maps
Functionally focused | Customer-focused (end-to-end process)
Used to understand the steps | Helps to visualize the flow (stops, starts, and waiting)
Does not make value judgments about the steps | Helps to see waste and its sources
Two states: “as is” and “to be” process maps | Two states: current state and future state VSM
Maps the “as-is” process | Maps the way the process is actually working
Mixes information with physical flows | Separates information from physical flows

14.2 Current-State VSM


To create a current-state VSM, select a team that includes the process owner and the employees who
work in the process. Then walk the path of the material flow, beginning from each input (source of
materials, etc.) through each output, documenting each step along the way. Steps are sometimes
discovered that were not previously documented as part of the process, so it is important to record
every step during the walkthrough. Document when and how communication occurs and any problems
witnessed, and interview all the employees who work in the process.

Once the information has been gathered, the VSM can be drawn on paper or a white board. Following
are the key sections a VSM should include:

◆◆ Upper right corner of map should provide customer information

◆◆ Upper left corner of map should provide supplier information

◆◆ Top half of the map should illustrate the information flow

◆◆ Bottom half of map should illustrate the material or product flow

◆◆ A timeline should be shown near the bottom of the map to calculate the value-added and
non-value-added time.

The data collected should be useful in measuring the process and can include cycle and changeover
times, reliability of equipment, first-pass yield, quantities, number of operators and shifts, hardcopy
and electronic information and communication, inventory levels, and queue or wait times.


[Figure: legend of standard value stream map icons, grouped into material flow icons, information flow
icons, and general icons. Symbols shown include inventory, process, supplier, truck deliveries/shipments,
forklift, warehouse, manual and electronic information flow, withdrawal kanban, process kanban, kanban
batch, kanban post, physical pull, signal kanban, load leveling, weekly schedule, go-see, data box
(e.g., C/T, C/O, shifts, takt), kaizen burst, safety stock, movement of goods, quality problem, operator,
timeline, and timeline total.]

Figure 14.1 VSM Symbols

14.3 Procedure for Drawing a Current State VSM


1. Determine the scope of the VSM and the family of products or services that are applicable.
2. Draw the process flow using the SIPOC or process map as the foundation. Remember to stay
within the scope.

3. Add the material flow and show its movement through the process. Include all testing
activities. Add supplier information at the beginning of the process and customer information
at the end of the process. Show how the material is delivered to the plant and the finished
product or service is delivered to the customer (see Figure 14.2).


[Figure: VSM skeleton showing the supplier and customer at the top and the material flow across the
bottom through the process steps Raw Materials Received, Machining, Assembly, Testing, Finishing,
Final Testing, and Shipping.]

Figure 14.2 VSM with Process Steps

4. Add the information flow. Include production orders, scheduling activities, procedures,
records, and other documents as applicable. Label as "hard copy" or "electronic" format.
(see Figure 14.3.)

[Figure: the same VSM with the information flow added. The customer sends a 30-day forecast and
daily orders to production control, production control sends a monthly forecast and weekly orders to
the supplier and issues a weekly schedule to the process, material is delivered weekly, and finished
product ships daily.]

Figure 14.3 VSM with Information Flow

5. Collect important process data (whether parts or information) and connect it to the chart
boxes. This may include process time, set up time, number of people, defect rate, error rate,
downtime, batch size, and work-in-process.

6. Add up process times (value-added) and lead times (non-value-added), including delays
(time in queue), set-up times, and other times that are important to the process. Count the
inventory and add the amounts in the appropriate locations within the VSM.

7. Calculate and summarize the value-added and non-value-added times and record at the
bottom of the map. Calculate the percent of value-added time in the value stream.

8. Verify the finished map with the appropriate employees. Make changes as needed.
(see Figure 14.4).


[Figure: completed current state VSM combining the information and material flows with process data.
Inventory is shown between steps (1713, 130, 260, 260, 130, 190, and 190 units), and each step has a
data box with its cycle time (60s to 120s), changeover time (7m to 20m), three shifts, and quality level
(95% to 100%); the timeline at the bottom totals 7 minutes of process time.]

Figure 14.4 Completed Current State VSM
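
To illustrate step 7 of the procedure, the sketch below uses the six cycle times from the data boxes in
Figure 14.4 together with hypothetical queue times (the figure records inventory counts, not waiting
times) to compute the percent of value-added time in the value stream.

# Cycle times are taken from the data boxes in Figure 14.4; the queue times
# between steps are hypothetical and used only to illustrate the calculation.
value_added_seconds = [60, 120, 60, 60, 60, 60]           # 7 minutes of process time
queue_time_seconds = [3600, 1800, 7200, 5400, 2700]       # hypothetical waiting between steps

total_value_added = sum(value_added_seconds)
total_non_value_added = sum(queue_time_seconds)
total_lead_time = total_value_added + total_non_value_added

print(f"Value-added time:      {total_value_added} s")
print(f"Non-value-added time:  {total_non_value_added} s")
print(f"Percent value-added:   {total_value_added / total_lead_time:.1%}")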

Upon completion of the VSM, the Lean tools and techniques discussed in Chapter 15 may be used
to improve the process. Potential improvements, or “Kaizen Bursts,” are then added to the current
state map, and an ideal value stream is drawn as a future state VSM that includes all of the potential
improvements. The future state contains only value-added tasks, with all waste removed and flow
vastly improved; it represents the perfect state, and the goal is to get as close to it as possible.
Future state mapping is discussed in Chapter 16.


Chapter 15: Lean Methods and Tools


Key Terms
5S, constraint management, continuous flow, cycle time, eight wastes, kanban, level loading,
lot size, mistake-proofing, plant layout, point of use storage, pull system, quick changeover,
standard work, total productive maintenance, visual factory

Body of Knowledge
1. Identify various Lean methods and tools available for optimizing flow and eliminating waste
in the value stream.

2. Select and apply the correct Lean methods and tools for LSS projects.

Lean manufacturing focuses on speed and efficiency by reducing or eliminating waste in the value
stream and increasing flow. Lean uses many methods and tools and is applicable to both the
manufacturing and service industries.

Not all of the available Lean tools are necessarily used on any one particular project. However, the
more tools an organization has at its disposal, the more options it has to address problems with
waste and flow in its value stream.

Many Lean tools are directed at the eight wastes that exist in business (Chapter 2).

1. Defects: Products or services that are out of specification or contain errors.

2. Overproduction: Producing too much of a product before it is ready to be sold.

3. Waiting: Down-time waiting for the previous step in the process to complete.

4. Non-Utilized Talent: Employees that are not effectively engaged in the process.

5. Unnecessary Transportation: Transporting parts or information that are not required to
perform the process from one location to another.

6. Idle Inventory: Parts or information that are not being processed.


7. Wasted Motion: Employees, information, or equipment making unnecessary motion.

8. Extra Processing: Any activity that is not necessary to produce a product or service.

The Lean methods and tools discussed in this chapter are listed in Table 15.1.

Table 15.1 Lean Methods and Tools


Method / Tool | Description
5S | Creates a safe, clean, neat arrangement of the workplace
Constraint management | Attacks bottlenecks to increase throughput
Continuous flow | Items are processed and moved directly from one processing step to the next
Cycle time reduction | Reduces the time required to complete one cycle of an operation
Kanban | Provides a signal system for controlling or balancing the flow of parts, materials, and information
Level loading | Balances production throughput over time
Lot size reduction | Results in a one-piece flow where possible
Mistake-proofing | Prevents errors from occurring
Plant layout | Facilitates flow of material and information and reduces waste in the workplace
Point of use storage | Locates materials and tools where they are used
Pull systems | Produces the product upon customer demand
Quality at the source | Ensures product is made right the first time at each step of the process
Quick changeover | Results in fast set-up and turnaround times
Standard work | Results in process steps safely carried out with tasks organized in the best known sequence and using the most effective combination of resources
Total productive maintenance | Provides a systematic approach to the elimination of equipment losses
Visual factory | Creates simple signals that provide an immediate understanding of a situation or condition

15.1 5S (Sort, Set, Shine, Standardize, and Sustain)


5S is a process that creates and maintains a safe, organized, clean, high-performance workplace and
serves as the foundation for process improvement in an organization. 5S is planned, implemented,
and maintained by the employees of the work area to be improved and enables them to quickly
distinguish normal from abnormal conditions.

15.1.1 5S Work Instruction


1. Identify the target area and the scope of the project. Form the team using employees from the
area under study.

2. Document the current state by performing a workplace scan. Draw a map of the area
supplemented by photographs. List all the activities that occur in the work area. This forms the
baseline, or the “before” state.


3. Sort the items in the work area by identifying unneeded items and moving them to a
temporary holding area. Within a predetermined time, these items either are discarded, sold,
moved, or given away. Items are usually cleaned as they are sorted.

4. Set in order by identifying the best location for the remaining items, relocating out-of-place
items, setting inventory limits, and installing location indicators, e.g., arrows, labels, and
signage. There will be a place for everything and everything will be in its place.

5. Shine the area and everything in it, including desks, cabinets, shelves, walls, ceilings, and
floors. Clean everything inside and out. Continue to inspect items while cleaning them and to
prevent dirt, grime, and contamination from occurring.

6. Standardize to maintain control of the area. Create the rules for maintaining and controlling
the first 3 S’s using visual controls, checklists, and procedures. Table 15.2 is an example of a
visual control that can be posted in a 5S area.

Table 15.2 5S Control Board


5S step | Standard condition | Control idea | Idea adopted
Sort | Only those tools needed for the work should be in the work area | Shadow board | Shadow board in place and training accomplished
Set | Height of stacked skids not to exceed three feet | Red control line | Control line in place and training accomplished
Shine | Splicer checked for oil leaks every shift | Checklist and training | Training accomplished and checklist implemented

7. Sustain the gains and ensure adherence to the 5S standards through communication, training,
“after” photographs, and self-audits by the employees in the work area. The best way to
sustain the gains is through employee involvement in the 5S project itself, and on-going top
management support.

15.2 Constraint Management


Constraint management is an improvement method that focuses on the weakest link, or process step,
in a system. Usually the constraint is the slowest process. The flow rate through the entire system is
restrained due to the bottleneck of the slowest process. This results in lower throughput rates, larger
inventories, and higher operating expenses.
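
A minimal sketch (hypothetical step rates, not from the source) shows why the slowest step sets the
pace for the whole system, which is the reasoning behind constraint management.

# Hypothetical processing rates in units per hour for a four-step process.
step_rates = {"Machining": 60, "Assembly": 45, "Testing": 30, "Shipping": 50}

# The system can never flow faster than its slowest step, so that step is the constraint.
constraint = min(step_rates, key=step_rates.get)
system_throughput = step_rates[constraint]

print(f"Constraint: {constraint} ({system_throughput} units/hour)")
print(f"System throughput is limited to {system_throughput} units/hour")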

15.2.1 Drum-Buffer-Rope
The “drum” is the process bottleneck, or constraint. The “beat” of this process step sets the pace for
the rest of the system. The “buffer” is the inventory for the bottleneck which must be available to keep
the bottleneck operating at full performance. The “rope” feeds information from the buffer back to the
point where raw material is released; material is released to keep the buffer inventory at the proper level.


15.2.2 Constraint Improvement


1. Identify: Find the slowest step of the process. It will often have the most work-in-process
(WIP) before it.
2. Exploit: Use methods to improve the rate of the constraint.
3. Subordinate: Adjust the rates of the other processes in the chain to match the constraining
process.
4. Elevate: If more improvements are needed, the constraining process may need extensive work
(more investment).
5. Repeat: Work on the next process step that has become the new constraint.

15.3 Continuous Flow


Continuous flow is a situation in which a product moves through the stages of ordering and
producing one piece at a time without stopping or moving backwards in the process. In order for a
process to achieve continuous flow, the following systems must already be in place: 5S, kanban, Poka
Yoke, and quick changeover. The next step is to align the physical layout of the workspace so that
hand-offs happen quickly, with minimal delay. The ideal work cell layout is a straight line or a
modified U-shape. Establishing a new layout may require radical changes, and it is recommended
that machines be downsized if possible to accommodate the shift away from large batch production.

Batch and Queue Production vs. Continuous Flow


Batch and queue production is an older process that produces large amounts of product. Although it
may seem effective, it is not the correct way to conduct business in a LSS operation. Batch and queue
production causes longer lead times because of the additional time required to produce large amounts.
Additionally, the large batches may become work-in-process (WIP) and excess inventory that needs to
be moved, stored, and monitored. Finally, large batches that have been found to be defective become
very expensive scrap heaps.

In contrast, products moving through a continuous flow operation move very quickly through the
process, creating minimal delay for the customer. It is also much easier to identify and remedy any
defects that occur in the process. Quick identification of defects saves the organization costly rework
or scrap. Finally, the company does not have to worry about finding space and maintaining a huge
inventory of product.
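
The lead-time difference can be seen with a small worked example (hypothetical numbers): a batch of
10 pieces moving through three steps that each take one minute per piece.

# Hypothetical comparison: 10 pieces, 3 steps, 1 minute of processing per piece per step.
batch_size = 10
steps = 3
minutes_per_piece = 1

# Batch and queue: each step must finish the entire batch before passing it on.
batch_lead_time = steps * batch_size * minutes_per_piece            # 30 minutes

# Continuous (one-piece) flow: the first piece finishes after 3 minutes and the
# remaining pieces follow one per minute, because the steps work in parallel.
flow_lead_time = steps * minutes_per_piece + (batch_size - 1) * minutes_per_piece   # 12 minutes

print(f"Batch-and-queue lead time: {batch_lead_time} minutes")
print(f"One-piece flow lead time:  {flow_lead_time} minutes")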

15.4 Cycle Time Reduction


Cycle time, also known as processing time or turnaround time, is the time it takes to complete one
cycle of an operation. Reducing cycle time and cycle time variation reduces waste.

15.4.1 Examples of Cycle Time Reduction


1. Reducing room turnaround time in hospitals.

2. Reducing the time to change from one part to another by employing SMED methods. See
Section 15.13.


3. Eliminating a non-value added step in the process.

4. Creating a work group cell that combines steps and improves efficiencies.

15.5 Kanban


Kanban is a system that incorporates signs, cards, or other visuals that signal the need to replenish
stock. Inventory levels are set in such a manner that they remain low, while ensuring that inventory
depletion will not occur.

15.5.1 Two-bin System


In a two-bin system, when the first bin is emptied (or drops to a certain percentage or number of parts),
the second bin is brought forward for use and the empty bin becomes the signal for replenishment. The
signal may be a card that comes with the bin, a light, or simply showing the empty bin to someone.
Kanbans should be as simple as possible.
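
The two-bin logic can be sketched in a few lines of code (the bin size, class, and names below are
illustrative, not from the source): emptying the active bin switches work to the reserve bin and raises
the replenishment signal.

class TwoBinKanban:
    """Hypothetical two-bin kanban station: an empty active bin is the reorder signal."""

    def __init__(self, bin_size):
        self.active = bin_size
        self.reserve = bin_size

    def consume(self, parts_used):
        self.active -= parts_used
        if self.active <= 0:
            # Switch to the reserve bin and signal replenishment of the empty bin.
            self.active += self.reserve
            self.reserve = 0
            print("Kanban signal: reorder one bin of parts")

station = TwoBinKanban(bin_size=50)
station.consume(30)   # 20 parts remain in the active bin; no signal yet
station.consume(25)   # active bin empties: switch bins and raise the reorder signal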

15.5.2 Other Kanban Examples


1. An office clerk buys two cartridges of ink for the printer. One goes into the printer; and the
other one is placed on a nearby desk in a specially marked spot. When the printer runs out of
ink, it is replenished with the ink cartridge on the desk. The empty marked spot on the desk
signals the office clerk to buy another cartridge.

2. A car has a feature in which a red light flashes on the dashboard when the fuel tank falls below
a certain level. This is a kanban signal to add more fuel.

3. A Kanban board can be used to keep track of critical tasks prior to taking a two-week cruise
(see Table 15.3).
Table 15.3 Cruise Kanban Board
To Do | Doing | Done
Board pets | |
Notify neighbors | |
Pack medicines | |
Get passports | |
Get travel insurance | |
Get cruise tickets | |
Stop mail | |

15.6 Level Loading (Heijunka)


Level loading, also known as “Heijunka,” is the leveling of schedules and production, i.e., adjusting
the volume and the product mix to minimize day-to-day variation. This tool allows an organization to
reduce inventory, decrease lead times, and produce the variety of products the customer wants, when
they want them. Many LSS tools should already be in place to properly use and maintain a Heijunka
scheduling system.

Level loading is the foundation for increased flow and inventory reduction.


15.7 Lot Size Reduction


As a result of reducing changeover time in the process, small lot production (ideally one piece) can be
achieved. Small lot production is an important component of many Lean manufacturing strategies. Lot
size directly affects inventory and scheduling.

Small lots introduce flexibility to manufacturing as well as reduce waste. They enhance quality,
simplify scheduling, reduce inventory, enable kanban, and encourage continuous improvement.

15.7.1 Example of Small Lot Size


The Acme Pancake Mix Company produces three types of mix on one line in 3,000-pound batches,
resulting in large inventories when customer demand is low. Holding items in inventory too long can
result in loss of inventory due to warehouse damage, infestation, and spoilage. After the changeover
time between mix types was drastically reduced (see Section 15.13), production batches were cut to
500 pounds, which conforms better to customer demand.

15.8 Mistake-proofing
Mistake-proofing, sometimes known as error proofing or Poka-Yoke, prevents defects or errors by
ensuring that the proper conditions exist in the process. Mistake-proofing should be inexpensive,
effective, and easy to understand. A good example is the automatic “save” reminder that pops up when
an individual closes a Word document. In manufacturing, companies can use specialized jigs or color-
coding techniques to assist workers on the line.

Mistake-proofing is a valuable tool for several reasons. First, it can be used as a safety mechanism. An
example is a car window that will not roll up when an individual’s arm is in the way. Mistake-proofing
can also be used to reduce inspection time and scrap on the manufacturing floor.

Mistake-proofing has uses in a transactional environment as well. A good example is the embedded
security thread in $100 bills, which makes it easier to detect a forgery.

Mistake-proofing can be applied to production, service, safety, and environmental issues.
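
In transactional or software-supported processes, the same idea often takes the form of input
validation. The sketch below is a hypothetical order-entry check (not from the source) that blocks an
implausible quantity at the point of entry instead of letting the error flow downstream.

# Hypothetical order-entry poka-yoke: reject impossible quantities at the source
# rather than inspecting for (and reworking) the defect later.
MAX_PLAUSIBLE_QUANTITY = 500

def enter_order_quantity(quantity: int) -> int:
    """Accept a quantity only if it passes the mistake-proofing checks."""
    if quantity <= 0:
        raise ValueError("Quantity must be a positive whole number")
    if quantity > MAX_PLAUSIBLE_QUANTITY:
        raise ValueError(f"Quantity {quantity} exceeds the plausible limit; please confirm the entry")
    return quantity

enter_order_quantity(50)       # accepted
# enter_order_quantity(5000)   # rejected: likely a keying error such as an extra zero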

15.8.1 Mistake-proofing Principles1


1. Elimination of steps or tasks in a process.
•• Example: product simplification or part consolidation that avoids a part defect or
assembly error prior to production.
2. Replacement substitutes that provide a more reliable process to improve consistency.
•• Examples: use of robotics or automation that prevents a manual assembly error or
automatic dispensers or applicators to ensure that the correct amount of a material (such
as an adhesive) is applied.
3. Prevention designs for the product or process so that it is impossible to make a mistake at all.
•• Examples: limit switches and/or fixtures to ensure a part is correctly placed before the
process step is performed; part features that only allow assembly the correct way; unique

1  Kenneth Crow, “Mistake-Proofing by Design,” http://www.npd-solutions.com/mistake.html (DRM Associates, 2002)


connectors to avoid misconnecting wire harnesses or cables; and part symmetry that
avoids incorrect insertion.
4. Facilitation of techniques and combining steps to make work easier to perform.
•• Examples: visual controls, e.g., color coding, marking, or labeling parts to facilitate correct
assembly; exaggerated asymmetry to facilitate correct orientation of parts; a staging tray
that provides a visual control that all parts were assembled; and locating features on parts.
5. Detection involves identifying an error before further processing occurs so that the user can
quickly correct the problem.
•• Examples: sensors in the production process to identify when parts are incorrectly
assembled and built-in self-test capabilities in products.
6. Mitigation controls that seek to minimize the effects of errors.
•• Examples: fuses to prevent overloading circuits resulting from shorts, extra design margin,
or redundancy in products to compensate for the effects of errors; contingency planning
when dealing with sole suppliers of critical material; and gun locks to prevent children
from potential harm.

15.8.2 Mistake-proofing Example


The key to developing an effective mistake-proofing system is to understand how and why the mistake
occurred. It is important to understand the circumstances that led to the error. Is the mistake random
or repetitive? Does this mistake happen with everyone or only certain individuals? Is everyone using
the standardized work procedure without eliminating any steps? The greatest challenge is discovering
the true cause of the mistake and then creatively coming up with solutions to eliminate the possibility
of the mistake occurring.

Mistake-proofing may take many forms, and its cost and effectiveness may vary. For example, consider
a driver of an older model automobile who has a tendency to lock the keys in the car. Possible actions
may include:

1. Put a duplicate key in a magnetic holder and place under the car (inexpensive, if it stays in
place).

2. Buy another car with a feature that prevents locking with the keys in the car (expensive).

3. Buy another car that sounds an alarm when the keys are in the car (see above).

4. Buy another car with a door keypad (see above).

5. Carry an extra key in a billfold or purse (inexpensive, unless the billfold/purse is still in the
car).

6. Standardize the process of exiting and locking the car (inexpensive, but the process depends
on the driver's discipline in doing so).

7. Use a lanyard (not convenient).

8. Subscribe to OnStar (expensive).

As shown above, there may be many choices to solve a problem. For each, however, the effectiveness
of the action must be weighed against the cost to implement and maintain it. When choosing solutions,
the entire team should be involved, including the employees in the area affected.

15.9 Plant Layout


Lean floor layouts focus on people, material, and information flow. Employees, workstations, parts
bins, and equipment are arranged to optimize flow, minimize waste, and boost productivity.

With a Lean layout, work centers are grouped by product families or groups of products that share
common processes. This type of layout enables smaller batch and run sizes, which results in less WIP
inventory, less material handling due to shorter travel distances, and less physical space.

15.10 Point of Use Storage (POUS)


Point of Use Storage (POUS) is the practice of storing the material, tools, information, or anything
else needed to perform the work at the workstation. POUS simplifies inventory tracking, storage, and
handling and also can reduce travel time and other wasted effort.

15.11 Pull Systems


The two basic manufacturing production systems are push systems and pull systems. A pull system
is used in the Lean environment while a push system, which is used by most U.S. manufacturers, is
driven by forecasts or schedules. Pull systems are based on customer consumption and replenishment,
while push systems are focused on production standards and customer forecasts, producing to
inventory and then waiting for the customer to consume it.

Some companies use a hybrid system. They produce to a “supermarket” that holds a pre-determined
amount of inventory. As this inventory is used, a kanban system signals the need for restocking by
pulling from the upstream process step. Supermarkets, if properly implemented and maintained,
ensure on-time delivery by providing a safety stock as a buffer against uneven customer demand or
unforeseen circumstances. They also keep in-process and finished goods inventories under control at a
pre-determined level.

In short, if predetermined inventory limits are used in a process, the organization has a pull system. If
there are no limits and controls on in-process and finished goods inventory, it is a push system. 

15.12 Quality at the Source


Quality at the Source means that the employees are certain that the product or information they are
passing to the next process step, or to the customer, is acceptable. In order to do this, employees must
be given the necessary resources, e.g., enough time, proper training, appropriate work instructions and
visual controls, applicable equipment, and a safe, clean work environment. Successful application of
this philosophy helps ensure that the product is made right the first time at every step of the process.

15.13 Quick Changeover


Quick changeover is an efficient method for quickly converting a process from running one product
to running the next product.


Single-minute exchange of die (SMED) was developed by Shigeo Shingo and is a process that enables
the production of smaller batch sizes while reducing lead time. His strategy was to reduce what he
called internal and external changeover activities. Internal activities need to be minimized because
they can only happen when the machine is not operating. In contrast, external activities can be done
while the machine is operating. The process for SMED is outlined in the following steps:

Step 1. Measure the changeover time in the current state.
Step 2. Identify the internal and external changeover elements and calculate the time for each.
Step 3. Convert internal elements into external elements (100 percent is not always possible).
Step 4. Reduce the time for the remaining internal elements.
Step 5. Reduce the time for the external elements.
Step 6. Standardize the new procedure.
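
A small worked example (hypothetical minutes, not from the source) shows the effect of Step 3: only
internal elements stop the machine, so converting internal work to external preparation shrinks the
downtime window.

# Hypothetical changeover breakdown, in minutes.
internal_minutes = 45      # elements that can only be done with the machine stopped
external_minutes = 20      # elements that can be done while the machine is running

converted_minutes = 25     # internal work converted to external preparation (Step 3)

downtime_before = internal_minutes                          # 45 minutes of machine downtime
downtime_after = internal_minutes - converted_minutes       # 20 minutes of machine downtime

print(f"Changeover downtime before SMED: {downtime_before} minutes")
print(f"Changeover downtime after SMED:  {downtime_after} minutes")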

The benefits of quick changeover include:

◆◆ Smaller batch sizes

◆◆ Reduced inventory

◆◆ Increased machine capacity and manufacturing flexibility

◆◆ Reduced errors

◆◆ Improved safety

◆◆ Improved competitive position

15.14 Standard Work


Standardized work is defined as operations carried out in a safe manner that are organized in the best
known sequence using the most effective combination of resources. Resources include the employees
that work in the process, the materials and information used in the process, the methods and
procedures used in the process, and the machines and equipment used in the process. Standard work
becomes the current “Best Practice."

15.15 Total Productive Maintenance (TPM)


TPM has a goal of maximizing equipment effectiveness for the lifetime of the equipment, with an
ultimate objective of zero unplanned machine downtime. TPM is a shift in thinking that recognizes
the role of the operator in maintaining the health of the equipment and looks at maintenance as a
necessary part of doing business.

By including maintenance as a scheduled daily activity, TPM can keep emergency maintenance
to a minimum and reduce the costs that arise when a maintenance program is not a part of the
manufacturing process, e.g., equipment breakdown, setup and adjustment, minor stoppages, line
speed reductions, defects, scrap, and rework.


15.15.1 TPM Subgroups


TPM programs consist of three subgroups:

1. Autonomous maintenance is a preventive measure that uses the process operator as the first
line of defense against equipment issues. Trained competent employees with a good sense of
awareness can spot trouble before it happens, such as loose parts, excessive wear, and strange
noises.

2. Planned routine maintenance is a scheduled program designed to reduce wear and prolong
equipment life. Regular lubrication, replacement of filters, and inspection of critical parts are
all a part of this program. Documentation includes maintenance schedules, work instructions,
and records, including training records.

3. Predictive maintenance predicts when equipment failure might occur so that maintenance can
be performed to avoid the failure. Monitoring for signs of future failure allows
maintenance to be planned before the failure occurs. Ideally, predictive maintenance
allows the maintenance frequency to be as low as possible to prevent unplanned reactive
maintenance, without incurring the costs associated with doing excessive preventative
maintenance. Examples of predictive maintenance include visual inspections, listening for
strange noises with stethoscopes, and lubricant analysis.

15.15.2 Overall Equipment Effectiveness (OEE)


Overall Equipment Effectiveness (OEE) assesses current operating conditions and machine
productivity and provides a good benchmark upon which to improve. Once the improvements are in
place, OEE provides a good metric to judge sustainability of the gains.

The formula is OEE = Availability of the equipment x Performance efficiency x Rate of quality

15.15.3 OEE Example


Availability:
A factory operates one ten-hour shift per day. The line runs through lunch and break periods.
Therefore, the total available run time is 600 minutes per day, from which the setup time and planned
maintenance downtime of 60 minutes is subtracted.

Availability of equipment = 540/600 = 90%

Performance:
Data gathered from the operators shows an average of 100 minutes of unplanned downtime every
shift. Also, there is an average of 20 minutes per shift lost due to reduced equipment speed.

Performance = 480/600 = 80%

Quality:
Data gathered from the quality department shows that an average of 30 minutes every shift is spent
producing defective parts.

Quality = 570/600 = 95%


Total OEE = 90% x 80% x 95% = 68.4%

This result indicates a baseline OEE of 68.4% and that the best opportunity for improvement is the
performance of the equipment (80%), which is driven largely by unplanned downtime.
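
The arithmetic above can be reproduced with a short sketch; the minute values are the ones given in
the example, and the ratios follow the simplified calculation used in the text.

# Minute values from the example above; ratios follow the text's simplified calculation.
total_time = 600             # one 10-hour shift, in minutes
planned_downtime = 60        # setup and planned maintenance
unplanned_downtime = 100     # reported by the operators
speed_loss = 20              # time lost to reduced equipment speed
defect_time = 30             # time spent producing defective parts

availability = (total_time - planned_downtime) / total_time                     # 540/600 = 90%
performance = (total_time - unplanned_downtime - speed_loss) / total_time       # 480/600 = 80%
quality = (total_time - defect_time) / total_time                               # 570/600 = 95%

oee = availability * performance * quality
print(f"OEE = {availability:.0%} x {performance:.0%} x {quality:.0%} = {oee:.1%}")   # 68.4%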

15.16 Visual Factory
Visual factory, sometimes known as visual controls, refers to techniques that allow employees to visually
determine the status of a factory or office process at a glance. This form of process control can prevent,
or at least reduce, process variation. Information may be displayed in text or pictures, which must be
legible and in a form or language that all employees can understand.

Visual factory is sometimes listed as a form of mistake-proofing. It is an excellent way to help control
the process and sustain the improvement gains.

Some examples of the visual factory include:

1. Signage and photographs

2. Product line identification including labels on the equipment and outlines on the floor

3. Color-coded items (bottles, bins, documentation, and walkways)

4. Schedule boards

5. Posting of performance metrics

6. Examples of defective products

7. Graphic displays of work instructions

8. Shadow boards


Chapter 16: Value Stream Analysis


Key Terms
current state, eight wastes of Lean, future state, kaizen, value stream map

Body of Knowledge
1. Analyze a current state value stream map.

2. Construct a future state value stream map.

3. Use Lean tools to improve the value stream.

4. Accomplish a kaizen event.

The value stream comprises all the actions, both value-added and non-value-added, that are
required to bring a product or service from concept to launch and from order to delivery.

The value stream is analyzed and mapped in order to reduce waste, enable flow, and move the process
towards the ideal of rapid response to customer pull.

By identifying the value stream from end to end, organizations can uncover large areas of waste and
inefficiencies in the process.

16.1 The Eight Wastes in the Value Stream


Waste is defined as anything that does not add value for the customer. An activity is waste if it does
not change the information or product, if the work is not done correctly the first time, or if the customer
is not willing to pay for it. Waste should be reduced and eliminated whenever possible.

The eight wastes of Lean are:

1. Defects: Products or services that are out of specification or in which errors were made.

2. Overproduction: Producing too much of a product before it is ready to be sold.

3. Waiting: Waiting for the previous step in the process to complete.

4. Non-utilized Talent: Employees that are not effectively engaged in the process.

5. Transportation: Transporting parts or information that are not required to perform the
process from one location to another.


6. Inventory: Parts or information that are sitting idle (not being processed).

7. Motion: People, information, or equipment that are making unnecessary motion.

8. Extra Processing: Any activity that is not necessary to produce a product or service.

16.2 Lean Improvement Methods and Tools to Reduce Waste and Increase
Flow
The Lean methods and tools that can be used to reduce waste and create flow were discussed in
Chapter 15.

To recap, they were:

1. 5S

2. Constraint management

3. Continuous flow

4. Cycle time reduction

5. Kanban

6. Level loading

7. Lot size reduction

8. Mistake-proofing

9. Plant layout

10. Point of use storage

11. Pull systems

12. Quality at the source

13. Quick changeover

14. Standardized work

15. Total productive maintenance

16. Visual factory

16.3 Current State Value Stream Map (VSM)


The purpose of the current state VSM is to capture all key flows (work, information, and materials)
in a process and record important metrics. A VSM is more complicated to construct than other
flowcharts, but it is more useful in capturing waste in the value stream, especially in time and costs.
It captures the current (as-is) state of the process rather than what you want or expect it to be. It is
“a picture in time” taken when the map is created, which may be used in the Define and Measure
phases of DMAIC to identify and visualize improvement opportunities and also to fill the funnel
of suspected causes in the Analyze phase. The current state VSM provides the basis for designing the
future state VSM. Current state value stream maps were discussed in Chapter 14.

16.4 Future State Value Stream Map


After mapping the current state of a process, Lean methods and tools are applied to reduce waste in
the process and improve flow, creating a future state vision of the process. This vision is drawn as a
future state map. It is an ideal state in which only value-added tasks remain, all waste is removed, and
flow is vastly improved. The future state is attained through a series of kaizen improvement activities
(see Section 16.5).

16.4.1 Procedure for drawing a Future State Map


A well-documented current state value stream map is the starting point and foundation for a future
state map.

Future states may be drawn in any type of media, e.g., whiteboard, paper and pencil, or
computer software.

Step 1. Review the current state map. Obtain consensus that it represents the true current state.

Step 2. Brainstorm potential improvements. Use the questions in Section 16.4.2, and the waste audit
checklist (Table 16.1) as guidelines. Focus on low-cost, low-risk measures that reduce waste,
improve flow, and simplify the process where possible.

Step 3. Create an action list of improvements and prioritize the list.

Step 4. Draw the proposed changes on the current state map.

Step 5. Draw the future state map.

Using kaizen events (Section 16.5), implement the changes to the current state to get as close to the
future state as possible.

16.4.2 Questions to Ask When Creating a Future State VSM


Are there bottlenecks or constraints?
From the data collected during the creation of the current state VSM, look at the cycle times or
processing times. A bottleneck (or constraint) is the resource that requires the longest time in the
supply chain operations for a certain demand. The theory of constraints is an important tool for
operations managers to manage bottlenecks and improve process flows. Constraint management is
discussed in Chapter 15.

Where can inventory or queue time be reduced?


Look at raw material and WIP, e.g., parts or information, buffer stock, safety stock, and finished goods
inventories, to determine if they can be reduced. The key is to find ways to reduce inventory in a
logical manner. Also, look for opportunities to improve or reduce paperwork flow.


Where can the material and information flow be improved?


Could materials be placed into a cell, or could they be kept from stopping and waiting? If material flow
improvements are not possible, could a first-in, first-out lane be established between processes?

What other improvements are required?


Does the reliability of equipment need to be improved? Are the quality levels acceptable? Is the
workplace messy and cluttered? Are current layouts confusing and too complex?

Where is the waste?


Table 16.1 is a waste audit checklist that may be used when auditing the value stream for waste.
Table 16.1 Generic Waste Audit Checklist
Area Description of waste and type
Office

Grounds

Warehouse storage

Receiving

Maintenance

Restrooms

Lunch area

Value stream #1

Value stream #2

Value stream #3

16.5 Kaizen
Kaizen is a philosophy that seeks to improve processes, systems, and people every day. Kaizen means
a “change for the good.” It is a continuous improvement process that involves all employees at all levels
of the organization.


A kaizen blitz, also known as a kaizen event or kaizen activity, is a process improvement activity
performed by a team of employees in a short amount of time. It is designed to make relatively quick
and easy improvements in a tightly focused area or process.

Kaizen events have the following characteristics:



1. Teams are made up of employees from the process under improvement, dedicated full time to
the project on a temporary basis. LSS practitioners may lead the team or act as its
advisors/coaches/trainers.

2. The project is well defined and preliminary data has already been gathered. The team generally
works from a value stream map.

3. Implementation is immediate. Kaizens may last hours, days, or weeks.

4. Kaizen improvements are low risk and low cost.

16.5.1 Kaizen Event Work Instructions


1. Define the Kaizen scope and objectives. Select and train the team as applicable. Initiate the
project charter, which serves as both the kaizen plan and the record.

2. Draw the new VSM or verify the existing VSM by “walking the process” and gathering the
appropriate data for the benchmark metrics.

3. Identify and list the waste, flow issues, and other problems in the value stream. Choose the
issues to address and brainstorm process improvements.

4. Create the action list to accomplish improvements. Implement the action items, train
employees on the new process, and test for effectiveness.

5. Create controls to sustain the gains. Present results to the management team. Develop a plan
to monitor results over time. Complete and close out the project charter.

16.5.2 Kaizen Example


A factory fabricates small parts for a garage door manufacturer, and a team has been charged with
improving the work flow and reducing waste on the main production line. Having previously received
Lean training, the team put their skills, knowledge, and abilities to work to accomplish a kaizen
event for this value stream, utilizing the following process:

1. Develop and draw the current state VSM.

2. Analyze the value stream and revise the current state VSM adding potential improvements to
reduce waste and increase flow.

3. Develop and draw the future state VSM using the revised current state VSM as its foundation.

4. Develop and implement the action plan towards achieving the future state.

A. Before the Kaizen event


A team was formed from the employees who worked in the value stream. The team was led by an LSS
Green Belt. The Kaizen charter (Table 16.2) was used to plan and document the project. Note that the


A3 form (Subsection 9.2.4) may also be used to document a kaizen.

Three team members were sent to the production floor to draw the draft current state value stream
map, which included gathering the metrics which the team identified as important. They also collected
all the relevant documents used in the process, including procedures, records, and forms. The map was
posted in the conference room for review by the rest of the team members before the event.

Table 16.2 Kaizen Charter


Title: Date: Owner: Approval:
Background: Solution(s):

Current conditions: Implementation plan:

Scope and objectives: Outcome:

Root cause analysis: Follow-up:


B. The Kaizen Event


The main customer and main supplier for the value stream under study were first identified. The team
then walked the process (go and see) to gather more information, including interviews with the
employees. The generic waste audit checklist was used as a guideline (see Table 16.1). The following
metric information was collected: lead time, value-added time, cycle time, changeover time, uptime,
and inventory levels. This information formed the baseline for the improvements.

The current state map was updated with this additional information.

(Value stream map not reproduced. It shows material flow from the supplier through Stamping,
Machining, Assembly, Finishing, Testing, and Shipping to the customer, with inventory between steps
and cycle time, changeover time, and uptime data for each step; Production Lead Time = 23 days,
Processing Time = 241 s.)

Figure 16.1 Current State Map

The team analyzed the value stream as depicted by the current state VSM, which then was marked up
to show the kaizen events and changes that would move the value stream closer to the future state (see
Figure 16.2). The items considered were inventory reductions, workplace layout changes (including
consolidation of steps), quality at the source, POUS and standardization, workplace organization (5S
and visual controls), quick changeover, creation of standard work instructions, and simplification of
forms.


(Figure not reproduced: the current state map marked up with kaizen bursts and proposed changes,
including crossed-out inventory points and the elimination of the weekly schedule in favor of daily
pull; Production Lead Time = 22 days, Processing Time = 241 s.)

Figure 16.2 Draft Future State Map

The team then developed and drew the Future State Map. See Figure 16.3.

(Figure not reproduced: the future state map shows daily orders and daily shipments, Stamping
feeding a combined Machining/Assembly/Finishing/Testing cell, and supermarkets with kanban pull
in receiving, after stamping, and in shipping; Production Lead Time = 5 days, Processing Time = 221 s.)

Figure 16.3 Future State Map


Next, the team developed an action plan (Table 16. 3) to move from the current state towards the
future state.

Table 16.3 Action Plan



Kaizen activity: Combine the machining, assembly, finishing, and testing steps into one work cell.
Increase uptime of assembly from 80% to 90%. Eliminate changeover times in the new cell. Clean up
the area and reduce clutter. Install visual controls. Add new work instructions. Simplify forms.
Result: Reduced inventory; increased flexibility through cross-training of employees; and increased
layout efficiency resulting in reduced cycle times.
Tools used: POUS, quality at the source, 5S, visual controls, standardized work, plant layout, and SMED.

Kaizen activity: Build to supermarket. Install supermarkets (safety stock or buffer inventories) in
receiving, after stamping, and in the shipping area. Use visual controls for kanbans. Ship parts from
the supplier daily.
Result: Increased flow, reduced inventory, and increased on-time delivery.
Tools used: Pull kanban and visual control.

Kaizen activity: Eliminate the weekly schedule. Install one-point scheduling at the warehouse. Use
visual controls for kanbans.
Result: Daily production schedule "pulls" product from manufacturing, reducing inventory.
Tools used: Pull kanban and visual control.

Kaizen activity: Reduce changeover time in stamping from one hour to 30 minutes.
Result: Increased uptime.
Tools used: SMED.

The results of the kaizen event were reduced waste and increased flow. The supermarkets installed
a pull system (customer orders are pulled from the warehouse supermarket, which in turn pulls from
the work center supermarkets, and so on), which, because of the agreed-upon safety stock levels,
ensured on-time customer delivery and reduced lead time from 22 days to five days.

C. After the Event


Not all of the improvements were implemented during the kaizen event. The sustaining team will
be in charge of further improvements towards the future state, which includes possible reduction of
supermarket inventory levels. The team will monitor the new current state and decide if more kaizens
should be pursued in the future.


Part V: Measure Phase of DMAIC



The second phase of the LSS methodology is the Measure Phase of DMAIC, which is concerned
with creating, executing, and verifying a data collection plan in order to fully investigate the
problem and determine the underlying cause(s). By the end of the Measure Phase, the project team
should be able to answer the following questions:

1. When is the problem occurring?

2. Where is the problem occurring?

3. What is the baseline performance of this process?

4. Just how bad is the current process?

5. How big is the gap between the current performance and the target performance?

6. How good is the measuring system?


Chapter 17: Probability and Statistics



Key Terms
central tendency
descriptive statistics
inferential statistics
measures of variability
range
standard deviation

Body of Knowledge
1. Describe the role of measurements and basic statistics in the Measure Phase of a DMAIC project.

2. Describe the differences between descriptive statistics and inferential statistics.

3. Identify and apply basic probability concepts.

4. Explain statistical results to answer critical questions.

A series of factors make up a unique process. The ability of each factor or variable to perform
consistently in the process is critical to producing and delivering quality results that meet an
organization's goals. Variation in these factors or variables causes unstable, unpredictable processes.
In the Measure phase, it is important to identify and understand all the different types of variation
an organization is facing. This chapter begins with a review of some basic probability concepts.

17.1 Basic Probability Concepts


LSS bases its analysis and findings on the data at hand. Statistical studies and probability theories are
key tools that LSS teams use to measure and analyze the issues that are identified. This section explores
the basic statistical and probability concepts that apply to LSS.

The classic definition of the probability of any event is described as P(A) = m/n; where the event, A,
can occur in m ways out of a possible n equally likely ways. The probability is always between 0 and 1,
which can be expressed either as a decimal number or a percentage.

For example:

If there are nine black marbles and one white marble in a bag, the probability of randomly selecting
a white marble is .1, or 10%. Using the formula: P(white marble) = 1 white marble/10 marbles = .1.
Probability is simply how likely something is to happen. The analysis of events governed by
probability is a branch of statistics.

Almost all statistical experiments are based upon the rules of probability, which includes probability
distributions (Chapter 21) and hypothesis testing (Chapter 26).


A coin toss has all the attributes of a statistical experiment because there is more than one possible
outcome. Each possible outcome, i.e., heads or tails, can be specified in advance, and there is
an element of chance since the outcome is uncertain. For every coin toss, it can be said that the
probability of seeing a “heads” is P(Heads) = 1/2 = .5, or 50%. What happens if the coin is flipped four
times in a row and results in four heads, which does not meet the expectations of 50% probability of
heads? This can be explained by the law of large numbers, which states that the average of the results
obtained from a large number of trials should be close to the expected value and will tend to become
closer as more trials are performed. So, beware of small sample sizes.

Further, if four heads in a row happened in four tosses, what are the chances that the next flip will
result in another heads? It remains at 50%. The probability has not changed even though the results
did not reflect the expected values.
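
The law of large numbers is easy to see with a short simulation. The following sketch (ours, not part of the original text; Python is used for illustration) flips a fair coin many times and prints the observed proportion of heads, which drifts toward the theoretical 0.5 as the number of flips grows.

import random

random.seed(1)  # fixed seed so the illustration is repeatable

for n in (10, 100, 1_000, 100_000):
    heads = sum(1 for _ in range(n) if random.random() < 0.5)
    print(f"{n:>7} flips: observed proportion of heads = {heads / n:.3f}")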

17.1.1 Probability Definitions


The sample space is a set of elements that represent all possible outcomes of a statistical experiment.
The sum of the probabilities of all the sample points in a sample space is equal to 1.

A sample point is an element of a sample space. The probability of any sample point can range from 0
to 1. An event is a subset of a sample space, i.e., one or more sample points.

For example:

When a die is tossed, the sample space consists of six sample points: {1, 2, 3, 4, 5, and 6}. Each
sample point has equal probability, and the sum of the probabilities of all the sample points equals 1.
Therefore, the probability of each sample point = 1/6 or .167.

Events can be related to each other in several ways. Two events are mutually exclusive if they have
no sample points in common and cannot occur at the same time. For example: Event A = the roll of the
die is odd and Event B = the roll of the die is even. An example of non-mutually exclusive events would
be Event A = the roll of the die is even and Event B = the roll of the die is two. Two events are
independent when the occurrence of one does not affect the probability of the occurrence of the other,
such as gender and eye color. Dependent events do affect each other's probabilities, such as when a
defective part is selected from a box of parts and is not replaced before another part is selected from
the same box.

The probability that an event will occur is expressed as a number between 0 and 1 and is represented
by P(A). If P(A) is close to zero, it is very unlikely that Event A will occur. If P(A) is close to one, it is
very likely that Event A will occur.

In a statistical experiment, the sum of probabilities for all possible outcomes is one. Therefore, for a
coin toss with outcomes A (heads) and B (tails), P(A) + P(B) = 1.

The probability that Event A occurs, given that Event B has occurred, is called a conditional
probability. The conditional probability of Event A, given Event B, is denoted by the symbol P(A|B).

The complement of an event is that the event does not occur. The probability that Event A will not
occur is denoted by P(A’).

The probability that Events A and B both occur is the probability of the intersection of A and B.


The probability of the intersection of Events A and B is denoted by P(A ∩ B). If events A and B are
mutually exclusive, P(A ∩ B) = 0.

The probability that Events A or B occur is the probability of the union of A and B. The probability of
the union of Events A and B is denoted by P(A U B).



17.1.2 Probability Rules
Keeping in mind that the probability of an event ranges from 0 to 1, and the sum of the probabilities of
all possible events equals 1, consider the following important probability rules.

The Rule of Addition is used when there are two events and the probability that either event occurs is
needed. For two mutually exclusive events, the probability that Event A or Event B occurs is equal to
the probability that Event A occurs plus the probability that Event B occurs: P(A U B) = P(A) + P(B).
(If the events can occur together, the general form is P(A U B) = P(A) + P(B) − P(A ∩ B).)

For example:

Jack and Jill randomly draw a card from a 52-card deck. They need either an Ace or a Jack to win the
game. What is the probability that they will draw either one? There are four Aces so we have P(Ace) =
4/52 = .077. There are four Jacks, so we have P(Jack) = 4/52= .077. Therefore, P(Ace U Jack) = P(Ace)
+ P(Jack) = .077 + .077 = .154.

The Rule of Subtraction is used when we want to know the probability that an event will not occur,
given that the probability that the event will occur is known. The formula is P(A') = 1 - P(A).

For example:

The probability that your car will start is .90 or 90%. Therefore, the probability that your car will not
start is 1-.9 = .1, or 10%.

The Rule of Multiplication is used when we want to know the probability of the intersection of two
events, i.e., the probability that both events occur. The probability that Events A and B both occur is
equal to the probability that Event A occurs multiplied by the probability that Event B occurs given
that A has occurred. The formula is P(A ∩ B) = P(A) P(B|A).

For example:

A bucket contains six red balls and four black balls. Two balls are drawn from the bucket. They are
not replaced. What is the probability that both of the balls that are drawn are black?

A = the event that the first ball is black; and B = the event that the second ball is black. It is known
that in the beginning, there are 10 balls in the bucket, four of which are black. Therefore, P(A) =
4/10. After the first selection, there are nine balls in the bucket (remember, the first black ball was not
replaced), three of which are black. Therefore, P(B|A) = 3/9.

After entering the numbers into the formula, P(A ∩ B) = (4/10) * (3/9) = 12/90 = 2/15 or 13.3%.
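
These three rules can be verified with a few lines of arithmetic. The sketch below (ours, for illustration only) reproduces the card, car, and ball examples above in Python; the variable names are our own.

# Rule of Addition (mutually exclusive events): drawing an Ace or a Jack from a 52-card deck
p_ace = 4 / 52
p_jack = 4 / 52
print(f"P(Ace or Jack) = {p_ace + p_jack:.3f}")        # 0.154

# Rule of Subtraction: the car starts with probability 0.90
p_start = 0.90
print(f"P(car does not start) = {1 - p_start:.2f}")    # 0.10

# Rule of Multiplication (dependent events): two black balls drawn without replacement
p_first_black = 4 / 10          # 4 black balls among 10
p_second_given_first = 3 / 9    # 3 black balls left among 9
print(f"P(both black) = {p_first_black * p_second_given_first:.3f}")   # 0.133, i.e., 2/15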


17.2 Basic Statistics


Statistics are used because processes have variation. Before variation can be reduced, it must be
measurable and the causes identifiable. The goal of any organization is to reduce variation so that the
product or service measures are always within the customer's specification and are centered on the
target values.

Statistics is the science of collecting, organizing, and interpreting data. Data are essential to the success
of any project, making it imperative that the concepts and principles of basic statistics are understood
by all involved.

Statistical Process Control (SPC) is a methodology that uses statistics to monitor, control, and
improve processes. SPC is discussed in Chapter 20.

In a statistical study, population refers to the entire set of objects, individuals, or measurement items
defined by the scope of the study that exhibit a particular characteristic. In a study of the height of
Indiana residents between the ages of 18 and 21, the population would include everyone fitting that
description. Many times, it is hard to sample the entire population.

Sometimes the population under study may be smaller. For example, a study may involve 10 parts to
be shipped to your customer on a particular day. In that case, the entire population could be sampled
and measured. The resulting measures are called population parameters. However, it is not typically
feasible to do this for all studies so the best alternative is to take samples. Samples are smaller sections
of the population that are used to gather information about the entire population.

Therefore, a statistical study can be accomplished where samples are randomly selected from the
population and measured, and the resulting data can be analyzed to produce descriptive statistics.
Examples of these statistics are mean, median, and standard deviation. These statistics can be used to
estimate the population parameters with inferential statistics.

Table 17.1 lists the population and sample notations used when describing parameters and statistics;
and Table 17.2 provides a summary of descriptive statistics and inferential statistics.

Table 17.1 Population and Sample Notations


Measure Sample statistic Population parameter
Size n N
Mean (average) x̄ (x bar) μ (mu)
Standard deviation s σ (sigma)


Table 17.2 Summary of Descriptive and Inferential Statistics


Descriptive statistics: collect, organize, summarize, and present sample data; show the central
tendencies, variation, and shape of the data; tools include histograms, run charts, and other graphs.

Inferential statistics: make inferences and predictions based upon sample data; use data from a
sample to make estimates about the population; tools include hypothesis testing, regression, and
design of experiments.

The information gathered takes the form of distributions, which provide a "picture" of the resulting
statistics in the form of frequency plots.

For example:

Figure 17.1 shows the distribution of the sampling measurements of the height of 18-year olds that live
in Indiana, in the form of a histogram (see Chapter 19).

(Histogram not reproduced; x-axis: Height, approximately 60 to 85 inches; y-axis: Frequency.)

Figure 17.1 Histogram of the Heights of 18-Year Olds Living in Indiana

The above chart clearly delineates the frequency at which certain measurements occur once there is
a "picture" of the distribution. Descriptive statistics can be applied to mathematically describe that
distribution.
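
A histogram like Figure 17.1 can be produced with most statistical or spreadsheet software. The sketch below (ours) uses Python with matplotlib and simulated height data, since the original study data are not reproduced here; the distribution parameters are assumed purely for illustration.

import random
import matplotlib.pyplot as plt

random.seed(0)
# Simulated heights (inches) standing in for the study data -- illustrative only.
heights = [random.gauss(69, 4) for _ in range(100)]

plt.hist(heights, bins=10, edgecolor="black")
plt.xlabel("Height")
plt.ylabel("Frequency")
plt.title("Histogram of Height")
plt.show()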

Two types of data are generally used: variable (continuous) and attribute (discrete). Variable data can
be subdivided into smaller increments, such as distance, temperature, or time. Attribute data are often
used to count the number of occurrences or determine percentages and denote a specific state, such
as good or bad; red, white, or blue; or on time or late. Operational definitions are essential when using
attribute data.
attribute data.

Measurement starts by capturing a specific quantifiable characteristic at a specific time. Once the
measurements are collected, the resulting data set can be characterized using statistical measures,
either in numerical form or graphic form (see Chapter 19). Usually, the center, the spread, and the
shape of the data are of greatest interest. Sample data can be used to gain insights into the population
through inferential statistics.

17.2.1 Central Tendency


Central tendency measures are used to describe the center of the data set. Mean, median, and mode
are the common metrics used in this case.

The mean is the arithmetic average of a data set. If the data set is 3, 4, 5, 6, and 7, the mean is 5.

The median is the middle value of the data set. If there are even numbers of data values, the median is
the mean of the two middle data values. Therefore, when the data set is 4, 5, 6, and 7, the median is 5.5.

The mode is the most frequently occurring value of a data set. When the data set is 2, 3, 3, 4, and 5, the
mode is 3.

17.2.2 Variation
Variation measures are used to indicate the spread of the data points. All processes exhibit
variation. Range and standard deviation are two common ways to express the variation of a process.

The range is the difference between the largest and smallest observations. When the data set is 1, 2, 3,
and 4; the range is 4-1, or 3.

The standard deviation is the average distance any data point is from the mean of a data set. Smaller
standard deviations are better because they reflect less process variation.

The sample standard deviation formula is:

s = √[ Σ(x − x̄)² / (n − 1) ]

where,

s = sample standard deviation

Σ = the sum over all values in the data set (adding up the squared differences between
each value and the sample mean)

x̄ = sample mean (mean of all values in the data set)

x = each value in the data set

n = number of samples


The population standard deviation formula is:

σ = √[ Σ(x − μ)² / N ]

where

σ = population standard deviation

Σ = the sum over all values in the population (adding up the squared differences between
each value and the population mean)

μ = population mean

N = number of values in the population

x = each value in the data set
For example:

You are entering your frog, Froggy, into the local frog jumping contest. Last year’s winning frog, Atlas,
jumped 5.16 feet. You decide to test Froggy’s ability before the contest and make him jump once a
day for 10 days. The results (in feet) were as follows: 4, 3, 3, 4, 5, 2, 2, 4, 6, and 7. Note that this is a
small sample size. Also, there were intangibles, or factors that were out of the owner's control, which
included the weather and whether or not Froggy “feels” like jumping on a particular day.

First, the central tendency was computed with the data arranged in order: 2, 2, 3, 3, 4, 4, 4, 5, 6, 7

Average (mean) = 4 (40 divided by 10)

Mode = 4 (there are three 4s)

Median = 4 (the mean of the two middle values of the data set, 4 and 4)

Then, the variability was computed.

The range was 7-2=5.

The formula for the sample standard deviation was used. Take each result, subtract the average, and
square the difference. For the first trial: 4 - 4 = 0, and 0 squared = 0. Total the results for all ten
samples to get the sum of squares (see Table 17.3), which is 24; this is the top portion of the formula.
Now divide the sum of squares by the number of samples minus one: 24/9 = 2.67. Then take the square
root of 2.67, which is 1.63. This is the sample standard deviation.


Table 17.3 Sum of Squares

Trial   Result   Trial - Average   (Trial - Average) squared
1       4         0                0
2       3        -1                1
3       3        -1                1
4       4         0                0
5       5         1                1
6       2        -2                4
7       2        -2                4
8       4         0                0
9       6         2                4
10      7         3                9
Sum of Squares = 24
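
The same central tendency and variation statistics can be reproduced with a few lines of Python using only the standard library; the short sketch below (ours) uses the frog data from the example.

import statistics

jumps = [4, 3, 3, 4, 5, 2, 2, 4, 6, 7]   # Froggy's ten daily jumps, in feet

print("mean  :", statistics.mean(jumps))      # 4
print("median:", statistics.median(jumps))    # 4
print("mode  :", statistics.mode(jumps))      # 4
print("range :", max(jumps) - min(jumps))     # 5
print("sample standard deviation:", round(statistics.stdev(jumps), 2))   # 1.63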

(Histogram with fitted normal curve not reproduced; x-axis: jump distance, 1 to 7 feet; y-axis: Frequency.)

Figure 17.2 Histogram of Frog Jumping Distance

Figure 17.2 is a histogram that graphically displays the data set, which quickly reveals the centering,
variation, and shape of the data set. Creating and analyzing histograms is discussed further in Chapter 19.


17.2.3 Inferential Statistics


Inferential statistics are used to make claims about a population based upon a sample of that
population. Hypothesis tests use this principle. This is very important in situations when the entire
population cannot be sampled. In these cases, samples are taken and studied and an inference about
the population is drawn. Inferential statistics generally include confidence intervals and confidence
levels. Inferential studies are discussed in Chapters 25-27.

For example:

The speed of vehicles at mile marker 88 on Interstate 4, where the speed limit is 70 mph, is being
studied. Sixty vehicles were randomly sampled within an eight-hour time period. The data were
entered into Minitab software, and the results are shown in Figure 17.3.

Descriptive Statistics: MPH


Variable Count Mean StDev Minimum Median Maximum Range Mode
mph 60 71.650 7.449 55.000 70.000 90.000 35.000 70
Summary Report for mph

Anderson-Darling Normality Test: A-Squared = 0.84, P-Value = 0.029
Mean = 71.650, StDev = 7.449, Variance = 55.486, Skewness = 0.265, Kurtosis = 0.274, N = 60
Minimum = 55.000, 1st Quartile = 67.000, Median = 70.000, 3rd Quartile = 76.000, Maximum = 90.000
95% Confidence Interval for Mean: (69.726, 73.574)
95% Confidence Interval for Median: (70.000, 74.000)
95% Confidence Interval for StDev: (6.314, 9.085)
(Accompanying histogram and interval plots not reproduced.)

Figure 17.3 Summary Report for mph

The numbers and graphs above provide a great deal of information. There is a large amount of data
here for both descriptive and inferential statistics, some of which will be discussed in more detail in
later chapters.

The center, spread, and shape of the data can be ascertained using the numbers and the graphs in
Figure 17.3. The mean is 71.65 with a standard deviation of 7.44; the median is 70 with a range of 35;
and the mode is 70. The sample size is 60.


Also, note the confidence intervals, which are inferential statistics. They estimate the population
speed mean, median, and standard deviation using the sample data. For example, with 95% confidence,
it can be said that the population mean speed is between 69.726 and 73.574 mph; the sample mean
itself is 71.65. A great deal of information was obtained from 60 samples, which may be relatively
small depending on the goals of the study. If a larger sample size had been possible, the confidence
intervals would be narrower.
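
The 95% confidence interval for the mean shown in Figure 17.3 can be reproduced from the sample statistics alone. The sketch below (ours) uses the t-distribution from SciPy; it illustrates the calculation and is not Minitab's own procedure.

import math
from scipy import stats

n, x_bar, s = 60, 71.650, 7.449            # sample size, sample mean, sample standard deviation
standard_error = s / math.sqrt(n)
t_critical = stats.t.ppf(0.975, df=n - 1)  # two-sided 95% critical value

lower = x_bar - t_critical * standard_error
upper = x_bar + t_critical * standard_error
print(f"95% CI for the population mean: ({lower:.3f}, {upper:.3f})")   # about (69.73, 73.57)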

This chapter presented a number of different ways to describe distributions of data, using both
numbers and pictures. Using these tools, the baseline can be set for an LSS project in the
Measure phase.


Chapter 18: Measurement System Analysis (MSA)



Key Terms
accuracy
attribute agreement analysis
bias
gage R&R
linearity
measurement systems analysis (MSA)
precision
repeatability
reproducibility
resolution
stability

Body of Knowledge
1. Appreciate the important role of measurement system analysis.

2. Describe the key factors that ensure measurement system reliability and repeatability.

3. Calculate, analyze, and interpret variable gage R&R studies.

4. Calculate, analyze, and interpret attribute agreement analysis studies.

Measurement systems analysis (MSA) determines whether a measuring system can generate accurate,
precise data and whether that data will be adequate to meet the project's objectives. Whether it is
historical data or data to be collected in the future, MSA answers the question, "Can I trust the data?"
Conducting an MSA will help determine how much of an observed variation is due to the measurement
system itself and in which ways the measurement system needs to be improved.

A good measurement system should be both accurate and precise.

1. Accuracy usually consists of three components:

Linearity: a measure of how the size of the part affects the accuracy of the measurement
system. It is the difference in the observed accuracy values through the expected range of
measurements.

Bias: a measure of the bias in the measurement system. It is the difference between the
observed average measurement and a true or standard value (a short calculation example
follows this list).

Stability: a measure of how accurately the system performs over time. It is the total
variation obtained with a particular device, on the same part, when measuring a single
characteristic over time.

2. Precision (or measurement variation) consists of two components:


Repeatability: the variation due to the measuring device, which is observed when the same
operator measures the same part repeatedly with the same device.

Reproducibility: the variation due to appraiser variation, which is observed when different
operators measure the same parts using the same device.

3. Resolution is the ability to differentiate between samples to the extent necessary to make
a decision.
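
For instance, bias can be estimated by measuring a part with a known reference value several times and comparing the average of the readings with that reference. The tiny sketch below (ours, with made-up numbers) illustrates the calculation.

# Repeated readings of one part on the gage under study (illustrative values only).
readings = [10.02, 10.05, 9.98, 10.04, 10.03]
reference_value = 10.00            # accepted/true value for the part

observed_average = sum(readings) / len(readings)
bias = observed_average - reference_value
print(f"observed average = {observed_average:.3f}, bias = {bias:+.3f}")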

In order to conduct a study, the number of appraisers, sample parts, and repeat readings must be
determined. For example, there could be three operators, two repeat readings on the same part,
and 10 sample parts. Larger numbers of parts and repeat readings will produce results with a higher
confidence level. The appraisers chosen are those who normally perform the measurement and who
are familiar with the equipment and procedures in the study.

It is critical that the sample parts are selected to represent the entire process spread. If the process
spread is not fully represented, the degree of measurement error may be incorrect.

Parts should be numbered in random order so that the appraisers do not know the number assigned
to each part or any previous measurement value for that part. A third party should record the
measurements, the appraiser, the trial number, and the number for each part into the software
package.

18.1 MSA for Attribute Data


An Attribute Agreement Analysis is used when the data are based on human judgment. Questions
answered include: In which category does this part/report/person belong? Is the part good or bad? Is
the service early or late? Is the report legible, or illegible?

Procedure:

1. Select 20 parts: 10 that exhibit the defect and 10 that do not.


•• Sample parts should be representative of the production system being analyzed.

•• The “bad” parts should represent the entire range of possible examples.

•• “Boundary samples” or “gray areas” should be included.

•• The parts should be numbered.

•• Select three “appraisers."

2. Each appraiser inspects the parts (reproducibility) in random order and records the results.

3. Each appraiser re-evaluates the same parts in a different order to capture repeatability
of the test.

4. The data are entered into the software program and the results are examined. Generally,
software is used to evaluate the results in this manner:


•• The percent repeatability shows how often each appraiser repeated his or her own results across
the different trials.

•• The percent reproducibility shows how often the appraisers agreed with each other.



•• If an “expert” also is utilized to evaluate the parts, the software also will show how often
the appraisers agree with this expert.

•• The ideal goal is 100% agreement across the board.

For example:

Moe, Larry, and Curley were given 10 parts to inspect. Some of the parts were good, others were bad;
and they inspected each part twice. The results from Minitab are shown in Table 18.1.
Table 18.1 Repeatability (shows if the appraisers were able to repeat their results on the same part)
Within Appraisers
Appraiser # Inspected # Matched Percent 95% CI
Curley 10 1 10.00 ( 0.25, 44.50)
Moe 10 9 90.00 (55.50, 99.75)
Larry 10 5 50.00 (18.71, 81.29)

Table 18.1 shows repeatability. Curley inspected each of the ten parts twice; and only one time did
he agree with himself. Moe did much better, matching himself 90% of the time. Note the column
for confidence intervals (CI). With 95% confidence, it can be predicted that Moe will match himself
between 55.5% and 99.75% of the time.
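
The confidence intervals in Table 18.1 appear to be exact (Clopper-Pearson) binomial intervals for the proportion of matches, and they can be reproduced outside Minitab. The sketch below (ours) uses SciPy's binomtest as an illustration of the interval calculation, not of Minitab's own code.

from scipy.stats import binomtest

for appraiser, matched in [("Curley", 1), ("Moe", 9), ("Larry", 5)]:
    ci = binomtest(k=matched, n=10).proportion_ci(confidence_level=0.95, method="exact")
    print(f"{appraiser}: {matched}/10 matched, 95% CI = ({ci.low:.4f}, {ci.high:.4f})")
# Moe's interval works out to roughly (0.5550, 0.9975), i.e., 55.50% to 99.75%.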
Table 18.2 Comparison to Expert (shows how the appraiser’s results compare to the expert’s results)
Each Appraiser vs Expert
Appraiser # Inspected # Matched Percent 95% CI
Curley 10 1 10.00 ( 0.25, 44.50)
Moe 10 8 80.00 (44.39, 97.48)
Larry 10 2 20.00 ( 2.52, 55.61)

Table 18.2 shows how the appraisals of Curley, Moe, and Larry compared to the expert's appraisal of
the parts. An expert is someone who is very familiar with the process, product, and measuring system.
Moe did the best, agreeing with the expert on 8 of the 10 parts. However, Curley and Larry did not do
so well, which could result from inadequate training or poor inspection procedures.
Table 18.3 Reproducibility (shows how the appraiser’s results compared to each other)
Between Appraisers
Appraiser # Inspected # Matched Percent 95% CI
Curley 10 0 0.00 (0.00, 25.89)
Moe 10 0 0.00 (0.00, 25.89)
Larry 10 0 0.00 (0.00, 25.89)


Table 18.3 shows how the appraisers compared to each other on their appraisals of the ten parts. The
ideal result is 100% agreement; however, the above results show no agreement among them on any of
the parts.
Unlawful to replicate or distribute

18.2 Gage Repeatability and Reproducibility (R&R) Studies


A gage R&R study helps investigate if the measurement system variability is small compared to the
variability of the process. It also can determine if variability exists between operators (reproducibility)
and within operators (repeatability). Finally, it also can establish if your measurement system is
capable of discriminating between different parts.

18.2.1 Types of Gage R&R Studies


1. Crossed gage: A study in which each operator measures each part. This study is called "crossed"
because the same parts are measured by each operator multiple times.

2. Nested gage: A study in which only one operator measures each part, usually because the test
destroys the part. This study is called "nested" because one or more factors is nested under
another factor and is not crossed with the other factors.

3. Expanded gage: A study that is used when there is a mixture of crossed and nested factors or
an unbalanced design.

Before performing a gage R&R, the device under study must be calibrated. This ensures the accuracy
of the measuring instrument.

18.2.2 Using Software to Analyze Gage R&R Results - QI Macros


The following is reprinted from www.qimacros.com by Jay Arthur (888-468-1537).

“First, Gage R&R studies are usually performed on variable data, such as height, length, width, diameter,
weight, viscosity, etc.

Second, when you manufacture products, you want to monitor the output of your machines to make sure
that they are producing products that meet the customer's specifications. This means that you have to
measure samples coming off the line to determine if they are meeting your customer's requirements.

Third, when you measure, three factors come into play:

1. Part variation (differences between individual pieces manufactured.)

2. Appraiser variation (aka, reproducibility): Can two different people get the same measurement
using the same gage?

3. Equipment variation (aka, repeatability): Can the same person get the same measurement using
the same gage on the same part in two or more trials?"

You want most of the variation to be between the parts, with less than 10% of the variation caused by
the appraisers and equipment, which makes sense. If appraisers cannot get the same measurement
twice, the measurement system itself becomes a key source of error.


Conducting a Gage R&R Study


To conduct a gage R&R study, the following items will be needed:

1. Five to ten parts (number each part) that span the distance between the upper and lower
specification limits. The parts should represent the actual or expected range of process
variation. Rule of thumb: when measuring to 0.0001, the range of parts should be 10 times the
resolution, e.g., 0.4995 to 0.5005.
(Diagram not reproduced; it shows sample parts spread across the tolerance, from the lower
specification limit (LSL) through the target to the upper specification limit (USL).)

Figure 18.1 Gage R&R Study


Note: If you do not have enough part variation, you cannot get a good gage R&R.

2. Two appraisers (people who measure the parts).

3. One measurement tool or gage.

4. A minimum of two measurement trials, on each part, by each appraiser.

5. A gage R&R tool like the gage R&R Excel template in QI Macros.


QI Macros for Excel Gage R&R Template (Long Form)


The following table shows samples of the Gage R&R template input sheet and results sections using
sample data from the AIAG Measurement Systems Analysis Third Edition.

Table 18.4 Gage R&R Template



(Template input section not reproduced. Using the AIAG sample data, three appraisers each measure
ten numbered parts over three trials; for each appraiser the template records the trial measurements
and computes the per-part averages and ranges, the appraiser's overall average (Xbar) and average
range (Rbar), and the bias against the reference value.)

Table 18.5 Gage R&R Template


(Template results section not reproduced. Key results from the sample data: Repeatability / Equipment
Variation (EV) = 0.2019, %EV = 17.6% of total variation (27.4% of tolerance); Reproducibility /
Appraiser Variation (AV) = 0.2297, %AV = 20.0% (31.2%); Gage Capability (R&R) = 0.3058,
%R&R = 26.7% (41.5%); Part Variation (PV) = 1.1046; Number of Distinct Categories (NDC) = 5.
The template notes that the gage system may be acceptable based on the importance of the
application and cost, and that the operator may need better training or the gage may be hard to read.)

Gage R&R Requirements


If the number of distinct categories (NDC) is less than 5 (see the %R&R results), there may not be
enough part variation to do a valid Gage R&R. This value represents the number of groups the
measurement tool can distinguish from the data itself. The higher this number, the better chance the
tool has of discerning one part from another.

(http://blog.minitab.com/blog/quality-data-analysis-and-statistics/understanding-your-gage-randr-output)


Ten parts that span the specification tolerance are needed in order to obtain a good Gage R&R.

Gage R&R System Acceptability


◆◆ R&R<10% - Gage system is good. (Most variation is caused by parts, not people or equipment.)

◆◆ R&R<30% - May be acceptable based on the importance of application and cost of gage or
repair.

◆◆ R&R>30% - Gage system needs improvement. (People and equipment cause over 1/3 of


variation.)
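
These thresholds are simple enough to capture in a small helper function. The sketch below (ours, purely illustrative) classifies a %R&R value using the guidelines above.

def gage_rr_acceptability(pct_rr: float) -> str:
    # Classify a gage system by its %R&R figure, per the guidelines above.
    if pct_rr < 10:
        return "Good: most variation is caused by the parts, not people or equipment."
    if pct_rr < 30:
        return "Marginal: may be acceptable based on importance of application and cost."
    return "Needs improvement: people and equipment cause over 1/3 of the variation."

print(gage_rr_acceptability(26.7))   # the %R&R from the template example above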

What to Look For:


Repeatability: Percent Equipment Variation (%EV - Can the same person using the same gage measure
the same thing consistently?)

If you simply look at the measurements, can each appraiser get the same result on the same part
consistently, or is there too much variation?

Example (analyzing the measurements from one appraiser only):

◆◆ No Equipment Variation: Part 1: 0.65, 0.65; Part 2: 0.66, 0.66

◆◆ Equipment Variation: Part 1: 0.65, 0.67; Part 2: 0.67, 0.65

If repeatability (equipment variation) is larger than reproducibility (appraiser variation), the reasons
include:

1. Gage needs maintenance (gages can corrode).

2. Gage needs to be redesigned to be used more accurately.


3. Clamping of the part or gage, or where it is measured, needs to be improved, e.g., measuring a
baseball bat at various places along the tapered contour will yield different results.

4. Excessive within-part variation, e.g., a steel rod that is bigger at one end than the other. If the
rod is measured at different ends each time, the results will vary widely.

Reproducibility: Percent Appraiser Variation 

(% AV - can two appraisers measure the same thing and get the same answer)?

Example (looking at measurements of the same part by two appraisers):

◆◆ No Appraiser Variation: Appraiser 1, Part 1: 0.65, 0.65; Appraiser 2, Part 1: 0.65, 0.65

◆◆ Appraiser Variation: Appraiser 1, Part 1: 0.65, 0.65; Appraiser 2, Part 1: 0.66, 0.66

The line graph of appraiser performance will clearly show whether or not each appraiser over-reads or
under-reads the measurements.


If the reproducibility (appraiser variation) is larger than repeatability (equipment variation), the
reasons include:

1. Operators may need to be better trained in a consistent method for using and reading
the gage.

2. Calibrations on the gages may be unclear.

3. Fixture may be required to help the operators use the gages more consistently.

Mistakes People Make


There are common mistakes that people make when conducting Gage R&R studies:

1. Forgetting that the Gage R&R study is evaluating their measurement system, NOT their
products. A Gage R&R study does not care about how good the organization's products are. It
only cares about how good they measure their products.
2. Evaluating only one sample of the part. If only one sample is examined, THERE CAN NOT
BE ANY PART VARIATION so the people and equipment become the ONLY source of
variation.
3. Using the one sample part measurement for all 10 parts (again, there will not be any part
variation so the people and equipment become the only source of variation).
4. Using too many trials (if five trials are run, there is more opportunity for equipment
variation).
5. Using too many appraisers (if all three appraisers are utilized, there is more opportunity for
appraiser variation).
6. Using fake data.

7. Using a gage that measures in too much detail. If a part is 74mm +/- 0.05, then a gage that
measures to a thousandth of an inch (0.001) is not needed, rather only one that measures to
the hundredth of an inch (0.01).

18.2.3 Using Software to Analyze Gage R&R Results - Minitab


In this example, Moe, Larry, and Curley are weighing 50-pound bags of flour (ten bags in total); each
operator weighs each bag twice on the same scale. The results were put into Minitab for analysis. The
graphs in Figure 18.2 are part of the output of the study. While all the graphs are important, we will
study the components of variation, the R chart by operator, and the part by operator interaction.

(Minitab graphical output not reproduced. The Gage R&R (ANOVA) report contains six panels:
Components of Variation, Measurement by Part, R Chart by Operator, Measurement by Operator,
X-bar Chart by Operator, and Part * Operator Interaction.)

Figure 18.2 Gage R&R Report for Weight

Components of Variation
It is apparent from the components-of-variation bars in Figure 18.2 that part-to-part variation should
be much larger relative to the variation contributed by the measurement system than it is in this test.
Most of the measurement (gage) variation here is reproducibility, with some repeatability; these are
the two areas of the measurement system that need improvement.

R Chart by Operator
The R chart shows, for each operator, the range between that operator's two weighings of each bag
(each bag was weighed twice to capture repeatability). Ideally, the points form a flat line near zero.
Curley was very erratic; for example, on sample 10 there was a difference of over five pounds between
his first and second measurements of the bag.

Part by Operator Interaction


Ideally, all three lines should trace each other across the 10 parts. Note that Moe and Larry were very
similar in their results across the ten bags. Curley had some problems, especially with the first two
bags. His results for these two bags were much lower in weight than the results of Larry and Moe.


Chapter 19: Collecting and Summarizing Data



Key Terms
common cause
concentration diagram checksheet
confidence level
continuous (variable) data
convenience sampling
data sheet
defect or cause checksheet
discrete (attribute) data
effectiveness measures
efficiency measures
frequency plot checksheet
input measure (X)
judgment sampling
output measure (y)
population
random sampling
sampling event
sampling frequency
special cause
stratification
stratified sampling
subgroup
systematic sampling
traveler checksheet

Body of Knowledge
1. Identify continuous (variables) and discrete (attributes) data.

2. Identify efficiency and effectiveness measures.

3. Distinguish between input, in-process, and output measures.

4. Determine the appropriate sampling strategy and sample size.

5. Develop a data collection plan.

6. Identify the most common tools used to collect data.

7. Interpret the information conveyed by graphical representations of data.

8. Describe and define nominal, ordinal, interval, and ratio measurement scales.

When utilizing the LSS methodology, the goal is to take all of the numbers, data, and
measurements gathered during the Measure phase and turn them into knowledge and insight.
During this process, there will be times when large volumes of data will need to be summarized to find
out exactly what is causing problems within the process. As the data are searched, it is important to
stay diligently focused on the goal and objective of the project.


The role of the LSS professional is to balance the use of the tools with the risk and the speed of meeting
the needs of the business. The LSS professional must be able to evaluate and defend the tests they do
and how much data must be gathered before they are comfortable with their answer to the problem.
The more tests they perform, the more certain the conclusions; however, this takes time and effort.
It is important to keep the sponsors educated and informed about the risks, decisions, and options
available for successfully meeting all of the goals and objectives of a LSS project.

19.1 Types of Data and Measurement Scales


Before beginning data collection, there are a few things that need to be understood about data. Data
comes in two different types: continuous (or variable) and discrete (or attribute). Understanding the
difference between them is important because it influences how measures are defined, how the data
are collected, and what can be learned from the data. The differences in data types will also affect data
sampling and how the samples are analyzed.

Measurement Scales
Numbers can be grouped into four types, or levels: nominal, ordinal, interval, and ratio. Nominal is
the simplest, and ratio is the most complex. Each level has the characteristics of the preceding level,
plus one additional factor.

Nominal scales classify data into groups where there is no implied order, i.e., categories or
classifications. An example would be a list of office supplies, such as pens, paper, staples, and clips, or
small, medium, and large.

Ordinal scales refer to positions in a series where order is important, but the differences between the
values are not defined. An example would be the order of finish in a race.

Interval scales provide information about order and also have equal intervals. Examples would be
measuring temperature or measuring time on a twelve-hour clock.

Ratio scales have meaningful differences and a natural zero point. An example would be length, where
zero is defined as having no length and four inches is twice as long as two inches.

Applications
The level of measurement for a variable is defined by the highest level that it achieves. For example,
saying that a person is tall or short is nominal, while tall = 3, medium = 2, and short = 1 is ordinal. If
some standardized measure of tallness is used, where one interval is one inch, that is an interval scale.
A ratio scale of tallness is also possible, where 70 inches is twice as tall as 35 inches. Clearly, the higher
the measurement level, the more information is conveyed about the variable in question.

Continuous (variable) data are data measured on a continuum or scale that can be infinitely divided. The tools for interpreting continuous data are much more powerful than those for discrete or attribute data.

Examples of continuous data include the following:


◆◆ Lead time of a process (hours, minutes, seconds)

◆◆ Duration of customer service call

◆◆ Cost (dollars, yen, euros)

◆◆ Physical dimensions (height, weight, density, temperature)

◆◆ Sound (decibels)

◆◆ Electrical resistance (ohms)

Discrete (attribute) data include all other types of data other than continuous data. These are measures
that can be sorted into distinct, separate, non-overlapping categories. They are called attribute data
because they count items or incidents that have a particular attribute or a characteristic that sets them
apart from things with a different attribute or characteristic. A tip for correctly identifying discrete data is to think about the unit of measure and ask whether half a unit makes sense; half a defect, for example, does not.

Types of discrete data include the following:

◆◆ Count: e.g., number of errors, customer complaints, defects per application.

◆◆ Binary: only one of two values, e.g., yes/no, late/on-time, correct/incorrect.

◆◆ Attribute nominal: names or labels, e.g., Machine 1, Machine 2, Machine 3, People Ages 50-75.

◆◆ Attribute ordinal: names or labels that represent some rank or value, e.g., strongly agree, agree, disagree, strongly disagree.

19.1.1 What Needs to be Measured?


Using data effectively is a fundamental activity for all LSS teams. The Measure phase is typically the
longest and most difficult phase in the process. Understanding what data to collect and how to collect
it in such a way that it will provide the team accurate insights into exactly what is causing the problem
can be overwhelming. It is best to break this process down into smaller sections without getting too
weighed down with techniques that might only need to be utilized one percent of the time.

It is important to ask the following questions as a data collection plan is being designed:

◆◆ How is this process currently performing?

◆◆ What is the impact of variation (current performance) on the customer?

◆◆ Where are the causes of the problem?

◆◆ Why is this process not meeting the customer’s needs?

Before collecting any data, it is important to observe the process first for clues to identify the major
sources of variation in the process. Go out and walk the process. Observe everything that is occurring.
Watch the “thing” that is moving through the process, and watch what is happening to it along the
way. Talk to the people who are operating the current process. This is very important because it will
help to identify what and where to measure. If it can be observed, it can be measured; and if it can be
measured, it can be improved and controlled. Asking the following questions will be helpful:


◆◆ Where are people redoing a step to correct a problem?

◆◆ Where is the “thing” flowing through the process getting stopped?

◆◆ Where is variation being observed in the way people are doing the process?

◆◆ Where are there frustrated workers in the processes or frustrated customers of the output?

Efficiency measures focus on the volume and cost of the resources consumed in the process and aim to
achieve the project objectives, which may include the following:

◆◆ Lower cost

◆◆ Reduced time

◆◆ Reduced material consumption

◆◆ Reduced workers needed

Effectiveness measures focus on what the product or service looks like to the customer in order to
improve the following:

◆◆ Meeting or exceeding customer requirements

◆◆ Decreasing defective product delivered to customers

It is undoubtedly possible to identify a number of areas in a process where different types of data could
be measured. Ideally, the Measure phase is completed as quickly as possible to identify the source of
the problem and correct it. In addition, many projects require putting manual data collection systems
in place to gather the required data, which can be very labor-intensive. Therefore, getting enough data
to ensure an accurate picture of what is really going on in the process needs to be balanced with not
putting an unneeded burden on the organization. Even gathering just enough data can be a difficult task for an organization because often the organization has multiple projects going on at the same time (multiple data collection efforts).

SIPOC: Suppliers, Inputs, Process, Outputs, Customers

Figure 19.1 SIPOC Diagram


The SIPOC (see Figure 19.1) completed during the Define phase is useful in helping to understand
which data elements need to be collected. Measuring inputs from suppliers would be an input measure.
Measuring various steps within the process would be an in-process measure (see Figure 19.2).

(Input measures are taken on the items coming from suppliers, in-process measures at the activities within the process, and output measures (CTQs) on what is delivered to customers.)

Figure 19.2 Input, In-Process, and Output Measures

The output measure (y) is the measurement that needs to be improved and provides the overall
performance of the process, y = f(x). Output measures quantify the overall performance of the process,
such as the following:

◆◆ How well the customer's needs and requirements are met (quality and speed)

◆◆ How well the organization's needs and requirements are met (cost and speed)

An output measure (y) can be controlled or changed only by controlling or changing the in-process
or the input measure (x). In-process measures are x-variable data that measure quality, speed, and
cost performance at key points within the process. Sometimes these measures are subsets of the y
measures. For example, if a y measure is the end-to-end lead time of the process, the process measures
could include the cycle time measures of the individual steps within the process.

Input measures are x-variable data that measure the quality, speed, and cost performance of the items
coming into the process. These measures usually focus on their effectiveness in meeting the needs of
the process.

The y represents an effect of the process and is a lagging indicator, and the x represents the cause
and is a leading indicator. The focus should be on those measures that are leading indicators of the
outcome, which means that the x measures will provide early warnings for what will happen to the
y (outcome) measure. As a result, adjustments can be made prior to the outcome becoming a defect.
The input and in-process variables represent measures of items that can be controlled (x). The output measure represents values (y) that are the target for improvement by changing some number of variables (x).

19.1.2 What Type of Data Are Collected?


The pros of collecting discrete data include the following:

◆◆ Collecting discrete data is often easier and faster than collecting continuous data.

◆◆ Many business processes are already set up to collect discrete data.

◆◆ Discrete data make it easier to interpret intangible data such as customer satisfaction.


◆◆ Discrete data make it easy to calculate the sigma level of the process.

The cons of collecting discrete data include the following:

◆◆ Continuous data provide more precise measurements than discrete data.


◆◆ When using discrete data, more data must be collected in order to uncover the patterns. For
Pareto charts, which use discrete data, 50–100 data points may be needed. For a run chart,
which uses continuous data, far fewer data points are needed.

◆◆ With discrete data, the likelihood of missing important information increases. Whenever possible, continuous data should be used if time and budget allow.

19.1.3 Stratifying Data


Stratification, separating the data to identify patterns, helps in the following ways:

◆◆ Focuses the project on the critical few

◆◆ Speeds up the search for the root causes

◆◆ Generates a deeper understanding of the process factors

The question that needs to be answered is what kinds of things can contribute to differences in the
process performance level.

If the source of the problem can be narrowed down by identifying patterns based on “slicing and
dicing” the data in these various “buckets,” the source of the problem can be pinpointed and the
improvement efforts can be significantly streamlined. It is important to identify these patterns during
the data collection planning phase because if enough data is not gathered based on these “buckets,” the
data cannot be “sliced and diced” after data collection is complete.
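
As a small illustration of this "slicing and dicing," the Python sketch below groups a hypothetical defect log by two stratification factors, shift and machine, to look for a pattern. The column names and counts are invented for the example; this is a sketch of the idea, not a prescribed tool.

```python
import pandas as pd

# Hypothetical defect log; the factor names (shift, machine) and counts are invented.
log = pd.DataFrame({
    "shift":   ["Day", "Day", "Night", "Night", "Day", "Night", "Day", "Night"],
    "machine": ["M1",  "M2",  "M1",    "M2",    "M1",  "M1",    "M2",  "M2"],
    "defects": [2, 1, 7, 3, 1, 8, 2, 4],
})

# "Slice and dice" the defects by each stratification factor, then by both together.
print(log.groupby("shift")["defects"].sum())
print(log.groupby("machine")["defects"].sum())
print(log.groupby(["shift", "machine"])["defects"].sum())
```

If most defects concentrate in one bucket (here, the night shift on Machine 1), the improvement effort can focus there.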

19.2 Sampling and Data Collection Methods


Sampling is the process of taking data from one or more subsets of a larger group to make decisions about the whole group. Sampling enables faster data collection, but care must be taken to ensure that the samples are representative of the group as a whole.

There are two types of sampling:

◆◆ Population sampling draws from a fixed group with definable boundaries. There is no time element involved. Examples include customers, complaints, and items in a warehouse.

◆◆ Process sampling draws from a changing flow of items moving through the business. There is a time element involved. Examples include customers per week, hourly complaint volume, and items received/shipped per day.

Additional Sampling Terms


◆◆ Sampling event: act of extracting items from the population or process to measure
(see Figure 19.3).

Figure 19.3 Sample within Population

◆◆ Subgroup: number of consecutive units extracted for measurement in each sampling event.

◆◆ Sampling frequency: number of times per day or week a sample is taken (applies only to
process sampling).

19.2.1 Factors in Sample Selection


There are a number of factors that affect the size and number of samples that must be collected:

◆◆ Situation: existing set of items that will not change (a population) vs. a set that is continually
changing (process).

◆◆ Data type: continuous or attribute.

◆◆ Objectives: what will be done with the results?

◆◆ Familiarity: how much knowledge has been accumulated about the situation (historical
process performance, customer segments, etc.)?

◆◆ Certainty: how much confidence is needed in the conclusions drawn?

19.2.2 Understanding Sampling Bias


The biggest pitfall in sampling is bias, or selecting a sample that does not represent the whole. Typical
sources of bias include the following:

◆◆ Self-selection: e.g., asking customers to call in to a phone number rather than randomly calling them.

◆◆ Self-exclusion: e.g., some types of customers will be less motivated to respond than others.

◆◆ Missing key representatives.

◆◆ Ignoring non-conformances: i.e., the items that do not match your expectations.

◆◆ Grouping.


19.2.3 Worst Ways to Choose Samples


Judgment sampling involves choosing a sample based on someone’s knowledge of the process and
assuming that the samples will be representative of the process. Judgment guarantees a bias and should
be avoided. Convenience sampling involves sampling the items that are easiest to measure or sampling
at times that are the most convenient. Examples may include voice of the customer (VOC) data from people you know or obtaining samples when you go for coffee.

19.2.4 Sampling Strategies


Systematic sampling is recommended for most business processes. This method involves taking data
samples at certain intervals, e.g., every half hour or every 20th item. In a random sampling method,
every item in the population or process has an equal chance of being selected for counting. Stratified
sampling is used when there is an inherent structure in the population or process flow; in this case,
sub-dividing the population into sub-groups may be necessary first, and the sampling is then based on a specific strategy, e.g., frequent vs. infrequent customers, large dollar sales vs. small dollar sales. Random or systematic sampling can be done within these sub-groups of data.
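
The short Python sketch below illustrates the three strategies on a hypothetical list of 200 transactions; the interval, sample counts, seed, and segment labels are arbitrary choices made for the example, not prescribed values.

```python
import random

random.seed(1)  # fixed seed only so the illustration is repeatable

# Hypothetical population: 200 transactions tagged by customer segment.
population = [{"id": i, "segment": "frequent" if i % 4 == 0 else "infrequent"}
              for i in range(200)]

# Systematic sampling: take every 20th item.
systematic = population[::20]

# Random sampling: every item has an equal chance of being selected.
simple_random = random.sample(population, 10)

# Stratified sampling: sub-divide by segment, then sample randomly within each sub-group.
stratified = []
for segment in ("frequent", "infrequent"):
    stratum = [item for item in population if item["segment"] == segment]
    stratified.extend(random.sample(stratum, 5))

print(len(systematic), len(simple_random), len(stratified))  # 10 10 10
```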

19.2.5 Confidence Level or Interval


The confidence level represents how strongly the data sample is believed to actually represent the
entire population or process. A confidence level of 95 percent is commonly accepted. This would
mean that there is a five percent chance of drawing an inaccurate conclusion. The tricky part about
collecting data is that the data collector must know something about the data to be collected. The more
experience collectors have with taking measurements and their knowledge of the process, the better
their sampling plan will be. This being said, sometimes it is necessary to begin the data collection
process with some educated guesses concerning the data.

19.2.6 Determining Sample Size


There are various formulas for calculating sample size. They vary depending upon the particular
sampling scenario and the desired results.

QI Macros (see the example below) uses four formulas: 1) for attribute data with a known population size; 2) for attribute data with an unknown population size; 3) for variable data with a known population size; and 4) for variable data with an unknown population size.

There are a few basic things about the target population and the sample you need to determine to
calculate sample size:

Population Size
Formulas are available for known population size and unknown population size. Tables are also
available if the population size is known.

Confidence Interval and Confidence Level


Confidence intervals are an estimate for the mean of the population that was sampled. Interval
estimates are desirable because the estimate of the mean varies from sample to sample. Instead of a
single estimate for the mean, a confidence interval generates a lower and upper limit for the mean. The
interval estimate gives an indication of how much uncertainty there is in an estimate of the true mean.


One-half of the confidence interval is known as the Margin of Error (MOE). MOE is sometimes
known as precision.

Confidence intervals are expressed in terms of confidence levels. How confident do you want to be that the actual mean falls within your confidence interval? The most common confidence levels are 90%, 95%, and 99%.

Confidence levels correspond to a z-score. The z-score is the number of standard deviations a given
proportion is away from the mean. It is a value needed for a sample size formula. Here are the
z-scores for the most common confidence levels:

• 90% – z-score = 1.645

• 95% – z-score = 1.96

• 99% – z-score = 2.576

The confidence levels and confidence intervals are combined so that you have a certain confidence
level that the population mean is within a certain interval.

For example, 1,000 cats are sampled and 210 of them are found to have green eyes, or 21%, which
is entered in Minitab software to find the confidence interval for a confidence level of 95% with the
following results:

Cat Sample    X      N       Sample p      95% CI
1             210    1000    0.210000      (0.185139, 0.236581)

These results show with 95% confidence that the percentage of green-eyed cats in the population sampled is between 18.5139% and 23.6581%, which is the confidence interval. The MOE is one-half of the width of the confidence interval, or about 2.57% in this case.

The larger the MOE, the wider the confidence intervals are and the less likely the estimated value is
close to the true population value.
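
As a rough cross-check of the green-eyed-cat example, the Python sketch below computes the z-scores listed above and an exact (Clopper-Pearson) confidence interval for the sample proportion. Minitab's one-proportion output is typically based on an exact method, so the result should land close to the interval shown, but this is an illustration rather than a reproduction of Minitab's calculation.

```python
from scipy import stats

# z-scores for the common two-sided confidence levels.
for conf in (0.90, 0.95, 0.99):
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    print(f"{conf:.0%} confidence level -> z = {z:.3f}")   # 1.645, 1.960, 2.576

# Exact (Clopper-Pearson) 95% interval for 210 green-eyed cats out of 1,000 sampled.
x, n, alpha = 210, 1000, 0.05
lower = stats.beta.ppf(alpha / 2, x, n - x + 1)
upper = stats.beta.ppf(1 - alpha / 2, x + 1, n - x)
print(f"95% CI for the proportion: ({lower:.4%}, {upper:.4%})")  # roughly (18.51%, 23.66%)
```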

Standard Deviation
For variable data, an estimate of the expected standard deviation must be entered.

Estimated Response
For attribute data, an estimated percentage of the targeted response must be entered; for example, in
order to sample a population of items for the percentage of defects, if the historical rate of defects was
found to be 10% defects, then 10% would be entered. QI Macros uses 50% as a default.

Alpha Risk and Beta Risk


Alpha risk is defined as rejecting something that is true or good, such as rejecting a good product (producer's risk). Low alpha risk is good. The alpha risk is 1 minus the confidence level; therefore, a confidence level of 95% carries an alpha risk of 5%.


Beta risk is defined as accepting something that is false or bad, such as accepting inferior products (consumer's risk). Low beta risk is good. Beta risk is 1 minus power, where power is defined as correctly rejecting something that is false or bad. High power is good; therefore, a power of 80% carries a beta risk of 20%.

Sample Size Formula Examples


Sample size formula for variable data when the population is unknown
Sample size = (1.96 × standard deviation / MOE)²

1.96 represents a 95% confidence level.

Given a standard deviation of 0.167, a confidence level of 95%, and an MOE of 5%, 43 samples are needed:

(1.96 × 0.167 / 0.05)² = (6.54)² ≈ 43 samples.

Sample size formula for attribute data when the population is unknown
Sample size = 1.96² × estimated percent of defects × (1 − estimated percent of defects) / (MOE)²

1.96 represents a 95% confidence level.

Given a confidence level of 95%, an MOE of 5%, and an estimated percent of 50% defects, you would need 384 samples: (1.96)² × 0.5 × 0.5 / (0.05)² ≈ 384 samples.

It is much easier to use sample size tables and sample size calculators that are available on the internet,
which are based on formulas. Sample size calculators are also included in statistical software such as
QI Macros or Minitab.
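
For readers who prefer to script the calculation, the sketch below implements the two formulas above, plus the β (power) term and the finite-population adjustment that the QI Macros calculator in the next section appears to apply. The function names are our own, the β-term formula is the standard one for a stated power, and rounding conventions may differ slightly from any particular software package.

```python
import math
from scipy import stats


def z_value(conf):
    """Two-sided z-score for a confidence level, e.g., 1.96 for 95%."""
    return stats.norm.ppf(1 - (1 - conf) / 2)


def adjust_for_population(n, population=None):
    """Finite-population adjustment when the population size is known."""
    if population:
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)


def sample_size_variable(stdev, moe, conf=0.95, power=None, population=None):
    """Variable (continuous) data: n = ((z_alpha/2 + z_beta) * stdev / MOE)^2."""
    z_total = z_value(conf) + (stats.norm.ppf(power) if power else 0.0)
    return adjust_for_population((z_total * stdev / moe) ** 2, population)


def sample_size_attribute(p, moe, conf=0.95, population=None):
    """Attribute (discrete) data: n = z^2 * p * (1 - p) / MOE^2."""
    return adjust_for_population(z_value(conf) ** 2 * p * (1 - p) / moe ** 2, population)


print(sample_size_variable(0.167, 0.05))                   # 43, as in the worked example
print(sample_size_variable(0.167, 0.05, power=0.90))       # 118 (alpha-and-beta case)
print(sample_size_attribute(0.5, 0.05))                    # 385 here; the text rounds to 384
print(sample_size_attribute(0.5, 0.05, population=1000))   # 278 for a known population of 1,000
```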

19.2.6.1 Sample Size Table Example


Table 19.1 is an example of a sample size table listing population sizes, specific MOE, and a specific
confidence level. Remember that the MOE is one-half of the confidence interval. This table is for
either a 95% or a 99% confidence level.

The 5% and 1% columns are MOE values for this sampling table. To use these values, find the size of the population in the left-hand column. The value in the next column is the sample size required to generate an MOE of 5% at a 95% confidence level.

For example, your population is 1,000 and you sample it for percent defects. You pull 278 samples and
find 28 defects. You know that the defect rate is 10% for the samples that you pulled; but, what is the
defect rate for the entire population of 1,000? Per the table, you are 95% confident that the true defect
rate of the entire population is between 5% and 15%. You would have to pull 906 samples to be 99%
confident that the defect rate for the population is 10% +/- 1%.

To be more confident or for a smaller interval, more samples must be taken.


Recommended Sample Size for Two Different Confidence Levels

Table 19.1 Sample Size Table

Population size    Confidence Level = 95%      Confidence Level = 99%
                   MOE = 5%     MOE = 1%       MOE = 5%     MOE = 1%
10                 10           10             10           10
100                80           99             87           99
500                217          475            285          485
1000               278          906            399          943
5000               357          3288           586          3842
10000              370          4899           622          6239
19.2.6.2 Software Sample Size Calculator Example
The following information and figures are taken from the QI Macros sample size calculator example
with their permission.
QI Macros Sample Size Calculator

α (Type I Error)                        0.05      95% Confidence Level
β (Type II Error)                       0.1       90% Power
One-Half Confidence Interval            0.05      Maximum allowable error of the estimate (δ)
Population (if known)                   340

Attribute Data
Percent Defects (50%)                   50%
Sample Size (Unknown Population)        384
Sample Size for Known Population        181

Variable Data using α
Standard Deviation σ ([High-Low]/6)     0.167
Sample Size (Unknown Population)        43
Sample Size for Known Population        38

Variable Data using both α and β
Standard Deviation σ ([High-Low]/6)     0.167
Sample Size (Unknown Population)        118
Sample Size for Known Population        88

Defaults: 0.05 Confidence Interval (desired width); 50% Percent Defects (Attribute); 0.167 Standard Deviation ([High-Low]/6)

Figure 19.4 QI Macros Sample Size Calculator

The following information is needed to calculate a sample size (see Figure 19.4):

1. The confidence level required (90%, 95%, 99%) α = 0.1, 0.05, 0.01 (Type I Error)

2. The Power required (80%, 85%, 90%) β = 0.2, 0.15, 0.1 (Type II Error)

3. The desired width of the confidence interval δ - Maximum allowable error of the estimate =
1/2 * tolerance

4. σ - estimated standard deviation (0.167 = 1/6)

The defaults are set to standard parameters but can be changed.


Confidence Level
Sampling makes it possible to know how well a sample reflects the total population. An α = 0.05 (95% confidence level) indicates 95% certainty that the sample reflects the population within the confidence interval.

Step 1. Choose alpha α = 0.05 - 95% Confidence Level


Step 2. Choose beta β = 0.1 - 90% Power

Confidence Interval
The confidence interval represents the range of values, which includes the true value of the population
parameter being measured.

Step 1. Set the confidence interval to one-half the tolerance or maximum allowable error of the estimate, e.g., ± 0.05, 2, etc.
Step 2. Attribute data (pass/fail, etc.). Set percent of defects to 0.5. If 95 out of 100 are good
and only five are bad, then a very large sample would not be needed to estimate the
population. If 50 are bad and 50 are good, a much larger sample will be needed to
achieve the desired confidence level. Since it cannot be known beforehand how many
are good or bad, the attribute field can be set to (50% or 0.5).
Step 3. Variable Data. Enter standard deviation. If the standard deviation of your data (from
past studies) is known, then the standard deviation can be used.
If the specification tolerance is known, then (maximum value - minimum value)/6 can
be used as the standard deviation. (The default is 1/6 = 0.167.)
Step 4. Enter the total population (if known). Use the default values (95%, ± 0.05, Stdev = 0.167).
Step 5. Read the sample size. Use the sample size calculated for your type of data: attribute or variable. Variable Sample Size: If variable data are used with α alone, the sample size would be 43. Using both α and β, the sample size would be 118.
Attribute Example
Attribute Sample Size: If you were using attribute data, e.g., counting the number of defective coins in a vat at the Denver Mint, how many coins would you need to sample? You would need 384 coins to be 95% confident that the estimate falls within the ± 5% interval (see Figure 19.5).

However, if you knew there were 1,000 coins in the vat (population known), you would only need 278 coins to be 95% confident (see Figure 19.5).

QI Macros Sample Size Calculator


α (Type I Error) 0.05 95%
β (Type II Error) 0.1 90%
One-Half Confidence Interval 0.05
Population (if known) 1000

Attribute Data
Percent Defects (50%) 50%
Sample Size (Unknown Population) 384
Sample Size for Known Population 278

Figure 19.5 Attribute Data Example Where One-Half Confidence Interval = .05


If you changed the confidence interval to ± 0.1, only 88 coins would be needed to be 95% confident (see Figure 19.6).

QI Macros Sample Size Calculator


α (Type I Error) 0.05 95%

β (Type II Error) 0.1 90%
One-Half Confidence Interval 0.1
Population (if known) 1000

Attribute Data
Percent Defects (50%) 50%
Sample Size (Unknown Population) 96
Sample Size for Known Population 88

Figure 19.6 Attribute Data Example Where One-Half Confidence Interval = .1


Variable Example
A sample must be selected to estimate the mean length of a part in a population. Almost all production
falls between 2.009 and 2.027 inches.
Estimated standard deviation = (2.027 - 2.009) / 6 = 0.003.
You want to be 95% confident that the sample is within +/- 0.001 of the true mean. Enter the data as
shown below in Figure 19.7. You need 35 samples using α alone and 95 using α and β together.
QI Macros Sample Size Calculator
α (Type I Error) 0.05 95%
β (Type II Error) 0.1 90%
One-Half Confidence Interval 0.001
Population (if known)

Attribute Data
Percent Defects (50%) 50%
Sample Size (Unknown Population) 960365
Sample Size for Known Population

Variable Data using α


Standard Deviation σ ([High-Low]/6) 0.003
Sample Size (Unknown Population) 35
Sample Size for Known Population

Variable Data using both α and β


Standard Deviation σ ([High-Low]/6) 0.003
Sample Size (Unknown Population) 95
Figure 19.7 Variable Data Example

19.2.7 Data Collection Planning


A good data collection plan (see Table 19.2) will help make sure the data collected will be useful
(measuring the right things) and statistically valid (measuring things right). Data collection is a crucial
part of the LSS process. The success of a project is directly linked to the thoroughness and accuracy of
the data collection process.

If the data collected prove to be unreliable or the measurement system is unstable, the data will be useless, and the data collection process will need to be completed again. If unreliable data are not caught, all the decisions and improvements based on them are likely to be ineffective.


Steps for Creating a Data Collection Plan

Step 1. Decide who will collect the data.
Step 2. Train data collectors.
Step 3. Do ground work for analysis.
Step 4. Execute the data collection plan.
Step 5. Identify the source and location of the data.
Step 6. Develop data collection forms/checklists.

Table 19.2 Data Collection Plan Template

Data Collection Plan (columns): Metric | Stratification Factors | Operational Definition | Sample Size | Source and Location | Collection Method | Who will collect data

19.2.8 Data Collection Tools


Once a decision has been made about the type of data to collect, the next step is to collect the data. The
most commonly used tools for data collection are spreadsheets and check sheets. When creating any
type of data collection form, the following guidelines must be considered:

◆◆ Keep the form simple. If the form is cluttered, hard to read, or confusing, there is a risk of
errors or nonconformance.

◆◆ Label the form well. Make sure it is clear where data should go on the form.

◆◆ Include space for the date, time, and collector’s name. This information later helps clarify any
information that might be unclear.

◆◆ Organize the data collection form and spreadsheet used to compile the data consistently.
Otherwise, it can cause rework, confusion, and extra time tabulating the data.

◆◆ Include the key factors to stratify the data.

The most common types of check sheets used include the defect or cause checksheet, the data sheet,
the frequency plot checksheet, the concentration diagram checksheet, the traveler checksheet, and the
production defect checksheet.


A defect or cause check sheet (see Figure 19.8) is used to record the types of defects or the causes of defects. A data sheet is used to capture readings, measures, or counts. A frequency plot checksheet is used to record a measure of an item along a scale or continuum.

Defect or Cause Checksheet (July 10-13)

Defect           Total
Wrong Height     26
Wrong Length     9
Wrong Width      8
Wrong Weight     35
Wrong Finish     7
Daily totals: July 10 = 24, July 11 = 17, July 12 = 23, July 13 = 21; Grand total = 85

Figure 19.8 Defect or Cause Checksheet

A concentration diagram check sheet is used to show a picture of an object or document being
observed on which collectors then mark where the defects are actually occurring. For example, when
renting a car, the customer usually walks around the car while an attendant records any dents or
scratches on the car on a check sheet. A traveler check sheet can be used which travels through the
process along with the product or service being produced. The process steps are listed in a column,
and then multiple pieces of information can be collected about each process step.

19.3 Graphical Methods of Displaying Data


19.3.1 Displaying Data Using Histograms
A histogram (also called a frequency plot) is used to graphically display process performance data
collected over some period of time. The data are divided into groups called classes. The data points
within a class are totaled, and bars are drawn for each class. The shape of the resultant histogram can
be used to assess the following:

◆◆ Measures of central tendency

◆◆ Variation in the data

◆◆ Shape or underlying distribution of the data

The histogram helps evaluate the distribution of the process performance, which is much more
revealing than looking at an average of the performance data. By looking at the process centering,
spread, and shape, a great deal can be learned about the problem to be solved. A normally distributed
histogram will have almost all its values within ±3 standard deviations of the mean.


A histogram:1

◆◆ Displays large amounts of data that are difficult to interpret in tabular form.

◆◆ Shows the relative frequency of occurrence of the various data values.



◆◆ Reveals the centering, variation, and shape of the data.

◆◆ Illustrates quickly the underlying distribution of the data.

◆◆ Provides useful information for predicting future performance of the process.

◆◆ Helps to indicate if there has been a change in the process.

◆◆ Helps answer the question “Is the process capable of meeting my customer requirements?”

Steps for Creating a Histogram:2

Step 1. Decide on the measurement to collect and evaluate.


•• The data should be variable data and should be measured on a continuous scale.
Examples: temperature, time, dimensions, weight, and speed.

Step 2. Gather data.


•• Collect at least 50 to 100 data points if planning on looking for patterns and
calculating the distribution’s centering (mean), spread (variation), and shape.

•• Consider collecting data for a specified period of time: hour, shift, day, week,
month, etc.

•• Use historical data to find patterns or as a baseline measure of past performance.

Step 3. Prepare a frequency table from the data. Count the number of data points, n, in the
sample (see Table 19.3). In this example, there are 125 data points; n = 125.

1  Michael Brassard and Diane Ritter, The Memory Jogger 2: Tools for Continuous Improvement and Effective Planning,
Second Edition [Salem, NH: GOAL/QPC, 2010], 91. www.goalqpc.com
2  Ibid.


Table 19.3 Sample Data Set


Sample Data Set
9.9 9.3 10.2 9.4 10.1 9.6 9.9 10.1 9.8
9.8 9.8 10.1 9.9 9.7 9.8 9.9 10.0 9.6

9.7 9.4 9.6 10.0 9.8 9.9 10.1 10.4 10.0
10.2 10.1 9.8 10.1 10.3 10.0 10.2 9.8 10.7
9.9 10.7 9.3 10.3 9.9 9.8 10.3 9.5 9.9
9.3 10.2 9.2 9.9 9.7 9.9 9.8 9.5 9.49
9.0 9.5 9.7 9.7 9.8 9.8 9.3 9.6 9.7
10.0 9.7 9.4 9.8 9.4 9.6 10.0 10.3 9.8
9.5 9.7 10.6 9.5 10.1 10.0 9.8 10.1 9.6
9.6 9.4 10.1 9.5 10.1 10.2 9.8 9.5 9.3
10.3 9.6 9.7 9.7 10.1 9.8 9.7 10.0 10.0
9.5 9.5 9.8 9.9 9.2 10.0 10.0 9.7 9.7
9.9 10.4 9.3 9.6 10.2 9.7 9.7 9.7 10.7
9.9 10.2 9.8 9.3 9.6 9.5 9.6 10.7

•• Determine the range (R) for the entire sample. The range is the smallest value in the
set of data subtracted from the largest value. For this example: R = x (largest) – x
(smallest) = 10.7 – 9.0 = 1.7

•• Determine the number of class intervals (k) needed. The two methods listed below
are general rules of thumb for determining class intervals. The number of intervals
can influence the graphical pattern that will be displayed for this sample. Too few
intervals will produce a tight, high pattern. Too many intervals will produce a
spread-out, flat pattern.

•• Method 1: Take the square root of the total number of data points and round to
the nearest whole number. For this example: k = square root (125) = 11.18 = 11
intervals.

•• Method 2: Use the table below to provide a guideline for dividing the sample
into a reasonable number of classes (Table 19.4). For this example, 125 data
points would be divided into 7–12 class intervals.
Table 19.4 Dividing Sample into Classes

Dividing Sample into Classes


Number of Data Points Number of Classes (k)
Under 50 5-7
50-100 6-10
100-250 7-12
Over 250 10-20


•• Determine the class width (H). The formula is H = R/k; using k = 10 here gives H = 1.7/10 = 0.17.
•• Round the number to the nearest value with the same number of decimal places as the original sample. For this example, round up to 0.20. It is useful to have intervals defined to one more decimal place than the data collected.

•• Determine the class boundaries, or end points. Use the smallest individual
measurement in the sample or round to the next appropriate lowest round
number. This will be the lower end point for the first class interval, which for
this example would be 9.0.
•• Add the class width (H) to the lower end point. This will be the lower end point
for the next class interval. For this example: 9.0 + H = 9.0 + .20 = 9.20. Thus, the
first class interval would be 9.00 and everything up to, but not including, 9.20;
that is, 9.00 through 9.19. The second class interval would begin at 9.20 and
would be everything up to, but not including, 9.40.
•• Each class interval must be mutually exclusive; that is, every data point will fit
into one and only one class interval.
•• Consecutively add the class width to the lowest class boundary until the k class
intervals and/or the ranges of all the numbers are obtained.
•• Construct the frequency table based on the values computed above. A frequency table based on the data from the example is shown in Figure 19.9.

#     Class Boundaries    Mid-Point    Frequency Total
1     9.00-9.19           9.1          1
2     9.20-9.39           9.3          9
3     9.40-9.59           9.5          16
4     9.60-9.79           9.7          27
5     9.80-9.99           9.9          31
6     10.00-10.19         10.1         22
7     10.20-10.39         10.3         12
8     10.40-10.59         10.5         2
9     10.60-10.79         10.7         5
10    10.80-10.99         10.9         0

Figure 19.9 Frequency Table Based on Data


Based on graphic from: Michael Brassard and Diane Ritter,
The Memory Jogger 2: Tools for Continuous Improvement and Effective Planning, Second Edition
[Salem, NH: GOAL/QPC, 2010], 96-97. Used with permission. www.goalqpc.com

Step 4. Draw a histogram from the frequency table:

•• On the vertical line (y-axis), draw the frequency (count) scale to cover the class
interval with the highest frequency count.
•• On the horizontal line (x-axis), draw the scale related to the variable being measured.

•• For each class interval, draw a bar with the height equal to the frequency tally of that class (see Figure 19.10).

(Figure 19.10 shows the resulting histogram: frequency (0-40) on the y-axis and thickness (9.0-10.8) on the x-axis, with the specifications marked: Target 9 ± 1.5 and the USL.)

Figure 19.10 Histogram Based on Frequency Table
Based on graphic from: Michael Brassard and Diane Ritter,
The Memory Jogger 2: Tools for Continuous Improvement and Effective Planning, Second Edition
[Salem, NH: GOAL/QPC, 2010], 95. Used with permission. www.goalqpc.com
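
The class-interval arithmetic above is easy to script. The Python sketch below applies the same steps (range, k ≈ √n, class width, boundaries) to simulated data, since retyping the 125 values of Table 19.3 here would add little; numpy then tallies the frequencies. The simulated values are stand-ins, not the data from Table 19.3.

```python
import math
import numpy as np

rng = np.random.default_rng(7)
data = np.round(rng.normal(loc=9.9, scale=0.3, size=125), 1)  # stand-in for Table 19.3

r = data.max() - data.min()                 # range R
k = round(math.sqrt(len(data)))             # number of class intervals (Method 1)
h = math.ceil(r / k * 10) / 10              # class width, rounded up to one decimal place
edges = data.min() + h * np.arange(k + 1)   # class boundaries starting at the smallest value

freq, edges = np.histogram(data, bins=edges)
for lo, hi, f in zip(edges[:-1], edges[1:], freq):
    print(f"{lo:.2f} to {hi:.2f}: {f}")
```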

Interpreting Histograms
Figure 19.11 focuses on four aspects to consider when interpreting a histogram: centering, variation,
shape, and process capability.


a) Centering. Where is the distribution centered? Is the process running too high? Too low? (The panels compare a process centered, too high, and too low relative to the customer requirement.)

b) Variation. What is the variation or spread of the data? Is it too variable? (The panels compare a process within requirements and a process that is too variable.)

c) Shape. What is the shape? Does it look like a normal, bell-shaped distribution? Is it positively or negatively skewed; that is, are more data values to the left or to the right? Are there twin (bi-modal) or multiple peaks? (The panels illustrate normal, bi-modal, multi-modal, positively skewed, and negatively skewed distributions.)

Note: Some processes are naturally skewed; don't expect every distribution to follow a bell-shaped curve.
Note: Always look for twin or multiple peaks indicating that the data are coming from two or more different sources, e.g., shifts, machines, people, suppliers. If this is evident, stratify the data.

d) Process Capability. Compare the results of your histogram to your customer requirements or specifications. Is your process capable of meeting the requirements, i.e., is the histogram centered on the target and within specification limits?

Figure 19.11 Interpreting Histograms
Based on graphic from: Michael Brassard and Diane Ritter,
The Memory Jogger 2: Tools for Continuous Improvement and Effective Planning, Second Edition
[Salem, NH: GOAL/QPC, 2010], 96-97. Used with permission www.goalqpc.com


Centering and Spread Compared to Customer Target and Limits

Figure 19.12 shows the centering and spread compared to the customer target and limits. Each panel shows the distribution relative to the target and the lower and upper specification limits.

a) Centered and well within customer limits.
Action: Maintain present state.

b) No margin for error.
Action: Reduce variation.

c) Process running low. Defective product/service.
Action: Bring average closer to target.

d) Process too variable. Defective product/service.
Action: Reduce variation.

e) Process off center and too variable. Defective product/service.
Action: Center better and reduce variation.

Note: If the histogram should suddenly stop at one point (such as a specification limit), without any previous decline in the data, then it's time to get suspicious of the accuracy of the data. This could be an indicator that the sample doesn't include defective product which has been sorted out.
Figure 19.12 Centering and Spread Compared to Customer Target and Limits
Based on graphic from: Michael Brassard and Diane Ritter, The Memory Jogger 2: Tools for Continuous
Improvement and Effective Planning, Second Edition [Salem, NH: GOAL/QPC, 2010], 98. Used with permission.
www.goalqpc.com


Software packages are available that will automatically calculate the class intervals and allow the user
to revise them as required. The number of intervals shown can influence the pattern of the sample.
Plotting the data is always recommended. Three unique distributions of data are shown in the
following three figures. All three data plots share an identical mean, but the spread of the data about
the mean differs significantly.
The histogram in Figure 19.13 illustrates a distribution where the measures of central tendency (mean, median, and mode) are equal. This is a normal distribution and is sometimes referred to as a bell-shaped curve. Notice that there is a single point of central tendency, and the data are symmetrically distributed about the center. Some processes are naturally skewed.

(In Figure 19.13 the mode, median, and mean coincide at the center of the distribution; frequency is plotted on the y-axis.)
Figure 19.13 Symmetric Data

A negatively skewed distribution is shown in Figure 19.14. For skewed-left data, the median is between the mode and the mean, with the mean on the left. This distribution does not appear to be normally distributed and may require transformation prior to statistical analysis. Data that sometimes exhibit negative skewness are cash flow, yield, and strength.

(In Figure 19.14 the mode lies to the right, the median = 73.8, and the mean = 70.)

Figure 19.14 Negatively Skewed Data


A positively skewed distribution is shown in Figure 19.15. The long tail of the skewed distribution points in the positive x-direction. The median is between the mode and the mean, with the mean on the right. This distribution is not normally distributed and is another candidate for transformation. Data that sometimes exhibit positive skewness are home prices, salaries, cycle time of delivery, and surface roughness.

(In Figure 19.15 the mode lies to the left, the median = 65.7, and the mean = 70.)

Figure 19.15 Positively Skewed Data

19.3.2 Displaying Data Using Pareto Charts

A Pareto chart focuses efforts on the problems that offer the greatest potential for improvement by showing their relative frequency or size in a descending bar graph.
Reason for Failed Appointments (Source of Data: Shore-Based Command)
Forgot 31%, Worked 25%, Personal Business 21%, Leave 8%, Misc. 8%, Transferred 4%, Vehicle 2%

Information provided courtesy of U.S. Navy, Naval Dental Center, San Diego

Figure 19.16 Pareto Chart


Based on graphic from: Michael Brassard and Diane Ritter,
The Memory Jogger 2: Tools for Continuous Improvement and Effective Planning, Second Edition
[Salem, NH: GOAL/QPC, 2010], 129. Used with permission. www.goalqpc.com


A Pareto chart (see Figure 19.16)3

◆◆ Helps a team focus on those causes that will have the greatest impact if solved.

◆◆ Operates on the proven Pareto principle: 20 percent of the sources cause 80 percent of any problem.

◆◆ Displays the relative importance of problems in a simple, quickly interpreted, visual format.

◆◆ Helps prevent shifting the problem in which the solution removes some causes but worsens
others.

◆◆ Measures progress in a highly visible format that provides incentive to push on for more
improvement.

Steps for Creating a Pareto Chart:4

Step 1. Decide which problem to investigate.


Step 2. Choose the causes or problems that will be monitored, compared, and rank-ordered by
brainstorming or with existing data.
Step 3. Choose the most meaningful unit of measurement, such as frequency or cost.
•• Sometimes it is not known before the study which unit of measurement is best. Be
prepared to do both frequency and cost.

Step 4. Choose the time period for the study.


•• Choose a time period that is long enough to represent the situation. Longer studies
do not always translate to better information. Look first at the volume and the
variety within the data.

•• Make sure the scheduled time is typical in order to take into account seasonality or
even different patterns within a given day or week.

Step 5. Gather necessary data on each problem category either by “real time” or reviewing
historical data.
•• Whether data is gathered in “real time” or historically, check sheets are the easiest
method for collecting data.

Note: Always include, with the source data and final chart, the identifiers that indicate
the source, location, and time period covered.

Step 6. Compare the relative frequency or cost of each problem category.


Step 7. List the problem categories on the horizontal line and frequencies on the vertical line.
•• List the categories in descending order from left to right on the horizontal line with
bars above each problem category to indicate its frequency or cost. List the unit of
measure on the vertical line.

3  Michael Brassard and Diane Ritter, The Memory Jogger 2: Tools for Continuous Improvement and Effective Planning,
Second Edition [Salem, NH: GOAL/QPC, 2010], 122. Used with permission. www.goalqpc.com
4  Ibid, 122-126.
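
A Pareto chart is also straightforward to build in general-purpose tools. The Python sketch below uses matplotlib (an assumption; any charting tool works) and the failed-appointment percentages from Figure 19.16 to draw descending bars with a cumulative-percentage line.

```python
import matplotlib.pyplot as plt

# Categories and percentages taken from Figure 19.16 (already in descending order).
categories = ["Forgot", "Worked", "Personal Business", "Leave", "Misc.", "Transferred", "Vehicle"]
percents = [31, 25, 21, 8, 8, 4, 2]
cumulative = [sum(percents[:i + 1]) for i in range(len(percents))]

fig, ax = plt.subplots()
ax.bar(categories, percents)                     # descending bars
ax.set_ylabel("Percent of failed appointments")
ax.tick_params(axis="x", rotation=45)

ax2 = ax.twinx()                                 # second y-axis for the cumulative line
ax2.plot(categories, cumulative, marker="o", color="black")
ax2.set_ylabel("Cumulative percent")
ax2.set_ylim(0, 100)

plt.tight_layout()
plt.show()
```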


19.3.3 Displaying Data Using Run Charts


Using a run chart allows a team to review process data and look for trends or patterns in the data over
a specific period of time. A run chart can provide the following:5

◆◆ Monitors the performance of one or more processes over time to detect trends, shifts, or cycles.

◆◆ Allows a team to compare a performance measure before and after implementation of a solution to measure its impact.

◆◆ Focuses attention on truly vital changes in the process.

◆◆ Tracks useful information for predicting trends.

A danger in using a run chart is the tendency to see every variation in data as being important. The
run chart should be used to focus on truly vital changes in the process. Simple tests can be used to
look for meaningful trends and patterns. These tests are found in Chapter 22. Remember that for
more sophisticated uses, a process behavior chart is invaluable because it is simply a run chart with
statistically-based limits.6

Variation
Like control charts, run charts (see Figure 19.17) can be used to assess whether there are any signs of
special-cause variation.
Sample Run Chart

Note: There are 20 data points that are not on the median, forming 11 runs. If points fall on the median, they are ignored since they don't add to or interrupt a run.

Figure 19.17 Run Chart

5  Michael Brassard and Diane Ritter, The Memory Jogger 2: Tools for Continuous Improvement and Effective Planning,
Second Edition [Salem, NH: GOAL/QPC, 2010], 182. Used with permission. www.goalqpc.com
6  Ibid, 184.


In general, there are five steps to using a run chart:

Step 1. Collect 20 or more data values over time.


Step 2. Plot the data in time order.

Step 3. Pencil in the median line.


Step 4. Count the runs above and below the median. A run is a series of points on the same
side of the median; a series can be of any length from one point to many points. A run
ends anytime the connecting line crosses the median.
Step 5. Look for patterns in the data.

Run charts are not as powerful as control charts for analyzing process data. The out-of-control rules for control charts are not used for run charts since there are no control limits on run charts. Run charts provide a quick check for obvious problems and are useful when there are not enough data for a control chart.
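
The run counting in Steps 3 and 4 can be automated. The Python sketch below counts runs above and below the median for an arbitrary made-up series; the values are not the data behind Figure 19.17.

```python
import statistics

# Hypothetical time-ordered measurements (not the data behind Figure 19.17).
values = [24, 27, 31, 22, 19, 25, 33, 35, 30, 21, 18, 26, 29, 28, 20, 23, 34, 32, 17, 36]
median = statistics.median(values)

# Points exactly on the median are ignored; a run ends each time the series crosses the median.
sides = ["above" if v > median else "below" for v in values if v != median]
runs = 1 + sum(1 for a, b in zip(sides, sides[1:]) if a != b)

print(f"median = {median}, points off the median = {len(sides)}, runs = {runs}")
```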

19.3.4 Scatter Diagram (Scatterplot)


The scatter diagram pairs numerical data, one variable on each axis, to search for a possible relationship between them. If the variables are correlated, the points will fall along a line or a curve. The stronger the correlation, the more closely the points will fit the line or curve. However, even if a relationship is shown, it should not be assumed that one variable caused the other (see Figure 19.18).

For example, a retailer wants to know if there is a relationship between the number of customers per
day in their store and the sales for that day. Or, someone may want to see if there is a relationship
between a person's height and their weight.

A scatter diagram is the first step in determining relationships between variables.

See correlation and regression in Chapter 25 for other tests that can be used.
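
As a quick illustration of the retailer example, the Python sketch below plots hypothetical daily customer counts against daily sales and computes the Pearson correlation coefficient; the numbers are invented, and the formal tests are the ones covered in Chapter 25.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical paired data: customers per day (x) and sales per day in dollars (y).
customers = np.array([102, 120, 98, 150, 135, 110, 160, 90, 142, 128])
sales = np.array([2100, 2500, 1900, 3200, 2800, 2300, 3300, 1850, 3000, 2600])

r = np.corrcoef(customers, sales)[0, 1]    # Pearson correlation coefficient
print(f"correlation r = {r:.2f}")          # near +1 suggests a strong positive relationship

plt.scatter(customers, sales)
plt.xlabel("Customers per day")
plt.ylabel("Sales per day ($)")
plt.title(f"Scatter diagram (r = {r:.2f})")
plt.show()
```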


The five panels of Figure 19.18 plot session rating (y) against trainer experience (x):

1. Positive Correlation. An increase in y may depend on an increase in x. Session ratings are likely to increase as trainer experience increases.

2. Possible Positive Correlation. If x is increased, y may increase somewhat. Other variables may be involved in the level of rating in addition to trainer experience.

3. No Correlation. There is no demonstrated connection between trainer experience and session ratings.

4. Possible Negative Correlation. As x is increased, y may decrease somewhat. Other variables, besides trainer experience, may also be affecting ratings.

5. Negative Correlation. A decrease in y may depend on an increase in x. Session ratings are likely to fall as trainer experience increases.

Figure 19.18 Scatterplot


19.4 Using Existing Data


Sometimes past data, or existing data, will be available, but extreme caution should be used when
using existing data. Before using this data, the following qualifications must be addressed:

◆◆ It must be in a usable form.



◆◆ It must be either current data or there must be verification that the process has not changed
since last measured.
◆◆ The data collection methods used must be verified.
◆◆ The operational definitions must be the same.
◆◆ The data should represent the population and be unbiased.
◆◆ The data must be backed up by applicable MSA records.
If these conditions cannot be met, new data are needed.


Part VI: Principles of Statistical Process Control


Statistical Process Control (SPC) tools and techniques help monitor and control manufacturing and service process performance. Tracking process performance allows variation to be reduced, improves understanding of the process, and supports more statistically valid decision-making about the actions needed to improve the overall process. SPC tools accomplish this by gathering data in real time and converting those data into vital, useful information. Probability distributions, control charts, and process capability will be discussed in the following chapters.


Chapter 20: Statistical Process Control


Key Terms
central limit theorem
common cause variation
rational subgrouping
special cause variation
statistical process control (SPC)
subgroups

Body of Knowledge
1. Describe the theory and objectives of statistical process control (SPC).

2. Define and distinguish between common and special cause variation.

3. Define and describe how rational subgrouping is used.

4. Describe the central limit theorem.

Statistical Process Control (SPC) is a methodology used for the ongoing monitoring, control, and improvement of processes through the use of statistical tools. SPC contains a number of procedures and graphical methods that help achieve several objectives: quantifying one or more measures of a process; determining whether the process is operating within an acceptable range of variability; identifying ways that the process can be improved to achieve its best target value; and reducing variability.

Control charts (Chapter 22) and process capability (Chapter 23) are discussed first as they are the
primary SPC tools for monitoring processes.

Control charts are graphs used to study how a process changes over time. Data are placed in time
sequence. Control charts have a central line representing the average and upper and lower lines for
the control limits. The control lines, or limits, are calculated from historical data. By comparing
current data to these lines, conclusions can be made as to whether the process is consistent (in control
or stable) or inconsistent (out of control or unstable). Developed by Walter Shewhart in the early
1920s, these charts have long been used in all types of organizations, both manufacturing and service. They are a primary tool for distinguishing between the common and special causes of process
variation. The widespread use of control charting procedures has been greatly assisted by statistical
software packages such as QI Macros and Minitab.
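
Control charts are covered in detail in Chapter 22, but as a preview, the Python sketch below computes a center line and control limits for an individuals chart using the common moving-range formula (mean ± 2.66 × average moving range). The measurements are invented for illustration, and this is one simple chart type rather than the only approach.

```python
import numpy as np

# Hypothetical time-ordered individual measurements.
x = np.array([10.1, 9.8, 10.3, 10.0, 9.7, 10.2, 9.9, 10.4, 10.0, 9.8])

center = x.mean()
moving_range = np.abs(np.diff(x))   # absolute difference between consecutive points
mr_bar = moving_range.mean()

ucl = center + 2.66 * mr_bar        # 2.66 = 3 / d2, where d2 = 1.128 for subgroups of 2
lcl = center - 2.66 * mr_bar

print(f"CL = {center:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")
```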

20.1 Common and Special Causes of Variation


Common cause variations are due to the causes that are always present in a process. They are
inherent to the process, are stable over time, and account for most of the process variation. This type of
variation was originally identified by Dr. Shewhart as "chance variation."


Special cause variations result from assignable causes, i.e., special events that interrupt the normal functioning of the process. They often can be easily identified with control charts and account for a small part of the overall process variation. This type of variation was originally identified by Dr. Shewhart as "assignable causes." Special cause variation indicates that the process is out of control or unstable.

Generally, special causes are addressed before attacking common cause variation. Various tests can
help determine when a special cause event has occurred and will be covered in Chapter 22. Once
special causes of variation are identified and eliminated (or at least reduced), common cause variability
can be addressed through root cause analysis.

20.2 Data Collection for SPC


SPC results are only as good as the data that are gathered. The data must be as close to “the truth” as possible, where “the truth” is defined as a perfect description of a fact or a reality. It is not possible to be 100% certain when describing a reality, but measures can be taken, such as calibration of instruments, measurement system analysis, and technician training, to get as close to that reality as possible.

For example:

Twenty samples of cake mix were taken as a part of a weight study conducted by Jackie, a lab
technician. For this example, it is known that the actual weight of sample number 10 in Jackie's study
is 16.48 ounces.

So, “The Truth” is known to be 16.48 ounces before Jackie weighs it on her scale. However, Jackie's
scale has not been calibrated. Furthermore, her scale has not gone through a gage R&R study, which
would have revealed a significant technician to technician variation (reproducibility). In fact, Jackie
under-weighs this sample by .1 ounce, and the scale itself is inaccurate by .1 ounce. Therefore, the scale
displays a weight of 16.28 ounces.

Jackie then compounds the error by reading the sample as 16.23 ounces, mistaking the “8” for a “3”.
Jackie then inputs this data into the computer, but enters 16.13 instead of 16.23. So, through a series
of mishaps and errors, the actual weight of 16.48 strayed down to 16.13. Unfortunately, before the
analysis of the results even began, the above errors assured that the results would be erroneous.

20.3 Rational Subgrouping


The key to successful control charts is the formation of rational subgroups.

Within a subgroup, variation should be representative of a common cause only. The items in a
subgroup should be collected closely together and can be consecutive items. Items within a subgroup
should not be collected across a shift change or across any other process change that could be
associated with a special cause.


20.4 Central Limit Theorem


The central limit theorem (CLT) states that regardless of the population shape, the sampling
distribution of the mean approaches a normal distribution (bell curve) if the sample size is large
enough. Note that the mean in this case refers to the sample means, not to the individual observations
themselves. The approximation improves as the sample size gets larger. How large is large
enough? The closer the population distribution is to a normal distribution, the fewer samples are needed.
Populations that are heavily skewed or have several modes may require larger sample sizes. This means
that statistical techniques that assume normality can be applied even when the underlying population
is strongly non-normal, provided enough samples are taken.

For example:

If a die is rolled ten times, the average of those rolls may be 2.5; if it is rolled another ten times, the
average may be 4.2. These two sets of ten provide two sample averages (and an average of those
averages). After rolling 100 sets of ten, the 100 resulting averages can be plotted, and that plot will be
closer to a normal distribution than a plot of only two averages. If 1,000 sets of ten are rolled, the plot
will be even closer to a normal distribution.
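
A minimal Python sketch of this dice simulation follows; the helper name mean_of_rolls, the fixed random seed, and the roll counts are illustrative assumptions rather than part of the example above.

import random
import statistics

random.seed(1)   # fixed seed so the illustration is repeatable

def mean_of_rolls(n_rolls):
    """Average of n_rolls rolls of a fair six-sided die."""
    return statistics.mean(random.randint(1, 6) for _ in range(n_rolls))

# 1,000 sets of ten rolls: the distribution of these 1,000 averages
# is close to a bell curve centered near 3.5.
averages = [mean_of_rolls(10) for _ in range(1000)]
print(round(statistics.mean(averages), 2))    # close to 3.5
print(round(statistics.stdev(averages), 2))   # close to 1.71 / sqrt(10), about 0.54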


Chapter 21: Probability Distributions



Key Terms
binomial distribution
continuous variable
discrete variable
exponential distribution
hypergeometric distribution
normal distribution
Poisson distribution
probability distribution
Weibull distribution

Body of Knowledge
1. Define and describe various distributions as they apply to statistical process control (SPC) and
probability.

2. Identify and choose the correct probability distribution that best fits the data.

Probability, which is the likelihood of an event occurring or not occurring, is the result of a natural
function described mathematically and follows defined patterns of distribution. A probability
distribution maps the probabilities for a given sample space. It is a description of the population. It
is not to be confused with a frequency distribution, which is limited to the samples that are actually
tested or examined. Therefore, a probability distribution is for a population, while a frequency
distribution is for a sample set of that population.

Probability distributions can take the form of a table, an equation, or a graph. There are many types of
probability distributions. This chapter discusses discrete probability distributions (binomial, Poisson,
and hypergeometric) and continuous probability distributions (normal, exponential, and Weibull).

21.1 Probability Distributions: Discrete vs. Continuous


All probability distributions can be classified as discrete probability distributions or as continuous
probability distributions, depending on whether they define the probabilities associated with discrete
variables or continuous variables.

If a random variable is a discrete variable, its probability distribution is called a discrete probability
distribution. With a discrete probability distribution, each possible value of the discrete random
variable can be associated with a non-zero probability and can be presented in tabular form.

If a random variable is a continuous variable, its probability distribution is called a continuous
probability distribution. With a continuous probability distribution, the probability of any single exact
value is zero, so an equation or formula (rather than a table) is used to describe the distribution.


21.2 Discrete Probability Distributions


21.2.1 Binomial Distribution
The Binomial Distribution is the result of an experiment with discrete variables. It is characterized
by having a fixed number of independent trials and only two outcomes with the probability of each
outcome constant from trial to trial. Its uses include acceptance sampling and inferential statistics,
such as hypothesis testing for discrete data sample sets.

For example:

The binomial distribution can be used to find how likely it is that an event occurs a given number of
times in a specific number of trials, e.g., how likely it is to observe two or more defective items in a
random sample of 25 items that are selected from a process that has a 2% defect rate. Inputting the data into
Minitab software yields the graph in Figure 21.1.

This distribution plot displays the probability for each number of defects in a sample of 25. The
probability of zero defects is about 0.6, one defect is 0.3, two defects are under 0.1, and two or more
defects are 0.08865 (the area shaded in red).

(Distribution plot of the binomial distribution with n = 25 and p = 0.02: probability vs. number of defects in a sample of 25, with the region of two or more defects, probability 0.08865, shaded in red.)

Figure 21.1 Distribution Plot

Other examples of binomial experiments include tossing a coin 50 times to see how many heads
appear, asking 500 people if they will vote for a certain candidate, or comparing the defect rates of two
different processes.
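
The probabilities in this example can also be checked by hand with the binomial formula. The following is a minimal Python sketch; the helper name binom_pmf is an illustrative assumption, and the values reproduce those read from Figure 21.1.

from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for a binomial variable with n trials and success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 25, 0.02
p0 = binom_pmf(0, n, p)            # about 0.603
p1 = binom_pmf(1, n, p)            # about 0.308
p_two_or_more = 1 - p0 - p1        # 0.08865, the shaded area in Figure 21.1
print(round(p0, 3), round(p1, 3), round(p_two_or_more, 5))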

21.2.2 Poisson Distribution


The Poisson distribution is similar to the binomial distribution since they both model counts of
events. However, the Poisson distribution places no upper bound on this count. For example, when
counting the number of scratches on a windshield, there is no upper limit on the number of scratches
on a particular windshield. However, the binomial distribution does set an upper limit on the count.


The number of events you observe cannot be greater than the number of trials you perform.

Poisson probabilities are useful when there are a large number of independent trials, each with a small
probability of success on a single trial, and the events occur over a period of time. It can also be
used when a given density of items is distributed over a given area or volume.



Again, the Poisson distribution involves discrete data, and the length of the observation period or area
(in the case of the windshield) is fixed. Occurrences are independent. Poisson uses include reliability
testing and determining the number of defects in a defined area (e.g., dents on a car).
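
As an illustration of the Poisson calculation, the minimal Python sketch below assumes a hypothetical average of two scratches per windshield; the rate and the helper name poisson_pmf are assumptions, not values from the text.

from math import exp, factorial

def poisson_pmf(k, lam):
    """P(X = k) for a Poisson variable with mean lam."""
    return exp(-lam) * lam**k / factorial(k)

lam = 2.0   # assumed average of two scratches per windshield
p_zero = poisson_pmf(0, lam)                                        # about 0.135
p_three_or_more = 1 - sum(poisson_pmf(k, lam) for k in range(3))    # about 0.323
print(round(p_zero, 3), round(p_three_or_more, 3))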

21.2.3 Hypergeometric Distribution


The hypergeometric distribution is a discrete distribution that models the number of events in a
fixed sample size when the total number of items in the population is known. Each item in the sample
has two possible outcomes (an event or a nonevent). The samples are without replacement, so every
item in the sample is different. When an item is chosen from the population, it cannot be chosen
again. Therefore, a particular item’s chances of being selected increase on each trial, assuming that it
has not yet been selected. The hypergeometric distribution shape is similar to the binomial/Poisson
distribution.

The hypergeometric distribution can be used for samples drawn from relatively small populations,
without replacement.
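
A minimal Python sketch of a hypergeometric calculation follows, assuming a hypothetical lot of 50 items containing 4 defectives and a sample of 10 drawn without replacement; these numbers and the helper name hypergeom_pmf are illustrative.

from math import comb

def hypergeom_pmf(k, N, K, n):
    """P(X = k) events in a sample of n drawn without replacement from a
    population of N items that contains K event items."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# Assumed lot of 50 items with 4 defectives; 10 items sampled without replacement
p_no_defectives = hypergeom_pmf(0, N=50, K=4, n=10)   # about 0.397
print(round(p_no_defectives, 3))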

21.3 Continuous Probability Distributions


21.3.1 Normal Distribution
The normal distribution (Gaussian) is a continuous probability distribution. It exhibits a symmetrical
distribution about the mean (bell-shaped curve). There is a strong tendency for the data to take on
a central value; and positive and negative deviations from this central value are equally likely. The
frequency of the deviations falls off rapidly moving farther away from the central value.

The normal distribution has several features that make it popular. First, it can be fully characterized
by just two parameters – the mean and the standard deviation. Second, the probability of any value
occurring can be obtained simply by knowing how many standard deviations separate the value from
the mean.

•• About 68% of the area under the curve falls within one standard deviation of the mean.

•• About 95% of the area under the curve falls within two standard deviations of the mean.

•• About 99.7% of the area under the curve falls within three standard deviations of the
mean.

Collectively, these points are known as the empirical rule or the 68-95-99.7 rule (see Figure 21.2).

For example:

Assuming that IQ scores are normally distributed with a mean of 100 and a standard deviation of 10,
the probability that a randomly chosen person has an IQ of less than 90 can be determined. Using


Minitab software, a plot of this normal distribution is obtained (Figure 21.2). The probability that a
randomly chosen person has an IQ of less than 90 is 0.1587 or 15.87%.

In any normal distribution, 68.26% of the observations will fall within 1σ of µ, 95.44% will fall within 2σ of µ, and 99.73% will fall within 3σ of µ.
Figure 21.2 68-95-99.7 Empirical Rule
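
The IQ example can also be verified without specialized software. A minimal Python sketch using the error function follows; the helper name normal_cdf is an illustrative assumption.

from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    """P(X <= x) for a normal distribution with mean mu and standard deviation sigma."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

# IQ scores assumed Normal(mean = 100, standard deviation = 10)
print(round(normal_cdf(90, 100, 10), 4))   # 0.1587, matching the example above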

21.3.2 Exponential Distribution


The exponential distribution is often used to model the time elapsed between events. There are fewer
large values and more small values in this distribution.

For example:

The amount of money customers spend in one trip to the supermarket follows an exponential
distribution. There are more people that spend less money and fewer people that spend large amounts
of money.

The exponential distribution is widely used in the field of reliability. Reliability deals with the amount
of time a product lasts. Other uses include probability of arrival times, distance or space between
occurrences of events of interest, and waiting times.
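
A minimal Python sketch of an exponential tail-probability calculation follows, assuming a hypothetical mean spend of $40 per supermarket trip; the mean value and the helper name expon_tail are illustrative.

from math import exp

def expon_tail(t, mean):
    """P(T > t) for an exponential distribution with the given mean."""
    return exp(-t / mean)

mean_spend = 40.0   # assumed average spend per supermarket trip
print(round(expon_tail(100, mean_spend), 3))       # about 0.082 spend more than $100
print(round(1 - expon_tail(20, mean_spend), 3))    # about 0.393 spend less than $20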

21.3.3 Weibull Distribution


The Weibull distribution is a versatile distribution that can be used to model a wide range of
applications in engineering, medical research, quality control, and finance.


For example:

The Weibull distribution is frequently used with reliability analyses to model time-to-failure data,
such as the probability that a part fails after a defined time period. The Weibull distribution is also
used to model skewed process data in capability analysis. Depending on the values of its parameters,



the Weibull distribution can take various forms. It can be used to describe many types of data and fits
many common distributions (normal, exponential, and lognormal). The Weibull distribution is an
alternative to the normal distribution in the case of skewed data.

21.4 Choosing the Right Probability Distribution


Choosing the right statistical analysis requires knowing the distribution of the data. Therefore, choosing
the right probability distribution is very important. The first step in this process is deciding if the data
are discrete or continuous.

Practical knowledge or direct experience with a product's performance history is a good place to start.
Do the data follow a symmetric distribution? Are they skewed left or right? Is the failure rate rising,
falling, or staying constant? What distribution has worked for this analysis in the past?

Suppose an organization wants to assess the capability of a process assuming that the data follow a
normal distribution, but the data later turn out not to be normal after all. Many practitioners would
blindly go forward, oblivious to the fact that they are using the wrong test for their distribution.
However, by assessing the data and finding that they are indeed not normal, an informed decision can
be made, and the process can be examined for special causes that prevented the data from being normal.
After addressing the special causes, more data can be collected to verify whether the distribution is now
normal; alternatively, a normal distribution may be assumed anyway if its effect on the test is thought to
be minimal. Computer software may also be used to select the correct distribution.


Chapter 22: Control Charts



Key Terms
c charts
common cause variation
IMR charts
lower control limit
np charts
p charts
special cause variation
u charts
upper control limit
XBarR charts

Body of Knowledge
1. Describe the purpose of control charts.

2. Build a control chart and analyze the chart results.

3. Recognize which chart should be applied in a given situation.

Control charts are one of the primary SPC tools for monitoring processes and are used in the
measure phase to help set the baseline metrics for your project and in the control phase as a
control to help sustain the project improvements.

22.1 Control Chart Overview


Control charts are graphs that are used to study how a process changes over time. The data first are
placed in time sequence; and the mean and standard deviation then are calculated from the data.
Control charts have a central line for the mean and upper and lower lines for control limits.

Control limits are calculated from the standard deviation. The Upper Control Limit (UCL) is
drawn three standard deviations above the mean, and the Lower Control Limit (LCL) is drawn
three standard deviations below the mean. By comparing the data to these lines, conclusions can be
drawn as to whether the process is in control or out of control. Control charts are applicable in both
manufacturing and service organizations. They are a primary tool to distinguish between common and
special causes of process variation. Most organizations use statistical software packages to build their
control charts.

Common cause variations are due to many causes that are always present in the process. They are
inherent to the process, stable over time, and account for most of the process variation.

Special cause variations are the assignable causes of special events. They often can be easily identified
with control charts. Special cause variations account for a minority of process variations, i.e., when the
normal process function is interrupted by unpredictable events. Special cause variation, which stems
from external sources, indicates that the process is out of control.


Generally, special causes are addressed before attacking common cause variation. Various tests can
help determine when a special cause event has occurred (see Section 22.6.2). Once the special causes
of variation are identified and eliminated (or at least reduced), common cause variability can be
addressed through root cause analysis.

There are many types of control charts. The charts discussed here are listed in Table 22.1.

Table 22.1 Control Charts


Chart Description
I chart Plots individual variable data over time when no natural subgroups are present.

MR chart Plots the moving range over time to monitor process variation for individual
observations.

These two charts are usually used together.


X-bar chart Plots variable data subgroup means when natural subgroups are present.

R chart Plots the subgroup ranges.

These two charts are usually used together.


P chart Plots the proportion (percentage) of defectives in each subgroup, using attribute data.

NP chart Plots the number of defectives in each subgroup, using attribute data.

U chart Plots the number of defects per unit sampled in each subgroup.
Used when the subgroup size varies.
C chart Plots the number of defects in each subgroup.
Used when the subgroup size is constant.

22.2 Basic Control Charts Procedure


1. Choose the appropriate control chart for the target data after confirming the type of the data
being studied. For example, variable data would not be plotted on a P chart.

2. Design a data collection plan. Instruct the process operators to take good notes about anything
that occurs out of the ordinary and document any actions taken during the sample collection
period.

3. Follow the procedure for the chosen particular control chart to collect the data and construct
the chart. Use the computer software instructions or the appropriate worksheet if working
manually.

4. Analyze the data.

5. Continue to plot data as necessary. Data may be plotted continuously when using control
charts as an on-going means of process control.

6. A minimum number of data points are needed for the study. Different data sources vary on


this number. Larger sample sizes are better.

22.3 Control Charts for Variable Data


22.3.1 IMR (Individual and Moving Range) Chart



IMR charts are used in conjunction with each other. The I chart plots an individual variable sample
where no natural subgroup exists. Since subgroups are not used, the moving range (MR) chart is
calculated using the moving range between consecutive individual samples.

The I chart displays the individual values of each measurement of the process. The MR chart displays
the variation for each measurement from the previous measurement, i.e., the variation of the process.

Upper Control Limit (UCL) and Lower Control Limit (LCL)


For the I chart, the UCL and LCL are calculated based on the overall average plus or minus three
standard deviations. The standard deviation is calculated by dividing the average of the ranges by a
factor called d2, where d2 = 1.128.

For the MR chart, the UCL is calculated by multiplying the average of the ranges by a factor called D4,
where D4 = 3.267. The LCL = 0.
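
A minimal Python sketch of these limit calculations follows, using the d2 = 1.128 and D4 = 3.267 constants given above; the data values and the helper name imr_limits are illustrative assumptions.

import statistics

def imr_limits(data):
    """Compute I chart and MR chart control limits from individual observations,
    using the d2 = 1.128 and D4 = 3.267 constants given in the text."""
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    x_bar = statistics.mean(data)
    mr_bar = statistics.mean(moving_ranges)
    sigma = mr_bar / 1.128                                     # estimated standard deviation
    i_chart = (x_bar - 3 * sigma, x_bar, x_bar + 3 * sigma)    # LCL, center line, UCL
    mr_chart = (0.0, mr_bar, 3.267 * mr_bar)                   # LCL, center line, UCL
    return i_chart, mr_chart

# Hypothetical individual measurements
weights = [51, 49, 52, 50, 48, 53, 51, 50, 49, 52,
           50, 51, 48, 52, 50, 49, 53, 51, 50, 52]
print(imr_limits(weights))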

Creating the IMR Chart from Minitab


Open Minitab and input the data in a column. Choose IMR from the drop down menu and input the
column of data into the box marked “variables.” Hit "OK" and the IMR chart will appear. See Figure 22.1.

(I chart: x-bar = 51.35, UCL = 59.33, LCL = 43.37; MR chart: MR-bar = 3, UCL = 9.80, LCL = 0, plotted by observation.)

Figure 22.1 IMR Chart

If there are any points outside the control limits on either chart, then the process is not under statistical
control. However, even if all the points are within the control limits, the process could still be out of
control. See Section 22.6 for more details on commonly used rules to detect out of control conditions.


22.3.2 X-barR (Subgroup Average and Range) Chart


X-barR charts, which are used when natural subgroups are present in the process, are used in
conjunction with each other. The X-bar chart plots the average of the subgroup measurements,
usually two to five measurements per subgroup. The range (R) chart plots the ranges of the individual
subgroups; the X-barS chart is used instead when larger subgroups are employed.

The data in Table 22.2 represent five subgroups, where each subgroup consists of two samples. The
weight of each product in grams is recorded. For example, there are two samples in subgroup
one (the first sample weighs 3 grams and the second sample weighs 5 grams). The average of these
two samples is plotted as the first subgroup on the X-barR chart as 4 grams. The range or difference
between the two samples of subgroup one is 2 grams, which is plotted on the R chart.
Table 22.2 X-barR Chart Example Data
Subgroup Sample 1 Sample 2 x-bar R
1 3 5 4 2
2 5 7 6 2
3 4 3 3.5 1
4 7 7 7 0
5 6 4 5 2

Creating a X-barR Chart from Minitab


Suppose a production line produces 28 gram packages of spice mix, and a weight study is performed
on this line, with two consecutive samples (one subgroup) pulled every 30 minutes for eight hours. The
data are gathered and input into Minitab. Then, “X-barR” was chosen from the drop down menu and
the column of data was input into the box marked “variables.” After hitting "OK" the charts appear
(see Figure 22.2).
(X-bar chart: grand average = 28.378, UCL = 28.954, LCL = 27.802, with two points marked out of control; R chart: R-bar = 0.306, UCL = 1.001, LCL = 0, plotted by sample.)

Figure 22.2 X-barR Chart of Spice Weight


The average of the subgroups (average of the averages) is 28.378 grams. The UCL is 28.954 and the LCL
is 27.802. There are two points above the UCL (points 4 and 10).

The average of the ranges is 0.306. The UCL is 1.001 and the LCL is 0. There are no points above the UCL.



Since there are two points above the UCL on the X-bar chart, the process is deemed out of control.

Creating a X-barR Chart from a Worksheet


A series of 40 samples of sandbags are collected in subgroups of two (20 subgroups), arranged
in time sequence for product weight.

1. Calculate the average of the two samples and range (difference between the highest and lowest
values) for each subgroup. Record on the chart. Note that for this example, only the first four
subgroups are calculated. (see Table 22.3). All 20 subgroups must be calculated to complete the
chart.

Table 22.3 Data Used to Create X-barR Chart Graph


Subgroup   Sample 1   Sample 2   Average subgroup weight   Subgroup range
1          50         51         50.5                      1
2          49         49         49                        0
3          51         49         50                        2
4          48         52         50                        4
                                 Sum = 199.5               Sum = 7
                                 X-double-bar = 49.875     R-bar = 1.75

2. For the calculations process, the number of samples in each subgroup (n) must be determined.
For this study n = 2. The number of subgroups in the study (k) are also needed, which for this
study, k = 20.

3. In a control chart constant table (see Table 22.5), the control limit factors A2, D3, and D4 are
needed for our formulas, which are as follows when n = 2:

Factor A2 = 1.88, Factor D3 = 0, and Factor D4 = 3.267

4. For the X-bar chart, the averages are totaled, from which the average of the averages is
calculated.

50.5 + 49 + 50 + 50 = 199.5 199.5/4 = 49.875 (X-double-bar)

See Figure 22.3.

5. For the R chart, the subgroup ranges are totaled, from which the average range is calculated.

1 + 0 + 2 + 4 = 7 7/4 = 1.75 (R-bar)

See Figure 22.3.


6. Draw the lines on the charts for the means. Now the upper and lower control limits
can be calculated.

7. Calculate the control limits for the X-bar chart.

The quantity (A2)(R-bar) corresponds to three standard deviations of the subgroup averages:
(A2)(R-bar) = (1.88)(1.75) = 3.29.
Now calculate the UCL. UCL = X-double-bar + (A2)(R-bar) = 49.875 + 3.29 = 53.165. Draw the line
on the chart.
Now calculate the LCL. LCL = X-double-bar – (A2)(R-bar) = 49.875 – 3.29 = 46.585. Draw the line
on the chart.

8. Calculate the control limits for R chart.


UCL = (D4)(R-bar) where D4 =3.267 and R-bar =1.75. UCL = 5.72.
LCL = (D3)(R-bar), where D3 = 0 and R-bar =1.75. LCL = 0.
Draw the LCL on the chart.
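
A minimal Python sketch of these worksheet calculations for the four subgroups shown in Table 22.3 follows; the variable names are illustrative, and a full study would use all 20 subgroups.

import statistics

# The four subgroups from Table 22.3 (sample 1, sample 2); the full study
# would use all 20 subgroups.
subgroups = [(50, 51), (49, 49), (51, 49), (48, 52)]

x_bars = [statistics.mean(s) for s in subgroups]   # 50.5, 49, 50, 50
ranges = [max(s) - min(s) for s in subgroups]      # 1, 0, 2, 4

x_double_bar = statistics.mean(x_bars)             # 49.875
r_bar = statistics.mean(ranges)                    # 1.75

A2, D3, D4 = 1.88, 0, 3.267                        # control chart constants for n = 2

ucl_x = x_double_bar + A2 * r_bar                  # 49.875 + 3.29 = 53.165
lcl_x = x_double_bar - A2 * r_bar                  # 49.875 - 3.29 = 46.585
ucl_r = D4 * r_bar                                 # 3.267 * 1.75 = 5.717
lcl_r = D3 * r_bar                                 # 0

print(f"X-bar chart: UCL={ucl_x:.3f}, LCL={lcl_x:.3f}")
print(f"R chart: UCL={ucl_r:.3f}, LCL={lcl_r:.3f}")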

22.4 Control Charts for Attribute Data


The following is taken from the U.S. Department of Energy’s How to Measure Performance: A
Handbook of Techniques and Tools and is reprinted with the permission of the Performance-Based
Management Special Interest Group.

Attribute data are qualitative data that can be counted. Some examples include a count of scratches
per item or a count of acceptability for a go/no-go gage. Attribute data are usually represented as
nonconforming units and are analyzed by using P, NP, C, or U charts.

22.4.1 P Chart for Proportion Defective


Proportion (P) charts are used to show the fraction nonconforming of a nonstandard sample size
over a constant area of opportunity, e.g., each period of interest. The steps to follow for constructing a
P chart are the same as for a C chart, except that the control limits are computed for each time period
because the sample size varies.

22.4.2 NP Chart for Count of Defectives


Like P charts, NP charts are used to analyze nonconforming items over a constant area of opportunity;
however, the NP chart focuses on the number of nonconforming items when the sample size is
constant. The steps to follow for constructing an NP chart are the same as for a P chart.

22.4.3 U Chart
U charts (sometimes referred to as “rate” charts) deal with event counts when the area of opportunity
is not constant during each period. The steps to follow for constructing a U chart are the same as for a
C chart, except that the control limits are computed for each individual period because the number of
standard units varies.


22.4.4 C Chart
The Count (C) chart is the principal type of process behavior chart used to analyze attributes data. C
charts are used in dealing with the counts of a given event over consecutive periods of time.



22.5 Selecting the Correct Control Chart
Control charts not only can graphically depict variation but also can distinguish between common and
special cause variations, which allows a project team to eliminate special causes and reduce common
cause variation.

The most basic type of control chart, the individuals chart, is often used for all types of data. Yet, more
specialized types of control charts can provide more valuable information about process performance,
data variation, and process changes. Information on the different types of control charts and when to
use them is presented in the following sections.

The process map shown in Figure 22.3 can be used to select the correct control chart.
Variables data: if the subgroup size is 1, use the I-MR chart; if the subgroup size is greater than 1 and 8 or less, use the Xbar-R chart; if the subgroup size is greater than 8, use the Xbar-S chart. Attribute data: if defective units are counted and the subgroups are the same size, use the NP chart or P chart; if the subgroups are different sizes, use the P chart. If defects per unit are counted and the subgroups are the same size, use the C chart or U chart; if the subgroups are different sizes, use the U chart.
Figure 22.3 Choosing the Correct Process Behavior Chart

22.6 Control Chart Analysis


22.6.1 Basic Guidelines
Look at the average and spread of the data distribution first. Note that specification limits are not
included on control charts. Capability studies must be performed (see Chapter 23) to statistically
compare specifications to process performance. The only lines on the charts are the mean, the upper
control limit (UCL), and the lower control limit (LCL). Make sure there are no missing data and
review the operator notes looking for special events that may have occurred during the study. Again,
do not confuse the UCL and the LCL with the upper specification limit (USL) and lower specification
limit (LSL) as they are entirely different. Then, out of control conditions or special causes must be
found.

22.6.2 Commonly Used Rules to Detect Out of Control Conditions (Special Causes)
Many rules, or tests, are used to detect special causes which can contribute to a process becoming


out of control. The basic rule is that one point outside the control limits signals the presence of a
special cause. From there, organizations such as Western Electric, AIAG, and Minitab have provided
additional rules. The non-random trends in the data are the focus of detection; and it is up to the
organization as to which rules they follow.

For example:

The software package Minitab uses these eight rules (tests). The dashed horizontal lines in the
following illustrations represent the distances of 1σ and 2σ from the center line. The green line is the
mean and the red lines are the control limits.
Test 1: One point more than 3σ from center line

Figure 22.4 One Point More Than 3σ from Center Line

Test 1 evaluates the pattern of variation for stability. Test 1 provides the strongest evidence of the lack
of control. If small shifts in the process are of interest, Tests 2, 5, and 6 can be used to supplement Test
1 to create a control chart with greater sensitivity.

Test 2: Nine points in a row on the same side of the center line.

Figure 22.5 Nine Points in a Row on the Same Side of the Center Line

Test 2 evaluates the pattern of variation for stability. If small shifts in the process are of concern, Test 2
can be used to supplement Test 1.


Test 3: Six points in a row, all increasing or all decreasing.



Figure 22.6 Six Points in a Row, All Increasing or All Decreasing

Test 3 detects a trend or continuous movement up or down. This test looks for long series of
consecutive points without a change in direction.

Test 4: Fourteen points in a row, alternating up and down.

Figure 22.7 Fourteen Points in a Row, Alternating Up and Down

Test 4 detects the presence of a systematic variable. The pattern of variation should be random;
therefore, when a point fails Test 4, it means that the pattern of variation is predictable.

Test 5: Two out of three points more than 2σ from the center line (same side).

Figure 22.8 Two Out of Three Points More Than 2σ from the Center Line (Same Side)

Test 5 evaluates the pattern of variation for small shifts in the process.


Test 6: Four out of five points more than 1σ from center line (same side).


Figure 22.9 Four Out of Five Points More Than 1σ from Center Line (Same Side)



Test 6 evaluates the pattern of variation for small shifts in the process.

Test 7: Fifteen points in a row within 1σ of center line (either side).

Figure 22.10 Fifteen Points in a Row within 1σ of Center Line (Either Side)

Test 7 identifies a pattern of variation that is sometimes mistaken as a display of good control. This
type of variation is called stratification and is characterized by points that follow the center line too
closely.

Test 8: Eight points in a row more than 1σ from center line (either side).

Figure 22.11 Eight Points in a Row More Than 1σ from Center Line (Either Side)

Test 8 detects a mixture pattern. A mixture pattern occurs when the points tend to avoid the center
line and instead fall near the control limits.
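
A minimal Python sketch of how Tests 1 and 2 might be checked programmatically follows; the function names, the default run length, and the sample data are illustrative assumptions rather than Minitab's implementation.

def test_1(points, mean, sigma):
    """Test 1: indexes of points more than 3 sigma from the center line."""
    return [i for i, x in enumerate(points) if abs(x - mean) > 3 * sigma]

def test_2(points, mean, run_length=9):
    """Test 2: index where run_length points in a row fall on the same side
    of the center line, or None if the pattern never occurs."""
    run, prev_side = 0, 0
    for i, x in enumerate(points):
        side = (x > mean) - (x < mean)     # +1 above, -1 below, 0 on the line
        run = run + 1 if side != 0 and side == prev_side else int(side != 0)
        prev_side = side
        if run >= run_length:
            return i
    return None

# Hypothetical I chart data, with the mean and sigma estimated as in Section 22.3.1
data = [50.2, 49.8, 50.1, 50.4, 50.3, 50.2, 50.7, 50.1, 50.2, 50.3, 50.4]
print(test_1(data, mean=50.0, sigma=0.2))   # [6]: one point beyond 3 sigma
print(test_2(data, mean=50.0))              # 10: nine points in a row above the mean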


22.7 Examples of Control Chart Applications


22.7.1 Example One
A study of one pound (454 grams) jelly jar weights needs to be conducted. One sample is pulled from
the production line every 30 minutes until there are 30 samples. An IMR chart was chosen because



the data are variable, and there are individual samples with no subgroups. The data are then run in
Minitab, the results of which are shown in Figure 22.12.

(I chart: x-bar = 448.48, UCL = 461.39, LCL = 435.58; MR chart: MR-bar = 4.85, UCL = 15.85, LCL = 0, plotted by observation.)

Figure 22.12 IMR Chart of Jar Weights

Analysis: The average sample weight is 448.48 grams, which is below the declared weight of 454 grams.
The average moving range between consecutive samples is 4.85. Two points are above the UCL on the MR chart, and
one point is below the LCL on the I chart. They are special causes and are a sign that the process is out
of control. The special causes should be investigated and corrected, and the average weight should be
moved up to conform to the declared weight, after which new data should be gathered and analyzed.

22.7.2 Example Two


A study is being conducted of a defective rate for production line 10, for which the preferred rate is
below five parts per lot. The products are produced in lots of 100, and each sample records the number
of defective parts from a lot. The data are attribute data and the lot size is constant
(100). An NP chart is chosen, the number of defectives per lot is the input, and the data are processed in
Minitab. The results are shown in Figure 22.13.


(NP chart of the defective counts by lot: NP-bar = 7.37, UCL = 15.21, LCL = 0, with two samples above the UCL.)

Figure 22.13 NP Chart of Defective Rate

Analysis: The average is 7.37 defective parts per lot of 100 items (7.37%), which is above the goal of five
defective parts per lot. The process is out of control as evidenced by samples 9 and 10, which are more than
three standard deviations above the mean. These points should be investigated and measures taken to
correct them, after which a new study should be implemented.

22.8 Control Chart Formulas


The formulas found in Table 22.4 are used to compute the centerline, upper control limit, and lower
control limit for the specified control charts.


Table 22.4 Control Chart Formulas

Chart Type   Centerline                Upper Control Limit             Lower Control Limit
X            X-bar = ΣX / k            X-bar + A2(R-bar)               X-bar - A2(R-bar)
R            R-bar = ΣR / k            D4(R-bar)                       D3(R-bar)
IX           IX-bar = ΣIX / N          IX-bar + E2(MR-bar)             IX-bar - E2(MR-bar)
MR           MR-bar = ΣMR / (k - 1)    D4(MR-bar)                      D3(MR-bar)
P            P = (ΣNP / N) x 100%      P + 3 sqrt(P(100% - P) / N)     P - 3 sqrt(P(100% - P) / N)
NP           NP = ΣNP / k              NP + 3 sqrt(NP(1 - NP / N))     NP - 3 sqrt(NP(1 - NP / N))
C            C = ΣC / k                C + 3 sqrt(C)                   C - 3 sqrt(C)
U            U = ΣU / k                U + 3 sqrt(U / N)               U - 3 sqrt(U / N)

Symbol Definitions
Σ = sum
C = number of defects
k = number of subgroups
NP = number defective
P = percent defective
R = range
U = number of defects per unit
x = observation
N = number of observations in the subgroup
N = total number of observations

Table 22.5 Control Chart Constant Table


n A2 D3 D4 d2
2 1.88 0 3.27 1.13
3 1.02 0 2.57 1.69
4 0.73 0 2.28 2.06
5 0.58 0 2.11 2.33
6 0.48 0 2.00 2.53


Chapter 23: Process Capability and Performance



Key Terms
Cp
Cpk
defects per million opportunities (DPMO)
defects per unit (DPU)
parts per million (PPM)
Pp
Ppk
process capability
rolled throughput yield (RTY)
throughput yield (TPY)
voice of the customer
voice of the process

Body of Knowledge
1. Define, describe, and conduct process capability studies.

2. Calculate process performance and process capability indices.

3. Define and distinguish between control limits and specification limits.

4. Differentiate between process performance indices (Pp and Ppk) and process capability
indices (Cp and Cpk).

One method for reporting process performance is through the statistical measurements of the
process capability indices (Cp and Cpk) and process performance indices (Pp and Ppk).

The purpose of process capability studies is to compare the actual variation in a process (voice of the
process) to the specification tolerance (voice of the customer). The voice of the process is reflected
in the control limits studied in Chapter 22. The voice of the customer is reflected in the specifications.
The specification tolerance is divided by the process variation (six standard deviations) as a way to measure process capability.
The proportion of values that fall inside specification limits indicates whether or not the process is
capable of meeting the customer requirements.

Process capability studies can be conducted on any manufacturing or service process that has
established specifications (either internal or external). Specifically, they can be performed on new
equipment, a new process or product start-up, an existing process to establish a baseline, or a periodic
monitoring tool. Since capability indices are not associated with a given unit, e.g., inches, minutes, etc.,
these statistics may be used to compare the capability of one process to another.

High capability numbers are good, and lower standard deviations result in higher capabilities. A
capability analysis can answer questions such as the following:


•• Is the variability of a process low enough to consistently provide parts that fall within the
specification limits?

•• Is the proportion of defective parts consistently less than 5% during a month?



•• Is a temperature curing process capable across multiple batches of the product?

•• Does a process need to be shifted to operate within the specification limits?

QI Macros and Minitab can compute both indices for a data set in order to compare all of the metrics
at the same time. If they are close to the same, it does not matter which one is used. If they are
different, then use these differences to decide where to look for improvements in the process. Also,
make sure that when a customer or supplier is using these metrics, everyone's calculations and
terminology are the same.

Before starting a process capability study, the following must first be accomplished:

1. Calibrate the measuring system/device.

2. Perform MSA.

3. Ensure the data are normal and that the process is in control.

4. Confirm the customer requirements.

5. Most sources suggest taking a minimum of 25 subgroups (minimum of 100 samples) in time
series for the study.

23.1 Process Capability Indices


Process Capability Indices Cp and Cpk are used to describe a process that is stable over time and
in a state of statistical control. A process is capable when the output always conforms to the process
specifications. These indices only account for the variation within the subgroups. They do not account
for the shift and drift between subgroups. To measure the process variation, they use an estimate of the
standard deviation: R-bar/d2 (See Chapter 22).

23.1.1 Cp
The Cp shows an overall comparison of process variation compared to the specification tolerance. It
does not compare performance to the mean; that is, it only measures variability and will not tell you if
the process is centered between the upper and lower specifications. Even if the distribution is as shown
in Example 1 (Figure 23.1), the Cp produces values as if the distribution is as shown in Example 2
(Figure 23.2).



Figure 23.1 Cp Example 1 Distribution Figure 23.2 Cp Example 2 Distribution

Cp calculations:
The formula is

Cp = (USL – LSL) / 6 s

Where USL = upper spec limit, LSL = lower specification limit, and S = standard deviation (R-bar/d2).

The Cp is calculated using the within-subgroup variation of the data set only.

For example, where

USL = 12,

LSL = 8,

S=1

Cp = (12-8)/6*1 = .67

23.1.2 Cpk
The Cpk measures both the variation and how close the process average is to the specification limits. If
the process is perfectly centered between the USL and the LSL, then Cp will equal Cpk.

Cpk calculations:
Cpk = the smallest value that can be calculated from Cpu or Cpl.

Cpu
Cpu measures the process capability relative to the upper specification limit. The formula is as follows:

Cpu = (USL – X-bar) / 3s

Cpl measures the process capability relative to the lower specification limit. The formula for this
measure is as follows:


Cpl = (X-bar – LSL) / 3s

USL = upper specification limit,

LSL = lower specification limit,



X-bar = the sample mean,

s = standard deviation.

Using the same data used to calculate the Cp above (only this time the mean must be factored in),
where the X-bar = 11, USL = 12, LSL = 8, and standard deviation = 1

Cpu = (12 – 11) / (3 x 1) = .33

Cpl = (11 – 8) / (3 x 1) = 1

Cpk = .33

Therefore, using the same data, the Cp is .67 but the Cpk is only .33.
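
A minimal Python sketch of the Cp and Cpk calculations above follows; the helper names cp and cpk are illustrative.

def cp(usl, lsl, sigma_within):
    """Cp: specification tolerance compared to six within-subgroup standard deviations."""
    return (usl - lsl) / (6 * sigma_within)

def cpk(usl, lsl, mean, sigma_within):
    """Cpk: the smaller of Cpu and Cpl."""
    cpu = (usl - mean) / (3 * sigma_within)
    cpl = (mean - lsl) / (3 * sigma_within)
    return min(cpu, cpl)

# Values from the example above: USL = 12, LSL = 8, mean = 11, s = 1
print(round(cp(12, 8, 1), 2))        # 0.67
print(round(cpk(12, 8, 11, 1), 2))   # 0.33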

23.1.3 Difference between Cp and Cpk


Cp calculations represent an overall comparison of the process output vs. the desired limits. They
are based on the full range of variation compared to the specifications and do not compare the
performance to the mean.

Cpk calculations represent comparisons of variation against the upper and lower specification limits
separately. These calculations include the mean.

For example:

In a darts game, when the darts thrown are clustered in the same spot and form a good grouping,
the results will be a high Cp regardless of whether or not the darts are close to the bullseye. However,
when this tight group of shots lands on the bullseye, the result is a high Cpk.

23.2 Process Performance Indices


Process Performance Indices (Pp and Ppk) are defined as a statistical measure of a process that may
not yet be in a state of statistical control. They factor in all of the variation in the process. Hence, they
take into account the shift and drift between subgroups. These metrics differ from process capability
metrics (Cp and Cpk) in how the standard deviation is calculated. Pp and Ppk metrics use the usual form
of the standard deviation (square root of the sum of the squares divided by n – 1). See Chapter 17.

23.2.1 Pp
The Pp metrics show an overall comparison of process variation compared to the specification
tolerances. They do not compare performance to the mean; that is, they will not indicate whether or
not the process is centered between the upper and lower specification but rather will only measure


variability.

Pp is calculated using the total variation of the data set, which is sometimes called long-term variation.

Pp = (USL – LSL)/ 6 s



For example, where the USL = 12, the LSL = 8, and the standard deviation = 2

Pp = (12 – 8) / (6 x 2) = 4/12 = .33.

Using the distribution in Figure 23.3, Pp produces values as if the distribution in Figure 23.4 was used.


Figure 23.3 Pp Example 1 Distribution Figure 23.4 Pp Example 2 Distribution

23.2.2 Ppk
Ppk measures both variation and how close the process average is to the specification limits. If the
process is perfectly centered between the USL and the LSL, then the Pp will equal the Ppk.

Ppk is calculated using the total variation of the data set, which is sometimes called long-term
variation.

Ppk = the smallest value that can be calculated from the Ppu or the Ppl.

Ppu

Ppu measures how close the process mean is relative to the upper specification limit.

Ppu = (USL – X-bar)/3 s

Ppl

Ppl measures how close the process mean is relative to the lower specification limit.

Ppl = (X-bar – LSL)/3 s


USL = upper specification limit, LSL = lower specification limit, X-bar = the sample mean, and s
=standard deviation.

Using the same data used to calculate the Pp above (only this time the mean must be factored in,
where the mean = 11, USL = 12, LSL = 8, and standard deviation = 2)

Ppu = (12 – 11) / (3 x 2) = .167, Ppl = (11 – 8) / (3 x 2) = .5, Ppk = .167

So, using the same data the Pp is .33, but the Ppk is only .167.
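
A minimal Python sketch of the Pp and Ppk calculations follows, using the overall sample standard deviation (n – 1 in the denominator) described above; the data values and the helper name pp_ppk are illustrative assumptions.

import statistics

def pp_ppk(data, usl, lsl):
    """Pp and Ppk using the overall sample standard deviation (n - 1 in the denominator)."""
    s = statistics.stdev(data)       # square root of the sum of squares divided by n - 1
    mean = statistics.mean(data)
    pp = (usl - lsl) / (6 * s)
    ppk = min((usl - mean) / (3 * s), (mean - lsl) / (3 * s))
    return pp, ppk

# Hypothetical individual measurements judged against USL = 12 and LSL = 8
data = [11.2, 10.8, 11.5, 10.9, 11.1, 11.4, 10.7, 11.3, 11.0, 11.1]
print(pp_ppk(data, usl=12, lsl=8))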

23.2.3 Difference between Pp and Ppk


Similar to Cp, Pp calculations represent an overall comparison of the process output vs. the desired
limits. They are based on the full range of variation compared to the specifications and do not compare
performance to the mean. The calculations are similar to the Cp, except the Pp calculates the standard
deviation differently.

Ppk calculations represent the comparisons of variation against the upper and lower specification
limits separately. These calculations include the mean. Again, the calculations are similar to the Cpk,
except the Ppk calculates the standard deviation differently.

The dart example given above for the Cp and Cpk holds true for the Pp and Ppk.

23.3 Process Capability for Variable Data Example:


Our main customer for our 50 pound gravel bags complains that we are “shorting” them on some of
their bags. They claim many bags only weigh 49 pounds. Our operations manager has increased the
weight of these bags from the target weight of 50 pounds to 52 pounds, in an attempt to satisfy the
customer. The plant manager says we are “giving away” gravel, and wants that practice to stop. So, the
quality manager calls for a process capability study of our 50 pound gravel bag filler line to see what is
really happening.

The upper specification limit is 51 pounds and the lower specification limit is 49 pounds. We sample 2
bags (subgroup of two) at 30 minute intervals over two shifts, weigh the bags, and enter the data into
Minitab. We produce control charts (Figure 23.5) to see if the process is in control, and the process
capability report (Figure 23.6) to judge capability to the specifications.


(X-bar chart: grand average = 50.378, UCL = 50.698, LCL = 50.059, with many points outside the limits; R chart: R-bar = 0.17, UCL = 0.5554, LCL = 0, with one point above the UCL.)

Figure 23.5 Control Chart for Gravel Weights

(Capability histogram against LSL = 49 and USL = 51. Process data: sample mean = 50.3783, N = 60, StDev(Overall) = 0.974087, StDev(Within) = 0.181786. Overall capability: Pp = 0.34, PPL = 0.47, PPU = 0.21, Ppk = 0.21. Potential (within) capability: Cp = 1.83, CPL = 2.53, CPU = 1.14, Cpk = 1.14. Observed performance: PPM < LSL = 50,000, PPM > USL = 250,000, PPM Total = 300,000; expected overall PPM Total = 340,204; expected within PPM Total = 313.)

Figure 23.6 Process Capability Report for Gravel Weights


Table 23.1 Means and Upper and Lower Control Limits for 50 pound Gravel Bags
                        X-bar chart     R chart
Upper control limit     50.698          0.5554
Mean                    50.378          0.17
Lower control limit     50.059          0

The means and upper and lower control limits are listed in Table 23.1 for both control charts.

Interpreting the Results

We see right away from the X-Bar chart that the process is out of control. It shows that our customer
will occasionally receive 49 pound bags of gravel, and that we are over packing an average of .378
pounds per bag.

However, the R chart is in control. This means there is more variation between the subgroups of our
samples than there is within the subgroups. There are only two points within the UCL and LCL in the
X-Bar chart, while all the points except one are within the UCL and the LCL of the R chart. Without
even looking at the process capability report, we expect it will not be good. It is certainly not to be
trusted, since we know the process needs to be in control before a valid process capability study may
be made.

Now, we inspect the process capability report, where we compare our process variation with our
customer specification limits. The histogram shows two curves that Minitab drew for us. The narrow
steep curve is calculated from the R chart standard deviation. It shows a process that is within
the upper and lower specification limits. However, the curve calculated from the overall standard
deviation is much wider and flatter. Clearly this curve goes outside the upper and lower specification
limits. The histogram shows that the process is not capable of meeting customer expectations. This is
verified when examining the process capability (Cp and Cpk) and process performance (Pp and Ppk)
indices.

The report also clearly demonstrates the differences between Cp, Cpk, Pp, and Ppk.

Cp and Cpk are calculated using the standard deviation derived from the R chart alone. Clearly,
we see from the R chart that the variation is low, resulting in a within-subgroup standard deviation (stddev) of
0.18. Hence, we see high numbers for the Cp and the Cpk. The Cp is 1.83, since it is just comparing
subgroup variation to the spec tolerance, and does not care if the process is centered or not. The Cpk
is 1.14, which is the lesser of the Cpl (2.53) and Cpu (1.14). The Cpl is much higher than the Cpu
because now we take into consideration the average of the sample, which is much closer to the USL
than the LSL.

Pp and Ppk use the overall (individual) variation; no subgrouping is involved. In effect, each individual
observation is treated as a subgroup of size one, so the variation between subgroups is included as well.
The histogram shows that the data points are spread widely and the process is out of control. Hence, the
overall variance of the data is much higher, resulting in a high overall standard deviation (stddev).

Again, the Ppk is lower than the Pp because the process is not centered.


There are two ways to make the process capability better. First, we should verify the accuracy of the
customer tolerances. Sometimes misunderstandings occur between customers and suppliers as to
the tolerances, and clarifications are in order. Customer specifications should be confirmed before
undertaking a process capability study. Then, the second way to increase process capability can be
undertaken, i.e., reducing process variation using LSS tools and techniques.



For this process, the indices range from 2.53 (Cpl) to 0.21 (Ppk and Ppu). Which numbers
should be used? The Ppk and Pp can be used to show that there is much variation among the samples
over time, which is where improvement efforts should be focused; and Cpk and Cp can be used to
show that there is little variation in the subgroup samples and would indicate that other locations in
the process should be searched for areas of improvement. The differences between the Ppk and Pp (or
Cpk and Cp) also can be used to show that the process mean is not centered between the upper and
lower specification limits.

Finally, the capability study process should be reviewed; and, in the case of this example, the data were
collected every 30 minutes over two shifts, and only two samples of each data point were collected.
This data collection procedure is not considered robust due to the small sample size, longer data
collection interval, and using two different shifts.

Before moving to any analysis or future process capability study, a normality test must be undertaken.
In fact, these data do not follow a normal distribution, which means only the Pp and Ppk analysis is valid
in this example. Furthermore, the most important step in the process is to stabilize the process before
running any capability study.

These conclusions are visually apparent from the histogram graph. However, by using the capability
indices, we now have a scientific statistical description of the capability of this process, and a solid
baseline from which to improve.

To improve the process, the following steps should be taken:

1. Bring the process in control by reducing the subgroup to subgroup variation.

2. Move the process mean closer to the target.

3. Review the product specifications with our customer.

23.4 Process Capability and Process Performance Summary


The Cp, Cpk, Pp, and Ppk indices are summarized in Table 23.2.
Table 23.2 Cp, Cpk, Pp, and Ppk Indices Summary

Compares process variation to the specification limits: Cp and Pp
Compares process variation to the specification limits, taking into consideration the process average: Cpk and Ppk
Calculates standard deviation factoring in all the variation: Pp and Ppk
Calculates standard deviation factoring in subgroup variation only: Cp and Cpk


23.4.1 Process Capability for Attribute Data Example


While studying defective rates for production line 10, the decision is made to perform a capability
study to measure the capability of the production line. The manager wants a zero defective
rate. The process owner feels that a 5% defective rate is a reasonable goal to achieve. The data for the
study are the defective rates from 50 consecutive days for this line and are attribute data. The data are

entered into Minitab and the results are shown in Figure 23.7.

The customary way to measure capability for attribute data is to measure the mean rate of
nonconformity or the average percent of defects. Minitab software also adds the sigma level,
cumulative defective, rate of defects, and a histogram. See Figure 23.7.
(Panels: P chart with P-bar = 0.0920, UCL = 0.2048, LCL = 0, and one point out of control; rate of defectives vs. sample size; cumulative %defective; histogram of %defective. Summary statistics at 95% confidence: %defective = 9.20, CI 8.22 to 10.25; target = 0.00; PPM defective = 91,965, CI 82,152 to 102,534. Tests are performed with unequal sample sizes.)

Figure 23.7 Process Capability Report for Attribute Data

Interpreting the Results


The p chart shows an average defective rate of 9.2%. This is far from the process owner's goal
of 5%. The p chart indicates that there is one point out of control. The chart of cumulative % defective
shows that the estimate of the overall defect rate appears to be settling at 9%, but more data may need
to be collected to verify this rate. The rate of defects does not appear to have been affected by the sample
size.
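
As a minimal sketch (illustrative only), the Python code below computes the average %defective, the PPM defective, and p-chart control limits from daily inspection counts using the standard p-chart formulas; the daily counts are hypothetical, not the production line 10 data.

import numpy as np

# Hypothetical daily data: number inspected and number defective for 10 days.
n = np.array([60, 55, 70, 65, 80, 75, 60, 70, 65, 72])
d = np.array([ 6,  5,  7,  5,  9,  6,  5,  8,  6,  7])

p_bar = d.sum() / n.sum()            # overall average proportion defective
ppm   = p_bar * 1_000_000            # parts per million defective

# p-chart limits vary with each day's sample size: p_bar +/- 3*sqrt(p_bar*(1-p_bar)/n_i)
ucl = p_bar + 3 * np.sqrt(p_bar * (1 - p_bar) / n)
lcl = np.clip(p_bar - 3 * np.sqrt(p_bar * (1 - p_bar) / n), 0, None)

print(f"average %defective = {100 * p_bar:.2f}%  ({ppm:,.0f} PPM)")
print("any daily proportion out of control?", bool(((d / n > ucl) | (d / n < lcl)).any()))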

23.5 Process Performance Metrics


There are many process performance metrics that may be used in improvement projects. They
include the following:

1. Defects per unit (DPU)

2. Defects per million opportunities (DPMO)


3. Throughput yield (TPY)

4. Parts per million (PPM)

5. Rolled throughput yield (RTY)


For example:

A process produces 100 packages of cake mix per shift. Three types of defects can occur. The number
of occurrences of each defect type are:

1. Cake does not rise - 5

2. Cake exhibits off flavor - 2

3. Cake exhibits off color - 3

DPU = number of defects / number of units = 10/100 = 0.10, or 10%.

DPMO = (number of defects x 1,000,000) / total number of opportunities = 10,000,000/300 = 33,333.

TPY = number of accepted units / number of units = 90/100 = 0.9, or 90%.

PPM = DPU x 1,000,000 = 0.1 x 1,000,000 = 100,000 parts per million are defective.

RTY

A product goes through a three-step process, with the following individual process step yields:

Step 1. Started with 100 pieces. Ten were scrapped during this step: 100 - 10 = 90, and 90/100 x 100% = 90% yield.
Step 2. Received 90 pieces from step 1. Nine were reworked during this step: 90 - 9 = 81, and 81/90 x 100% = 90% yield.
Step 3. Received 81 pieces from step 2. Fifteen were scrapped during this step: 81 - 15 = 66, and 66/81 x 100% = 81.5% yield.

RTY = 90% x 90% x 81.5% = 66%
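
As a minimal sketch (illustrative only), the Python code below reproduces the metric calculations above; the defect counts and step yields are taken directly from the two worked examples.

# Cake-mix example: 100 units, 3 defect types (opportunities per unit), 10 total defects.
units = 100
opportunities_per_unit = 3
defects = 5 + 2 + 3          # does not rise + off flavor + off color

dpu  = defects / units                                        # defects per unit
dpmo = defects * 1_000_000 / (units * opportunities_per_unit) # defects per million opportunities
tpy  = (units - defects) / units                              # throughput yield (assumes one defect per bad unit)
ppm  = dpu * 1_000_000                                        # parts per million defective

# Rolled throughput yield for the three-step example.
step_yields = [90 / 100, 81 / 90, 66 / 81]
rty = 1.0
for y in step_yields:
    rty *= y

print(f"DPU={dpu:.2f}  DPMO={dpmo:,.0f}  TPY={tpy:.0%}  PPM={ppm:,.0f}  RTY={rty:.0%}")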


Part VII: Analyze Phase of DMAIC

The third phase of the Lean Six Sigma Methodology is the Analyze Phase. The purpose of the
Analyze Phase is to develop theories of root causes (critical inputs, or Xs), confirm the theories
with data, and finally identify the root cause(s) of the problem. The verified cause(s) will then form the
basis for solutions in the fourth phase, the Improve Phase.

With the merging of Lean and Six Sigma, we are looking at two types of primary metrics: time and
defects. We use different approaches for analyzing these metrics as outlined in the process map.

Lean Six Sigma enables us to address both time and defect metrics, depending upon the scope of the
project. If the project is about improving the process in general, then the waste analysis should come
first. If the project is about a reduction in process variation, then root cause analysis is the starting
point. The degree to which time is spent on either metric depends upon your project charter, project
scope, and available resources. Be sure to keep to the project scope and not overextend the resources
of the team. Trying to do too much may result in accomplishing nothing at all.

Previously, in Chapters 14-16, we discussed value stream mapping and waste analysis, which help
improve time-related outputs. In the following chapters, we will discuss root cause analysis tools,
which help improve defect-related outputs.

By the end of this phase, the team should be able to answer the following questions:

1. Which inputs actually affect our outputs most (based on actual data) and by how much?

2. Do combinations of variables affect outputs?

3. If an input is changed, does the output really change in the desired way?

4. How many observations are required to draw conclusions?


Chapter 24: Root Cause and Variation Analysis


Key Terms
5 whys
decision matrix
one factor at a time
root cause analysis

Body of Knowledge
1. Recognize and apply alternate methods of root cause identification and validation.

2. Draw upon process experience to systematically identify potential root causes.

3. Use sequential questions to uncover causal relationships.

4. Describe the tools used to narrow down x's to the critical few.

5. Explain how a cause-and-effect diagram can help to analyze the y vs. x relationship.

Solving a process problem means identifying the root cause and eliminating it. The ultimate test
of whether a root cause has been properly identified and eliminated is the ability to switch the
problem on and off by removing and reintroducing the suspected factor. A root cause is the most
basic reason that a situation did not turn out as planned or as expected. Contributing factors are
additional reasons, but they are not necessarily the most basic reasons why the process did not turn
out as planned. Examples of contributing factors are inadequate training, incorrect procedures, lack of
procedures, communication failures, and insufficient resources.

There may be no one main root cause. Several factors may have to work together in a certain order in a
certain environment to produce the problem.

Root cause analysis is a three-step approach. Imagine using a funnel in this approach.

1. Fill the funnel by identifying as many potential causes for the problem as possible.

2. Screen potential causes to a more manageable number.

3. At the end of the funnel, determine and validate the critical inputs that appear to be the
largest contributors to the problem and evaluate their impact on the output (Y).

These critical inputs (aka critical x’s) are the key issues that are addressed during the Improve phase.

A number of tools for identifying root causes are available. Table 24.1 lists the key tools available and
where in the root cause analysis process they may be used.


Table 24.1 List of Key Tools

Tool                          Identify potential causes   Screen potential causes   Determine critical x
Process map or value
stream map                                X                           X
Pareto chart                                                          X
Cause and effect
diagram (fishbone)                        X
FMEA                                      X                           X                        X
5 Whys                                    X                           X
Brainstorming                             X                           X                        X
Waste analysis                            X                           X                        X
List reduction                                                        X
One factor at a time                                                  X                        X
Hypothesis testing                                                    X                        X
Correlation/regression                                                X                        X
Design of experiments                                                 X                        X

24.1 Identify Potential Causes


A wide net should be cast to search for potential causes. Identify as many as possible.

1. The cause and effect diagram (fishbone) is a good tool for identifying potential causes. During
the construction of the fishbone diagram, brainstorming can be used to collect potential
causes; this essentially draws on the knowledge of the project team to identify potential inputs
that affect the output.

2. The process map and/or the value stream map should be examined for suspects.
See Chapter 14.

3. Failure Mode Effect Analysis (FMEA) can be a good tool to use. See Chapter 29.

4. The 5 Whys method, a variation on brainstorming, is a very quick and focused technique to
use.


5 Whys Procedure

1. Define the specific problem to be analyzed.

2. Ask the first "why." There may be three or four sensible answers.

3. Once an answer is determined, ask "why" again.

4. Continue until the true potential root cause has been identified.

5. Once the root cause is reached, develop the solution.

Working as a team is recommended, and it may take more than five "whys." Sometimes there is more
than one answer, so the possible answers may branch off in different directions, much like a tree.

Example:

Some people were sick after eating a company's sugar cookies.

1. Why did they get sick?


They were allergic to peanuts and ate some peanut butter that was in the sugar cookies.

2. What was the peanut butter doing in the sugar cookies?


It was put into the sugar cookies by mistake.

3. Why was it put into the sugar cookies by mistake?


Peanut butter cookies were run on that line before the sugar cookies and all of the peanut
butter was not properly cleaned out.

4. Why did a bad clean out happen?


The person cleaning out the system did not follow the procedure and was inadequately
trained.

24.2 Screen Potential Causes


Now the list of suspected causes can be reduced to a more manageable number. In this phase, more
time is spent discussing each suspect and using tools such as list reduction and a decision matrix to
further reduce the list.

List reduction is merely posting the list for all to see, and the entire team ultimately votes on the most
likely suspects. Sometimes more than one round of votes is needed.

After the list reduction has taken place, a decision matrix can be used to further evaluate and
prioritize the list of inputs. A list of the weighted criteria should be established; and then each input is
evaluated against those criteria.

Decision Matrix Procedure


1. Choose the criteria with which to evaluate the inputs.

2. Assign a relative weight to each criterion based on how important that criterion is to the
situation.


3. Draw the matrix. Fill in the criteria and their weights and the list of inputs.

4. Establish a rating scale for each criterion; for example, 1 = low, 2 = medium, 3 = high.

5. Multiply each input’s rating by the weight. Add up the points.



Example:

Table 24.2 shows an example decision matrix which is used to decide which aspect of the overall
problem of “excessive oil leak” to tackle first.
Table 24.2 Decision Matrix

Suspected cause                                            Root cause   Safety   Cost   Total
Poor design of anterior copper feed-in tube.                    7          4       1      12
Inadequate training and lack of work instructions.              4          4       7      15
Supplier quality issues with part #234 (sole supplier).         1          1       1       3
High temperatures and humidity in the work environment.         1          4       1       6

Root cause: How sure are the decision-makers that it is a root cause?

1 = Somewhat but not supported by data.

4 = Sure, and supported by some data.

7 = Very sure, and completely supported by data.

Safety: How much will the fix improve safety?

1 = very little

4 = somewhat

7 = very much

Cost: What is the implementation cost?

1 = high cost to implement

4 = moderate cost to implement

7 = low cost to implement


From the original list of potential causes, four good suspects were chosen for further discussion.
Training/work instructions and poor design of the feed-in tube emerged as the top two causes. The
team decided to run experiments on the feed-in tube to verify that item as a critical input, review the
work instructions, and provide refresher training as necessary.
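
As a minimal sketch (illustrative only), the Python code below performs the decision-matrix arithmetic using the unweighted ratings from Table 24.2; the shortened cause labels and the optional criterion weights are assumptions added for illustration.

# Ratings per criterion (root cause certainty, safety impact, implementation cost) from Table 24.2.
ratings = {
    "Poor design of feed-in tube":        (7, 4, 1),
    "Inadequate training/instructions":   (4, 4, 7),
    "Supplier quality issues, part #234": (1, 1, 1),
    "High temperature and humidity":      (1, 4, 1),
}
weights = (1, 1, 1)  # Table 24.2 is unweighted; change these values to weight the criteria.

totals = {cause: sum(r * w for r, w in zip(scores, weights))
          for cause, scores in ratings.items()}

# Print the suspected causes ranked from highest to lowest total score.
for cause, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{total:3d}  {cause}")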

24.3 Determine/Validate the Critical Inputs
After screening the potential causes, the critical inputs should be determined and validated. There are
many methods that can be used to do this, depending on the process and the data available.

Experiments using the identified and screened inputs also can be performed. An experiment is a test
where one or more process inputs can be changed to evaluate their effect on the output.

The simplest experiment is called one factor at a time (1-fat/OFAT), where one factor is varied at
different levels while the other factors are held constant; for example, changing the oven bake times
while all other factors are held constant to see if this factor has an impact on the height of the cakes.

The problem with 1-fats is that too many experiments are needed, interactions between inputs cannot
be revealed, and resources may be wasted by studying the wrong inputs.

So, what is the best approach? The next chapter introduces some powerful improvement tools that can
help perform experiments more efficiently, e.g., hypothesis testing, correlation, regression, and design
of experiments.

Correlation examines the relationship between two variables. See Chapter 25.

Regression goes one step beyond correlation in identifying the relationship between two variables. It
creates a model for the variables so that values can be predicted within a range framed by the data. See
Chapter 25.

Hypothesis tests may be used to judge the effects of certain inputs on the output to be improved. See
Chapter 26.

Design of Experiments analyzes the effect of varying several inputs simultaneously in order to get the
most data with the fewest runs. See Chapter 27.

24.4 Example of the Root Cause Analysis Process


For months, the workers in a company office have complained about the taste of the coffee made in the
office so they formed a team to investigate the problem.

First, they brainstormed a list of potential causes. The items on their list included the water source,
water amount, coffee brand, coffee amount, coffee cost, phase of the moon (this is a brainstorming
session and anything goes), person making the coffee, coffee temperature, humidity, time of day when
coffee was made, coffee storage area, age of the coffee, height of the person making the coffee, type of
stirrers, coffee grind, day of the week, age of the coffee pot, and cleanliness of the coffee pot.


Next they voted as a team on which items to eliminate or combine. It took two sessions to bring the
original list down to eight.

Then, using a decision matrix, they eliminated all but the water source and coffee brand. Since they
had two factors, they decided to do a two-factor, three-level design of experiment (see Chapter 27).

The results showed that the water source was the significant factor in influencing coffee taste. They
followed up with a one factor at a time experiment where they tried ten different sources of water and
finally decided upon the Acme Deluxe Custom water brand. After making this change, they tasted the
coffee for several days and were most satisfied with the results.


Chapter 25: Correlation Analysis and Regression


Key Terms
equation for the fitted line
Pearson correlation coefficient
regression
residuals
R-squared value
scatterplot

Body of Knowledge
1. Describe the difference between correlation and causation.

2. Explain the usage of regression models for estimation and prediction.

3. Evaluate correlation between variables graphically.

Correlation analysis is a family of statistical tests that determine the relationships between two
variables. The tests can be simple scatterplot graphs, which suggest that a relationship exists
between the two variables; a Pearson coefficient, which provides a definitive statistical measure; or a
more complex regression analysis, which helps to define or predict a certain outcome.

Correlation itself does not imply a cause and effect relationship. Sometimes, an apparent correlation
can be a coincidence, and other times both variables are related to an underlying cause, or a third
variable that is not included in the analysis.

Figure 25.1 below further illustrates the "correlation does not imply causation" concept. In the example
shown in Figure 25.1, there appears to be a negative correlation between car mileage (MPG) and car
price. However, common sense tells us that price does not directly influence MPG. In this hypothetical
example, car weight was also investigated, and it happens that heavier cars are both more expensive
and tend to get fewer miles per gallon. In other words, there is a third variable, car weight, that is
related to both variables in the study but was not included in it. Therefore, if one looks at Figure 25.1
in isolation, a false conclusion could be drawn that MPG is influenced by car price, when it is more
likely influenced by the weight of the car.


[Figure 25.1 shows a scatterplot with car price (roughly $10,000 to $60,000) on the x-axis and MPG
(roughly 10 to 40) on the y-axis, with a downward-sloping pattern of points.]

Figure 25.1 Scatterplot of MPG vs Car Price

There are several tools available to use for correlation analysis. Of these tools, the scatterplot, the
Pearson correlation coefficient, and regression analysis are discussed below.

25.1 Scatterplots
Scatterplots are graphs that show how an independent variable may influence a dependent variable.
Creating a scatterplot is a good way to determine whether a relationship exists between the variables
and, if so, the nature of that relationship. This should be the first step in correlation analysis. See
Chapter 5 for more information on scatterplots.

If the scatterplot indicates that there could be a relationship between the two variables, the next step is
the Pearson correlation coefficient.

25.2 Pearson Correlation Coefficient


A correlation coefficient measures the extent to which two variables tend to change together. The
Pearson correlation coefficient evaluates the linear relationship between two continuous variables.
A relationship is linear when a change in one variable is associated with a proportional change in the
other.

For example:

A correlation coefficient can be used to evaluate whether increases in the temperature in a production
facility are associated with decreasing thickness of the chocolate coating of a product. The correlation
coefficient of the sample is denoted by r. The value of a correlation coefficient (r) ranges between -1
and 1. The closer r is to -1 or 1, the stronger the linear relationship; the strongest linear relationships
are indicated by r = -1 or r = 1, and the weakest by r = 0. A positive r means that if one variable gets
larger, the other variable tends to get larger as well. A negative r means that if one variable gets larger,
the other variable tends to get smaller.
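
As a minimal sketch (illustrative only), the Python code below computes a Pearson correlation coefficient; the temperature and coating-thickness values are hypothetical, and numpy's corrcoef is used rather than any specific statistical package named in the text.

import numpy as np

# Hypothetical paired measurements: facility temperature (deg F) and coating thickness (mm).
temperature = np.array([68, 70, 72, 74, 76, 78, 80, 82])
thickness   = np.array([2.10, 2.08, 2.05, 2.01, 1.98, 1.95, 1.90, 1.88])

r = np.corrcoef(temperature, thickness)[0, 1]   # off-diagonal entry is the Pearson r
print(f"Pearson r = {r:.3f}")                   # close to -1: strong negative linear relationship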


The scatterplots in Figure 25.2 show how different patterns of data produce different degrees of
correlation.

[Figure 25.2 shows six example scatterplots: maximum positive correlation (r = 1.0), strong positive
correlation (r = 0.80), zero correlation (r = 0), maximum negative correlation (r = -1.0), moderate
negative correlation (r = -0.43), and strong correlation with an outlier (r = 0.71).]

Figure 25.2 Scatterplots Showing Data Patterns and Degrees of Correlation

However, correlation analysis does not indicate the mathematical form of the relationship; it only
indicates that a relationship exists. Correlation analysis also does not discriminate between causes and
effects.

Regression analysis can help gather more information.

25.3 Regression Analysis


Regression analysis is a statistical tool that finds a model for the relationship between pairs of numerical
data. The model is a line or curve that best fits the data. The results of this analysis are an equation
for that line or curve, an R-squared value that indicates how well the model fits the data, and other
statistical measures such as the residual analysis.

Linear regression, or the fitted line plot, shows the best fit of a straight line through the scatterplot of
the data. Nonlinear regression shows a curve that may better fit the data. Multiple regression is used
when there are many independent variables that may affect one dependent variable.

While linear regression may be done manually, computer software, such as QI Macros or Minitab,
makes the process much easier. After the data are entered, the software generates a graph of the best-fit
regression line drawn through the data and a number of statistics.

The statistics include the following:

1. The equation for the fitted line: ŷ = mx + b, where (ŷ) is the predicted value, m is the slope of
the line, x is the independent variable, and b is the intercept of the line where the line crosses
the y-axis. This equation allows you to predict ŷ from a given value of x.


2. The coefficient of determination, R-squared, which is a number between 0 and 1 that
measures how well the data fit the line. As this number becomes larger, the fit improves;
R-squared = 1 is a perfect fit. Regression analysis customarily uses the least squares method to
find the best fit. The vertical distance of each data point from the regression line is called the
residual. The residuals from all the data points are calculated, squared, and summed; the line
with the smallest (least) sum of squared residuals is the best fit.
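
As a minimal sketch (illustrative only), the Python code below performs the least-squares fit and the R-squared calculation described above; the x and y values are hypothetical, and numpy.polyfit is used in place of the statistical packages mentioned in the text.

import numpy as np

# Hypothetical paired data.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])

m, b = np.polyfit(x, y, deg=1)        # least-squares slope and intercept
y_hat = m * x + b                     # predicted values from the fitted line
residuals = y - y_hat                 # vertical distances of the points from the line

# R-squared = 1 - (sum of squared residuals) / (total sum of squares)
r_squared = 1 - np.sum(residuals**2) / np.sum((y - y.mean())**2)

print(f"fitted line: y-hat = {m:.3f} x + {b:.3f}   R-squared = {r_squared:.3f}")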

25.4 Correlation Analysis Example


A study is conducted of the cake baking process in a laboratory testing facility because the cakes they
produce are not rising as high as they should.

After going through the root cause analysis process, the temperature of the baking oven was identified
as a critical input. It was suspected that there was a correlation between the oven temperature and the
cake heights, and a study therefore was ordered to test the theory.

While keeping the other variables constant, the oven temperature was varied. Then, the cakes baked
at the various temperatures were measured. This is an example of a one factor at a time (1-fat)
experiment. The results of the study are shown in Table 25.1.
Table 25.1 Oven Temperature vs Cake Heights Results
Temp 250 300 310 325 330 340 350 350 360 375 375 380 390 400 410 420 425 440 450

Height 4 8 7 9 10.6 10.5 10.4 11 11 12 12.1 13 13 12.4 12.2 12 11 10 9

First, a scatterplot was produced of the data (Figure 25.3).

[Figure 25.3 shows a scatterplot of cake height (y-axis) versus oven temperature (x-axis, 250 to 450
degrees) for the data in Table 25.1.]

Figure 25.3 Scatterplot Graph of Oven Temperature vs. Cake Heights


There appears to be a straight line relationship between the oven temperature and the cake heights,
up to a point around 400 degrees. After this point, the cakes are burned and dried up, resulting in the
height shrinkage. Now it is known that this is probably a critical input. But is that enough information?

Next, a Pearson correlation of the data is conducted. The results indicate a Pearson correlation
coefficient of r = 0.712 for oven temperature vs. cake height.

This fairly high r value indicates a substantial degree of correlation between oven temperature and
cake height. However, this test does not indicate the mathematical relationship between the two
variables, only that a relationship exists between them. For more information, a regression analysis
was conducted.

The goal is to find the line that best predicts Y from X; that is, the line that minimizes the sum of
the squares of the vertical distances of the points from the line. This is called the method of least
squares. Applying this method produces a fitted line plot, in which a line is drawn that minimizes
these distances (called residuals) between the line (the predicted values) and the actual values. The
resulting relationship is shown at the top of the plot in the form of the regression equation.

It also produces an R-squared value, which compares the variation explained by the model to the
overall variation in the data. The higher the R-squared value, the better the model is at predicting the
output.
[Figure 25.4 shows the fitted line plot for the linear model: Cake height = -1.798 + 0.03296 Temperature,
with S = 1.81397, R-Sq = 50.8%, and R-Sq(adj) = 48.0%.]

Figure 25.4 Regression Graph Linear Model of Oven Temperatures vs. Cake Heights

In Figure 25.4, Minitab is shown to have found the best possible linear relationship between the oven
temperature and the cake height. It is drawn in the form of a line in such a manner that the actual
values in the study are fitted as close to the line as possible to minimize the sum of the squares of the


residuals. For example, there are three oven temperatures from the study that are almost identical
on the plotted line. The distances between those values and the plotted line are called the residuals,
and their values are almost zero. If all the data were on the fitted line, the total sum of the squared
differences would be close to zero for an almost perfect correlation between the oven temperature and
the cake height. As you can see in Figure 25.4, this is not true. Some values are very far away from the
fitted line, and their residuals are quite large.

The linear model (Figure 25.4) shows a low R-squared value of 50.8%, which indicates a relatively weak
fit across the range of oven temperatures. The linear equation is given as cake height = -1.798 + 0.03296
temperature. This model is not a good fit because a straight line cannot follow the data, and there are
too many points a great distance from the line. Therefore, another model will be used.
[Figure 25.5 shows the fitted line plot for the quadratic model: Cake height = -60.47 + 0.3712 Temperature
- 0.000476 Temperature^2, with S = 0.877164, R-Sq = 89.1%, and R-Sq(adj) = 87.8%.]

Figure 25.5 Regression Graph Quadratic Model of Oven Temperatures vs. Cake Height

The quadratic model (Figure 25.5) adds another term to the equation in an attempt to better fit the
data. This additional term allows the line to curve, so more of the actual values are closer to their
predicted values on the line. Consequently, a higher R-squared value of 89.1% is obtained, which
indicates a good correlation across the range of oven temperatures. The quadratic equation is given as
cake height = -60.47 + 0.3712 temperature - 0.000476 temperature^2. This model is a good fit. However,
an even better fit may be possible if yet another term is added.


[Figure 25.6 shows the fitted line plot for the cubic model: Cake height = 96.54 - 1.021 Temperature
+ 0.003557 Temperature^2 - 0.000004 Temperature^3, with S = 0.533949, R-Sq = 96.2%, and
R-Sq(adj) = 95.5%.]

Figure 25.6 Regression Graph Cubic Model of Oven Temperature vs. Cake Heights

The cubic model (Figure 25.6) adds yet another term to the equation, which gives the fitted line even
more flexibility. A higher R-squared value of 96.2% is now obtained, which indicates an excellent
correlation across the range of oven temperatures. This model is the best fit for the cake data set.

Note that this model is only applicable for the oven temperatures in this data set. The heights of cakes
baked below 250 degrees or above 450 degrees cannot be predicted because those temperatures were
not of interest in this study; if they had been, they should have been included in the study design from
the beginning.
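
As a minimal sketch (illustrative only), the Python code below reproduces the model comparison above with numpy.polyfit, using the temperature and height data from Table 25.1; the coefficients and R-squared values may differ slightly from the Minitab output shown in the figures.

import numpy as np

# Oven temperature vs. cake height data from Table 25.1.
temp = np.array([250, 300, 310, 325, 330, 340, 350, 350, 360, 375,
                 375, 380, 390, 400, 410, 420, 425, 440, 450])
height = np.array([4, 8, 7, 9, 10.6, 10.5, 10.4, 11, 11, 12,
                   12.1, 13, 13, 12.4, 12.2, 12, 11, 10, 9])

def r_squared(y, y_hat):
    # Coefficient of determination: 1 - SS_residual / SS_total.
    return 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

t = temp - temp.mean()      # centering improves numerical conditioning; R-squared is unchanged
for degree in (1, 2, 3):    # linear, quadratic, and cubic models
    coeffs = np.polyfit(t, height, degree)   # least-squares polynomial fit
    fit = np.polyval(coeffs, t)
    print(f"degree {degree}: R-squared = {r_squared(height, fit):.3f}")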


Chapter 26: Hypothesis Testing


Key Terms
alternative hypothesis
hypothesis tests
null hypothesis
one-tailed tests
p-value
power of the test
significance level
t-test
two-tailed tests
type I error
type II error

Body of Knowledge
1. Recognize situations where a formal test of hypothesis is warranted.

2. Explain how to use hypothesis testing for drawing conclusions and making interpretations.

3. Explain and describe the steps involved in hypothesis testing.

4. Explain how sample size can affect hypothesis tests.

5. Format null and alternate hypotheses properly.

6. Identify types of error and incorporate them into a testing plan.

7. Apply confidence intervals to interpret the results of a test.

8. Discuss the factors that affect the power of a hypothesis test.

Unlike other improvement methodologies, Six Sigma focuses on statistically significant sources of
variation that determine the product or process output, as opposed to a focus on the output itself.
Most parts of DMAIC can be understood by anyone as a methodology, but putting it into practice is
a different story. In a LSS project, once the required data have been collected, the next step is to use
the data pertaining to certain parameters of the population to make conclusions. In order to ensure
that the right inferences have been drawn and the appropriate tests have been used, it is important to
thoroughly understand hypothesis testing.

A hypothesis test is a statistical test that is used to determine whether there is enough evidence in a
sample of data to infer that a certain condition is true for the entire population.

A hypothesis test examines two opposing hypotheses about a population: the null hypothesis and the
alternative hypothesis. The null hypothesis is the statement being tested. Usually the null hypothesis is
a statement of “no effect” or “no difference.” The alternative hypothesis is the statement that is the exact
opposite of the null hypothesis and contains all of the other possibilities.


Based on the sample data, the test determines whether to reject the null hypothesis or fail to reject the
null hypothesis in favor of the alternative hypothesis. A p-value and confidence intervals are used to
make the determination. If the p-value is less than or equal to the level of significance (normally a 5%
alpha risk), the null hypothesis can be rejected.

Examples of questions that can be answered with a hypothesis test:

◆◆ Does the average height of undergraduate women differ from 66 inches?

◆◆ Is the standard deviation of their height equal to or less than five inches?

◆◆ Do male and female undergraduates differ in height?

Hypothesis tests are used mainly in the Analyze and Improve phases. They are commonly used to
check for differences before and after improvements and to determine the effects of different levels of
inputs on the output.

26.1 Terms Associated with Hypothesis Testing


Test: One of the statistical tests, such as t test or chi-square.

Hypothesis: A statement expressed as fact, which will either be proved or disproved by the test.

Null Hypothesis: The hypothesis to be tested is called “null” because often the null hypothesis states
there is no difference between the two sets of data. It will involve some statistic, make some claim
about the population of outcomes, and include a statement of equality. It will not necessarily be “what
we think is true.”

We can “reject the null,” or “fail to reject the null.” We never “accept the null” completely since we
would have to sample the entire population to be sure.

Alternative Hypothesis: Exact opposite of the null hypothesis with no other possibilities. It involves
the same statistic as the null.

Statistic: Any calculated number that summarizes some aspect of the sample data (average, variance,
or proportion).

Test Statistic: Calculated number used to test the null hypothesis.

Two Tailed, Right Tailed, Left Tailed: Describe if the test is concerned with both sides of the
frequency distribution or just one side.

p value: Probability that the test statistic could have occurred by chance. Small p values mean that
the effect is real and not merely by chance. If the p is lower than the alpha risk, the null hypothesis is
rejected. For example, if the alpha risk is set at .05 and the p-value generated is equal to or lower than
.05, the null hypothesis is rejected.

Alpha Risk/Type I Error: The probability of incorrectly rejecting the null hypothesis.


Beta Risk/Type II Error: The probability of incorrectly failing to reject the null hypothesis.

Power: 1 – (Beta risk). 80% power (20% Beta risk) is the standard for the test.

Resolution: The difference the test is able to detect.

Sample Size: Number of samples taken for the test. Alpha risk, beta risk, confidence levels, resolution,
and sample size must all be taken into consideration when planning the test.

Normality and Stability: Must be checked before performing a test involving variable data.

Confidence Level: Indicates how confident we are in the result, expressed as a percentage; e.g., a 95%
confidence level means there is 95% certainty about the result.

Confidence Interval: The range of values calculated from the data that gives an assigned probability
(the confidence level) that the true value lies within that range.

26.2 Types of Hypothesis Tests


Hypothesis tests may be used for either variable or attribute data. Table 26.1 lists the commonly used
hypothesis tests.
Table 26.1 Commonly Used Hypothesis Tests

Testing for:                                            Attribute data      Variable data
Compares one sample average to a historical
average, standard, or target                            1 proportion test   1 sample t test
Compares two independent sample averages                2 proportion test   2 sample t test
Compares three or more independent sample averages      Chi square test     ANOVA test
Compares two or more independent sample variances       None                Test for equal variances

26.3 Basic Hypothesis Testing Procedure


1. Identify and define the process.

2. Define the objective of the study and the decision or conclusion that must be made about the
data.

3. Assemble the team and plan the study.

4. Choose the appropriate hypothesis test.


5. State the null hypothesis and the alternative hypothesis. Determine if the test is two tailed, left
tailed, or right tailed.

6. Choose the significance level, or alpha risk, which usually is 5%. The confidence level is
1- alpha risk. For an alpha risk of 5%, the confidence level is 95%.

7. Using computer software, calculate the power (1 - beta risk), which is usually set at 80%, from
the sample size and the desired resolution. Given 80% power, determine the difference the
test can detect and the number of samples necessary. Make adjustments as needed by
changing the power, sample size, resolution, or alpha risk (see the power-calculation sketch
after this list).

8. Collect data and calculate the results using computer software. Make sure the data are normal
and the process is in control.

9. Evaluate the results.
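
As a minimal sketch (illustrative only) of the power and sample-size trade-off in step 7, the Python code below uses statsmodels' TTestIndPower for a 2-sample t test; the effect size, alpha, and power values are assumptions chosen for illustration.

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Assumed inputs: standardized effect size (difference / standard deviation), alpha risk, desired power.
effect_size = 0.5
alpha = 0.05
power = 0.80

# Sample size per group needed to detect that difference.
n_per_group = analysis.solve_power(effect_size=effect_size, alpha=alpha, power=power)
print(f"samples needed per group: {n_per_group:.0f}")

# Conversely, the power achieved with a fixed sample size of 30 per group.
achieved = analysis.solve_power(effect_size=effect_size, alpha=alpha, nobs1=30)
print(f"power with 30 samples per group: {achieved:.2f}")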

26.4 Analyzing the Results


First, the data are reviewed for outliers, typographical errors, and obvious issues encountered while
collecting and inputting the data. Then, as many graphs as possible are produced to help in the
evaluation. The summary report generated from the software is reviewed next, comparing the p-value
results to the alpha risks to determine whether to reject or fail to reject the null hypothesis. The null is
rejected when the p-value is lower than the alpha risk. The confidence intervals and summary statistics
also are reviewed to determine if the sample results are within the confidence interval. Finally, any
other items or comments in the report are reviewed.

26.5 Examples of Hypothesis Tests


26.5.1 2 Sample t Test for Variable Data
A manufacturer wants to see if there is a real difference in the average weight of its one-pound flour
bags between two product lines. Thirty samples are randomly pulled from each line and then weighed.
The average weight from Line C1 was 454.48 grams, and the average weight from Line C2 was 455.01
grams. Clearly, at first glance, there appears to be a difference in the average weight between the two
lines. However, the difference only represents the samples that were collected. In order to understand
what is happening in the entire population and if there is really a statistical difference, a hypothesis test
must be conducted.

Null hypothesis: Line C1's average weight equals Line C2's average weight.

Alternative hypothesis: Line C1's average weight does not equal Line C2's average weight.

The null hypothesis either will fail to be rejected, meaning there is no evidence that the average
weights of the two populations differ, or it will be rejected, meaning that there is a difference in the
average weight of the two populations.

Using the variable data, the sample averages from two populations are compared. The 2 sample t test is
chosen, the data are entered, and the summary report is reviewed, which is shown in Figure 26.1.


[Figure 26.1 shows the Minitab 2-sample t test summary report ("Is Before greater than After?"). Key
output: sample size 30 in each group; means 54.447 (Before) and 53.243 (After); standard deviations
0.81779 and 1.2190; difference (Before - After) 1.2033 with a 90% CI of (0.75419, 1.6525); P < 0.001.
The report concludes that the mean of Before is significantly greater than the mean of After at the 0.05
level of significance.]

Figure 26.1 2-sample t test report

The p-value is greater than the significance level of .05. Therefore, the null hypothesis cannot be
rejected, which means there is not enough evidence to support a difference between the two lines.
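
As a minimal sketch (illustrative only), the Python code below runs a 2-sample t test with scipy; the two sets of weights are randomly generated stand-ins for the Line C1 and Line C2 samples described above, not the book's data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical weights (grams) for 30 one-pound flour bags pulled from each line.
line_c1 = rng.normal(loc=454.5, scale=1.0, size=30)
line_c2 = rng.normal(loc=455.0, scale=1.0, size=30)

# Welch's 2-sample t test (does not assume equal variances).
t_stat, p_value = stats.ttest_ind(line_c1, line_c2, equal_var=False)

alpha = 0.05
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("reject the null" if p_value <= alpha else "fail to reject the null")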

26.5.2 1 Proportion Test for Attribute Data


The operations manager of an apple juice processing plant states that the probability of receiving a bad
apple from the supplier is 8%. After checking 555 apples from the supplier over the course of a week,
55 bad apples are found.

Null hypothesis: The probability of receiving a bad apple from the supplier is equal to 8%.

Alternative hypothesis: The probability of receiving a bad apple from our supplier is not equal to 8%.

Using the attribute data, the standard (8%) is compared to the sample population. The 1 proportion
test is chosen, the data are entered into Minitab, and the summary report is reviewed, which is shown
in Figure 26.2.


[Figure 26.2 shows the Minitab 1-sample %defective summary report for the apples. Key output: total
number tested 555; number of defectives 55; %defective 9.91 with a 95% CI of (7.55, 12.70); target 8%;
P = 0.117. The report concludes that the %defective is not significantly different from the target at the
0.05 level of significance.]

Figure 26.2 1-sample proportion test report

The p-value is .117, which is above the significance level of .05. Therefore, the decision was made to
fail to reject the null, which means there is not enough evidence that the bad apple percentage of the
population differs from 8%; the operations manager's statement stands. With 95% confidence, the true
defective rate could be as low as 7.55% or as high as 12.7%.
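
As a minimal sketch (illustrative only), the Python code below runs the same 1-proportion comparison with scipy's exact binomial test, using the 55 defectives out of 555 apples and the hypothesized 8% rate from the example; the exact-test p-value and confidence interval may differ slightly from Minitab's output.

from scipy.stats import binomtest

result = binomtest(k=55, n=555, p=0.08, alternative="two-sided")

print(f"observed %defective = {100 * 55 / 555:.2f}%")
print(f"p-value = {result.pvalue:.3f}")
ci = result.proportion_ci(confidence_level=0.95)
print(f"95% CI for %defective: {100 * ci.low:.2f}% to {100 * ci.high:.2f}%")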

26.5.3 Other Examples


A company wants to determine if the average weight of its boxed cake mix is below the target weight
of 454 grams. Using the variable data, the samples from one population are compared to the standard.
The 1-sample t test is chosen.

A manufacturer wants to compare the percent fill weights for its bottling process, which has three
fillers. Using the attribute data, the averages from the three populations are compared. The chi square
test is chosen.

A company wants to know if the weight variability decreased on its pancake line after improvements
were made. Using the variable data, the variances from two populations (before and after) are
compared. The test for equal variances is chosen.


Chapter 27: Design of Experiment (DOE)


Key Terms
design of experiment (DOE)
factor
interaction
level
main effect
replicate
run

Body of Knowledge
1. Explain the benefits of design of experiment (DOE).

2. Identify the steps in the DOE process.

3. Define and describe terms such as independent and dependent variables, factors and levels,
responses, randomization, effects, and replication.

4. Describe the purpose and principles of DOE.

5. Recognize the correct circumstances to employ DOE and follow the experimental process in
doing so.

6. Interpret main effects analysis and interaction plots.

Design of experiment (DOE) is a process that helps identify the critical X variables that drive the
Y output metrics. DOE is a powerful tool because it allows for the analysis of several different
variables. It is an efficient, dependable tool for determining causal relationships between multiple
variables through a systematic series of tests in which various X’s are manipulated and the effects on
the Y’s are observed. DOE can be broken down into the following three categories:

1. Screening: further reduces a large set of X’s to a manageable number.

2. Characterizing: determines the contributions from each X and how the X’s interact with each
other after the numbers of X’s are reduced.

3. Optimizing: after characterizing the X’s, it is possible to optimize the process and hopefully
glean some insightful information. This is an optional step; in most cases, the characterization
step provides enough information.

Classical experiments concentrate on one factor at a time (1 fat) and try to keep everything else
constant. However, in a DOE, many factors may be evaluated at the same time as well as the
interactions between factors.


27.1 Terms Associated with Design of Experiments


Factor: An input; some test variable that is believed to be significant to the output.

Noise Factor: A variable that is too expensive or difficult to control as a part of an experiment.

Levels: The values of a factor that are of interest in the testing.

Run: An experimental condition that is characterized by the factor and levels associated with it.

Observation: The resulting output of a run.

Design: The entire set of runs in an experiment.

Main Effect: The effect of one of the factors in the experiment.

Interaction: The effect of two or more factors in conjunction with one another.

Fractional: Examines fewer experiments than the full design states.

Full factorial: Experimental designs which contain all combinations of levels of all factors.

Replicate: Represents all of the runs of an experiment. Two replicates are two complete runs.

n: The number of runs that will be conducted in a given experiment, computed as n = L^F, where
n = number of runs, L = number of levels, and F = number of factors.

For example:

The factors to be tested are time, speed, and temperature, each at a high and a low level, with two
replicates. A two-level, three-factor experiment with two replicates will be run, for a total of
2^3 x 2 = 16 runs.

27.2 Types of Design of Experiments


Full factorial: designs which contain all combinations of levels of all factors.

Fractional: contains fewer trials than the full design states.

Mixture: used where product is made up of several components, e.g., cake mix.

Taguchi: designed experiments that usually require only a fraction of the full factorial combinations.

Response surface analysis: used after an initial screening occurs to find the optimum settings.

Evolutionary operations (EVOP): on-line optimization where usually two factors are studied using
small step changes while the line is running.


27.3 Basic Design of Experiments Testing Procedures


1. Identify and define the process.

2. Define the objectives of the study.

3. Assemble the team and plan the study.

4. Brainstorm potential factors (inputs). Choose two or more factors to test.

5. Define the output to be measured.

6. Define the levels of the factors to be tested. Choose two or more levels, which means each
factor can be tested at several different settings. Set the levels boldly but not carelessly;
avoid conditions that are not feasible or are dangerous.

7. Decide on the number of replicates.

8. Select the design that best fits the study (full factorial, fractional, etc.).

9. Randomize the order of the runs.

10. Conduct the experiment under the prescribed conditions and evaluate the results.

The other factors in the process must be kept constant; only the factors involved in the experiment
may be varied. Good records should be kept. Beware of noise factors beyond your control that can
affect the experiment; these can be mitigated by performing the experiment in a controlled
environment, e.g., the same machine and the same operator. When performing DOEs, consideration
must be given to safety, possible damage to equipment, the cost of line time, and any scrap or rework
generated.

During the experiment, the levels of one or more factors during each run are changed in a manner that
captures all possible combinations of the factor levels. Then, the observations that occur during each
run, including the effects of interactions between factors, are analyzed.
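
As a minimal sketch (illustrative only), the Python code below generates a randomized full-factorial run order like the one described in steps 8-9 of the procedure, using only the standard library; the factor names and levels match the oven-hardness example in section 27.5.

import itertools
import random

# Factors and their two levels (from the oven-hardness example in section 27.5).
factors = {
    "air_flow": [10, 50],
    "temperature": [200, 400],
    "oven": ["Acme", "Deluxe"],
}
replicates = 2

# Full factorial: every combination of levels, repeated once per replicate.
combinations = list(itertools.product(*factors.values()))
runs = combinations * replicates

random.seed(7)          # fixed seed so the randomized order is reproducible
random.shuffle(runs)    # randomize the run order

for order, (air, temp, oven) in enumerate(runs, start=1):
    print(f"run {order:2d}: air flow={air:2d}  temperature={temp}  oven={oven}")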

27.4 Analyzing the Results


There are no hard and fast rules for analyzing all DOEs. There still remains some art in both the design
and analysis of the experiments, which can be learned only from experience.

First, the data should be reviewed, looking for outliers, typographical errors, and obvious issues
encountered while collecting and inputting the data. As many graphs as necessary should be
constructed to clearly identify and illustrate the results.

Response vs. factor levels (main effects and interaction plots) and Pareto charts that rank the effects,
residual graphs that test the model, and ANOVA tables also are examined.

27.5 Example
A manufacturer wants to reduce the hardness variability of one of its parts. The manufacturer believes
the brand of oven, the air flow, and the temperature are potential key factors for this output. A DOE is
set up and run in order to further investigate these factors.


The factors are airflow as measured by the air flow speed dial, the oven temperature, and the two
brands of ovens.

The decision is made to test two levels of each factor: air flow at 10 and 50, temperature at 200 and
400, and the ovens Acme and Deluxe. A two-level, three-factor experiment with two replicates will be
run; the number of runs in a single replicate is

(No. of Levels)^(No. of Factors)

Therefore, 2^3 = eight runs per replicate, and with two replicates there are a total of 16 runs in this
experiment. All possible combinations will be conducted for a full factorial design, and this information
is entered into Minitab. The software produces a design that describes the 16 runs in random order
(Table 27.1).

For example:

Test Run 3 has an air flow dial setting of 10, a temperature of 400, and an Acme oven. The product is
tested and the results are recorded, and the table is followed for all 16 runs. Note that the design covers
every combination of factor settings, with each combination run once per replicate.

Table 27.1 Full Factorial Design

Run order   Air flow speed dial   Oven temperature   Brand of oven   Output results
    3               10                  400               Acme
    5               10                  200               Deluxe
   12               50                  400               Acme
   14               50                  200               Deluxe
   16               50                  400               Deluxe
    6               50                  200               Deluxe
    9               10                  200               Acme
   10               50                  200               Acme
   13               10                  200               Deluxe
    1               10                  200               Acme
    8               50                  400               Deluxe
   15               10                  400               Deluxe
   11               10                  400               Acme
    7               10                  400               Deluxe
    2               50                  200               Acme
    4               10                  400               Acme

Minitab provides a great deal of information about the results, some of which is beyond the scope of
this course. However, three graphs are examined below.


[Figure 27.1 shows a Pareto chart of the effects (response = Results, alpha = 0.05) with a reference line
at 12.70. Factors: A = Airflow, B = Temperature, C = Oven. The bars for C, A, and the AB interaction
extend past the reference line; the remaining terms do not.]

Figure 27.1 Pareto Graph of the Effects

Figure 27.1 shows a Pareto chart of the factors (air flow, temperature, and oven brand) and the
significance of their effects on the output. The red reference line reflects the alpha (0.05) significance
level. The graph shows that the oven brand and the air flow are significant contributors to the output,
and that the interaction of air flow and temperature also contributes significantly. The remaining
terms did not have a significant effect on the output: temperature alone, temperature interacting with
the oven, air flow interacting with the oven, and the interaction of all three factors together.
[Figure 27.2 shows the main effects plot for Results (fitted means), with one panel each for air flow,
temperature, and oven; the y-axis is the mean of Results (roughly 210 to 260).]
Figure 27.2 Main Effects Plot


Figure 27.2 is another way to show the effect of the factors on the output and compares the relative
strength of the effects.

If the line is horizontal, there is no main effect present. The output average does not change depending
on the factor level.
Unlawful to replicate or distribute

If the line is not horizontal, there may be a main effect present. The output average changes depending
on the factor level. Steeper lines mean stronger effects.

[Figure 27.3 shows the interaction plot for Results (fitted means), with panels for air flow x temperature,
air flow x oven, and temperature x oven; the y-axis is the mean of Results (roughly 200 to 280).]

Figure 27.3 Interaction Plot

Finally, Figure 27.3 shows the interaction effect of two factors on the output and compares the relative
strength of the effects.

If the lines are parallel to each other, there is no interaction present. The change in the output average
from the low to the high level of a factor does not depend on the level of a second factor.

If the lines are not parallel to each other, there may be an interaction present. The change in the output
average from the low to the high level of a factor depends on the level of a second factor. The greater
the degree of departure from being parallel, the stronger the effect.
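
As a minimal sketch (illustrative only), the Python code below shows how a main effect and a two-factor interaction are computed for a two-level design; the eight response values are hypothetical, not the book's experimental results.

import numpy as np

# Hypothetical averaged responses for the 2^3 design, keyed by (air_flow, temperature, oven).
response = {
    (10, 200, "Acme"): 215, (10, 400, "Acme"): 225,
    (50, 200, "Acme"): 245, (50, 400, "Acme"): 285,
    (10, 200, "Deluxe"): 205, (10, 400, "Deluxe"): 215,
    (50, 200, "Deluxe"): 235, (50, 400, "Deluxe"): 275,
}

def mean_where(cond):
    # Average the responses of all runs whose settings satisfy the condition.
    return np.mean([y for key, y in response.items() if cond(key)])

# Main effect of air flow: average response at the high level minus average at the low level.
main_air = mean_where(lambda k: k[0] == 50) - mean_where(lambda k: k[0] == 10)

# Air flow x temperature interaction: half the difference between the air-flow effect
# at high temperature and the air-flow effect at low temperature.
effect_at_400 = mean_where(lambda k: k[0] == 50 and k[1] == 400) - mean_where(lambda k: k[0] == 10 and k[1] == 400)
effect_at_200 = mean_where(lambda k: k[0] == 50 and k[1] == 200) - mean_where(lambda k: k[0] == 10 and k[1] == 200)
interaction = (effect_at_400 - effect_at_200) / 2

print(f"air flow main effect = {main_air:.1f}")
print(f"air flow x temperature interaction = {interaction:.1f}")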


Part VIII: Improve Phase of DMAIC

The purpose of the Improve phase is to generate and test possible solutions, prioritize them, and
then select the best solution. During testing, the process or product is evaluated for risks and
potential failure modes.

The Improve phase focuses on fully understanding the top causes identified in the Analyze phase, with
the intent of either controlling or eliminating those causes to achieve breakthrough performance.

The Improve phase also includes a limited production trial (or beta test for non-manufacturing
projects) that demonstrates a significantly improved process-sigma-level prior to moving into the
Control phase.


Chapter 28: Selecting a Solution


Key Terms
impact/effort matrix
Improve phase
pilot study
solution prioritization matrix

Body of Knowledge
1. Generating solutions to the problem.

2. Re-evaluating the measuring systems.

3. Performing a final capability study.

In the Improve phase, solutions are determined to correct the problem, the measurement systems are
re-evaluated, and a final process capability study is performed to measure the improved metrics against
the original baseline. Additionally, Lean tools are applied as appropriate in order to reduce waste and
improve the flow in the process.

28.1 Generating Solutions and Reducing Waste


Generating, implementing, and monitoring solutions is a group task. As many as possible of the
employees who actually work in the process should be involved, because they must deal with the
changes to their process. People are more committed to implementing and monitoring solutions if they
have a hand in developing those solutions.

There should be as many solutions as possible; and there are many ways to obtain them. Think of the
funnel approach again. Start with a good number of solutions from which to choose, and then work
down the list to the best solution(s) for everyone.

The ways to create solutions include listening to the experts, brainstorming, affinity diagrams, list
reduction, impact/effort matrix, solution prioritization matrix, and performing experiments. The
experimentation tools used in the Analyze phase may serve well in the Improve phase. These include
hypothesis testing, regression, and design of experiments.

Two additional tools available to help select solutions are the Impact/Effort Matrix and the Solution
Prioritization Matrix.

The Impact/Effort Matrix is a simple four-square matrix that places each solution in the quadrant that
best reflects the impact expected and the effort required if that solution were chosen. This is a good
place to start. See Table 28.1.


Table 28.1 Impact/Effort Matrix

High impact and low effort:    Quick Wins for high rewards with little effort.
High impact and high effort:   Major Projects that are very complex and time consuming.
Low impact and low effort:     Quick Fixes which can add up for high rewards.
Low impact and high effort:    Thankless Tasks, so why are we doing this?

The Solution Prioritization Matrix is a more complex matrix in which the team ranks the solutions against agreed criteria. It is a good next step after the Impact/Effort Matrix. The criteria may be weighted if appropriate.

In the example below (Table 28.2), the criteria are listed on the left side, and the solutions are listed across the top. Each solution is rated against each criterion: 1 = unfavorable, 5 = neutral, and 10 = favorable. The higher the total score, the better the solution meets the criteria.

Table 28.2 Solution Prioritization Matrix

Criteria for solution selection    Eliminate paper copies    Upgrade computers    Increase staff by one person
Impact on customer                 7                         7                    7
Impact on completion time          8                         5                    5
Impact on employees                9                         7                    8
Impact on cost                     9                         3                    1
Totals                             33                        22                   21
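For teams that tally these scores in a spreadsheet or a short script, the arithmetic is simply a (possibly weighted) sum of each solution's ratings. The sketch below is illustrative only: the solution names and ratings come from Table 28.2, while the weighting scheme and the Python representation are assumptions, not a prescribed tool.

    # Illustrative solution prioritization tally (ratings from Table 28.2).
    criteria = ["Impact on customer", "Impact on completion time",
                "Impact on employees", "Impact on cost"]
    weights = {c: 1 for c in criteria}  # hypothetical; e.g. weights["Impact on cost"] = 2 to emphasize cost

    ratings = {
        "Eliminate paper copies":       [7, 8, 9, 9],
        "Upgrade computers":            [7, 5, 7, 3],
        "Increase staff by one person": [7, 5, 8, 1],
    }

    totals = {
        solution: sum(weights[c] * r for c, r in zip(criteria, scores))
        for solution, scores in ratings.items()
    }

    for solution, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{solution}: {total}")
    # With all weights equal to 1, this reproduces the totals in Table 28.2: 33, 22, 21.

With equal weights the script reproduces the totals in Table 28.2; raising the weight on a criterion such as cost would shift the ranking accordingly.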

The Improve phase is also the time to review the value stream map and make changes as applicable
to reduce waste and improve the flow of the product or service. The tools and techniques listed in
Chapter 15 are used, such as pull systems, Kanban, 5S, standard work, POUS, quality at the source,
cellular flow, and batch reduction. Normally, these activities are scheduled as separate kaizen events at appropriate times. Kaizen events may take as little as a few hours and seldom last more than a couple of days. They should be done within the scope of the project.


28.2 Re-evaluate the Measuring Systems


There are two reasons for performing measuring system analysis (MSA) in the Improve phase. The first reason is that, over the course of the project, we may have established that there are inputs that need to be controlled; their measuring systems must be checked to assure they are adequate. The second reason is that changes may have been made to the output, which will likely reduce its variability. The question then becomes: with this new metric and reduced tolerance level, is the measuring system still effective? To find out, another MSA should be performed on the output.
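As a rough illustration of that question, the short sketch below (hypothetical values; not a substitute for a full gage R&R study) checks how much of the new, tighter tolerance is consumed by measurement variation using a precision-to-tolerance (P/T) ratio.

    # Minimal sketch: does the gage still fit the tightened tolerance? (hypothetical values)
    import math

    sigma_repeatability = 0.010    # equipment variation from the new gage study (assumed)
    sigma_reproducibility = 0.006  # appraiser-to-appraiser variation (assumed)
    tolerance = 0.25               # new, reduced tolerance, USL - LSL (assumed)

    sigma_gage = math.sqrt(sigma_repeatability**2 + sigma_reproducibility**2)
    p_to_t = 6 * sigma_gage / tolerance   # share of the tolerance consumed by measurement error

    print(f"Gage sigma: {sigma_gage:.4f}")
    print(f"P/T ratio: {p_to_t:.1%}")  # commonly cited guideline: under ~10% good, over ~30% unacceptable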

28.3 Performing a Final Capability Study


Now we are ready to confirm that our solutions are effective. We will perform a final capability study to measure our improved process against the baseline set during the Measure phase. Typically, a Pilot Study is executed to do this.
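As a minimal sketch of the comparison step, assuming normally distributed data and invented numbers for the specification limits, baseline Cpk, and pilot measurements, the capability of the pilot run can be recomputed and set beside the Measure-phase baseline. The sigma-level line uses the customary 1.5-sigma shift reporting convention.

    # Minimal sketch (assumed data): compare pilot-run capability against the baseline.
    import statistics

    LSL, USL = 9.0, 11.0          # specification limits (assumed)
    baseline_cpk = 0.78           # from the Measure-phase baseline study (assumed)

    pilot_data = [9.7, 10.3, 10.0, 9.6, 10.4, 10.0, 9.85, 10.15, 10.2, 9.8]  # pilot measurements (assumed)

    mean = statistics.mean(pilot_data)
    s = statistics.stdev(pilot_data)             # sample standard deviation

    cpk = min(USL - mean, mean - LSL) / (3 * s)
    sigma_level = 3 * cpk + 1.5                  # customary reporting with the 1.5-sigma shift

    print(f"Pilot Cpk = {cpk:.2f} (baseline Cpk = {baseline_cpk:.2f})")
    print(f"Approximate process sigma level = {sigma_level:.1f}")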

28.3.1 Steps to Execute a Pilot Study


1. Select the pilot steering team members.
2. Brief the participants.
3. Train the employees.
4. Conduct the pilot study.
5. Evaluate the results with the capability study.
6. Compare the results with the baseline study performed during the Measure phase.
7. Plan the path forward.

28.3.2 Critical Issues in Planning a Pilot Study


1. Ensuring that the training is effective.
2. Ensuring that the full range of process conditions is tested during the pilot run.
3. Ensuring that pilot conditions are reproducible in full-production scenarios.
4. Arranging quarantine activities for trial runs.
5. Minimizing disruptive impact on the business while ensuring the validity of the pilot study.
6. Ensuring safety of employees.

28.3.3 Evaluating the Results of a Pilot Study


1. Measure the output values and defect rates.
2. Recalculate process sigma/process capability and compare to baseline.
3. Evaluate the effect of the new process on employee safety, activities, and morale.
4. Ensure that other key process and product metrics are not affected by the new process.


Chapter 29: Risk Analysis and Mitigation



Key Terms
expected profit
feasibility study
FMEA
risk
SWOT analysis
unintended consequences

Body of Knowledge
1. Use FMEA to evaluate how a process might fail and the effects that failure could have.

2. Identify the potential effect that risk can have on project goals and schedules, resources, costs,
and stakeholders.

3. Use SWOT analysis to define risk.

4. Use expected profit analysis to define risk.

5. Understand the ramifications of unintended consequences.

Risk is defined as an undesirable situation or circumstance that has both a likelihood of occurring
and a potentially negative outcome. It involves the probability of an event and its outcome.

Risk management and analysis is a process to identify, assess, mitigate, and control risks in a
systematic and cost-effective manner. It is used during LSS projects to identify the potential effects that
risk can have on the project goals, schedule, resources, costs, and stakeholders.

29.1 Expected Profit


When analyzing the risk of undertaking a project, the expected profit is taken into consideration. It can be quantified as the sum, over all possible outcomes, of each outcome's profit multiplied by its probability.

For example, a $2 million project is proposed. Successful completion of the project may bring in two new customers. The probability of winning the Acme Company is between 50% and 60%, the probability of winning the Bacme Company is between 50% and 70%, and the probability of winning both companies is between 10% and 20%. Table 29.1 shows the best-case scenario and Table 29.2 shows the worst-case scenario for the outcome.


Table 29.1 Best Case Scenario

Outcome            Profit    Probability    Profit x Probability
Acme only          2.0 M     0.6            1.2 M
Bacme only         2.0 M     0.7            1.4 M
Acme and Bacme     4.0 M     0.2            0.8 M
Total                                       3.4 M

Table 29.2 Worst Case Scenario

Outcome            Profit    Probability    Profit x Probability
Acme only          2.0 M     0.5            1.0 M
Bacme only         2.0 M     0.5            1.0 M
Acme and Bacme     4.0 M     0.1            0.4 M
Total                                       2.4 M

If the data are correct, the project is expected to generate between $2.4 M and $3.4 M in profit from the new customers, against the one-time project expense of $2.0 M.
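In other words, expected profit is the sum of (profit x probability) over the possible outcomes. A short calculation, shown here with the best-case figures from Table 29.1 (the worst case works the same way with the Table 29.2 probabilities); the Python representation is illustrative only:

    # Expected profit = sum of each outcome's profit times its probability (best case, Table 29.1).
    outcomes = [
        ("Acme only", 2.0, 0.6),        # (outcome, profit in $M, probability)
        ("Bacme only", 2.0, 0.7),
        ("Acme and Bacme", 4.0, 0.2),
    ]

    expected_profit = sum(profit * prob for _, profit, prob in outcomes)
    print(f"Expected profit: {expected_profit:.1f} M")   # 3.4 M, matching the Table 29.1 total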

29.2 SWOT Analysis


A strengths, weaknesses, opportunities, and threats (SWOT) analysis is a good tool to help assess the risk associated with a project. Strengths and weaknesses are internal to a company, while opportunities and threats are external. When analyzing the results of a SWOT, the key challenge is determining how to use the company's strengths, and improve its weaknesses, to take advantage of the opportunities while reducing or eliminating the threats.

For example, a consulting company is considering expanding its product line to include Six Sigma training. After a brainstorming session, the SWOT analysis in Table 29.3 was produced.

Table 29.3 SWOT Analysis

Strengths:
Subject matter experts are on board.
Funding is available for new staff as needed.
A web site is established.

Weaknesses:
Subject matter experts lack practical experience.
No PhDs are on board.
No administrative staff are on board.

Opportunities:
Central location in the nation.
Airport is accessible.
The number of Six Sigma students is increasing.

Threats:
Many firms already offer Six Sigma training.
Cost may be prohibitive for some students.
Some companies train employees internally.

29.3 Feasibility Study


A feasibility study is an investigation to determine the viability of a proposed project by assessing its cost, value, and risk. It attempts to identify the strengths and weaknesses of an existing business
or proposed venture and the opportunities and threats present in the environment, the resources
required, and the prospects for success.

29.4 Unintended Consequences
In most organizations, individual processes combine to make up a complete system. When a process within the system is improved, care must be taken that the improvement does not harm the other processes in that system.

For example:

1. A stamping process was improved to increase productivity by 20%. However, the increased
wear and tear on the equipment resulted in increased maintenance costs and downtime, which
negated the productivity increases.

2. A new process to load product into boxcars resulted in two forklift truck accidents.

3. Farmer Jones lives on a creek upstream from Farmer Smith. Farmer Jones decides to dam
up the creek so his cattle can have more water, which reduces the creek below the dam to a
trickle, and Farmer Smith soon runs out of water for his sheep.

29.5 Failure Mode and Effects Analysis (FMEA)


Failure Mode and Effects Analysis (FMEA) is a method traditionally used to assess the risk in processes and products. There are two main types: design failure mode and effects analysis (DFMEA) and process failure mode and effects analysis (PFMEA).

FMEA is an offshoot of Military Procedure MIL-P-1629, issued November 9, 1949. The method was formally developed and applied by NASA in the 1960s to improve and verify the reliability of space program hardware, and it was later adopted by the aerospace and automotive industries. It has since spread to other sectors, including food processing and healthcare.

FMEAs attempt to answer these questions: What could possibly go wrong? How bad will it be if it does go wrong? How often will it go wrong? Can it be detected before it goes wrong?

One important factor of an FMEA is timeliness. It is meant to be a "before the event" action, not an "after the event" exercise. An FMEA should be a preventive action, not a corrective one.

FMEAs should be designed to be as simple as possible, but they can still take a lot of time to complete and require considerable knowledge of how the system operates.

FMEAs are living documents that must be followed up and updated as necessary. They are meant to be
used, not filed away.

FMEAs are a great way to identify potentially critical X's for an LSS project. Most LSS material places FMEAs in the Improve phase, but they can also be done in the Define phase, where the process is fully investigated and understood and a potential X bucket is being filled.


QI Macros contains an FMEA template. This complex template is used in the automotive industry and is a fine guideline if fully understood and followed rigorously. It is, however, better suited to the manufacturing sector than to services. Tables 29.4 and 29.5 are examples of simple generic FMEA forms.

Table 29.4 Example of a PFMEA Form

Process step: Admit patient
Potential failure mode: Allergies not listed
Potential effect(s) of failure: Patient reaction to drugs during the surgery
Severity (S): 9
Potential cause(s) of failure: Incorrect patient history
Occurrence (O): 2
Current process controls: Checklist; Patient Care Plan
Detection (D): 5
RPN: 90
Actions recommended: Modify electronic checklist so that the form cannot be completed until all blanks are filled in.
Responsibility: Registration Manager
Actions taken: Checklist modified
Revised S / O / D / RPN: 9 / 1 / 2 / 18

Table 29.5 Example of a DFMEA Form

Product part: Car door
Potential failure mode: Rust in lower panel
Potential effect(s) of failure: Rusted-out car door
Severity (S): 5
Potential cause(s) of failure: Not enough seal applied to lower edge
Occurrence (O): 6
Current controls: Test vehicle
Detection (D): 5
RPN: 150
Actions recommended: Add lab accelerated testing and change seal amount
Responsibility: Design manager
Actions taken: Checklist modified
Revised S / O / D / RPN: 5 / 3 / 3 / 45

29.5.1 FMEA Work Instructions


1. Define the process via a process map or VSM.

2. Identify the basic process steps from the process map or VSM and list them on the FMEA form.

3. Brainstorm and list potential failure modes and effects for each process step.

4. Assign severity, occurrence, and detection ratings.

5. Calculate the RPN and prioritize them to form the critical list.

6. For the critical items, list the current process prevention controls and current detection devices.

7. Select those items which received a high RPN and provide corrective action, then reassess the severity, occurrence, and detection factors to obtain a new RPN.


29.5.2 FMEA Key Rating Terms


Severity (S): Rating of the impact of a particular failure mode on the output.

Occurrence (O): Rating of how frequently the failure mode occurs.

Detection (D): Rating of ability to detect the failure mode.

By multiplying the three ratings (S x O x D) together, the Risk Priority Number (RPN) is determined.
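The arithmetic is easy to script. The sketch below is illustrative only; it uses the two example rows from Tables 29.4 and 29.5, and the data structure is an assumption rather than part of any standard FMEA template.

    # RPN = Severity x Occurrence x Detection; higher RPNs are addressed first.
    failure_modes = [
        {"item": "Admit patient: allergies not listed", "S": 9, "O": 2, "D": 5},
        {"item": "Car door: rust in lower panel",       "S": 5, "O": 6, "D": 5},
    ]

    for fm in failure_modes:
        fm["RPN"] = fm["S"] * fm["O"] * fm["D"]

    for fm in sorted(failure_modes, key=lambda f: f["RPN"], reverse=True):
        print(f'{fm["item"]}: RPN = {fm["RPN"]}')   # 150 and 90, as in Tables 29.5 and 29.4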

29.5.3 Rating Criteria Example

Severity: 1, 4, 7, 10
1 = Minor. No noticeable effect.
4 = Moderate. Causing dissatisfaction. Some system degradation. Chance of minor injury.
7 = Critical. Causing high degree of dissatisfaction. Loss of system function, or major injury.
10 = Catastrophic. Failure may cause death. Extended repair outages. Loss of a customer.

Occurrence: 1, 4, 7, 10
1 = Unlikely. Unreasonable to expect this failure mode to occur.
4 = Sporadic. Past history indicates occasional failures: 1 per 1,000.
7 = Conceivable. Past history indicates problems occur from time to time: 1 per 100.
10 = Recurrent. Past history indicates failures will occur with regularity: 1 per 10.

Detection: 1, 4, 7, 10
1 = Very easy to catch before it occurs. Almost always preceded by a warning.
4 = Moderate probability of detecting the failure before it occurs. 50% chance of a warning.
7 = Low probability of detecting the failure before it occurs. Little chance of a warning.
10 = Remote probability of detecting the failure before it occurs. No warning at all.

29.5.4 FMEA Examples


Table 29.4 provides a partial example of a PFMEA for step one (admit patient) of a medical procedure
process. Table 29.5 provides a partial example of a DFMEA for the design of a car door. The
explanations for the columns are as follows:

◆◆ Process step: what is the process step under review?

◆◆ Product part or component: what is the product or product part under investigation? (for
DFMEA only)

◆◆ Potential failure mode: what can go wrong with the key input?

◆◆ Potential effect(s) of failure: what are the effects if the key inputs are wrong?

◆◆ Severity: what is the severity of the impact? Scale ranges from no impact (low) to loss of life or
customer (high).

◆◆ Potential cause of failure: what are the causes if the key inputs are wrong?

◆◆ Occurrence: quantifies how often the failure mode occurs. Scale ranges from very unlikely
(low) to very likely (high).


◆◆ Current process controls: what is in place in the current state of the process to prevent the
failure mode?

◆◆ Detection: quantifies the ability to detect the failure once it has occurred. The scale ranges
from almost certain (low) to not possible (high).

◆◆ Risk Priority Number (RPN): severity x occurrence x detection. Used to prioritize the effects
of the failure modes.

◆◆ Actions recommended: what can be done to correct the issue?

◆◆ Responsibility: who is responsible for the corrective action?

◆◆ Actions taken: what corrective actions were actually taken?

◆◆ Severity: revised after corrective action was taken.

◆◆ Occurrence: revised after corrective action was taken.

◆◆ Detection: revised after corrective action was taken.

◆◆ RPN: recalculated after corrective action was taken.


Part IX: Control Phase of DMAIC

The purpose of the Control Phase is to create and implement a process control plan, a standardized
process, documented procedures, and a response plan. Transferring ownership back to the process
owner is the final step.

Much of the Control Phase is about standardization, which is the step that enables long-term
high quality production of goods and services on a reliable, predictable, and sustainable basis.
Standardization is making sure that important elements of a process are performed consistently in the
most effective manner and that changes are made only when the data show that a new alternative is
better. Using standard practices will provide the following contributions to the success of the project:

1. Reduce variation among individuals or groups and make a process output more predictable.

2. Provide “know-why” for operators and managers now on the job.

3. Provide a basis for training new people.

4. Provide a trail for tracing problems.

5. Provide a means to capture and retain knowledge.

6. Give direction in the case of unusual conditions.


Chapter 30: Process Control Planning

Key Terms
control plan
process audits
process metrics
statistical process control

Body of Knowledge
1. Develop a control plan.

2. Identify the key elements of process audits.

3. Describe and apply statistical process control techniques.

4. Explain the purpose of process metrics.

Planning, developing, implementing, and maintaining process controls is an important step in sustaining the improvement gains realized in an LSS project. Process controls include control charts, process capability studies, control plans, process audits, and process metrics.

30.1 Statistical Process Control (SPC)


Once a process has been improved, it must be monitored to make sure the gains are sustained. Process
controls are implemented in the form of SPC tools and techniques to prevent any backsliding that may
occur.

SPC controls take two forms:

Control charts are used to monitor process stability, determine when a special cause is present, and indicate when to take action. Control charts are the voice of the process. Control charts may be used, when appropriate, on the critical inputs (X), the outputs (Y), or both. Control charts are discussed in Chapter 22.

Process capability studies determine how the voice of the process (control limits) compares with the
voice of the customer (specification limits). They indicate when the process average should be shifted
toward the center or when there is too much variation. Capability studies may be accomplished, when
appropriate, on the critical inputs (X), the outputs (Y), or both. Capability studies are discussed in
Chapter 23.
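As a rough sketch of how these two voices are computed, the example below uses invented readings and an individuals (moving range) chart for simplicity: the control limits come from the process variation, while Cpk compares that variation with the specification limits.

    # Minimal sketch (assumed data): individuals-chart control limits vs. specification limits.
    import statistics

    data = [10.1, 9.9, 10.0, 10.2, 9.8, 10.05, 9.95, 10.1, 9.9, 10.0]  # recent output readings (assumed)
    LSL, USL = 9.4, 10.6                                               # customer specification (assumed)

    mean = statistics.mean(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    sigma_hat = statistics.mean(moving_ranges) / 1.128                 # d2 constant for a moving range of 2

    UCL = mean + 3 * sigma_hat    # voice of the process
    LCL = mean - 3 * sigma_hat

    cpk = min(USL - mean, mean - LSL) / (3 * sigma_hat)                # process vs. specification

    print(f"Control limits: {LCL:.2f} to {UCL:.2f}  Spec limits: {LSL} to {USL}  Cpk = {cpk:.2f}")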

30.2 Control Plans


Control plans (see Figure 30.1) are documents that identify critical input and/or output variables and the associated activities that must be accomplished to maintain control of the process, product, and/or service to be produced. Control plans come in many forms and may be called by many names. They
may consist entirely of text, pictures, or a combination of both. Control plans may be relatively simple
or very complex. This all depends upon the process, product, and/or service to be delivered. Control
plans must be readily accessible to the appropriate employees.

[Figure 30.1 is a blank Six Sigma process control plan form. Its header block contains fields for Process Name, Customer, Document #, Location, Revision Date, Area, Prepared By, Approved By, Page, and Supersedes. Each row of the form covers one controlled characteristic, with columns for Sub Process Step, KPOV, KPIV, CTQ, Specification/Requirement (LSL, USL), Measurement/Method, Sample Size, Frequency, Who Measures, Where Recorded, Decision Rule/Corrective Action, and SOP Reference.]

Figure 30.1 Six Sigma Process Control Plan

A well-documented control plan should include documentation for process set-up, process and
product/service measuring and monitoring, process control, and process troubleshooting. A team
approach should be used when developing and implementing control plans.
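As an illustration only, a single control plan row could be captured as a simple record; the field names loosely follow the columns sketched in Figure 30.1, and all of the values are invented.

    # Hypothetical control plan entry, mirroring the columns sketched in Figure 30.1.
    control_plan_row = {
        "sub_process_step": "Final seal application",
        "kpov_kpiv": "Seal bead width (KPOV)",
        "ctq": "Leak-free door assembly",
        "specification": {"LSL": 4.0, "USL": 6.0, "units": "mm"},
        "measurement_method": "Optical gauge at station 12",
        "sample_size": 5,
        "frequency": "Once per shift",
        "who_measures": "Line operator",
        "where_recorded": "SPC database",
        "decision_rule": "If out of control, stop the line and notify the supervisor",
        "sop_reference": "SOP-114 (hypothetical)",
    }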

The key elements of a control plan include:

1. Employee training plan which includes safety and environmental training


2. Documentation
a. Detailed description of the process to be controlled
b. Value stream maps and process maps as appropriate
c. Operating procedures, work instructions and visual controls
d. List of variables to be measured and controlled, testing frequencies, and testing procedures
e. Description of methods, techniques, and tools used to obtain data
f. Checklists
g. Auditing requirements
3. Measuring and monitoring plan
a. Statistical process control plan including metrics
b. Control charts and process capability studies
c. Calibration program and measuring system analysis plan
d. Other measuring and monitoring requirements as applicable
4. Response plan and contingency plan
5. FMEA


30.3 Process Audits


Periodic process audits are a good way to ensure that the process is operating in the manner in which it was intended. A process audit compares findings from a series of interviews and documentation reviews against the existing work instructions, procedures, and control plans.

A process audit answers the following questions:

Are the policies and procedures followed?


Are customer and regulatory requirements met?
Are the required records present, complete, and correct?
Are the improvements being sustained?
Can the improvements be found?

30.3.1 LSS Project Audit Work Instruction


1. Schedule the audit.

2. Prepare for the audit by reviewing the project charter. List the employees that must be
interviewed. Prepare a checklist.

3. Conduct the audit incorporating employee interviews, visual inspection of the process, and
review of documents. Look for other possible improvements that may be made to the process.

4. Review findings with the team.

5. Write and submit the audit report to management.

6. Initiate corrective actions as directed by management.

7. Follow up on corrective actions as necessary.

Process audits are regularly scheduled and include an audit checklist and an audit report that documents the results. Often, the employees who work in the process are trained in the auditing process so that they can perform the audits themselves.

30.3.2 Process Audits Interviews


Interviewing people as a method to gather data can be a daunting task. Who are the players? What do they know? What do they do? What instructions are they supposed to follow? What do they write down? How were they trained?

Some people like to talk about their jobs and the processes that take place where they work. Others may be more reluctant for a number of reasons, such as fear of being misrepresented or misquoted. They might also just be shy. Even someone who is happy to be interviewed is not necessarily easy to interview. Part of the interviewer's job is to put the interviewee at ease, get the interview rolling, and keep the conversation on track without making the person feel manipulated or ignored.


Here are some things to know in order to conduct successful interviews.

1. Have a plan.
Write your questions down. Have a checklist and use it as a guideline.

2. Pay attention.
Be interested. Listen. Display respect. Good interviewers do not just ask questions, they listen to the
answers. It is the answers that may guide the next question. If they seem to have completed that line of
thought, then move on; otherwise, press them for more information. Good follow up questions lead to
good interviews.

3. Clarify to understand.
Tell the interviewee when you do not understand something. Do not pretend to know everything.

4. Ask precise questions.


The questions should show the interviewer's knowledge of the subject and should help reach the goals set for the interview. Start with open-ended questions like "Can you show me what you do?" or "Can you tell me about your process?" Make them think and let them talk. More specific questions can be asked if they start to wander off the subject or if the checklist of intended questions is not being addressed.

5. Take time outs.


The interviewer's notes are the record of the interview. If the information is coming too fast, call a time-out to organize the notes. Be sure to tell the person what has been written down so there are no surprises.

6. There will be no confrontations.


There will be times when the interviewee will be evasive, untruthful, angry, or will have a reason to be
cautious about the interview. Do not enter into a confrontation. Instead, work with the person. Back
off, and give them some questions that they can answer or feel more comfortable answering, and they
may drift toward what they do not want to talk about. Perhaps the offending question can be tactfully
addressed later in the interview. If all else fails, thank them and conclude the interview.

Interviewing people is often a difficult task, but it is an essential one in the data collection process.
What do you see, hear, and sense? What are the documents, records, forms, and work instructions to
review? Take pictures, if appropriate.

30.4 Process Metrics


There are many process metrics that may be included in the control plan. These metrics encompass
whatever is important to the customer and the organization. They include process flow metrics (see
Chapter 14), such as task time, cycle time, and inventory levels, and process performance metrics (see
Chapter 23) such as scrap, error, and rework rates.


Chapter 31: Project Closure



Key Terms
document control procedure
lessons learned
training plan deployment

Body of Knowledge
1. Document and communicate lessons learned from projects.

2. Develop and implement training plans.

Closing out a project is much more than documenting the completion of tasks and summarizing project performance. Lessons learned that may prove helpful in future projects must be captured, and the team should investigate how the improvements may be promulgated throughout the organization's other processes and systems. Furthermore, future training of the personnel who work in the improved process must be addressed, as training is essential to sustaining the gains made during the project. Finally, whenever possible, the successful completion of one project should be the springboard to the next project.

31.1 Lessons Learned


The lessons that were learned during a project must be documented for future reference. Additionally,
how the improvements can be spread across an organization to other processes and systems also must
be investigated. This should be the last section of a project charter. A project should not be officially
closed out until this section is completed with the appropriate documentation and approval signatures.

Part of the Control phase of DMAIC is determining where to capture all of the relevant project information. Organizations should enforce a requirement that no project may be closed out until the relevant information and lessons learned are captured and stored in a company repository, which should be managed and maintained by a designated individual. Once stored, the data may be searched and reviewed when deciding upon new projects or replicating a past success. For example, tools such as value stream maps and FMEAs may be used again in future projects after being updated. Therefore, the project information must be stored in a legible form that is readily accessible to the organization, which can be accomplished electronically or by distributing paper copies.

31.2 Training Plan Deployment 


Training the personnel who work in the improved process is key to the sustainability of an LSS project.

Employees who worked in the old process must now be trained in the new process as well as employees
who are new to this process. This training may be on-the-job, classroom training, or a combination of


both. The length of this training depends on the complexity of the process and the amount of change that occurred during the improvements. In any case, the skills, knowledge, and ability of all affected employees must be sufficient to allow them to work comfortably in the improved process.

The employees should be consulted when developing and implementing the training plan. This participation often encourages employees to engage in the training, which, in turn, helps sustain process improvements in their area.

31.3 Documentation
Training is often supplemented with visual controls, work instructions, or procedures. This is known
as “standard work” in Lean manufacturing. The purpose of these documents is to ensure that the work
can be practically performed the same way over time.

Document Control Procedure

Documents must be controlled and kept current, with the date issued, revision number, and authorizing signature present on each document. Documents may be paper copies or in electronic form. They must be approved for adequacy prior to use, must be legible, and must be readily accessible to the employees who use them. Obsolete documents must be destroyed unless kept for special purposes; retained obsolete documents must be identified as such and safeguarded in a manner that precludes their unintended use.

31.4 After Project Closure


After closing out one project, it is time to start another. Quality improvement is a lot like pushing a rock up a hill: if you stop pushing, the rock may roll back down, and it may take you with it. Therefore, persistence in making improvement efforts is needed, and it is good practice to always have an improvement project in progress. Remember that every LSS project is different. Equipment will change. People will change. The tools will change.

Below are some tips for those taking the LSS journey.

1. Eliminate Non-Valuable Items


Eliminate. Reduce. Consolidate. Minimize. Make the process easier to perform and understand. Simplifying the process will decrease errors. If the number of steps in the process is reduced from ten to eight, no errors can be committed in the steps that were eliminated.

2. Deliver Quick Wins


Even in the middle of a DMAIC project, it is important to know how to identify Quick Wins. Quick Wins are solutions that may come along at any time during the DMAIC process. Quick Wins should be considered only if they can be implemented easily and inexpensively and everyone involved in the process is in agreement. Quick Wins should be documented in the project charter along with the financial gains related to them.


3. Data Matters
Data identifies what needs to be done. Sometimes, deep analysis is not needed, but sometimes it is
necessary. The trustworthiness of the data must be verified.

4. Cast your Suspect Net Far and Wide
Even as the Define stage is initiated, potential causes of a problem may be discovered and they should
be documented immediately. All suspects (inputs) should be considered.

5. Training is Very Rarely a Root Cause


Training will show up on everyone’s initial suspect list. It almost always is a contributing factor but not
a root cause.

6. Get your Measurement Systems Right


Is the data accurate? Is the data precise? A gage R&R should be performed for variable data, or an Attribute Agreement Analysis for attribute data. In the rare cases when the team feels a measuring system analysis (MSA) is not applicable, the decision and the reason why should be documented in the charter.

7. Establish a Robust Project Selection Process


LSS programs will be judged by their results. The problem must be related to a key business issue, and
the improvements must be linked to financial performance. The problem must be linked to a defined
process and the key internal and external customers must be identified. Defects/errors must be clearly
defined and their occurrence measured.

8. People Matter
LSS practitioners work with both people and equipment and should possess the ability to lead, work
with teams, be part of a team, and understand team dynamics (forming, storming, norming, and
performing).

9. Follow the Yellow Brick Road


The DMAIC process is the road to success with LSS projects.

10. Sometimes You Will Lose


No one leads winning projects all of the time. When a project does not turn out as expected, losses should be cut, whatever gains are possible should be taken, and the project should be closed out. Finally, the lessons learned should be recorded and preparation for the next project should begin.

The skills, knowledge, and ability of LSS practitioners should improve with each passing project, but
they should consider working on the following self-improvement items:

Communication and writing skills: LSS practitioners should have the ability to communicate
effectively with all levels of the organization.

Project management: LSS is all about projects and sustaining the improvements.


Teaching/coaching: Training is a huge part of any LSS endeavor; and LSS practitioners are a part of
the training team.

Team experience/team building/conflict resolution/leader: LSS projects are built around teams, not
just one person.

Data driven/math skills: It is not necessary to be a statistician, but sufficient knowledge of the
DMAIC process and the LSS tools is needed in order to know when and how to conduct the process
and the proper way to analyze the results.


Microsoft, Word, and Excel are either registered trademarks or trademarks of Microsoft Corporation in the United
States and/or other countries.

QI Macros is a trademark of KnowWare International, Inc.

PMI and PMBOK are registered marks of the Project Management Institute, Inc.

MINITAB® and all other trademarks and logos for the Company’s products and services are the exclusive property
of Minitab Inc. All other marks referenced remain the property of their respective owners. See minitab.com for more
information.
