Objectives
To understand the term quality in general and its relationship with
statistics.
To achieve quality improvement by reducing variation and by following systematic quality improvement methods such as the Shewhart-Deming cycle and the Six Sigma cycle.
To understand the importance of statistical thinking and the role of several
simple statistical tools for use at the shop floor to engage everyone in an
organisation to improve quality.
To understand the distinction between common and special causes of variation, formation of rational subgroups, and avoidance of process tampering.
To appreciate the role of Design of Experiments (DOE) for reducing common cause variation and measurement of process capability.
To understand the methodology of Shewhart control charting for process
monitoring.
To implement X̄, R and S variables control charts for Phase I and Phase II, and understand their construction.
To implement p and c control charts, and understand their construction.
To comprehend the role of sampling inspection for product assurance,
Operating Characteristic curves and quality levels.
Quality and its management played a crucial role in human history. Managing
quality was important even for ancient civilisations. Standardisation was recognised as the first step towards quality. In ancient Rome, a uniform measurement
system was introduced for manufacturing bricks and pipes; and building regulations were in force. Water clocks and sundials were used in ancient Egypt and
Babylon (15th century BC) even though they were not terribly accurate. The
Chinese Song Dynasty (10th century) even mandated the control of shape, size,
length, and other quality factors of products in handicrafts using measurement
tools such as carpenter's squares.
The industrial revolution began in the United Kingdom during the 18th
century and then extended to the US and other countries. Quality became harder to manage with the advent of mass production. Mass production was achievable
by the division of labour and the use of machinery. In such a production line,
workers performed repetitive tasks in a cooperative way using machinery. This
resulted in huge productivity gains. But the number of factors and variables
affecting the quality of a product in a mass production line were also numerous
when compared to the production of a single item by an artisan who did all work
from start to end. Division of labour for mass production also took away the
pride of workmanship. Hence quality suffered in the production line and quality
monitoring became an essential activity. Due to mass manufacture, engineers
were forced to look beyond using standardised measurements. The causes of
quality variation were numerous and hence statistical methods were needed for
quality monitoring and assurance.
Prof. Walter Shewhart and Harold Dodge implemented statistical methods for quality in the mid-1920s in the USA. The Second World War was the main catalyst for the extensive use of statistical quality control methods for improving America's wartime production. Certain statistical methods were even classified as military secrets. Dr. Kaoru Ishikawa, a well-known Japanese quality philosopher, speculated that the Second World War was won by quality control and by the utilisation of statistical methods. Western industries could not sustain their achievements in quality, mainly due to failures of management. The Japanese success on the quality front in the latter half of the twentieth century can be partly attributed to the wider use of some simple statistical tools together with more advanced ones such as experimental designs. A word of caution: quality problems can only be partly solved by statistical methods. For achieving excellence in quality, company-wide participation, customer focus, good management and the like are also important. In the last three decades, many companies in both developed and developing countries have embraced the concept of total quality management, which evolved from a humble beginning, namely the use of simple statistical tools on the shop floor.
2.1 What is quality?

Quality may be understood as the totality of features and characteristics of a product or service that bear on its ability to satisfy given needs.
Suppose that we would like to understand the quality of a brand of blackcurrant nectar bottled by a company. Let us suppose that the following features
and characteristics are identified:
Features Colour, appearance, smell, flavour, packaging and labelling etc
Characteristics Relative density at 20°C, content of wine acid, alcohol, acetic acid, vitamin C, microbial attributes etc
How to measure the above? Let the analytical measures of the characteristics
and features, called quality measures, be as given below:
1. Colour rating scale 1 to 5 (5 being excellent)
2. Appearance rating scale 1 to 5 (5 being excellent)
3. Smell rating scale 1 to 5 (5 being excellent)
4. Flavour rating scale 1 to 5 (5 being excellent)
5. Density measurement by laboratory methods
6. Acid content measurement by laboratory methods
7. Alcohol measurement by laboratory methods
8. Vitamin C measurement by laboratory methods
9. Packaging etc visual inspection
The totality of the above features and characteristics is expected to satisfy
the needs. Hence a list of customer needs should be identified by surveying the
customers. Certain characteristics such as microbiological characteristics partly
represent societal needs. How far the product features and characteristics meet
these needs determines the quality of the blackcurrant nectar.
2.2
Quality philosophers such as Dr. Genichi Taguchi define quality as the loss a
product causes to society after being shipped, other than any losses caused by
its intrinsic function. Variation from the target of a quality characteristic may
be caused by the uncontrollable factors known as noise, such as
Outer noise due to humidity, temperature, vibration, dust etc.
Inner noise due to wear and deterioration.
In-between noise due to the material, worker etc.
The strength of these noises largely determines the amount of variability
from the target and directly impacts on controllable process factors or parameters such as increasing or decreasing speed, temperature etc. Variability can
be defined and understood only in statistical terms. Hence the use of statistical
methods becomes important for reducing the variability or improving quality.
Montgomery (1996, p.4) defines quality as being inversely proportional to variability. In other words, quality improvement requires reduction of the variability in processes and products.
2.2.1 Robust design
results whether the water is hard or soft and cold or hot. Robust experimental
designs identify the optimum mix of controllable factor levels which produces a
response robust to external noise factors.
2.2.2 Statistical process control
Statistical process control (SPC) is the methodology for monitoring and optimizing the process output, mainly in terms of variability, and for judging when
changes (engineering actions) are required to bring the process back to a state
of control. This strategy of control differs from the engineering process control
(EPC) where the process is allowed to adapt by automatic control devices etc.
In other words SPC techniques aim to monitor the production process while
EPC is used to adjust the production process.
2.2.3
Sampling Inspection
Prof. Walter Shewhart (1931), who invented the control chart technique and is regarded as the father of SPC, proposed the following three postulates from an engineering viewpoint:
1. All chance systems of causes are not alike in the sense that they enable
us to predict the future in terms of the past.
2. Systems of chance causes do exist in nature such that we can predict the
future in terms of the past even though the causes be unknown. Such a
system of chance is termed constant.
3. It is physically possible to find and eliminate chance causes of variation
not belonging to a constant system.
The above three postulates may appear unclear in the first reading. The
following paragraphs explain them, and then show how they lead to SPC and
other procedures.
A production process is always subjected to a certain amount of inherent
or natural variability caused by a number of process and input variables. This
stable system of chance causes, known as common causes, belongs to the process.
3.1 Common and special causes of variation
Although both common and assignable causes create variation, common causes
contribute to controlled variation while assignable causes contribute to uncontrolled variation. Shewhart explained the term controlled variation as follows.
A phenomenon will be said to be controlled when, through the use of past experience, we can predict, at least within limits, how the phenomenon may be
expected to vary in the future. Here it is understood that prediction within limits
means that we can state, at least approximately, the probability that the observed
phenomenon will fall within given limits. In other words, a general probability law will apply when the process is subjected to only common causes. In
particular we can state that the current state-of-the-art production involves a
constant amount of variability due to common causes.
3.2
Process Tampering
Common cause variability is often the result of uncontrollable variables representing the current state of the art of production. If the production process is tampered with through unnecessary interventions, without understanding the nature of the variability permitted by the common causes, then the variability in the quality characteristic will actually increase. It is important that unnecessary process interventions, such as tool changes, do not become another source of special cause variation. Deming used to demonstrate this concept using a demonstration called the funnel experiment, which is briefly described below.
As shown in Figure 2, a funnel is mounted on a stand and the spout is
adjusted towards a target. A marble is then dropped through the funnel and
the final resting position is noted. The distance between the target and the final
resting place represents the random variation. Let us suppose that we do not
adjust the funnel position and simply drop the marble several times and note the
resting positions representing the random variation. Let us also consider certain
additional rules or strategies which represent process intervention or adjustment
actions. A set of such rules used by Deming for his funnel adjustment including
the strategy of not adjusting the funnel (called Rule 1) are summarised below:
Rule 1 The funnel remains fixed, aimed at the target.
Rule 2 Move the funnel from its previous position a distance equal to the
current error (location of drop), in the opposite direction.
Rule 3 Move the funnel to a position that is exactly opposite the point where
the last marble dropped, relative to the target.
Rule 4 Move the funnel to the position where the last marble dropped.
Figure 3 shows the variability involved with the 4 rules using 400 simulated
standard normal random variables (X,Y) targeted at (0,0). It is clear from the
Deming's funnel demonstration that intervention in a production process should
not be made unnecessarily if the process is already stable or in control. The
process stability must be monitored statistically (based on probability laws). If
this is done, then the variation in the process output is reduced by avoiding
unnecessary process interventions or process tampering.
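The effect of the four rules is easy to check by simulation. The following is a minimal sketch in Python (the function and variable names are ours, not from the text): it drops a marble 400 times under each rule, in one dimension, and prints the spread of the resting positions.

import random

def funnel(rule, n=400, seed=1):
    """Resting positions of n marble drops under one of Deming's funnel rules."""
    rng = random.Random(seed)
    target, aim, drops = 0.0, 0.0, []
    for _ in range(n):
        drop = aim + rng.gauss(0, 1)        # aim point plus common cause noise
        drops.append(drop)
        error = drop - target
        if rule == 2:                        # compensate for the last error
            aim -= error
        elif rule == 3:                      # opposite the last drop, about the target
            aim = target - error
        elif rule == 4:                      # aim at the last drop
            aim = drop
        # rule 1: leave the funnel alone
    return drops

def sd(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

for r in (1, 2, 3, 4):
    print("Rule", r, "standard deviation of resting positions:", round(sd(funnel(r)), 2))

Rule 1 gives the smallest spread; Rule 2 inflates the standard deviation by roughly a factor of √2, while under Rules 3 and 4 the resting positions wander further and further from the target.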
3.3
Shewhart-Deming cycle
Act Based on the learning in the earlier phase, action is taken to implement
the improvement strategy on a large scale. No improvement is final in
nature and hence the cycle restarts at the first (Plan) phase.
[Figure: The Plan-Do-Check-Act (Shewhart-Deming) cycle, whose outputs are intended for customers]
4.1
In the mid-1980s, Motorola Corporation faced stiff competition from competitors whose products were of improved quality. Hence a resolution was made to
improve the quality level to 3.4 DPMO (defects per million opportunities) or
below. That is, the resolution was to keep the process variation one-sixth of
the variation allowed by the upper or lower specifications. The low defect level
was achieved by the Six Sigma process management model in Motorola. The
Six Sigma model can be viewed as an improvement over the Shewhart-Deming
cycle. This management methodology is highly data driven and involves the
following steps summarised by the acronym DMAIC (define, measure, analyze,
improve, control).
Define Identify the process or product that needs improvement. Benchmark
with key product or process characteristics of other market leaders.
Measure Select product characteristics (dependent variables); Map the processes; Make necessary measurement and estimate the process capability.
A methodology known as the Quality function deployment (QFD) is used
for selecting critical product characteristics.
Analyse Analyze and benchmark the important product/process performance
measures. Identify the common factors for successful performance. For
analyzing the product/process performance, various statistical and basic
quality tools will be used.
Improve Select the performance characteristics which should be improved.
Identify the process variables causing the variation, and perform statistically designed experiments to set the improved conditions for the key
process variables.
Control Implement statistical process control methods. Reassess the process
capability and revisit one or more of the preceding phases, if necessary.
Statistical thinking and tools play an important role in all the DMAIC
phases. It is important to note that the variables affecting quality must be identified and experimented upon, and the improvements achieved must be held.
Quality is the business of everyone in an organisation. Hence employee
training in the use of technical tools and problem solving is an integral part of the Six Sigma quality management model. Trained employees were even given martial arts titles, black belt and master black belt, depending on their skill levels and experience! You may be surprised to know that Motorola saved several billion dollars using the Six Sigma methodology. Since the early nineties, the Six Sigma methodology has been adopted by many multinational companies for achieving quality and hence profitability.
To encourage statistical thinking on the shop floor, and to train employees in management methodologies such as Six Sigma, several EDA tools are used. Simple
EDA tools such as histograms, scatter plots, boxplots etc are extremely useful
for understanding a production process.
4.1.1 Histogram
[Figure: Histogram of the dimension data, showing density against dimension over roughly 1.90 to 1.94 cm]
Specification limits are limits defined to represent the extreme possible values
of a quality characteristic for conformance of the individual unit of the product.
For example the minimum and maximum for the dimensional characteristic
may be externally fixed as 1.91 cm and 1.93 cm respectively. We will call the minimum value the lower specification limit (LSL) and the maximum value the upper specification limit (USL). A quality characteristic may have only a
single specification limit (LSL or USL) or both. Specification limits are fixed on
technical grounds and the actual production should be well contained within the
specification limits to prevent production of nonconforming or defective items.
Histograms are therefore useful to graphically assess whether the production
process is capable of meeting the specifications. The above histogram shows that
the process is not meeting the LSL and the USL conditions and a good fraction
of the production must be nonconforming. This high level of nonconformance
calls for variability reduction. Separate histograms for each head with overlaid
specification limits may be drawn. They will be useful to graphically assess the
process capability of each head.
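Such a histogram with overlaid specification limits is simple to produce in software. The sketch below is a minimal Python illustration (it assumes numpy and matplotlib are available and uses simulated data in place of the measured dimensions; the limits 1.91 cm and 1.93 cm are those of the example above).

import numpy as np
import matplotlib.pyplot as plt

LSL, USL = 1.91, 1.93                      # specification limits (cm)

rng = np.random.default_rng(0)
dimension = rng.normal(1.922, 0.008, 200)  # stand-in for the measured dimensions

plt.hist(dimension, bins=20, density=True, edgecolor="black")
plt.axvline(LSL, color="red", linestyle="--", label="LSL")
plt.axvline(USL, color="red", linestyle="--", label="USL")
plt.xlabel("dimension (cm)")
plt.ylabel("Density")
plt.legend()
plt.show()

# Sample fraction outside the specification limits
print("fraction nonconforming:", np.mean((dimension < LSL) | (dimension > USL)))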
4.1.2
Check Sheet
A check sheet is a simple device for data collection, and summarising all (historical) defect data on the quality characteristic(s) against time, machine, operatives, etc. It helps one identify trends or meaningful results in addition to
its role in proper record keeping. While designing a check sheet, it is necessary
to clearly specify the following:
1. Quality measure
2. Part or operation number
3. Date
4. Name of analyst/sampler
5. Brief method of sampling/data collection
6. All information useful for diagnosing poor quality
The design of a check sheet depends on the requirements of data collection.
For example, the check sheet shown as Figure 7 includes the time of sampling
and the specification limits for a can filling operation.
There is no rigid format for a check sheet, and it can be designed in various ways dictated by our requirements. It can also be designed to record several
factors and responses for industrial experimentation purposes.
4.1.3
Figure 7: Check sheet for a can filling operation

XYZ Industries, PQRS
Can Filler Machine: AAAA    Shift (circle): I II III    Date: ________    Operator: ________
Specification: 991 ml minimum    Target: 1000 ml

Volume (ml)    | Hour 1 | Hour 2 | Hour 3 | Hour 4 | Hour 5 | Hour 6 | Hour 7 | Hour 8 | Total
under 980      |        |        |        |        |        |        |        |        |
980 - 985      |        |        |        |        |        |        |        |        |
986 - 990      |        |        |        |        |        |        |        |        |
991 - 995      |        |        |        |        |        |        |        |        |
996 - 1000     |        |        |        |        |        |        |        |        |
1001 - 1005    |        |        |        |        |        |        |        |        |
1006 - 1010    |        |        |        |        |        |        |        |        |
over 1010      |        |        |        |        |        |        |        |        |

Steps:
1. Check five cans per hour for 8 hours
2. Place a tally mark in the proper box after measurement
Remarks:
Figure 8: A location check sheet, on which defect types (scratches, dirt, thick, thin, bubble) are marked at their positions on a front view of the product.
4.1.4 Pareto diagram
This graphical tool derives its name from the Italian economist Pareto, and was
introduced for quality control by Dr. Juran, a famous quality Guru. Juran
found that a vital few causes lead to a large number of quality problems, while
a trivial many cause only a few problems.
First, the causes are ordered in descending order of the characteristic of interest, which may be the percentage of nonconforming items attributable to each cause, economic
losses, etc. The ranked causes are then shown as bars where the bar height
represents the characteristic of interest. The cumulative percentages are also
shown in the diagram. A sample Pareto diagram is shown in Figure 9. The
Pareto diagram is similar to a bar diagram. In both tools, bars represent frequencies. In a bar chart they are not arranged in descending order, while they
are for a Pareto diagram.
The Pareto chart is also successfully used in non-manufacturing applications
such as software quality assurance. A software package is usually tested for
errors before being released commercially. A software package will consist of
several modules and subroutines written by several persons and upgraded over
time. If the software fails on a test, it is possible to track which of the modules
actually caused the failure. Repeated test results when displayed in the form of
a Pareto chart will identify those modules which are to be simplified by breaking
complex subroutines into smaller ones etc.
[Figure 9: Pareto diagram of error causes (contact number, price code, supplier code, part number, schedule date), showing error frequencies as bars together with the cumulative percentage curve]
4.1.5 Cause and effect diagram
This diagram, also called a fish-bone diagram, was introduced by the Japanese
Professor Dr. Kaoru Ishikawa (hence it is also known as an Ishikawa diagram).
The cause and effect diagram provides a graphical representation of the relationship between the probable causes leading to an effect. The effects are often
the vital few noted in a Pareto chart. The causes are generally due to machines,
materials, methods, measurement, people and environment. The diagram can
also be drawn to represent the flow of the production process and the associated
quality related problems during and between each stage of production. A sample cause and effect diagram is shown in Figure 10 which relates to the cause
for printed circuit board surface flaws.
Very often, it may be necessary to establish a relationship between a characteristic of interest and a major cause quantitatively. If this is done using experimental designs, the cause may be shown enclosed in a box. If only empirical
evidence exists, not fully supported by data, then the cause may be underlined.
The main advantage of a cause and effect diagram is that it leads to early detection of a quality problem. Often, a brainstorming session is required to identify
the sub-causes. The disadvantage of the cause and effect diagram is its inability
to show the interactions between the problem causes or factors.
A cause and effect diagram is often used after a brainstorming session. One
other diagram used to organise the ideas, issues, concerns, etc of a brainstorming
session is the affinity diagram. This diagram groups the information based on
the natural relationships between the ideas, issues, etc. A tree diagram is one
which breaks down a subject into its basic elements in a hierarchal way; it can
be derived from an affinity diagram. The basic objective of these diagrams is
to organise the relationships in a logical and sequential manner.
[Figure 10: Cause and effect diagram for surface flaws, with main branches for measurements, materials, personnel, environment, methods and machines, and sub-causes such as micrometers, microscopes, alloys, lubricants, suppliers, shifts, inspectors, supervisors, training, operators, moisture, condensation, sockets, angle, engager, brake, speed, bits and lathes]
4.1.6
Multi-Vari Chart
This chart is used to graphically display the variability due to various factors.
Multi-vari charting is a quick method of analysing the variation and can be
viewed as the EDA tool prior to advanced (nested) ANOVA model of various
factors.
The simplest form of a multi-vari chart displays the variation over a short
span and a long span of time. For example, five consecutive items are taken
from a grinding operation every half hour and the diameter of the items sampled
is measured. The time taken to produce the five items is the short span of time.
Let us use the range, the difference between the largest and smallest observed
value, as a measure of variability in this short span of time. These range values
can then be shown over the longer period of the study.
If a multi-vari chart indicates instability in either the short or longer term,
the factors causing the instability must be listed and analyzed further. Note
that all production conditions such as change of raw materials and process
interventions such as tool adjustments etc will be noted on the check sheets.
A cause and effect diagram or a brainstorming session will help to list factors
presumably causing such instability.
4.1.7
Run Chart
A run chart is a particular form of a scatter plot with all the plotted points
connected in some way. This chart usually shows a run of points above and
below the mean or median. Run charts are mainly used as exploratory tools
to understand the process variation. For instance, the stability of a production
process can be crudely judged by plotting the quality measure or the quality
characteristic against time, machine order etc. and then checking for any patterns or non-random behaviour (such as trends, clustering, oscillation, etc) in
the production process.
Run           | Factor A | Factor B | Factor C | Mean Compressibility | SD of Compressibility
1             | 18       | 145      | 35       | 147.3                | 4.2
2             | 22       | 145      | 35       | 156                  | 3.8
3             | 18       | 155      | 35       | 153.6                | 5.9
4             | 22       | 155      | 35       | 150.3                | 4.7
5             | 18       | 145      | 45       | 154.5                | 3.6
6             | 22       | 145      | 45       | 160                  | 2.8
7             | 18       | 155      | 45       | 155.8                | 5.3
8             | 22       | 155      | 45       | 167.3                | 5.8
Old settings  | 20       | 150      | 40       | 157.2                | 5.1
Variability in quality characteristics affects quality, and results in nonconforming units when specifications are not met. In other words, the production
process becomes incapable of meeting the specifications. In order to assess
whether a process is capable of meeting the specifications, process capability
indices are defined.
5.1 Cp index

The Cp index is defined as
$$C_p = \frac{USL - LSL}{6\sigma}$$
where USL and LSL are respectively the upper and lower specification limits and σ is the standard deviation of the process characteristic. The six sigma spread of the process is the basic definition of process capability when the quality characteristic follows a normal distribution. If Cp = 1, then the process is just capable of meeting the specifications, see Figure 11. In reality the process standard deviation is estimated and the true distribution may depart from normal. Hence, in order to allow for the sampling variability and other assumption violations, the desired value for the estimated Cp is set at 1.33 for existing processes and 1.5 for new processes. If the estimated index is lower than 1.33, it implies that the process variability is high, and actions must be taken to reduce it.
[Figure 11: A process distribution with Cp = 1, just meeting the specification limits LSL and USL]

If the quality characteristic has only one specification limit, either on the lower or upper side, then the following indices are used:
$$C_{pL} = \frac{\mu - LSL}{3\sigma} \quad \text{(lower specification)}$$
$$C_{pU} = \frac{USL - \mu}{3\sigma} \quad \text{(upper specification)}$$
If the process is centred at μ, then one has Cp = CpL = CpU. For a better idea of process centring, the index defined next will be used.
5.2 Cpk index

The Cpk index takes the process centring into account and is defined as
$$C_{pk} = \min(C_{pL}, C_{pU}).$$
5.3
Cpm index
Consider the following specification requirements, for which two production processes are available:
LSL = 35, USL = 65 and target T = 50 units.
Let the process parameters and the associated process capability indices be:
Process I: μ = 50, σ = 5.0, Cp = 1.0, Cpk = 1.0
Process II: μ = 57.5, σ = 2.5, Cp = 2.0, Cpk = 1.0
The two processes have the same Cpk values, but obviously the second process is not on target. For Process II, it can be observed that the index Cp is not equal to Cpk. To have a better indicator of process centring at the desired target, the following index is used:
$$C_{pm} = \frac{USL - LSL}{6\tau}$$
where τ is the square root of the expected squared deviation from the target T, namely
$$\tau^2 = E(X - T)^2 = E(X - \mu)^2 + (\mu - T)^2 = \sigma^2 + (\mu - T)^2.$$
Hence
$$C_{pm} = \frac{USL - LSL}{6\sqrt{\sigma^2 + (\mu - T)^2}} = \frac{C_p}{\sqrt{1 + \xi^2}},$$
where $\xi = (\mu - T)/\sigma$.
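Once μ, σ and the specification limits are available, the indices can be computed directly. The following is a minimal Python sketch (the function name is ours); it reproduces the two-process comparison above and also evaluates Cpm.

def capability(mu, sigma, lsl, usl, target):
    """Return (Cp, Cpk, Cpm) for a process with mean mu and standard deviation sigma."""
    cp = (usl - lsl) / (6 * sigma)
    cpl = (mu - lsl) / (3 * sigma)
    cpu = (usl - mu) / (3 * sigma)
    cpk = min(cpl, cpu)
    tau = (sigma ** 2 + (mu - target) ** 2) ** 0.5   # square root of E(X - T)^2
    cpm = (usl - lsl) / (6 * tau)
    return cp, cpk, cpm

for name, mu, sigma in [("Process I", 50.0, 5.0), ("Process II", 57.5, 2.5)]:
    cp, cpk, cpm = capability(mu, sigma, lsl=35, usl=65, target=50)
    print(f"{name}: Cp = {cp:.2f}, Cpk = {cpk:.2f}, Cpm = {cpm:.2f}")

Process I gives Cp = Cpk = Cpm = 1.0, while Process II gives Cp = 2.0 and Cpk = 1.0 but a Cpm of only about 0.63, reflecting its departure from the target.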
The process capability indices are computed in two ways. The first approach
is to obtain the index using an estimate of sigma for the shorter term (for
example, after experimentation). The other approach is to obtain a long term
estimate of the sigma, which is often known after the implementation of the
SPC methods, which will be discussed in later sections.
The interpretation of process capability indices becomes difficult in the following situations:
1. When the process is not in a state of statistical control
2. Non-normal process
3. Correlated process and
4. Inevitable extra variation between different production periods.
Hence caution must be exercised in interpreting the process capability indices.
[Figure: A control chart of a quality measure (roughly 73.99 to 74.015) plotted against subgroup number, with the central line and the UCL and LCL reference lines]
6.1
Rational Subgrouping
In the technique of control charting, past data are used for judging whether
the future production will be in control or not. To accomplish this, past data
are accumulated. By a subgroup of observations, we mean one of a series of
observations obtained by subdividing a larger group of observations. By a rational subgroup, we mean classifying the observed values into subgroups such
that within a rational subgroup variations are expected to be due to common causes and variations between subgroups may be attributable to special
cause(s). In other words, the subgroups should be such that special causes show
up in differences between the subgroups as against special causes differentiating
the member units of a given subgroup. That is, we would like to see that the
member units of a subgroup are as homogeneous as possible. For example, assume that a machine has four heads operated by the same worker and using the
same raw material. We would like to accumulate data pertaining to the same
machine head to form a subgroup, rather than pooling the data from all the
four machine heads to form the subgroup. Any head-to-head difference will be
reflected in the differences between subgroups. The task of subgrouping requires
some technical knowledge of the production process, particularly the production
conditions and how units were inspected and tested. Subgrouping on the basis
of time is the most useful approach followed in practice. This is because the
process stability tends to be lost over time due to the use of several batches of
raw material(s), changing operatives, etc. The other factors for subgrouping are
batch of raw material, machine, operator, etc. Careful rational subgrouping
is extremely important to ensure the effectiveness of a control chart
and easy identification of assignable causes. The success of the control chart
technique largely relies on proper subgrouping.
For a constant cause system the variation within a subgroup is the same as
the variation across subgroups. Therefore, if the assumption of a constant cause
system is correct, it should be possible to predict the behaviour of statistics such
as sample averages, ranges, and standard deviations, across subgroups based on
the homogeneous variation observed within subgroups. Data from a constant
cause system of variation will display only unexplainable variation both within
and across rational subgroups. The range of variation due to constant causes
will be within predictable statistical limits. Non-random patterns of variation
appearing across the rational subgroups can be treated as the signal for special
causes.
For some processes, uncontrollable factors such as the seasonal nature of
input quality of materials may be involved. Such structural variation will also
be treated as common rather than special. Proper subgrouping is expected to take care
of such issues. For example, a monitoring procedure for road accidents must
consider the structural variation due to Friday and weekends. The monitoring
procedure should be based on two distinct common cause systems namely (i)
Monday-Thursday and (ii) Friday-Sunday. If the whole week is treated as a
subgroup, the common cause variability may be incorrectly estimated.
6.2 Control limits

Let M be some statistic computed using rational subgroups. We will also call M a control statistic. Let the mean of M be μM and its standard deviation be σM. Then the central line (CL), the upper control limit (UCL) and the lower control limit (LCL) are fixed at
$$UCL = \mu_M + k\sigma_M$$
$$CL = \mu_M$$
$$LCL = \mu_M - k\sigma_M$$
where k is the distance of the control limits from the central line, expressed in standard deviation units. This configuration, known as the Shewhart control chart, is shown in Figure 13. The estimation of μM and σM, and the fixing of a value for k, are statistical problems.
[Figure 13: Shewhart control chart, showing the central line at μM and the control limits at μM ± kσM plotted against subgroup number]
Control statistic        | Name of chart     | Type
Average X̄                | X̄-chart           | Variables
Standard deviation S     | S-chart           | Variables
Range R                  | R-chart           | Variables
Proportion defective p   | p-chart           | Attribute
Number of defectives np  | np-chart          | Attribute
Number of defects c      | count or c-chart  | Attribute
Defects per unit u       | u-chart           | Attribute
Variables control charts are used when the quality characteristic is measurable
on a continuous scale. For example, the dimension of a piston ring is measurable
as a continuous variable. In a typical process, there may be several hundred
variables, and only key performance or use characteristics are considered for
control charting using variables charts. The control statistic M for variables
charts is usually the mean of the quality characteristic. That is, the intention
of control charting is to monitor the process level. For example, the true mean
dimension of the piston ring may change either upward or downward during
the production. Hence the subgroup means are used to monitor the process level, and the resulting chart is known as the Xbar (X̄) chart. This chart will be accompanied by either the range (R) chart or the standard deviation (S) chart, which will monitor the increase in (within-subgroup) variability over time.
7.1
testing and measuring instruments are available which will quickly determine
the quality characteristics YARNCOUNT, strength, number of thick and thin
places, etc.
Let the nominal or target YARNCOUNT be 40, i.e. a pound of yarn will give
40(840) = 33600 yards of length. Assume that the lower and upper specification
limits for YARNCOUNT are respectively 39.8 and 40.2.
The mill was sampling five leas from randomly selected spindles from the
spinning frame for testing during a production shift. On some days/shifts samples were not taken. Multiple samples were taken on a few shifts. Table 3
provides the historical data collected by the mill. This Table indicates certain
important process conditions that were noted during sampling. The mill found
that the input for the spinning machine, namely the yarn slivers produced in
the preparatory process, was not uniform during certain periods. Such cases, indicated as input sliver problem, occurred intermittently. It took some time to
locate the sources of trouble in the preparatory stages and correct this problem.
Samples numbered 17 and 34 are associated with clear (engineering) evidence
that they represent unusual production conditions or measurement problems.
These samples must be dropped. The same is the case with subgroups associated with input sliver problem and hence the samples numbered 4, 14, and 21
are also dropped. All cases where there is no strong technical evidence for lack
of control such as sample 25 (casual operator employed) will be included in the
analysis using control charts.
It is also very likely that certain unusual production conditions or special
causes would have existed during the period of data collection which are not
evident in the ordinary course of production operations. A trial control chart
for our Phase I analysis will be used to detect the presence of special causes so
that further technical investigation can be initiated to locate and eliminate the
sources of trouble.
How YARNCOUNT varies during a production shift is important for rational subgrouping and effectiveness of the control charts. More studies must be
done by collecting data in the same shift at different time intervals to understand how other process variables such as interference due to doffing, operator
breaks, maintenance schedules, etc, affect YARNCOUNT. They may suggest
more frequent sampling during a given production shift. One of the common
ways of subgrouping is to use a (small) block of time and to allow a somewhat longer time between subgroups. Frequent sampling with small subgroups is more useful than infrequent sampling with large subgroups.
In order to estimate the true mean (μ) or standard deviation (σ) of the quality characteristic YARNCOUNT, the historical data will NOT be pooled.
The retrospective data may contain periods dominated by one or more special
causes, and a pooled estimate of the standard deviation can be used only when
the process is known to be in control, i.e. dominated only by common causes.
The Shewhart control charts allow only common cause variation within a subgroup, and any extra variation between subgroups is inadmissible and will be
attributed to the presence of special causes. For any given subgroup i, the usual standard deviation Si (with n − 1 in the divisor) or the range Ri will be used to estimate the true process standard deviation σ.
Table 3: Historical YARNCOUNT data

Sample  Date   Shift  Obs 1  Obs 2  Obs 3  Obs 4  Obs 5  Remarks
1       28-09  1      40.0   39.9   40.0   40.1   40.0
2       28-09  2      40.0   40.0   40.1   40.0   40.1
3       28-09  3      40.0   40.0   40.0   40.0   39.9
4       29-09  1      41.0   40.9   41.0   41.0   41.0   input sliver problem
5       29-09  2      40.0   40.0   40.1   40.0   40.0
6       29-09  3      40.0   40.0   40.0   39.9   39.9
7       30-09  2      40.0   40.1   40.0   40.0   40.0
8       30-09  3      40.0   40.0   39.9   40.0   40.1
9       04-10  1      40.1   40.0   39.9   40.0   40.0
10      04-10  2      40.0   40.0   40.1   40.0   40.0
11      04-10  3      40.0   40.0   40.0   40.1   40.0
12      05-10  1      40.0   40.0   40.0   40.0   40.0
13      05-10  2      40.0   40.0   40.0   39.9   40.0
14      05-10  3      40.5   41.0   41.0   40.8   41.0   input sliver problem
15      06-10  2      40.0   39.9   40.0   40.0   40.0
16      06-10  3      40.1   40.0   40.0   40.0   40.0
17      07-10  1      40.1   NA     NA     NA     NA     faulty motor
18      07-10  2      40.1   40.0   40.1   39.9   39.9
19      07-10  3      40.0   39.9   40.0   40.1   40.0
20      08-10  1      40.1   40.0   40.0   40.0   40.1
21      08-10  2      39.0   38.1   39.0   39.0   39.6   input sliver problem
22      08-10  3      40.0   40.0   40.0   40.1   40.0
23      11-10  1      39.9   40.0   39.9   40.0   40.0
24      11-10  1      40.0   40.0   40.0   40.1   40.1
25      11-10  2      40.2   40.0   40.1   40.0   40.0   casual operative
26      11-10  3      40.1   40.0   40.0   40.0   39.9
27      12-10  1      40.1   40.1   40.1   40.0   40.1
28      12-10  2      40.0   40.0   40.0   40.0   40.0
29      12-10  3      40.0   40.1   40.0   39.9   40.0
30      13-10  1      40.0   40.0   40.1   40.0   40.0
31      13-10  2      39.9   40.0   40.0   40.1   39.9
32      13-10  2      40.0   40.0   40.0   40.0   40.1
33      13-10  3      39.9   40.0   40.1   40.0   39.9
34      14-10  1      60.0   59.9   60.0   40.1   40.0   Yarncount mix up
35      14-10  2      40.0   40.1   40.1   40.0   40.0
36      14-10  3      40.1   40.1   40.0   39.9   40.0
37      15-10  1      40.1   40.0   39.9   40.0   40.0
38      15-10  2      39.9   40.0   40.0   40.1   40.1
39      15-10  3      40.1   40.0   40.1   40.0   40.0
If there are m such subgroups, the mean of the m subgroup standard deviations (Si values) or ranges (Ri values) will be used to estimate the process σ. Similarly, the mean of the m subgroup means ($\bar{X}_i$ values) is used to estimate the true process mean μ. Consider Table 4, which gives the means, standard deviations and ranges for the YARNCOUNT data. Note that this table omits samples 4, 14, 17, 21 and 34, and relates to a total of 34 subgroups only.
Table 4: Subgroup Means, Ranges and Standard Deviations
Old sample number  Subgroup i  X̄i     Ri    Si
1                  1           40.00  0.2   0.0707
2                  2           40.04  0.1   0.0548
3                  3           39.98  0.1   0.0447
5                  4           40.02  0.1   0.0447
6                  5           39.96  0.1   0.0548
7                  6           40.02  0.1   0.0447
8                  7           40.00  0.2   0.0707
9                  8           40.00  0.2   0.0707
10                 9           40.02  0.1   0.0447
11                 10          40.02  0.1   0.0447
12                 11          40.00  0.0   0.0000
13                 12          39.98  0.1   0.0447
15                 13          39.98  0.1   0.0447
16                 14          40.02  0.1   0.0447
18                 15          40.00  0.2   0.1000
19                 16          40.00  0.2   0.0707
20                 17          40.04  0.1   0.0548
22                 18          40.02  0.1   0.0447
23                 19          39.96  0.1   0.0548
24                 20          40.04  0.1   0.0548
25                 21          40.06  0.2   0.0894
26                 22          40.00  0.2   0.0707
27                 23          40.08  0.1   0.0447
28                 24          40.00  0.0   0.0000
29                 25          40.00  0.2   0.0707
30                 26          40.02  0.1   0.0447
31                 27          39.98  0.2   0.0837
32                 28          40.02  0.1   0.0447
33                 29          39.98  0.2   0.0837
35                 30          40.04  0.1   0.0548
36                 31          40.02  0.2   0.0837
37                 32          40.00  0.2   0.0707
38                 33          40.02  0.2   0.0837
39                 34          40.04  0.1   0.0548
The process mean μ is estimated by the grand mean of the subgroup means,
$$\hat{\mu} = \bar{\bar{X}} = \frac{1}{m}\sum_{i=1}^{m} \bar{X}_i = (40.00 + 40.04 + \ldots + 40.04)/34 = 40.01.$$
The process standard deviation is estimated as
$$\hat{\sigma} = \bar{S}/c_4$$
where $\bar{S}$ is the average of the subgroup standard deviations, viz.
$$\bar{S} = \frac{1}{m}\sum_{i=1}^{m} S_i,$$
where Si is the standard deviation of the ith subgroup and c4 is a constant that ensures an unbiased estimator of σ. That is, c4 = E(S)/σ. Hence, the constant c4 is known as the unbiasing constant. c4 is purely a function of the subgroup size n, and values are given in Table 5. For the YARNCOUNT data, $\bar{S}$ = 0.05703, giving $\hat{\sigma}$ = 0.05703/0.94 = 0.0607.
Table 5: Unbiasing constants for Ranges and Standard Deviations
n     c4      d2
2     0.7979  1.128
3     0.8862  1.693
4     0.9213  2.059
5     0.9400  2.326
6     0.9515  2.534
10    0.9727  3.078
15    0.9823  3.472
20    0.9869  3.735
25    0.9896  3.931
It is also possible to estimate the process sigma using ranges. That is, the estimator is
$$\hat{\sigma} = \bar{R}/d_2$$
where $\bar{R}$ is the mean of the subgroup ranges, given by
$$\bar{R} = \frac{1}{m}\sum_{i=1}^{m} R_i,$$
and d2 is the unbiasing constant for the range, i.e. d2 = E(R)/σ. Values of d2 are given in Table 5 for selected subgroup sizes. For the YARNCOUNT data, we find $\hat{\sigma} = \bar{R}/d_2 = 0.12715/2.326 = 0.0547$.
7.2
Xbar chart
After estimating the true process level μ as $\bar{\bar{X}}$ and the common cause sigma σ as $\hat{\sigma}$, the control limits are obtained by substituting these values for μ and σ. Since $V(\bar{X}) = \sigma^2/n$, the 3-sigma control limits for the X̄-chart are obtained as
$$\bar{\bar{X}} \pm \frac{3\hat{\sigma}}{\sqrt{n}}.$$
The X̄-chart control limits for the YARNCOUNT data are:
$$UCL = 40.01 + 3(0.0569/\sqrt{5}) = 40.086, \qquad LCL = 40.01 - 3(0.0569/\sqrt{5}) = 39.934.$$
It is easy to compute the control limits using the control limit formulae appearing in Tables 6 to 8. The tables also give certain constants (A2, A3, B1, B2, B3, B4, D1, D2, D3, D4), called control limit factors, for computing control limits. For example, Table 6 gives the control limits (based on the $\bar{R}$ estimate of σ) as $\bar{\bar{X}} \pm A_2\bar{R}$, with control limit factor A2 = 0.577 (which is equal to $3/(d_2\sqrt{n})$ for n = 5). The control limit factors are useful for manual computation of control limits.
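The same computations are easily scripted. The sketch below (plain Python; the summary values and the constants for n = 5 are taken from the YARNCOUNT example and Tables 5 to 7) computes trial X̄-chart and S-chart limits. Note that with σ̂ = S̄/c4 = 0.0607 the X̄ limits come out marginally wider than the figures quoted above, which use σ̂ = 0.0569.

import math

n = 5                      # subgroup size
xbarbar = 40.01            # grand mean of the 34 subgroup means
s_bar = 0.05703            # mean of the 34 subgroup standard deviations
c4, A3, B3, B4 = 0.9400, 1.427, 0.0, 2.089   # constants for n = 5

sigma_hat = s_bar / c4                         # estimate of the process sigma
ucl_x = xbarbar + 3 * sigma_hat / math.sqrt(n)
lcl_x = xbarbar - 3 * sigma_hat / math.sqrt(n)
# Equivalent shortcut using the control limit factor A3:
assert abs((xbarbar + A3 * s_bar) - ucl_x) < 1e-3

ucl_s, lcl_s = B4 * s_bar, B3 * s_bar          # S-chart limits

print(f"Xbar chart: LCL = {lcl_x:.3f}, CL = {xbarbar}, UCL = {ucl_x:.3f}")
print(f"S chart:    LCL = {lcl_s:.3f}, CL = {s_bar}, UCL = {ucl_s:.3f}")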
Table 6: Formulae and constants for the X̄ control chart

n     A      A2     A3
2     2.121  1.880  2.659
3     1.732  1.023  1.954
4     1.500  0.729  1.628
5     1.342  0.577  1.427
6     1.225  0.483  1.287
10    0.949  0.308  0.975
15    0.775  0.223  0.789
20    0.671  0.180  0.680
25    0.600  0.153  0.606

Control limits
For analysing past production for control (standards unknown):
central line $= \bar{\bar{X}}$; control limits $= \bar{\bar{X}} \pm A_2\bar{R}$ or $\bar{\bar{X}} \pm A_3\bar{S}$
For controlling quality during production (standards known):
central line $= \bar{X}_0$; control limits $= \bar{X}_0 \pm A\sigma_0$ or $\bar{X}_0 \pm A_2 R_{n0}$
The control limits will be displayed on a time sequence plot or run chart for the mean. That is, the subgroup means $\bar{X}_i$ will be plotted against the subgroup number i, with reference lines for the control limits. The overall mean $\bar{\bar{X}}$ will also be placed on the chart to produce the central line. Figure 15 is the resulting X̄-chart for the YARNCOUNT data, based on the S estimate of σ.
[Figure 15: X̄-chart for the YARNCOUNT data, with subgroup means plotted against subgroup number and UCL/LCL reference lines]
7.3
S chart
The control limits are given by LCL $= B_3\bar{S}$ and UCL $= B_4\bar{S}$, with the central line being $\bar{S}$. For the given subgroup size of 5, one finds the factors for control limits B3 = 0 and B4 = 2.089 from Table 7. For the YARNCOUNT data, $\bar{S}$ = 0.05703 and hence
LCL = 0(0.05703) = 0 and UCL = 2.089(0.05703) = 0.119.
These control limits are then placed on a run chart of the Si values with a reference central line at $\bar{S}$ = 0.05703. Figure 16 is the S chart for the YARNCOUNT data.
Table 7: Formulae and constants for the S chart

n     B3     B4     B5     B6
2     0      3.267  0      2.606
3     0      2.568  0      2.276
4     0      2.266  0      2.088
5     0      2.089  0      1.964
6     0.030  1.970  0.029  1.874
10    0.284  1.716  0.276  1.669
15    0.428  1.572  0.421  1.544
20    0.510  1.490  0.504  1.470
25    0.565  1.435  0.559  1.420

Control limits
For analysing past production for control (standards unknown):
central line $= \bar{S}$; control limits $= B_3\bar{S}$ and $B_4\bar{S}$
For controlling quality during production (standards known):
central line $= c_4\sigma_0$; control limits $= B_5\sigma_0$ and $B_6\sigma_0$
[Figure 16: S chart for the YARNCOUNT data, subgroup standard deviations plotted against subgroup number with LCL = 0 and UCL = 0.119]
Table 8: Formulae and constants for the R chart

n     D1     D2     D3     D4
2     0      3.686  0      3.267
3     0      4.358  0      2.575
4     0      4.698  0      2.282
5     0      4.918  0      2.115
6     0      5.078  0      2.004
10    0.687  5.549  0.223  1.777
15    1.203  5.741  0.347  1.653
20    1.549  5.921  0.415  1.585
25    1.806  6.056  0.459  1.541

Control limits
For analysing past production for control (standards unknown):
central line $= \bar{R}$; control limits $= D_3\bar{R}$ and $D_4\bar{R}$
For controlling quality during production (standards known):
central line $= d_2\sigma_0 = R_{n0}$; control limits $= D_1\sigma_0$ and $D_2\sigma_0$, or $D_3 R_{n0}$ and $D_4 R_{n0}$
None of the plotted points breach the UCL and hence we will conclude that the
variability within the process is in control.
Now consider the computation of control limits for the R-chart.
7.4
R chart
The control limits are given by LCL $= D_3\bar{R}$ and UCL $= D_4\bar{R}$, with the central line being $\bar{R}$. For the given subgroup size of 5, one finds the factors for control limits D3 = 0 and D4 = 2.115 from Table 8. For the YARNCOUNT data, $\bar{R}$ = 0.12715 and hence
LCL = 0(0.12715) = 0 and UCL = 2.115(0.12715) = 0.2689.
These control limits are then placed on a run chart of the Ri values with a reference central line at $\bar{R}$ = 0.12715. Figure 17 is the R-chart for the YARNCOUNT data.
[Figure 17: R chart for the YARNCOUNT data, subgroup ranges plotted against subgroup number with LCL = 0 and UCL = 0.2689]
Again the R chart suggests that the variability within the process is under
control.
7.5
When a signal (which could possibly be a false alarm) is obtained for lack of control from a control chart, one usually looks for the presence of special causes. A point breaching the X̄-chart limits need not necessarily be a breaching point on the R- or S-chart (and vice versa). Such a point need not be dropped from the associated R- or S-chart; nor will it call for a revision. For a normally distributed quality characteristic X, the mean $\bar{X}$ and the sample variance $S^2$ are independently distributed, and hence such an action may be justified.
For YARNCOUNT data, all the points lie within the control limits. Hence,
the Standard values for the mean and standard deviation for Phase II charting
are set as:
Standard value for the mean: $\mu_0 = \bar{\bar{X}} = 40.01$
Standard value for the standard deviation: $\sigma_0 = \bar{S}/c_4 = 0.0607$
7.6
1
m
m
P
Xi .
i=1
MR =
1 X
1 X
M Ri =
| Xi Xi1 |.
m 1 i=2
m 1 i=2
MR
d2
where d2 is found corresponding to sample size 2). The control limits are then
set at
3 MR
X
d2
Figure 18 shows a typical I-Chart for monitoring viscosity of a chemical.
[Figure 18: I-chart for viscosity of a chemical, individual values plotted against observation number with UCL and LCL reference lines]
7.7
Time-weighted Charts
Shewhart control charts are useful to quickly detect sudden big shifts occurring
in a production process. However, Shewhart control charts are not sensitive to
detect small shifts in the process level. The supplementary run rules improve the
sensitivity of Shewhart control charts by detecting small process changes. However advanced control charting procedures which provide time varying weights
are more powerful for detecting small changes in the process level. The following
control charts are suitable when higher sensitivity is desired:
1. Moving Average (MA) charts:
These charts are based on the control statistic
$$M_t = \frac{\bar{X}_{t-w+1} + \bar{X}_{t-w+2} + \cdots + \bar{X}_{t-1} + \bar{X}_t}{w},$$
the moving average of the w most recent subgroup means.
8.1 p-chart

The p-chart is used to monitor the proportion of nonconforming (defective) units. If di is the number of defectives found in the ith of m subgroups of n units each, the average fraction defective is estimated as
$$\bar{p} = \frac{\sum_{i=1}^{m} d_i}{mn}.$$
The control limits are set at
$$\bar{p} \pm 3\sqrt{\frac{\bar{p}(1-\bar{p})}{n}}.$$
For example, consider the data given in Table 9 on the number of defectives obtained for 50 subgroups of 100 resistors drawn from a process. The value of $\bar{p}$ is 0.01. The control limits are found as
$$0.01 \pm 3\sqrt{\frac{0.01(1-0.01)}{100}},$$
or 0 to 0.03985. If the computed value for LCL is negative, it is set at zero. This
means that there is no control exercised to detect any quality improvement.
Figure 19 provides the p-chart for the above data. Table 9 also gives the pi
values needed for plotting the p-chart.
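A minimal Python sketch of the limit calculation is given below (the defective counts are those of Table 9, with n = 100).

import math

# Number of defectives in each of the 50 subgroups of 100 resistors (Table 9)
d = [0, 0, 2, 0, 1, 0, 2, 2, 1, 1, 0, 2, 1, 1, 1, 0, 0, 0, 2, 3, 1, 2, 0, 1, 1,
     0, 0, 1, 0, 0, 1, 3, 0, 1, 2, 2, 0, 2, 2, 1, 1, 1, 3, 2, 1, 1, 0, 0, 0, 2]
n = 100

p_bar = sum(d) / (len(d) * n)                        # 0.01
half_width = 3 * math.sqrt(p_bar * (1 - p_bar) / n)
ucl = p_bar + half_width                             # about 0.03985
lcl = max(0.0, p_bar - half_width)                   # negative, so set to zero

p_i = [di / n for di in d]                           # points to plot on the p-chart
print(f"p-bar = {p_bar:.3f}, LCL = {lcl:.3f}, UCL = {ucl:.5f}")
print("points above UCL:", [i + 1 for i, p in enumerate(p_i) if p > ucl])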
Table 9: Nonconforming resistors in various subgroups
i    di   pi      i    di   pi
1    0    0.00    26   0    0.00
2    0    0.00    27   0    0.00
3    2    0.02    28   1    0.01
4    0    0.00    29   0    0.00
5    1    0.01    30   0    0.00
6    0    0.00    31   1    0.01
7    2    0.02    32   3    0.03
8    2    0.02    33   0    0.00
9    1    0.01    34   1    0.01
10   1    0.01    35   2    0.02
11   0    0.00    36   2    0.02
12   2    0.02    37   0    0.00
13   1    0.01    38   2    0.02
14   1    0.01    39   2    0.02
15   1    0.01    40   1    0.01
16   0    0.00    41   1    0.01
17   0    0.00    42   1    0.01
18   0    0.00    43   3    0.03
19   2    0.02    44   2    0.02
20   3    0.03    45   1    0.01
21   1    0.01    46   1    0.01
22   2    0.02    47   0    0.00
23   0    0.00    48   0    0.00
24   1    0.01    49   0    0.00
25   1    0.01    50   2    0.02
[Figure 19: p-chart for the resistor data, subgroup proportion defective plotted against subgroup number with UCL = 0.03985 and LCL = 0]
8.1.1 Subgroup size for a positive LCL
Sometimes it may be desirable to have a lower control limit greater than zero
in order to look for samples that contain no defectives or to detect quality
improvement. If p is small, obviously, the subgroup size should be very large.
For example, for p = 0.01, the minimum subgroup size must be 891 for LCL
to be greater than zero. Such large subgroup sizes are not practical and hence
supplementary run tests based on several subgroups are employed to detect
quality improvement.
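The figure of 891 follows from requiring a strictly positive lower limit:
\[
\bar{p} - 3\sqrt{\frac{\bar{p}(1-\bar{p})}{n}} > 0
\;\Longleftrightarrow\;
n > \frac{9(1-\bar{p})}{\bar{p}},
\qquad\text{and for } \bar{p} = 0.01,\quad n > \frac{9 \times 0.99}{0.01} = 891.
\]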
8.2
np-chart
The np-chart is essentially a p-chart, the only difference being that the observed number of defectives is plotted directly instead of the observed proportion defective. If p is the proportion defective, then d, the number of defectives in a subgroup of size n, follows a binomial distribution with expected value np and standard deviation $\sqrt{np(1-p)}$. Here p could be a standard value for Phase II control charting. When no standards are available, one uses the estimate $\bar{p}$ and draws the control limits at $n\bar{p} \pm 3\sqrt{n\bar{p}(1-\bar{p})}$. The central line is drawn at $n\bar{p}$. The OC function of the np-chart is similar to that of the p-chart, with d values plotted on the X-axis instead of p values.
8.3 c-chart

The c-chart is used to monitor the number of nonconformities (defects) per inspection unit. The count d of defects is assumed to follow the Poisson distribution
$$P(d) = \frac{e^{-c}c^d}{d!}, \qquad d = 0, 1, 2, \ldots \quad (c > 0).$$
The mean and the variance of d are the same and equal to c. Hence the control limits for the count d (with three sigma spread) are given by
$$c \pm 3\sqrt{c},$$
the central line being c. If the LCL is less than zero, it is set at zero. Here c could be a standard value. In its absence, c is estimated as the average number of nonconformities in a sample, say $\bar{c}$, and the control limits are set at
$$\bar{c} \pm 3\sqrt{\bar{c}}.$$
Consider Table 10, showing the number of nonconformities observed in 20 subgroups of five cellular phones each. The value of $\bar{c}$ is 84/20 = 4.2. The control limits are then found as
$$4.2 \pm 3\sqrt{4.2},$$
or 0 to 10.4, and the central line is set at 4.2. One plots the total number of defects found in each subgroup on the c-chart, shown as Figure 20.
While using a c-chart, a signal for a special cause may require further analysis
using a cause and effect diagram.
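A minimal Python sketch of the c-chart computation (using the per-subgroup totals from Table 10 as reconstructed below) is shown here.

import math

# Total nonconformities in each subgroup of five phones (Table 10)
c = [6, 7, 6, 2, 5, 2, 5, 4, 6, 5, 0, 2, 7, 2, 6, 1, 7, 3, 6, 2]

c_bar = sum(c) / len(c)                       # 84/20 = 4.2
ucl = c_bar + 3 * math.sqrt(c_bar)            # about 10.3 to 10.4
lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))  # negative, so set to zero

print(f"c-bar = {c_bar}, LCL = {lcl:.1f}, UCL = {ucl:.1f}")
print("subgroups beyond the limits:", [i + 1 for i, x in enumerate(c) if x > ucl or x < lcl])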
Table 10: Nonconformities observed in 20 subgroups of five cellular phones

Subgroup  Defects d on the five phones  Total
1         1 1 2 0 2                     6
2         2 0 1 2 2                     7
3         0 0 2 0 4                     6
4         1 0 0 1 0                     2
5         3 0 1 1 0                     5
6         1 0 0 1 0                     2
7         0 2 0 1 2                     5
8         2 0 0 0 2                     4
9         0 1 2 2 1                     6
10        0 0 2 1 2                     5
11        0 0 0 0 0                     0
12        1 0 1 0 0                     2
13        1 1 1 2 2                     7
14        0 1 0 1 0                     2
15        3 0 2 0 1                     6
16        1 0 0 0 0                     1
17        1 2 1 2 1                     7
18        0 0 2 1 0                     3
19        1 2 1 1 1                     6
20        0 1 0 0 1                     2
[Figure 20: c-chart for the cellular phone data, total defects per subgroup plotted against subgroup number with LCL = 0 and UCL about 10.4]
8.4
u-chart
The u-chart or count per unit chart is a configuration to evaluate the process
in terms of average number of predefined events per unit area of opportunity.
The u-chart is convenient for a product composed of units whose inspection
covers more than one characteristic such as dimension checked by gauges, other
physical characteristics noted by tests, and visual defects observed by eye. Under
these conditions, independent defects may occur in one unit of product and a
preferred quality measure is to count all defects observed and divide by the
number of units inspected to give a value for defects per unit (rather than a
value for the fraction defective). Here only the independent defects are to be
counted. The u-chart is particularly useful for products such as textiles, wire,
sheet materials, etc, which are continuous and extensive. Here the opportunity
for defects/nonconformities is large even though the chance of a defect at one
particular spot is small.
The total number of units tested is subdivided into m rational subgroups of
size n each. Here n can be in fractions. For each subgroup, a value of u, the
defects per unit, is computed. The average number of defects per unit is found as
$$\bar{u} = \frac{1}{m}\sum_{i=1}^{m} u_i.$$
Assuming that the number of defects follows the Poisson distribution, the control limits of the u-chart are given by
$$\bar{u} \pm 3\sqrt{\frac{\bar{u}}{n}}.$$
For unequal subgroup sizes, $\bar{u}$ is found as
$$\bar{u} = \frac{\sum_i n_i u_i}{\sum_i n_i},$$
where ni is the ith subgroup size and ui is the number of defects per unit in the ith subgroup. Here n1, n2, ... need not be whole numbers, e.g. the length of cloth inspected may be 2.4 m. The control limits are set at
$$\bar{u} \pm 3\sqrt{\frac{\bar{u}}{n_i}}.$$
The u-chart for the cellular phone data is given as Figure 21.
[Figure 21: u-chart for the cellular phone data, defects per unit plotted against subgroup number with UCL and LCL reference lines]

Acceptance Sampling
Acceptance sampling, rather than 100% inspection, is typically used at the pre-shipment and receiving inspection stages when:
testing is destructive;
the cost and time for 100% inspection are high;
less handling of the product is necessary, for example when handling can cause degradation of the product;
there are limitations of work force; or
serious product liability risks exist.
The disadvantage of acceptance sampling is the risk of accepting bad lots and
rejecting good lots. Acceptance sampling when applied on the final product
simply accepts and rejects lots and hence does not provide any direct form
of quality improvement. Prof. Dodge, the originator of acceptance sampling,
therefore stressed that one cannot inspect quality into a product.
An acceptance sampling plan is a specific plan that clearly states the rules for sampling and the associated criteria for acceptance or otherwise. Acceptance sampling plans can be applied not only to the inspection of end items but also to the inspection of (i) components, (ii) raw materials, (iii) operations, (iv) materials
in process, (v) supplies in storage, (vi) maintenance operations, (vii) data or
records and (viii) administrative procedures etc. Acceptance sampling is also
commonly employed for safety related inspection by governmental departments,
particularly when goods are imported.
9.1
9.2 The Operating Characteristic (OC) curve
The OC curve reveals the performance of a sampling inspection plan in discriminating good and bad lots. There are two types of OC curves:
Type A: (For isolated or unique lots) This is a curve showing the probability
of accepting a lot as a function of the lot quality.
Type B: (For a continuous stream of lots) This is a curve showing the probability of accepting a lot as a function of the process average, i.e. the fraction nonconforming of the process producing the lots.

[Figure: OC curve of a single sampling plan, probability of acceptance plotted against fraction nonconforming, with the AQL marked]
9.2.1 OC function of the single sampling plan

The OC function of the single sampling attribute plan, giving the probability of acceptance for a given lot or process quality p, is:
$$P_a = P_a(p) = \Pr(d \le Ac \mid n, Ac, p).$$
For Type A situations, the hypergeometric distribution is exact for the case of nonconforming units. Hence,
$$P_a = P_a(p) = \Pr(d \le Ac \mid N, n, Ac, p) = \sum_{d=0}^{Ac} \frac{\binom{D}{d}\binom{N-D}{n-d}}{\binom{N}{n}},$$
where N is the lot size, D is the number of defectives in the lot and hence the
lot fraction nonconforming p = D/N .
[Figure: Type A OC curve for a single sampling plan, probability of acceptance plotted against lot fraction nonconforming p]
For Type B situations, the binomial model is exact for the case of fraction nonconforming units, and the OC function is given by
$$P_a(p) = \Pr(d \le Ac \mid n, Ac, p) = \sum_{d=0}^{Ac} \binom{n}{d} p^d (1-p)^{n-d} \approx \sum_{d=0}^{Ac} \frac{e^{-np}(np)^d}{d!},$$
the last expression being the Poisson approximation.
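The OC function is easy to evaluate numerically. The sketch below (plain Python; the plan n = 50, Ac = 1 is only an illustration, echoing the comparisons in the figures that follow) computes Pa(p) under the binomial model and its Poisson approximation.

import math

def oc_binomial(p, n, ac):
    """Type B OC function: P(d <= Ac) for d ~ Binomial(n, p)."""
    return sum(math.comb(n, d) * p ** d * (1 - p) ** (n - d) for d in range(ac + 1))

def oc_poisson(p, n, ac):
    """Poisson approximation to the OC function, with mean np."""
    m = n * p
    return sum(math.exp(-m) * m ** d / math.factorial(d) for d in range(ac + 1))

for p in (0.005, 0.01, 0.02, 0.05, 0.10):
    print(f"p = {p:.3f}:  Pa = {oc_binomial(p, 50, 1):.3f}"
          f"  (Poisson approximation {oc_poisson(p, 50, 1):.3f})")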
[Figure: OC curves for the same plan computed under the hypergeometric, binomial and Poisson models, plotted against fraction nonconforming p]
9.2.2
[Figure: OC curves for the plans n = 50, Ac = 1 and n = 50, Ac = 2]

[Figure: OC curves for the plans n = 50, Ac = 1 and n = 100, Ac = 1]

[Figure: OC curves for the plans n = 71, Ac = 1 and n = 100, Ac = 1, with the AQL marked]
The proportion of lots of AQL quality accepted by the two plans are:
Pa (AQL) = 84% for n = 71, Ac = 1 plan
Pa (AQL) = 74% for n = 100, Ac = 1 plan.
It is also evident that the plan used for supermarket B is tighter than the plan
used for supermarket A. For the fixed acceptance number, an increase in sample
size means tightening of inspection. It is always desired that the probability of
acceptance at the AQL be high, such as 95%. Neither plan has a high Pa at the AQL. The plan n = 71, Ac = 1 is preferable to the plan n = 100, Ac = 1
since Pa (AQL) = 84% is closer to 95%. The manufacturer is regularly supplying
cheese to both supermarkets. Under the Type B situation of series of lots being
submitted, the lots are themselves viewed as random samples from the process
producing cheese. One therefore need not sample in relation to the lot size. If
it is desired to encourage large lot sizes, then the acceptance number should be
accordingly adjusted so that the Pa at AQL is higher for large lot sizes.
Arguments in favour of the n = 100, Ac = 1 plan can also be given. The
consuming supermarkets must be protected against bad quality lots. For example, lots having 5% nonconforming cheeses may be required to be rejected with
[Figure: OC curves for the plans n = 71, Ac = 1 and n = 100, Ac = 1, with the LQL marked]
[Figure: An OC curve passing through the points (AQL, 0.95) and (LQL, 0.1), corresponding to a producer's risk of α = 0.05 and a consumer's risk of β = 0.1]
9.2.3
9.3 Double sampling plans
Single sampling plans are simple to use. Very often the producer is at a psychological disadvantage if a single sampling plan is applied to the lots, since no
second chance is given for the lots not accepted. In such situations, taking a
second sample is preferable.
The operating procedure of the double sampling plan is given in the following
steps:
1. First draw a random sample of size n1, and observe the number of nonconforming units (nonconformities) d1.
2. If d1 ≤ Ac1, the first stage acceptance number, accept the lot. If d1 ≥ Re1, the first stage rejection number, reject the lot. If Ac1 < d1 < Re1, go to Step 3.
3. Take a second random sample of size n2 and observe the number of nonconforming units (nonconformities) d2. Cumulate d1 and d2, and let D = d1 + d2. If D ≤ Ac2, the second stage acceptance number, accept the lot. If D ≥ Re2 (= Ac2 + 1), reject the lot.
The operating flow diagram for the double sampling plan is given as Figure 30.
[Figure 30: Operating flow diagram of the double sampling plan]

A double sampling plan can be compactly represented as:

Stage | Sample size | Acceptance number | Rejection number
1     | n1          | Ac1               | Re1
2     | n2          | Ac2               | Re2

The double sampling plan is relatively harder to administer than the single sampling plan. If the parameters of the double sampling plan are not properly fixed, it may even be inefficient when compared to a single sampling plan.
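The operating rules translate directly into a small decision routine. The sketch below is a minimal Python illustration (the plan parameters n1 = n2 = 50, Ac1 = 1, Re1 = 4, Ac2 = 4 and the 2% nonconforming rate are ours, purely for illustration); it also estimates the probability of acceptance by simulating many lots.

import random

def double_sampling(ac1, re1, ac2, n1, n2, p, rng):
    """Sentence one lot under a double sampling plan (Re2 = Ac2 + 1)."""
    d1 = sum(rng.random() < p for _ in range(n1))
    if d1 <= ac1:
        return "accept"
    if d1 >= re1:
        return "reject"
    d2 = sum(rng.random() < p for _ in range(n2))   # second sample only when needed
    return "accept" if d1 + d2 <= ac2 else "reject"

rng = random.Random(7)
lots = [double_sampling(ac1=1, re1=4, ac2=4, n1=50, n2=50, p=0.02, rng=rng)
        for _ in range(10000)]
print("estimated Pa at p = 0.02:", lots.count("accept") / len(lots))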
9.4 Multiple sampling plans
The multiple sampling plan is a natural extension of the double sampling plan.
The number of stages in a multiple sampling plan is usually fixed at 7. A
multiple sampling plan will require a smaller sample size than a double sampling
plan but is more complex to implement. A multiple sampling plan having m
stages can be compactly represented as in Table 12:
Table 12: Multiple Sampling Plan

Stage   Sample Size   Acceptance Number   Rejection Number
1       n1            Ac1                 Re1
2       n2            Ac2                 Re2
3       n3            Ac3                 Re3
...     ...           ...                 ...
m       nm            Acm                 Rem (= Acm + 1)

Let Di = d1 + d2 + · · · + di denote the cumulative number of nonconforming units observed up to and including stage i. At stage i, the lot is accepted if Di ≤ Aci, rejected if Di ≥ Rei, and otherwise the next sample is taken.
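A sketch of the corresponding m-stage decision rule, using the cumulative counts Di defined above, is given below in Python; it is not part of the original text, and the three-stage plan in the example is purely illustrative.

    import random

    def multiple_sampling_decision(draw_stage, ns, acs, res):
        """Accept/reject under an m-stage plan; draw_stage(n) returns the
        nonconforming count in a sample of size n. The last stage must have
        Re_m = Ac_m + 1 so that a decision is always reached."""
        D = 0
        for n, ac, re in zip(ns, acs, res):
            D += draw_stage(n)        # cumulate d_i into D_i
            if D <= ac:
                return "accept"
            if D >= re:
                return "reject"
        raise ValueError("plan parameters did not force a decision at the last stage")

    # Illustrative three-stage plan applied to a lot of quality p = 0.02:
    p = 0.02
    draw = lambda n: sum(random.random() < p for _ in range(n))
    print(multiple_sampling_decision(draw, ns=[20, 20, 20], acs=[0, 1, 3], res=[3, 4, 4]))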
9.5
9.6 Summary
Quality is a latent variable and need not always imply excellence. The operational way to assess quality is how far the needs of customers, and of society in general, are met in a cost-effective manner.
Statistical methods are important for understanding the quality of products (or services), and for its measurement and improvement. Quality is inversely proportional to variability, which can be expressed only in statistical terms; hence, to improve quality we must reduce the variability involved.
Statistical thinking is important for understanding a process and for isolating the key variables causing variation. Experimental designs play a key role in achieving the optimum settings for controllable variables in a production process. New nuisance variables and unusual special cause conditions may arise during production and affect quality. Hence control charts are employed to ensure that a state of statistical control exists during production, in order to hold the gains.
Variables charts consider certain important quality characteristics measurable on a continuous scale. The X̄ chart is used to monitor the process mean or level. S or R charts accompany the X̄ chart to monitor increases or decreases in common cause variability. Attribute control charts such as the p-chart are employed to monitor the level of nonconformance for several characteristics, quality attributes and other specification requirements. The sensitivity of 3-sigma limits is improved by supplementary run tests.
Acceptance sampling provides quality assurance. Sampling plans such as single sampling plans are employed not only in the disposition of the final product
but also for procurement quality assurance etc.
Quality must be built into products and services because it provides a competitive edge and higher profitability. Statistics plays a useful role along with
engineering, management, psychology and other disciplines in achieving quality.
9.7 References
ASQ Statistics Division (2004). Glossary and Tables for Statistical Quality Control. ASQ Quality Press, Milwaukee, Wisconsin, USA.
Montgomery, D. C. (1996). Introduction to Statistical Quality Control, Third Edition. John Wiley & Sons, New York, NY.
Shewhart, W. A. (1931). Statistical Method from an Engineering Viewpoint. Journal of the American Statistical Association, 26, pp. 262-269.
Sohn, H. S. and Park, T. W. (2006). Process Optimization for the Improvement of Brake Noise: A Case Study. Quality Engineering, 18, pp. 131-143.
Exercises
11.1 How would you provide numerical measures of the quality of the service
provided by the following?
a. Postal Mail
b. A university canteen
11.2 A city council used to distribute paper garbage bags annually to its rate-payers. It decided to replace the paper bags with plastic ones. The plastic bags are cheaper and thinner than the (heavier) paper bags. Experimental studies showed that both types of bags degrade at approximately the same time.
Samples of bags submitted by a few plastic bag manufacturers were inspected to short-list a supplier. An order was then placed with a supplier to manufacture the plastic bags in packets of size 52. The manufacturer supplied the plastic bag packets in large batches over time, which were then distributed to the rate-payers (without any batch-by-batch inspection).
A number of rate-payers complained about the quality of the plastic garbage bags supplied to them. The main complaints were that (i) the plastic bags were not strong enough to hold the usual amount and type of waste and (ii) some packets contained fewer than 52 bags and hence were insufficient for a year. It was found that the use of excessive recycled plastic for some production periods caused the strength problems (i.e. splitting, etc.). It was claimed that the under-count of bags was a matter of chance and not deliberate.
a. Describe the meaning of the term quality for the plastic bags. What difficulties are involved in comparing the plastic bags with the paper ones? Explain your answer considering the definition of quality as the totality of features and characteristics of a product or service that bear on its ability to satisfy given needs.
b. What quality measures can be used for the plastic bag quality?
c. Why is Taguchi's philosophy that deviation from the target is a loss to society more appropriate in the context of garbage bag quality?
11.3 In finance, the efficiency of a stock market is assessed based on whether or
not the daily returns are randomly distributed. Assume that the normal
distribution models the return variation due to common causes. Use the
NASDAQ daily index and show graphically how dominant special and
common causes are.
11.4 A company is offering financial incentives to sales personnel based on their share of weekly sales. Does this strategy recognise the existence of common and special causes of variation in sales? Why might this strategy affect staff morale? Discuss.
Sample No.   Number of Leas tested   Count Not Met   Low CSP   Thick Places   Thin Places   Others
1            100                     2               1         0              2             0
2            100                     3               1         1              1             1
3            100                     4               1         2              0             1
4            100                     3               3         4              2             4
5            100                     2               1         0              0             0
6            100                     1               1         0              0             2
7            100                     8               2         1              1             1
8            100                     5               1         1              0             0
9            100                     2               0         0              0             0
10           100                     1               0         0              1             1
11.7 Identify the type of graphical quality tool displayed in Figure 31 and state
its uses:
Figure 31: A QC graph
[A form headed "XYZ Company, PQRS" with tally entries (xxx, xxxxx) and the fields: Sampler:____, Date:____, No. of Gears Inspected:____, No. of Burrs:____.]
11.8 Table 14 gives the data on the number and causes of rejection of metal
castings observed in a foundry.
a. Design a Check Sheet which would have enabled the collection of
these data.
b. Prepare a Pareto Chart for causes of poor metal castings and offer
your recommendations.
Table 14: Causes of rejection of metal castings

Sample   No. of metal castings   Sand   Misrun   Shift   Drop   Core break   Broken   Others
1        20                      2      1        0       2      0            0        1
2        20                      3      1        1       1      1            0        0
3        20                      4      1        2       0      1            1        1
4        20                      3      3        4       2      4            0        0
5        20                      2      1        0       0      0            0        0
6        20                      1      1        0       0      2            0        1
7        20                      8      2        1       1      1            0        0
8        20                      5      1        1       0      0            0        0
9        20                      2      0        0       0      0            1        0
10       20                      1      0        0       1      1            1        2
11.9 Table 15 gives the data (25 samples of size five each taken at equal time
intervals) on the inside diameter of piston rings for an automotive engine
produced by a forging process (data from Montgomery, D. C., Introduction
to Statistical Quality Control, John Wiley & Sons, Second Edition). This
data set is also used in one of the subsequent the exercise. Perform EDA
of the retrospective data using the following tools, and discuss whether
or not your EDA discovered anything alarming to call for an engineering
investigation of special causes.
a. histogram
b. run chart
11.10 A large distributing company procures eggs and stores them in thermostabilized conditions. For the packaging process of stored eggs, the following quality characteristics were employed.
WEIGHT (Specifications: 65 ± 5 g)
HAUGH units, an index for the interior quality of eggs (Specification: 81 units)
APPEARANCE (visual test: Pass or Fail)
The retrospective data collected on the above variables are given in Table 16.
Table 15: Inside diameter of piston rings

Sample   Obs 1    Obs 2    Obs 3    Obs 4    Obs 5
1        74.030   74.002   74.019   73.992   74.008
2        73.995   73.992   74.001   74.011   74.004
3        73.988   74.024   74.021   74.005   74.002
4        74.002   73.996   73.993   74.015   74.009
5        73.992   74.007   74.015   73.989   74.014
6        74.009   73.994   73.997   73.985   73.993
7        73.995   74.006   73.994   74.000   74.005
8        73.985   74.003   73.993   74.015   73.988
9        74.008   73.995   74.009   74.005   74.004
10       73.998   74.000   73.990   74.007   73.995
11       73.994   73.998   73.994   73.995   73.990
12       74.004   74.000   74.007   74.000   73.996
13       73.983   74.002   73.998   73.997   74.012
14       74.006   73.967   73.994   74.000   73.984
15       74.012   74.014   73.998   73.999   74.007
16       74.000   73.984   74.005   73.998   73.996
17       73.994   74.012   73.986   74.005   74.007
18       74.006   74.010   74.018   74.003   74.000
19       73.984   74.002   74.003   74.005   73.997
20       74.000   74.010   74.013   74.020   74.003
21       73.988   74.001   74.009   74.005   73.996
22       74.004   73.999   73.990   74.006   74.009
23       74.010   73.989   73.990   74.009   74.014
24       74.015   74.008   73.993   74.000   74.010
25       73.982   73.984   73.995   74.017   74.013
Table 16: Egg quality data

Subgroup   Egg weight   Haugh Unit   Appearance
1          65.11        85.13        Pass
1          63.94        85.68        Pass
1          65.28        85.46        Pass
1          65.07        85.16        Pass
2          64.91        83.98        Pass
2          65.74        83.73        Pass
2          67.11        82.87        Pass
2          64.40        83.83        Pass
3          65.50        86.40        Pass
3          65.61        84.63        Pass
3          64.09        84.66        Pass
3          63.96        83.56        Pass
4          63.44        81.86        Pass
4          64.45        85.17        Pass
4          63.64        88.02        Pass
4          63.30        85.31        Pass
5          67.17        83.67        Pass
5          64.89        83.02        Pass
5          63.81        84.63        Fail
5          65.30        88.00        Pass
6          66.06        83.16        Pass
6          65.92        82.94        Pass
6          63.13        87.19        Pass
6          64.75        87.39        Pass
7          64.39        86.07        Pass
7          64.91        87.20        Pass
7          66.29        85.86        Pass
7          65.25        84.59        Pass
8          65.60        87.05        Pass
8          63.50        88.03        Pass
8          65.67        82.99        Pass
8          66.58        82.93        Pass
9          65.66        86.12        Pass
9          64.41        87.47        Pass
9          65.42        86.37        Pass
9          64.20        86.87        Pass
10         63.62        83.15        Fail
10         65.62        84.94        Pass
10         64.53        87.71        Pass
10         64.99        84.51        Pass
Time      Obs1     Obs2     Obs3     Obs4
9AM       50.006   49.995   50.001   49.999
9:30AM    50.007   49.999   50.000   50.000
10AM      49.999   50.006   50.001   49.997
10:30AM   49.995   50.000   49.994   49.998
11AM      49.996   49.994   50.004   50.000
11:30AM   49.996   49.999   49.999   50.002
12Noon    50.003   50.002   49.999   50.004
12:30PM   50.000   50.001   50.004   49.998
1PM       50.003   49.999   49.996   49.995
1:30PM    50.003   50.000   49.999   50.001
2PM       50.000   49.999   50.002   50.004
2:30PM    50.002   50.004   50.001   49.997
3PM       49.997   49.997   49.999   49.999
3:30PM    49.990   49.997   49.994   49.994
4PM       50.001   49.995   49.995   49.995
4:30PM    50.000   49.999   49.995   49.999
5PM       49.998   50.003   49.999   49.995
5:30PM    49.994   49.997   49.998   49.998
5 2 3 4 4 4 8 3 2 3 8 2 2 3 5 4 4 4 2 4
2 3 2 3 6 2 0 6 3 1 2 2 4 5 0 1 2 1 4 5
1 2 2 2 8 2 3 1 4 3 2 3 1 6 4 2 3 2 6 1
5 3 1 5 2 2 4 2 5 2 4 5 1 4 3 3 3 7 4 3
4 4 2 5 2 1 5 1 4 5 3 3 5 3 4 3 4 1 3 5
3 1 2 0 3 1 4 1 3 4 6 1 3 2 5 1 2 1 3 0
2 0 3 3 0 1 0 1 0 2 1 1 1 0 1 1 1 4 1 1
3 0 1 1 0 2 3 1 0 0 0 1 0 1 3 1 0 2 3 1
0 2 0 4 0 2 2 1 1 0 1 1 0 3 2 1 3 1 0 2
2 2 0 1 2 0 1 1 1 0 1 1 1 0 2 2 2 0 0 2
a. Consider the data for the first 80 days and establish a suitable control
procedure for the Phase I analysis.
b. Apply the standard to the last 20 subgroups and interpret the results.
11.17 Table 20 gives the nonconformities (d) observed in the daily inspection of a certain number of disk-drive assemblies (n). Does the process appear to be in control?
Table 20: Disk-drive Assembly data

Day   n    d
1     17   13
2     19   25
3     17   0
4     16   7
5     18   14
6     19   18
7     17   10
8     19   21
9     18   16
10    16   3
11.18 Table 21 provides the data on the number of joints welded (n) and the number of nonconforming joints (d).
a. Set up an appropriate control chart procedure and discuss whether the welding process was in control.
b. Establish the appropriate control limits for future monitoring.
Table 21: Welded joints data

Sample   n     d
1        165   11
2        85    7
3        65    5
4        165   9
5        85    5
6        161   9
7        85    5
8        61    1
9        103   2
10       405   36
11       29    2
12       33    2
13       60    2
14       119   3
15       61    1
16       37    3
17       65    1
18       49    5
19       103   3
20       113   3
21       107   3
11.19 Suppose that a company is applying a single sampling plan with sample
size 160 and acceptance number 1 for lots of size 100,000.
a. Draw the OC curve of the plan.
b. Find the incoming or submitted quality that will be rejected 90% of
the time.
c. If the AQL is fixed at 0.1% nonconforming, find the probability of
acceptance at AQL.
11.20 Obtain the OC function for the following sampling plan:
Plan
From a large lot, only two units are randomly drawn. If both are conforming, the lot is accepted. If both are nonconforming, the lot is rejected. If
only one unit is conforming, then one more unit is taken from the remainder of the lot. If this unit is conforming, the lot is accepted; otherwise, the
lot is rejected.
11.21 Let there be a single sampling plan with sample size n and acceptance number Ac. Why do we not simply fix the Acceptance Quality Limit (AQL) as AQL = Ac/n?
11.22 Compare the performance of the single sampling plans (n = 20, Ac = 0),
and (n = 50, Ac =1) using the OC curves. Which plan provides a better
discrimination between good and bad lots? Explain why.
11.23 A sweet corn processing factory is procuring cobs from farmers. The
export specifications require a cob to be at least 18 cm long with no distinct
off-coloured, crushed, dimpled or insect damaged kernels. Consider each
truck load of cobs delivered as a lot for inspection purposes. Assume that