
UNIT III

TQM TOOLS & TECHNIQUES I

Seven traditional tools of quality (THE OLD SEVEN TOOLS)

The seven tools discussed below represent those generally accepted as the basic total quality tools. A case can be made that just-in-time, statistical process control, and quality function deployment are also total quality tools, but these are more than tools: they are complete systems under the total quality umbrella. A tool, like a hammer, exists to help do a job. If the job includes continuous improvement, problem solving, or decision making, then these seven tools fit the definition. Each of these tools is some form of chart for the collection and display of specific kinds of data. Through collection and display, the data become useful information - information that can be used to solve problems, enhance decision making, keep track of work being done, and even predict future performance and problems. The beauty of the charts is that they organize data so that we can immediately comprehend the message. This would be all but impossible without the charts, given the mountains of data flooding today's workplace.

CAUSE ANALYSIS TOOLS

Cause-and-effect diagram (Ishikawa or fishbone diagram)

DESCRIPTION: The fishbone diagram identifies many possible causes for an effect or problem. It can be used to structure a brainstorming session, and it immediately sorts ideas into useful categories. It is especially useful when a team's thinking tends to fall into ruts.

This fishbone diagram was drawn by a manufacturing team to try to understand the source of periodic iron contamination. The team used the six generic headings to prompt ideas. Layers of branches show thorough thinking about the causes of the problem.

Pareto chart (or Pareto diagram, or Pareto analysis)

A Pareto chart is a bar graph. The lengths of the bars represent frequency or cost (time or money), and the bars are arranged with the longest on the left and the shortest on the right. The chart is used to separate the VITAL FEW from the TRIVIAL MANY and to concentrate on the vital few for improvement. A Pareto diagram indicates which problem should be solved first when eliminating defects and improving the operation.

Scatter diagram

The scatter diagram graphs pairs of numerical data, with one variable on each axis, to look for a relationship between them. If the variables are correlated, the points will fall along a line or curve. The better the correlation, the tighter the points will hug the line.
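The Pareto analysis described above can be sketched numerically: sort defect counts in descending order and accumulate percentages to expose the vital few. This is a minimal sketch; the defect categories and counts are hypothetical illustration data.

```python
def pareto_analysis(defect_counts):
    """Return (category, count, cumulative %) tuples, largest count first."""
    total = sum(defect_counts.values())
    ordered = sorted(defect_counts.items(), key=lambda kv: kv[1], reverse=True)
    result, running = [], 0
    for category, count in ordered:
        running += count
        result.append((category, count, round(100 * running / total, 1)))
    return result

if __name__ == "__main__":
    # Hypothetical defect counts from a final inspection station.
    counts = {"scratch": 45, "dent": 25, "misalignment": 15,
              "discoloration": 10, "other": 5}
    for row in pareto_analysis(counts):
        print(row)
```

In this example the first two categories alone account for 70% of all defects, which is exactly the "vital few" the chart is meant to surface.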

EVALUATION AND DECISION-MAKING TOOLS

Decision matrix
Multivoting

DATA COLLECTION AND ANALYSIS TOOLS

Check sheet (or defect concentration diagram)

A check sheet is a structured, prepared form for collecting and analyzing data. It is a generic tool that can be adapted for a wide variety of purposes.

Control chart (or statistical process control)

VARIATIONS: Different types of control charts can be used, depending upon the type of data. The two broadest groupings are for variable data and attribute data.

Variable data are measured on a continuous scale. For example, time, weight, distance, or temperature can be measured in fractions or decimals. The possibility of measuring to greater precision defines variable data.

Attribute data are counted and cannot have fractions or decimals. Attribute data arise when you are determining only the presence or absence of something: success or failure, accept or reject, correct or not correct. For example, a report can have four errors or five errors, but it cannot have four and a half errors.

Commonly used control charts

Variables data: x-bar and R charts; x-bar and s charts; charts for individuals (x charts)
Attribute data: for defectives (p-chart, np-chart); for defects (c-chart, u-chart)

Variable control charts

Control charts are decision-making tools - they provide an economic basis for deciding whether to alter a process or leave it alone. Control charts are problem-solving tools - they provide a basis on which to formulate improvement actions. Control charts are powerful aids to understanding the performance of a process over time.

Control charts identify variation:
Chance causes - common cause variation, inherent to the process, random, and not controllable; if only common causes are present, the process is considered stable or in control.
Assignable causes - special cause variation due to outside influences; if present, the process is out of control.

Control charts help us learn more about processes:
Separate common and special causes of variation
Determine whether a process is in a state of statistical control or out of control
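As a sketch of how limits for one of the attribute charts listed above (the p-chart, for fraction defective) are computed: the standard 3-sigma formulas give p-bar ± 3·sqrt(p-bar·(1 − p-bar)/n), with the lower limit floored at zero because a proportion cannot be negative. The subgroup data below are hypothetical.

```python
import math

def p_chart_limits(defectives, sample_size):
    """Center line and 3-sigma control limits for a p-chart with constant n.

    defectives: list with the count of defective items in each subgroup.
    sample_size: number of items inspected per subgroup (constant n).
    """
    p_bar = sum(defectives) / (len(defectives) * sample_size)
    sigma = math.sqrt(p_bar * (1 - p_bar) / sample_size)
    ucl = p_bar + 3 * sigma
    lcl = max(0.0, p_bar - 3 * sigma)  # proportions cannot be negative
    return p_bar, lcl, ucl
```

For example, five subgroups of 100 items with 5, 7, 4, 8, and 6 defectives give a center line of 0.06, a lower limit of 0 (the raw value is negative, so it is floored), and an upper limit of about 0.131.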

Estimate the process parameters (mean, variation) and assess the performance of a process, or its capability.

Control charts for variables

Variables are the measurable characteristics of a product or service. Measurement data are taken and arrayed on charts.

X-bar and R charts
The X-bar chart is used to detect changes in the mean between subgroups; it tests central tendency or location effects. The R chart is used to detect changes in variation within subgroups; it tests dispersion effects.

Use other quality tools to help determine the general problem that is occurring and the process that is suspected of causing it: brainstorm using cause-and-effect diagrams, why-why analysis, Pareto charts, etc.

Identify a characteristic to study - for example, part length or any other variable affecting performance. Typically, choose characteristics which are creating quality problems. Possible characteristics include length, height, viscosity, color, temperature, velocity, weight, volume, density, etc.

Choose homogeneous subgroups. Homogeneous subgroups are produced under the same conditions: by the same machine, the same operator, the same mold, at approximately the same time. Try to maximize the chance to detect differences between subgroups, while minimizing the chance for differences within a group. The larger the subgroup size, the more sensitive the chart becomes to small variations, but this increases data collection costs, and destructive testing may make large subgroup sizes infeasible. Subgroup sizes smaller than 4 are not representative of the distribution of averages; subgroups over 10 should use an s chart.

Run the process untouched to gather initial data for control limits. Generally, collect 20-25 subgroups (100 total samples) before calculating the control limits. Each time a subgroup of sample size n is taken, an average is calculated for the subgroup and plotted on the control chart. The normal curve displays the distribution of the sample averages.
A control chart is a time-dependent pictorial representation of a normal curve. Processes that are considered under control will have 99.73% of their graphed averages fall within three standard deviations of the center line:

UCL = X̄ + 3σ
LCL = X̄ − 3σ

where X̄ is the grand average of the plotted subgroup means and σ is the standard deviation of the subgroup averages.

Determining an alternative value for the standard deviation

When the process standard deviation is not known, it is estimated from the average range of the subgroups:

R̄ = (R1 + R2 + ... + Rm) / m

where Ri is the range of the i-th subgroup and m is the number of subgroups. The control limits for the X-bar chart then become:

UCL = X̄ + A2 R̄
LCL = X̄ − A2 R̄

where X̄ here denotes the grand average (the average of the subgroup means) and A2 is a tabulated constant that depends on the subgroup size.
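A minimal sketch of these limit calculations, using the standard tabulated constants A2 (for the X-bar chart) and D3, D4 (for the R chart). The constant values below are the standard ones for subgroup sizes 2-6; the subgroup data in the test are hypothetical.

```python
# Standard control chart constants for subgroup sizes 2-6.
A2 = {2: 1.880, 3: 1.023, 4: 0.729, 5: 0.577, 6: 0.483}
D3 = {2: 0.0,   3: 0.0,   4: 0.0,   5: 0.0,   6: 0.0}
D4 = {2: 3.267, 3: 2.574, 4: 2.282, 5: 2.114, 6: 2.004}

def xbar_r_limits(subgroups):
    """Control limits for X-bar and R charts from a list of equal-size subgroups.

    Returns (grand average, average range, X-bar chart (LCL, UCL),
    R chart (LCL, UCL)). Assumes a constant subgroup size of 2-6.
    """
    n = len(subgroups[0])
    x_bar = sum(sum(s) / n for s in subgroups) / len(subgroups)   # grand average
    r_bar = sum(max(s) - min(s) for s in subgroups) / len(subgroups)  # average range
    xbar_limits = (x_bar - A2[n] * r_bar, x_bar + A2[n] * r_bar)
    r_limits = (D3[n] * r_bar, D4[n] * r_bar)  # D3 = 0 for n <= 6: ranges cannot be negative
    return x_bar, r_bar, xbar_limits, r_limits
```

Note that D3 is zero for subgroup sizes of six or less, which is exactly the rule stated below that the R chart's lower control limit is set to 0 for small subgroups.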
The range chart shows the spread or dispersion of the individual samples within the subgroup. If the product shows a wide spread, then the individuals within the subgroup are not similar to each other; equal averages can be deceiving. The R chart limits are calculated similarly to those of the x-bar chart, using the constants D3 and D4.

R chart exception: range values cannot be negative, so a value of 0 is given for the lower control limit for sample sizes of six or less.

Examine the process. A process is considered to be stable and in a state of control, or under control, when the performance of the process falls within the statistically calculated control limits and exhibits only chance, or common, causes.

Chart zones. Based on our knowledge of the normal curve, a control chart exhibits a state of control when: two thirds of all points are near the center value; the points appear to float back and forth across the centerline; the points are balanced on both sides of the centerline; no points are beyond the control limits; and there are no patterns or trends.

Revise the charts. In certain cases, control limits are revised because out-of-control points were included in the calculation of the control limits, or because the process is in control but the within-subgroup variation has significantly improved. To revise: interpret the original charts, isolate the causes, take corrective action, and revise the chart.

Only remove points for which you can determine an assignable cause

Achieve the purpose Our goal is to decrease the variation inherent in a process over time. As we improve the process, the spread of the data will continue to decrease. Quality improves!!

DOE, scatter diagram, stratification, flowchart (or run chart):

Stratification is a technique used in combination with other data analysis tools. When data from a variety of sources or categories have been lumped together, the meaning of the data can be impossible to see.

To benefit from stratification: always consider, before collecting data, whether stratification might be needed during analysis, and plan to collect the stratification information - after the data are collected it might be too late. On your graph or chart, include a legend that identifies the marks or colors used.

Histogram

DESCRIPTION: The histogram is the most commonly used graph for showing frequency distributions, or how often each different value in a set of data occurs. The data are numerical values. Use a histogram: to see the shape of the data's distribution, especially when determining whether the output of a process is distributed approximately normally; to analyze whether a process can meet the customer's requirements; to analyze what the output from a supplier's process looks like; to see whether a process change has occurred from one time period to another; to determine whether the outputs of two or more processes are different; or to communicate the distribution of data quickly and easily to others.

Survey.

IDEA CREATION TOOLS
Brainstorming
Benchmarking
Affinity diagram
Nominal group technique

PROJECT PLANNING AND IMPLEMENTATION TOOLS
Gantt chart
PDCA cycle

New management tools:
Affinity diagram

Interrelationship digraph
Tree diagram
Matrix diagram
Prioritization matrices
Process decision program chart
Activity network diagram

Six Sigma

Define - the improvement opportunity, with an emphasis on increasing customer satisfaction.
Measure - determine process capability and defects per million opportunities (DPMO).
Analyze - identify the vital few process input variables that affect key product output variables ("finding the knobs").
Improve - make changes to process settings, redesign processes, etc., to reduce the number of defects in key output variables.
Control - implement process control plans, install real-time process monitoring tools, and standardize processes to maintain the improved levels.

Key concepts: critical to quality; defect; process capability; variation; stable operation.

Methodology: the methodology is described by the acronym DMAIC, where D = Define, M = Measure, A = Analyze, I = Improve and C = Control.

Analytical tools for implementing Six Sigma:
Process mapping and modeling
Measurement systems analysis and process capability
Statistical tests, modeling and root cause analysis
Brainstorming
Design of experiments
FMEA and validation
Statistical process control

Advantages of Six Sigma:
Improved customer satisfaction
Improved quality, efficiency and cost of goods sold
Self-sustaining infrastructure
Commonality
Applications to manufacturing and the service sector, including IT
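The Measure step above centers on defects per million opportunities (DPMO). A minimal sketch of the calculation, with hypothetical figures; six-sigma performance corresponds to roughly 3.4 DPMO.

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities.

    Each unit offers a fixed number of opportunities for a defect;
    DPMO scales the observed defect rate to a million opportunities.
    """
    return 1_000_000 * defects / (units * opportunities_per_unit)

if __name__ == "__main__":
    # Hypothetical example: 12 defects over 5,000 units,
    # each with 8 defect opportunities.
    print(dpmo(defects=12, units=5000, opportunities_per_unit=8))  # 300.0
```

A result of 300 DPMO is far better than typical three-sigma performance but still well short of the 3.4 DPMO six-sigma target.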

APPLICATION OF SIX SIGMA IN SERVICES

Just as in manufacturing, defects found in a service process incur a cost, either to scrap or to rework. Service examples include the need to re-contact a customer in order to verify an order, providing an incorrect service, providing a substandard service, or even over-servicing (providing more than what is required). Service organizations such as health care and finance have been implementing Six Sigma and are registering benefits. The breadth of applications is now expanding to other services, including call centers (Hallowell and Gack), human resources (Bott, Keim, Kim, and Palser, 2000) and product support services (Schmidt and Aschkenase, 2004). Our literature review shows that most applications are limited to service industries in North America and the European countries. It is the financial benefits that have been publicized, as opposed to gains in terms of process improvement. Also important to note is that the applications emphasized the proper identification of critical success factors (CSFs), critical to quality (CTQ) characteristics, and key performance indicators (KPIs). These factors are now discussed (for a fuller discussion, see Ayon and Tan, 2006).

Critical success factors

Critical success factors are necessary in order that any Six Sigma effort may be successful. The literature review shows that top management commitment, education and training, culture change, and financial benefits are the most important CSFs for the successful application of Six Sigma in services. Other CSFs mentioned in a few of the studies include customer focus, clear performance metrics, and organizational understanding of work processes.

Critical to quality characteristics

Critical to quality characteristics are the key measurable indicators of a product or process whose performance standards or specification limits must be met in order to satisfy the customer.
In simple terms, CTQs are what customers expect of a product or service. Irrespective of differences among services, there exist some common CTQs, such as time (service time, waiting time, and cycle time), cost, employee behavior, and information (accurate and timely information).

APPLICATION OF SIX SIGMA IN IT:

Those applying Six Sigma to software design must proceed with caution. Changing the process in which software is designed can affect the fundamental design and completely destroy a product of this type altogether. There are common Six Sigma models (like DMAIC and PDSA) that should not be used when seeking Six Sigma solutions for software design issues; these models have been found either not to work at all or to work unpredictably, in ways that make charting positive results very difficult. Those first attempting to work on software design using Six Sigma soon realized that applying the traditional business model methods would not work, and instead decided to forge ahead with a new plan that would work for developing their particular specialized product. During rewrites of essential code, costly mistakes are often made. Without a process in place to track these changes and improve the entire design process, code may

be lost altogether. This is where Design for Six Sigma (DFSS) and the newly created Software Development Life Cycle improvement model can assist those in IT looking to improve design correctly. This new design method has allowed businesses with fully implemented Six Sigma programs to design software with a DPMO of 3.4. The new method makes it easier to reach those strict standards, as well as those of the traditional development cycle. Reaching both of these standards required designers to create seven stages of software development: Requirements, Architecture, Design, Implementation, Integration, Verification, and Validation.

During the Requirements stage, the client's needs are evaluated; software especially needs to be designed according to what a client requires. Once this is determined, the engineer decides the language and platform during the Architecture stage. The Design stage is when the engineer actually designs the interface; each piece of code should be tested thoroughly before it is placed in the program structure. Once designed, Implementation occurs when the program is introduced to the business processes, and Integration follows once everyone understands how the software works and is prepared to use it. Verification and Validation round out the process by testing and ensuring the software runs smoothly and without defects. To guarantee that all stages keep within Six Sigma standards, the engineers designing the software are provided with conclusive directions.

Six Sigma in manufacturing:

It is a known fact that Six Sigma has numerous practical uses in the manufacturing business. People who are operating a retail business will certainly require Six Sigma lean management rules. Implementing the strategies of this specific management program will allow many businesses to progress substantially.
Six Sigma is considered to be highly successful, and people from higher business ranks are now taking the initiative of pursuing this management course. Simply by implementing lean management rules, the manufacturing industry can maximize its profits.

Benchmarking:

Benchmarking is a systematic method by which organizations can measure themselves against the best industry practices. The essence of benchmarking is the process of borrowing ideas and adapting them to gain competitive advantage. It is a tool for continuous improvement: constantly emulating the best brings about change and creates world-class performers. It also helps an organization identify its core competency and improve on it - for example, IT or miniaturization.

Reasons to benchmark:
To establish an internal baseline

To identify performance gaps in various activities
To identify the areas that need improvement
To establish common practices and procedures
To bring about an effective communication process within the organization
To promote an understanding of the nature of benchmarking
To instill confidence in undertaking external benchmarking

Benchmarking process steps:
Decide what to benchmark
Understand current performance
Determine the benchmarking objective
Select a partner
Compare performance to dig up gaps
Determine objectives and action plans
Implement and review

Types of benchmarking: internal, competitive, process.

FMEA

Failure Mode and Effects Analysis is an analytical technique. It combines the technology and experience of people in identifying foreseeable failure modes of a product or process and planning for their elimination.

Stages of FMEA:
Specifying possibilities - functions; possible failure modes; root causes; effects; detection/prevention
Quantifying risk - probability of cause; severity of effect; effectiveness of control to prevent the cause; risk priority number
Correcting high-risk causes - prioritizing work; detailed action; assigning action responsibility; check points on completion
Revaluation of risk - recalculation of the risk priority number
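The "quantifying risk" stage above can be sketched as follows: the risk priority number (RPN) is the product of the severity, occurrence, and detection ratings, each conventionally on a 1-10 scale, and the highest-RPN causes are corrected first. The failure modes and ratings below are hypothetical illustration data.

```python
def rpn(severity, occurrence, detection):
    """Risk priority number: product of the three 1-10 ratings."""
    return severity * occurrence * detection

# Hypothetical failure modes: (name, severity, occurrence, detection).
failure_modes = [
    ("seal leak",      8, 4, 3),
    ("wrong label",    3, 6, 2),
    ("motor overheat", 9, 2, 5),
]

# Prioritize work: address the highest-RPN causes first.
ranked = sorted(failure_modes, key=lambda m: rpn(*m[1:]), reverse=True)
```

After corrective action, the occurrence or detection ratings are re-estimated and the RPN recalculated, which is the "revaluation of risk" step.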

Types of FMEA
SFMEA: when applied to the interaction of parts, it is called System Failure Mode and Effects Analysis
DFMEA: applied to a product, it is called Design Failure Mode and Effects Analysis
PFMEA: applied to a process, it is called Process Failure Mode and Effects Analysis
