QUALITY ASSURANCE
ACeL, AMITY UNIVERSITY
PREFACE
Software quality is a complex and multifaceted concept that can be described from different
perspectives, depending on the peculiarities of the context and the stakeholders. Though measuring
quality is not a new theme, asking a developer to measure the quality of a product may still sound
like an unfamiliar, or even new, addition to software activities. In this book an attempt has been
made to describe various pertinent aspects of software quality from different points of view.
Quality is a dynamic attribute which keeps changing over the life cycle of the product,
product line and product family. Quality attributes must be sustained, preserved and improved.
Therefore, it appears high time to introduce software quality aspects to the software engineers of
today rather than wait for them to learn through experience at a high cost.
Software quality assurance is now such a huge area that it is impossible to cover the whole
subject in one book. In addition, I emphasize the importance of the software quality assurance
life cycle; visualize software quality assurance planning, monitoring and testing; and show how
to understand and establish standards and procedures. I investigate the need for software quality
metrics and models, and describe the basic software quality assurance activities. The book also
describes the benefits of software quality assurance for projects, software quality assurance
planning, established standards and the evolution of standards. It also focuses on software
measurement and metrics, together with the need for, and the importance and significance of,
software metrics. Good testing involves much more than just running the program a few times to
see whether it works. Thorough analysis of a program helps us to test more systematically and
more effectively. My focus, therefore, is on key topics that are fundamental to all software
development processes: the software development process itself, software requirements and
specifications, software design techniques, techniques for developing large software systems,
CASE tools and software development environments, and software testing, documentation and
maintenance. I aim to combine the best of these approaches to build better software systems.
Time is compelling us to improve software development processes in order to provide good
quality maintainable software within reasonable cost and development time. As we know, quality
is easy to feel but difficult to define and impossible to measure. With this in mind, this book delivers a
comprehensive state-of-the-art overview and empirical results for researchers in academia and
industry in areas like software process management, empirical software engineering, and global
software development. Practitioners working in this area will also appreciate the detailed
descriptions and reports which can often be used as guidelines to improve their daily work.
The book is primarily intended as a student text for senior undergraduate and graduate students
studying computer science, software engineering or systems engineering.
In this course, chapters 1 and 2 may be used to provide an overview of software quality and
quality models.
A more extensive course, lasting a semester, might develop this material with either a
process or a techniques focus. If the orientation of the course is towards processes, then chapters
3, 4, 5 and 6 which cover software quality assurance, software quality control, metrics and
measurement of quality and quality standards might be covered in addition to the introductory
material.
Nevertheless, I hope that all software engineers and software engineering students will find
something of value here. The syllabus is provided below for your reference:
SYLLABUS
Module I: Quality Concepts and Practices
Why Quality?, Cost of Quality, TQM Concept, Quality Pioneers, Approaches to Quality.
Module II: Software Quality
Software Development Process, Software Quality Attributes (Product Specific and Organization
Specific), Hierarchical Models of Quality, Concept of Quality Assurance and Quality Control.
Module III: Software Quality Assurance
Implementing an IT Quality function, Content of SQA Plan, Quality Tools, Quality baselines,
Model and assessment fundamentals, Internal Auditing and Quality assurance.
Module IV: Software Quality Control
Testing Concepts - ad hoc, white box, black box and integration; Cost Effectiveness of Software
Testing - credibility and ROI, right methods; Developing Testing Methodologies - acquiring and
studying the test strategy, building the system test plan and unit plan; Verification and Validation
methods; Software Change Control - SCM, change control procedure; Defect Management -
causes, detection, removal and tracking.
Module V: Metrics and Measurement of Software Quality
Measuring Quality, measurement concepts - standard unit of measure, software metrics, metrics
bucket, problems with metrics, objective and subjective measurement, measures of central
tendency, attributes of good measurement, installing a measurement program; Risk Management -
defining and characterizing risk, managing risk, software risk management.
Module VI: Quality Standards
Introduction to various Quality standards: ISO-9000 Series, Six Sigma, SEI CMMi Model.
Table of Contents
PREFACE
SYLLABUS
CHAPTER 1: QUALITY CONCEPTS AND PRACTICES
1.1 INTRODUCTION
1.1.1 Definition of Quality
1.2 COST OF QUALITY
1.3 TOTAL QUALITY MANAGEMENT
1.3.1 TQM Definition
1.3.2 Principles of TQM
1.3.3 The Concept of Continuous Improvement by TQM
1.3.4 Implementation Principles and Processes of TQM
1.3.5 The Building Blocks of TQM
1.4 APPROACHES TO QUALITY
1.4.1 TQM Approach
1.4.2 Six Sigma
1.5 SUMMARY
Assignment-Module 1
Key - Module 1
CHAPTER 2: SOFTWARE QUALITY
2.1 SOFTWARE DEVELOPMENT PROCESS
2.1.1 System/Information Engineering and Modeling
2.1.2 Software Development Life Cycle
2.1.3 Processes
2.1.4 Software Development Activities
2.1.5 Process Activities/Steps
2.2 SOFTWARE DEVELOPMENT MODELS OR PROCESS MODEL
2.2.1 Waterfall Model
2.2.2 Prototyping Model
2.2.3 Spiral Model
2.2.4 Strength and Weakness of Waterfall, Prototype and Spiral Model
6.2.5 Origin and Meaning of the Term "Six Sigma Process"
6.2.6 Role of the 1.5 Sigma Shift
6.2.7 Sigma Levels
6.2.8 Software Used for Six Sigma
6.2.9 Application
6.2.10 Criticism
6.3 CAPABILITY MATURITY MODEL INTEGRATION (CMMI)
6.3.1 CMMI Representation
6.3.2 Appraisal
6.4 SUMMARY
Assignment-Module 6
Key - Module 6
REFERENCES
CHAPTER 1: QUALITY CONCEPTS AND PRACTICES
1.1 INTRODUCTION
To understand the landscape of software quality it is essential to answer the oft-asked
question: what is quality? Once the concept of quality is understood, it is easier to understand the
different frameworks of quality available on the market. As many prominent authors and
researchers have provided an answer to that question, we do not have the ambition of introducing
yet another answer; rather, we will answer the question by studying the answers that some of
the more prominent gurus of the quality management community have provided. By learning
from those who have gone down this path before us, we can identify two major camps when
discussing the meaning and definition of (software) quality:
i) Conformance to specification: quality is defined as a matter of products and services
whose measurable characteristics satisfy a fixed specification; that is, conformance to a
specification defined in advance.
ii) Meeting customer needs: quality is identified independent of any measurable
characteristics. That is, quality is defined as the product's or service's capability to meet customer
expectations, whether explicit or not.
Quality software saves a good amount of time and money. Because quality software has fewer
defects, it saves time during the testing and maintenance phases. Greater reliability contributes to
an immeasurable increase in customer satisfaction as well as lower maintenance costs. Because
maintenance represents a large portion of all software costs, the overall cost of the project will
most likely be lower than that of similar projects.
Management must understand these costs to create a quality improvement strategy. An
organization's main goal is to survive and to maintain high-quality goods or services; with a
comprehensive understanding of the costs related to quality, this goal can be achieved.
Quality costs are defined as the summation of costs over the life of a product. Customers prefer
products or services with high quality and a reasonable price. To ensure that customers will
receive a product or service that is worth the money they spend, firms should spend on prevention
and appraisal costs. Prevention costs are associated with preventing defects and imperfections
from occurring. Consider the Johnson and Johnson (J&J) safety seals that appear on all of their
products with the message, "if this safety seal is open do not use". This is a preventive measure
because, in the overall analysis, it is less costly to purchase the safety seals in production than to
undergo a possible cyanide scare. The focus of a prevention cost is to assure quality and
minimize or avoid the likelihood of an event with an adverse impact on the company's goods,
services or daily operations. This also includes the cost of establishing a quality system. A
quality system should include the following three elements: training, process engineering, and
quality planning. Quality planning means establishing a production process in conformance with
design specification procedures, and designing the proper test procedures and equipment.
Consider establishing training programs for employees to keep them current on emerging
technologies, such as updated computer languages and programs.
Appraisal costs are the direct costs of measuring quality. In this case, quality is defined as
conformance to customer expectations. Appraisal costs include lab testing, inspection, test
equipment and materials, and the costs associated with assessments for ISO 9000 or other quality
awards. A common example of appraisal costs is the expense of inspections. An organization
should establish an inspection of its products and of incoming goods from a supplier before they
reach the customer. This is also known as acceptance sampling, a technique used to verify that
products meet quality standards.
Failure costs are separated into two different categories: internal and external. Internal failure
costs are expenses incurred when defects are found before the product reaches the customer.
They include the cost of troubleshooting and the loss of production resulting from idle time,
whether of manpower or of the production process. External failure costs are associated with
product failure after the completion of the production process. An excellent example of external
failure costs is the J&J cyanide scare. The company incurred expenses in response to customer
fears of tampering with a purchased J&J product. However, J&J managed to survive the
incident, in part because of their method of corrective action.
Understanding the cost of quality is extremely important in establishing a quality management
strategy. After defining the three major costs of quality and discussing their application, we can
examine how they affect an organization. The more an organization invests in preventive
measures, the more it is able to reduce failure costs. Furthermore, an investment in quality
improvement benefits the company's image, performance and growth. This is summed up by the
Lundvall-Juran quality cost model, which applies the law of diminishing returns to these costs.
The model shows that prevention and appraisal costs have a direct relationship with quality
conformance, meaning that they increase as quality conformance increases. Failure costs, by
contrast, have an inverse relationship with quality conformance: as quality conformance
increases, failure costs decrease. Understanding these relationships and applying the cost of
quality process enables an organization to decrease failure costs and ensure that its products and
services continue to meet customer expectations. Some companies that have achieved this goal
include Neiman-Marcus, Rolex, and Lexus.
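The Lundvall-Juran trade-off can be made concrete with a small numerical sketch. The Python
fragment below uses invented cost curves (the function shapes and constants are purely
illustrative, not empirical data) to show prevention and appraisal costs rising with conformance
while failure costs fall, and to locate the conformance level that minimizes the total cost of quality.

    # Hypothetical illustration of the Lundvall-Juran cost-of-quality trade-off.
    # The cost curves are invented for demonstration; real curves must be
    # estimated from an organization's own quality cost data.

    def prevention_appraisal_cost(conformance: float) -> float:
        """Rises steeply as conformance approaches 100% (diminishing returns)."""
        return 10.0 * conformance / (1.001 - conformance)

    def failure_cost(conformance: float) -> float:
        """Internal plus external failure costs fall as conformance rises."""
        return 1000.0 * (1.0 - conformance)

    def total_quality_cost(conformance: float) -> float:
        return prevention_appraisal_cost(conformance) + failure_cost(conformance)

    # Scan conformance levels to find the minimum total cost of quality.
    levels = [i / 100 for i in range(50, 100)]
    best = min(levels, key=total_quality_cost)
    for c in (0.60, 0.80, best, 0.99):
        print(f"conformance {c:.2f}: total cost {total_quality_cost(c):10.2f}")

With these particular curves the total cost bottoms out at an intermediate conformance level,
which is exactly the diminishing-returns behavior the model describes.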
Philip Crosby states that quality is free. As discussed, the costs related to achieving quality are
traded off between the prevention and appraisal costs and the failure costs. The prevention and
appraisal costs incurred in improving quality allow an organization to minimize, or be free of,
the failure costs resulting from poor quality. In summation, understanding the cost of quality
helps companies to develop quality conformance as a useful strategic business tool that improves
their products, services and image. This leverage is vital in achieving the goals and mission of a
successful organization.
This shows that TQM must be practiced in all activities, by all personnel, in Manufacturing,
Marketing, Engineering, R&D, Sales, Purchasing, HR, etc.
The core of TQM is the customer-supplier interfaces, both externally and internally, and at each
interface lie a number of processes. This core must be surrounded by commitment to quality,
communication of the quality message, and recognition of the need to change the culture of the
organization to create total quality. These are the foundations of TQM, and they are supported by
the key management functions of people, processes and systems in the organization.
ii. Where mistakes cannot be absolutely prevented, detecting them early to prevent them being
passed down the value-added chain (inspection at source or by the next operation).
iii. Where mistakes recur, stopping production until the process can be corrected, to prevent
the production of more defects (stop in time).
The basis for TQM implementation is the establishment of a quality management system which
involves the organizational structure, responsibilities, procedures and processes. The most
frequently used guidelines for quality management systems are the ISO 9000 international
standards, which emphasize the establishment of a well-documented, standardized quality
system. The role of the ISO 9000 standards within the TQM circle of continuous improvement is
presented in the following figure.
However, the ISO 9000 standards do not prescribe particular quality management
techniques or quality-control methods. Because it is a generic organizational standard, ISO 9000
does not define quality or provide any specifications of products or processes. ISO 9000
certification only assures that the organization has in place a well-operated quality system that
conforms to the ISO 9000 standards. Consequently, an organization may be certified but still
manufacture poor-quality products.
A crisis, if it is not too disabling, can also help create a sense of urgency which can mobilize
people to act. In the case of TQM, this may be a funding cut or threat, or demands from
consumers or other stakeholders for improved quality of service. After a crisis, a leader may
intervene strategically by articulating a new vision of the future to help the organization deal
with it.
A plan to implement TQM may be such a strategic decision. Such a leader may then become a
prime mover, who takes charge in championing the new idea and showing others how it will help
them get where they want to go. Finally, action vehicles are needed and mechanisms or
structures to enable the change to occur and become institutionalized.
The only point at which true responsibility for performance and quality can lie is with the people
who actually do the job or carry out the process, each of which has one or several suppliers and
customers.
An efficient and effective way to tackle process or quality improvement is through teamwork.
However, people will not engage in improvement activities without commitment and recognition
from the organization's leaders, a climate for improvement and a strategy that is implemented
thoughtfully and effectively. The section on People expands on these issues, covering roles
within teams, team selection and development and models for successful teamwork.
An appropriate documented Quality Management System will help an organization not only
achieve the objectives set out in its policy and strategy, but also, and equally importantly, sustain
and build upon them. It is imperative that the leaders take responsibility for the adoption and
documentation of an appropriate management system in their organization if they are serious
about the quality journey. The Systems section discusses the benefits of having such a system,
how to set one up and successfully implement it.
Once the strategic direction for the organization's quality journey has been set, it needs
Performance Measures to monitor and control the journey, and to ensure the desired level of
performance is being achieved and sustained. They can, and should be, established at all levels in
the organization, ideally being cascaded down and most effectively undertaken as team activities
and this is discussed in the section on Performance.
ii. Strong leadership should ensure the organization understands its purpose and direction.
iii. People at all levels should be involved in the quality process for the organization to reap
the greatest benefit.
iv. Activities and resources should be managed as processes.
v. Interrelated processes should be identified and managed as a system.
vi. Continual improvement should be a permanent objective of the organization.
vii. Decisions should be based on the analysis of data and information.
viii. Mutually beneficial relationships should be maintained with suppliers.
But standards alone are often not enough for companies to reach their quality goals; hence
the development of more structured approaches such as six sigma.
Six sigma promises "improvements and results that are sustainable over time", and therefore
advocates the use of the six sigma management system, which aligns management strategy with
improvement efforts. Companies which have successfully implemented six sigma, such as GE,
have reported savings running into millions of dollars, and six sigma is now being combined
with lean manufacturing processes to great effect.
But it is highly unlikely that any of these interpretations represents the end goal of quality
management, which, as the methodologies teach, must always strive for continuous improvement.
1.5 SUMMARY
Quality plays a very important role in every aspect of software development, and a key role in
the successful implementation of software. As an attribute of an item, quality refers to
measurable characteristics - things we are able to compare to known standards such as length,
color, electrical properties, and malleability. However, software, largely an intellectual entity, is
more challenging to characterize than physical objects. Nevertheless, measures of a program's
characteristics do exist. These properties include cyclomatic complexity, cohesion, number of
function points, lines of code, and many others. When we examine an item based on its
measurable characteristics, two kinds of quality may be encountered: quality of design and
quality of conformance. TQM encourages participation amongst shop floor workers and
managers. TQM is an approach to improving the competitiveness, effectiveness and flexibility of
an organization for the benefit of all stakeholders. It is a way of planning, organizing and
understanding each activity, and of removing all the wasted effort and energy that is routinely
spent in organizations. It ensures that leaders adopt a strategic overview of quality and focus on
the prevention, not the detection, of problems. All senior managers must demonstrate their
seriousness and commitment to quality, and middle managers must, as well as demonstrating
their commitment, ensure they communicate the principles, strategies and benefits to the people
for whom they have responsibility. Only then will the right attitudes spread throughout the
organization.
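One of the measurable characteristics named above, cyclomatic complexity, can be computed
directly. The sketch below applies McCabe's formula V(G) = E - N + 2P to a made-up
control-flow graph (the graph and function names are illustrative assumptions, not taken from
the text).

    # McCabe's cyclomatic complexity: V(G) = E - N + 2P, where E is the number
    # of edges, N the number of nodes, and P the number of connected components
    # of the control-flow graph.

    def cyclomatic_complexity(edges: list[tuple[str, str]], nodes: set[str],
                              components: int = 1) -> int:
        return len(edges) - len(nodes) + 2 * components

    # Control-flow graph of a function containing a single if/else branch:
    nodes = {"entry", "branch", "then", "else", "exit"}
    edges = [("entry", "branch"), ("branch", "then"), ("branch", "else"),
             ("then", "exit"), ("else", "exit")]
    print(cyclomatic_complexity(edges, nodes))  # 5 - 5 + 2 = 2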
Assignment-Module 1
1. Quality is __________
a. Conformance to specification
b. Meeting customer needs
c. Both of them
d. None of them
6. "Mistakes may be made by people, but most of them are caused, or at least permitted, by
faulty systems and processes" is the principle of __________ .
a. Quality
b. TQM
c. Six Sigma
d. ISO 9000
7. The principles of TQM have been laid out as __________ principles, which make up the
__________ standards.
a. Six, ISO 9000
b. Two, ISO 9126
c. Eight, ISO 9001
d. Eight, ISO 9000
10. Six Sigma philosophy is the ___________ model for process improvement.
a. DMAIC
b. ISO 9126
c. Mc call
d. ISO 9000
Key - Module 1
1. c
2. c
3. a
4. a
5. d
6. b
7. d
8. a
9. d
10. a
2.1.3 Processes
More and more software development organizations implement process methodologies. The
Capability Maturity Model (CMM) is one of the leading models. Independent assessments can be
used to grade organizations on how well they create software according to how they define and
execute their processes. There are dozens of others, with other popular ones being ISO 9000, ISO
15504, and Six Sigma. There are several models for such processes, each describing approaches
to a variety of tasks or activities that take place during the process.
The activities of the software development process are represented in the form of a waterfall
model in the figure above. There are several other models to represent this process.
unclear requirements at the start of development. If the development is done externally, this
document can be considered a legal document so that if there are ever disputes, any ambiguity of
what was promised to the client can be clarified.
2.1.5.3 Specification
Specification is the task of precisely describing the software to be written, in a mathematically
rigorous way. In practice, most successful specifications are written to understand and fine-tune
applications that were already well-developed, although safety-critical software systems are
often carefully specified prior to application development. Specifications are most important for
external interfaces that must remain stable.
2.1.5.5 Implementation
Reducing a design to code may be the most obvious part of the software engineering job, but it is
not necessarily the largest portion.
2.1.5.6 Testing
Testing of parts of software, especially where code by two different engineers must work
together, falls to the software engineer. Different testing methodologies are available to unravel
the bugs that were committed during the previous phases. Different testing tools and
methodologies are already available. Some companies build their own testing tools that are tailor
made for their own development operations.
2.1.5.7 Documentation
An important task is documenting the internal design of software for the purpose of future
maintenance and enhancement. This may also include the writing of an API, be it external or
internal. The software engineering process chosen by the developing team will determine how
much internal documentation (if any) is necessary. Plan-driven models (e.g., Waterfall) generally
produce more documentation than agile models.
2.1.5.9 Maintenance
Maintaining and enhancing software to cope with newly discovered problems or new
requirements can take far more time than the initial development of the software. The software
will definitely undergo change once it is delivered to the customer. There can be many reasons
for this change to occur. Change could happen because of some unexpected input values into the
system. In addition, the changes in the system could directly affect the software operations. The
software should be developed to accommodate changes that could happen during the post
implementation period.
Not only may it be necessary to add code that does not fit the original design but just determining
how software works at some point after it is completed may require significant effort by a
software engineer. About 60% of all software engineering work is maintenance, but this statistic
can be misleading. A small part of that is fixing bugs. Most maintenance is extending systems to
do new things, which in many ways can be considered new work.
The phases of the waterfall model are:
i. Requirements specification
ii. Software design
iii. Implementation
iv. Testing
v. Deployment
vi. Maintenance
In a strict Waterfall model, after each phase is finished, it proceeds to the next one. Reviews may
occur before moving to the next phase which allows for the possibility of changes (which may
involve a formal change control process). Reviews may also be employed to ensure that the
phase is indeed complete; the phase completion criteria are often referred to as a "gate" that the
project must pass through to move to the next phase. Waterfall discourages revisiting and
revising any prior phase once it's complete. This "inflexibility" in a pure Waterfall model has
been a source of criticism by supporters of other more "flexible" models.
The Spiral is visualized as a process passing through some number of iterations, with the four
quadrant diagram representative of the following activities:
i. Formulate plans: identify software targets, select options for implementing the program,
and clarify the project's development restrictions;
ii. Risk analysis: analytically evaluate the selected options, and identify and resolve risks;
iii. Implementation: carry out software development and verify the result;
iv. Evaluation: assess the results of the iteration and plan the next iteration.
The spiral model is risk-driven and emphasizes options and constraints in order to support
software reuse; software quality can be treated as a specific goal integrated into product
development. However, the spiral model has some restrictive conditions, as follows:
i. The spiral model emphasizes risk analysis, and thus requires customers to accept this
analysis and act on it. This requires both trust in the developer and the willingness to
spend more to fix the issues, which is why this model is often used for large-scale
internal software development.
ii. If the implementation of risk analysis will greatly affect the profits of the project, the
spiral model should not be used.
iii. Software developers have to actively look for possible risks and analyze them accurately
for the spiral model to work.
The first stage is to formulate a plan to achieve the objectives with these constraints, and then
strive to find and remove all potential risks through careful analysis and, if necessary, by
constructing a prototype. If some risks cannot be ruled out, the customer has to decide whether
to terminate the project or to ignore the risks and continue anyway. Finally, the results are
evaluated and the design of the next phase begins.
Agile software development processes are built on the foundation of iterative development. To
that foundation they add a lighter, more people-centric viewpoint than traditional approaches.
Agile processes use feedback, rather than planning, as their primary control mechanism. The
feedback is driven by regular tests and releases of the evolving software.
Agile processes seem to be more efficient than older methodologies, using less programmer time
to produce more functional, higher quality software, but have the drawback from a business
perspective that they do not provide long-term planning capability. In essence, they say that they
will provide the most bang for the buck, but won't say exactly when that bang will be.
Extreme Programming, XP, is the best-known agile process. In XP, the phases are carried out in
extremely small (or "continuous") steps compared to the older, "batch" processes. The
(intentionally incomplete) first pass through the steps might take a day or a week, rather than the
months or years of each complete step in the Waterfall model. First, one writes automated tests,
to provide concrete goals for development. Next is coding (by a pair of programmers), which is
complete when all the tests pass, and the programmers can't think of any more tests that are
needed. Design and architecture emerge out of refactoring, and come after coding. Design is
done by the same people who do the coding. The incomplete but functional system is deployed
or demonstrated for the users (at least one of which is on the development team). At this point,
the practitioners start again on writing tests for the next most important part of the system.
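The test-first rhythm described above can be shown in miniature. The sketch below uses
Python's built-in unittest module; the shopping-cart example is hypothetical and stands in for
whatever feature the pair is implementing.

    import unittest

    # Step 1: write an automated test that states the goal before the code exists.
    class TestCart(unittest.TestCase):
        def test_total_sums_item_prices(self):
            cart = Cart()
            cart.add("book", 12.50)
            cart.add("pen", 1.50)
            self.assertEqual(cart.total(), 14.00)

    # Step 2: write the simplest code that makes the test pass, then refactor.
    class Cart:
        def __init__(self):
            self.items = []

        def add(self, name: str, price: float) -> None:
            self.items.append((name, price))

        def total(self) -> float:
            return sum(price for _, price in self.items)

    if __name__ == "__main__":
        unittest.main()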
While Iterative development approaches have their advantages, software architects are still faced
with the challenge of creating a reliable foundation upon which to develop. Such a foundation
often requires a fair amount of upfront analysis and prototyping to build a development model.
The development model often relies upon specific design patterns and entity relationship
diagrams (ERD). Without this upfront foundation, Iterative development can create long term
challenges that are significant in terms of cost and quality.
Critics of iterative development approaches point out that these processes place what may be an
unreasonable expectation upon the recipient of the software: that they must possess the skills and
experience of a seasoned software developer. The approach can also be very expensive, akin to...
"If you don't know what kind of house you want, let me build you one and see if you like it. If
you don't, we'll tear it all down and start over." A large pile of building-materials, which are now
scrap, can be the final result of such a lack of up-front discipline. The problem with this criticism
is that the whole point of iterative programming is that you don't have to build the whole house
before you get feedback from the recipient. Indeed, in a sense conventional programming places
more of this burden on the recipient, as the requirements and planning phases take place entirely
before the development begins, and testing only occurs after development is officially over.
The Component Assembly Model leads to component-based system architectures and software
reusability. The integration/assembly of already existing software components accelerates the
development process. Nowadays many component libraries are available on the Internet. If the
right components are chosen, the integration aspect is made much simpler.
2.3.1 Introduction
Quality attributes are the overall factors that affect run-time behavior, system design, and user
experience. They represent areas of concern that have the potential for application-wide impact
across layers and tiers. Some of these attributes are related to the overall system design, while
others are specific to run-time, design-time, or user-centric issues. The extent to which the
application possesses a desired combination of quality attributes such as usability, performance,
reliability, and security indicates the success of the design and the overall quality of the software
application.
When designing applications to meet any of the quality attribute requirements, it is necessary to
consider the potential impact on other requirements. You must analyze the tradeoffs between
multiple quality attributes. The importance or priority of each quality attribute differs from
system to system; for example, interoperability will often be less important in a single-use
packaged retail application than in a line-of-business (LOB) system.
This chapter lists and describes the quality attributes that you should consider when designing
your application. To get the most out of this chapter, use the table below to gain an
understanding of how quality attributes map to system and application quality factors, and read
the description of each of the quality attributes. Then use the sections containing key guidelines
for each of the quality attributes to understand how that attribute has an impact on your design,
and to determine the decisions you must make to address these issues. Keep in mind that the
list of quality attributes in this chapter is not exhaustive, but provides a good starting point for
asking appropriate questions about your architecture.
Category: Design Qualities
Conceptual Integrity - defines the consistency and coherence of the overall design, including
the way that components or modules are designed.
Maintainability - the ability of the system to undergo changes with a degree of ease. These
changes could impact components, services, features, and interfaces when adding or changing
functionality, fixing errors, and meeting new business requirements.
Reusability - defines the capability for components and subsystems to be suitable for use in
other applications and scenarios.

Category: System Qualities
Availability - the proportion of time that the system is functional and working.
Interoperability - the ability of a system to operate successfully by communicating and
exchanging information with other external systems.
Manageability - how easy it is for system administrators to manage the application.
Performance - the responsiveness of a system: its ability to execute an action within a given
time interval.
Reliability - the ability of a system to continue operating in the expected way over time.
Scalability - the ability of a system to handle increases in load without impact on performance,
or to be readily enlarged.
Security - the capability of a system to prevent malicious or accidental actions outside of the
designed usage, and to prevent disclosure or loss of information.
Supportability - the ability of the system to provide information helpful for identifying and
resolving issues when it fails to work correctly.
Testability - a measure of how well the system and its components allow the creation of test
criteria and the execution of tests against those criteria.

Category: User Qualities
Usability - how well the application meets the requirements of its users by being intuitive and
easy to use.
The following sections describe each of the quality attributes in more detail, and provide
guidance on the key issues and the decisions you must make for each one:
Availability
Conceptual Integrity
Interoperability
Maintainability
Manageability
Performance
Reliability
Reusability
Scalability
Security
Supportability
Testability
User Experience / Usability
Availability
Availability defines the proportion of time that the system is functional and working. It can be
measured as a percentage of the total system downtime over a predefined period. Availability
will be affected by system errors, infrastructure problems, malicious attacks, and system load.
The key issues for availability are:
A physical tier such as the database server or application server can fail or become
unresponsive, causing the entire system to fail. Consider how to design failover support for
the tiers in the system. For example, use Network Load Balancing for Web servers to
distribute the load and prevent requests being directed to a server that is down. Also, consider
using a RAID mechanism to mitigate system failure in the event of a disk failure. Consider if
there is a need for a geographically separate redundant site to failover to in case of natural
disasters such as earthquakes or tornados.
Denial of Service (DoS) attacks, which prevent authorized users from accessing the system,
can interrupt operations if the system cannot handle massive loads in a timely manner, often
due to the processing time required, or network configuration and congestion. To minimize
interruption from DoS attacks, reduce the attack surface area, identify malicious behavior,
use application instrumentation to expose behavior that can be monitored, and implement
comprehensive data validation. Consider using the Circuit Breaker or Bulkhead patterns to
increase system resiliency.
Inappropriate use of resources can reduce availability. For example, resources acquired too
early and held for too long cause resource starvation and an inability to handle additional
concurrent user requests.
Bugs or faults in the application can cause a system wide failure. Design for proper exception
handling in order to reduce application failures from which it is difficult to recover.
Frequent updates, such as security patches and user application upgrades, can reduce the
availability of the system. Identify how you will design for run-time upgrades.
A network fault can cause the application to be unavailable. Consider how you will handle
unreliable network connections; for example, by designing clients with occasionally-connected capabilities.
Consider the trust boundaries within your application and ensure that subsystems employ
some form of access control or firewall, as well as extensive data validation, to increase
resiliency and availability.
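As a worked example of the availability measure, the sketch below expresses availability as
uptime over total time using mean time between failures (MTBF) and mean time to repair
(MTTR); the figures are hypothetical.

    def availability(mtbf_hours: float, mttr_hours: float) -> float:
        """Proportion of time the system is functional: uptime / total time."""
        return mtbf_hours / (mtbf_hours + mttr_hours)

    # Hypothetical figures: a failure every 500 hours, 30 minutes to recover.
    a = availability(mtbf_hours=500.0, mttr_hours=0.5)
    print(f"availability:  {a:.4%}")                      # 99.9001%
    print(f"downtime/year: {(1 - a) * 8760:.1f} hours")   # about 8.8 hours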
Conceptual Integrity
Conceptual integrity defines the consistency and coherence of the overall design. This includes
the way that components or modules are designed, as well as factors such as coding style and
variable naming. A coherent system is easier to maintain because you will know what is
consistent with the overall design. Conversely, a system without conceptual integrity will
constantly be affected by changing interfaces, frequently deprecating modules, and lack of
consistency in how tasks are performed. The key issues for conceptual integrity are:
Mixing different areas of concern within your design. Consider identifying areas of
concern and grouping them into logical presentation, business, data, and service layers as
appropriate.
Inconsistent or poorly managed development processes. Consider performing an
Application Lifecycle Management (ALM) assessment, and make use of tried and tested
development tools and methodologies.
Lack of collaboration and communication between different groups involved in the
application lifecycle. Consider establishing a development process integrated with tools
to facilitate process workflow, communication, and collaboration.
Lack of design and coding standards. Consider establishing published guidelines for
design and coding standards, and incorporating code reviews into your development
process to ensure guidelines are followed.
Existing (legacy) system demands can prevent both refactoring and progression toward a
new platform or paradigm. Consider how you can create a migration path away from
legacy technologies, and how to isolate applications from external dependencies. For
example, implement the Gateway design pattern for integration with legacy systems.
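The Gateway pattern mentioned in the last point can be sketched briefly: the application codes
against a small interface it owns, and a gateway class adapts the legacy system to that interface.
The legacy client and its submit_txn call are hypothetical placeholders, not a real API.

    from abc import ABC, abstractmethod

    class BillingGateway(ABC):
        """Interface owned by the application, not by the legacy system."""
        @abstractmethod
        def charge(self, customer_id: str, amount: float) -> bool: ...

    class LegacyBillingGateway(BillingGateway):
        """Adapts a hypothetical legacy billing API to the application's interface."""
        def __init__(self, legacy_client):
            self._client = legacy_client

        def charge(self, customer_id: str, amount: float) -> bool:
            # Translate to whatever the legacy system expects.
            result = self._client.submit_txn(acct=customer_id,
                                             amt_cents=int(amount * 100))
            return result == "OK"

Swapping in a new billing provider then means writing one more gateway class; the rest of the
application is unchanged.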
Interoperability
Interoperability is the ability of a system or different systems to operate successfully by
communicating and exchanging information with other external systems written and run by
external parties. An interoperable system makes it easier to exchange and reuse information
internally as well as externally. Communication protocols, interfaces, and data formats are the
key considerations for interoperability. Standardization is also an important aspect to be
considered when designing an interoperable system. The key issues for interoperability are:
Interaction with external or legacy systems that use different data formats. Consider how you
can enable systems to interoperate, while evolving separately or even being replaced. For
example, use orchestration with adaptors to connect with external or legacy systems and
translate data between systems; or use a canonical data model to handle interaction with a
large number of different data formats.
Boundary blurring, which allows artifacts from one system to diffuse into another. Consider
how you can isolate systems by using service interfaces and/or mapping layers. For example,
expose services using interfaces based on XML or standard types in order to support
interoperability with other systems. Design components to be cohesive and have low
coupling in order to maximize flexibility and facilitate replacement and reusability.
Lack of adherence to standards. Be aware of the formal and de facto standards for the domain
you are working within, and consider using one of them rather than creating something new
and proprietary.
Maintainability
Maintainability is the ability of the system to undergo changes with a degree of ease. These
changes could impact components, services, features, and interfaces when adding or changing
the application's functionality in order to fix errors, or to meet new business requirements.
Maintainability can also affect the time it takes to restore the system to its operational status
following a failure or removal from operation for an upgrade. Improving system maintainability
can increase availability and reduce the effects of run-time defects. An application's
maintainability is often a function of its overall quality attributes, but there are a number of key
issues that can directly affect maintainability:
Excessive dependencies between components and layers, and inappropriate coupling to
concrete classes, prevents easy replacement, updates, and changes; and can cause changes to
concrete classes to ripple through the entire system. Consider designing systems as well-defined layers, or areas of concern, that clearly delineate the system's UI, business processes,
and data access functionality. Consider implementing cross-layer dependencies by using
abstractions (such as abstract classes or interfaces) rather than concrete classes, and minimize
dependencies between components and layers.
The use of direct communication prevents changes to the physical deployment of
components and layers. Choose an appropriate communication model, format, and protocol.
Consider designing a pluggable architecture that allows easy upgrades and maintenance, and
improves testing opportunities, by designing interfaces that allow the use of plug-in modules
or adapters to maximize flexibility and extensibility.
Manageability
Manageability defines how easy it is for system administrators to manage the application.
Design your application to be easy to manage, by exposing sufficient and useful instrumentation
for use in monitoring systems and for debugging and performance tuning. The key issues for
manageability are:
Lack of health monitoring, tracing, and diagnostic information. Consider creating a health
model that defines the significant state changes that can affect application performance, and
use this model to specify management instrumentation requirements.
Performance
Performance is an indication of the responsiveness of a system to execute specific actions in a
given time interval. It can be measured in terms of latency or throughput. Latency is the time
taken to respond to any event. Throughput is the number of events that take place in a given
amount of time. An application's performance can directly affect its scalability, and lack of
scalability can affect performance. Improving an application's performance often improves its
scalability by reducing the likelihood of contention for shared resources. Factors affecting
system performance include the demand for a specific action and the system's response to the
demand. The key issues for performance are:
Increased client response time, reduced throughput, and server resource over-utilization.
Ensure that you structure the application in an appropriate way and deploy it onto a system or
systems that provide sufficient resources. When communication must cross process or tier
boundaries, consider using coarse-grained interfaces that require the minimum number of
calls (preferably just one) to execute a specific task, and consider using asynchronous
communication.
Increased memory consumption, resulting in reduced performance, excessive cache misses
(the inability to find the required data in the cache), and increased data store access. Ensure
that you design an efficient and appropriate caching strategy.
Increased database server processing, resulting in reduced throughput. Ensure that you
choose effective types of transactions, locks, threading, and queuing approaches. Use
efficient queries to minimize performance impact, and avoid fetching all of the data when
only a portion is displayed. Failure to design for efficient database processing may incur
unnecessary load on the database server, failure to meet performance objectives, and costs in
excess of budget allocations.
Increased network bandwidth consumption, resulting in delayed response times and
increased load for client and server systems. Design high performance communication
between tiers using the appropriate remote communication mechanism. Try to reduce the
number of transitions across boundaries, and minimize the amount of data sent over the
network. Batch work to reduce calls over the network.
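Latency and throughput, as defined above, are easy to measure empirically. The sketch below
times a stand-in operation (handle_request is a hypothetical placeholder for the real work being
measured).

    import time

    def handle_request() -> None:
        time.sleep(0.005)  # stand-in for real work (about 5 ms)

    N = 200
    latencies = []
    start = time.perf_counter()
    for _ in range(N):
        t0 = time.perf_counter()
        handle_request()
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start

    print(f"mean latency: {1000 * sum(latencies) / N:.2f} ms")
    print(f"throughput:   {N / elapsed:.1f} requests/second")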
Reliability
Reliability is the ability of a system to continue operating in the expected way over time.
Reliability is measured as the probability that a system will not fail and that it will perform its
intended function for a specified time interval. The key issues for reliability are:
The system crashes or becomes unresponsive. Identify ways to detect failures and
automatically initiate a failover, or redirect load to a spare or backup system. Also, consider
implementing code that uses alternative systems when it detects a specific number of failed
requests to an existing system.
Output is inconsistent. Implement instrumentation, such as events and performance counters,
that detects poor performance or failures of requests sent to external systems, and expose
information through standard systems such as Event Logs, Trace files, or WMI. Log
performance and auditing information about calls made to other systems and services.
The system fails due to unavailability of other externalities such as systems, networks, and
databases. Identify ways to handle unreliable external systems, failed communications, and
failed transactions. Consider how you can take the system offline but still queue pending
requests. Implement store and forward or cached message-based communication systems that
allow requests to be stored when the target system is unavailable, and replayed when it is
online. Consider using Windows Message Queuing or BizTalk Server to provide a reliable
once-only delivery mechanism for asynchronous requests.
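The probabilistic definition of reliability above has a standard closed form under the common
assumption of a constant failure rate: R(t) = e^(-λt), where λ is the failure rate. The sketch
below evaluates it for a hypothetical rate of one failure per 1000 hours.

    import math

    def reliability(failure_rate_per_hour: float, hours: float) -> float:
        """P(no failure in the interval), assuming a constant failure rate."""
        return math.exp(-failure_rate_per_hour * hours)

    lam = 1.0 / 1000.0  # hypothetical: one failure per 1000 hours on average
    for t in (24, 168, 720):  # a day, a week, a month of operation
        print(f"P(no failure in {t:4d} h) = {reliability(lam, t):.3f}")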
Reusability
Reusability is the probability that a component will be used in other components or scenarios to
add new functionality with little or no change. Reusability minimizes the duplication of
components and the implementation time. Identifying the common attributes between various
components is the first step in building small reusable components for use in a larger system.
The key issues for reusability are:
The use of different code or components to achieve the same result in different places; for
example, duplication of similar logic in multiple components, and duplication of similar logic
in multiple layers or subsystems. Examine the application design to identify common
functionality, and implement this functionality in separate components that you can reuse.
Examine the application design to identify crosscutting concerns such as validation, logging,
and authentication, and implement these functions as separate components.
The use of multiple similar methods to implement tasks that have only slight variation.
Instead, use parameters to vary the behavior of a single method.
Using several systems to implement the same feature or function instead of sharing or
reusing functionality in another system, across multiple systems, or across different
subsystems within an application. Consider exposing functionality from components, layers,
and subsystems through service interfaces that other layers and systems can use. Use
platform agnostic data types and structures that can be accessed and understood on different
platforms.
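The second point above, replacing several near-duplicate methods with one parameterized
method, is worth a tiny example (the report-export scenario is invented):

    # Instead of separate export_csv(), export_tsv(), export_psv() methods,
    # one method with a delimiter parameter carries the variation.
    def export_delimited(rows: list[list[str]], delimiter: str = ",") -> str:
        return "\n".join(delimiter.join(row) for row in rows)

    rows = [["id", "name"], ["1", "widget"]]
    print(export_delimited(rows))        # comma-separated
    print(export_delimited(rows, "\t"))  # tab-separated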
Scalability
Scalability is the ability of a system either to handle increases in load without impact on the
performance of the system, or to be readily enlarged. There are two methods for
improving scalability: scaling vertically (scale up), and scaling horizontally (scale out). To scale
vertically, you add more resources such as CPU, memory, and disk to a single system. To scale
horizontally, you add more machines to a farm that runs the application and shares the load. The
key issues for scalability are:
Applications cannot handle increasing load. Consider how you can design layers and tiers for
scalability, and how this affects the capability to scale up or scale out the application and the
database when required. You may decide to locate logical layers on the same physical tier to
reduce the number of servers required while maximizing load sharing and failover
capabilities. Consider partitioning data across more than one database server to maximize
scale-up opportunities and allow flexible location of data subsets. Avoid stateful components
and subsystems where possible to reduce server affinity.
Users incur delays in response and longer completion times. Consider how you will handle
spikes in traffic and load. Consider implementing code that uses additional or alternative
systems when it detects a predefined service load or a number of pending requests to an
existing system.
The system cannot queue excess work and process it during periods of reduced load.
Implement store-and-forward or cached message-based communication systems that allow
requests to be stored when the target system is unavailable, and replayed when it is online.
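The store-and-forward idea in the last point can be sketched as a small queue wrapper; the send
callable and the ConnectionError-based failure signal are illustrative assumptions.

    from collections import deque

    class StoreAndForward:
        """Queues requests while the target is down; replays them when it is up."""
        def __init__(self, send):
            self._send = send          # callable that delivers to the target system
            self._pending = deque()

        def submit(self, request) -> None:
            self._pending.append(request)
            self.flush()

        def flush(self) -> None:
            while self._pending:
                try:
                    self._send(self._pending[0])
                except ConnectionError:
                    return             # target still down; keep requests queued
                self._pending.popleft()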
Security
Security is the capability of a system to reduce the chance of malicious or accidental actions
outside of the designed usage affecting the system, and prevent disclosure or loss of information.
Improving security can also increase the reliability of the system by reducing the chances of an
attack succeeding and impairing system operation. Securing a system should protect assets and
prevent unauthorized access to or modification of information. The factors affecting system
security are confidentiality, integrity, and availability. The features used to secure systems are
authentication, encryption, auditing, and logging. The key issues for security are:
Spoofing of user identity. Use authentication and authorization to prevent spoofing of user
identity. Identify trust boundaries, and authenticate and authorize users crossing a trust
boundary.
Damage caused by malicious input such as SQL injection and cross-site scripting. Protect
against such damage by ensuring that you validate all input for length, range, format, and
type using the constrain, reject, and sanitize principles. Encode all output you display to
users.
Data tampering. Partition the site into anonymous, identified, and authenticated users and use
application instrumentation to log and expose behavior that can be monitored. Also use
secured transport channels, and encrypt and sign sensitive data sent across the network.
Repudiation of user actions. Use instrumentation to audit and log all user interaction for
application critical operations.
Information disclosure and loss of sensitive data. Design all aspects of the application to
prevent access to or exposure of sensitive system and application information.
Interruption of service due to Denial of service (DoS) attacks. Consider reducing session
timeouts and implementing code or hardware to detect and mitigate such attacks.
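The "constrain, reject, and sanitize" principle above can be illustrated with a short sketch; the
username rules and comment field are hypothetical examples.

    import html
    import re

    USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,20}")  # constrain: length and charset

    def validate_username(raw: str) -> str:
        if not USERNAME_RE.fullmatch(raw):
            raise ValueError("invalid username")     # reject anything else
        return raw

    def render_comment(text: str) -> str:
        return html.escape(text)                     # sanitize: encode output

    print(validate_username("alice_42"))
    print(render_comment("<script>alert(1)</script>"))  # &lt;script&gt;...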
Supportability
Supportability is the ability of the system to provide information helpful for identifying and
resolving issues when it fails to work correctly. The key issues for supportability are:
Lack of diagnostic information. Identify how you will monitor system activity and
performance. Consider a system monitoring application, such as Microsoft System Center.
Lack of troubleshooting tools. Consider including code to create a snapshot of the system's
state to use for troubleshooting, and including custom instrumentation that can be enabled to
provide detailed operational and functional reports.
Lack of tracing ability. Use common components to provide tracing support in code, perhaps
through Aspect Oriented Programming (AOP) techniques or dependency injection. Enable
tracing in Web applications in order to troubleshoot errors.
Lack of health monitoring. Consider creating a health model that defines the significant state
changes that can affect application performance, and use this model to specify management
instrumentation requirements. Implement instrumentation, such as events and performance
counters, that detects state changes, and expose these changes through standard systems such
as Event Logs, Trace files, or Windows Management Instrumentation (WMI). Capture and
report sufficient information about errors and state changes in order to enable accurate
monitoring, debugging, and management.
Testability
Testability is a measure of how well a system or its components allow you to create test criteria
and execute tests to determine if the criteria are met. Testability allows faults in a system to be
isolated in a timely and effective manner. The key issues for testability are:
Complex applications with many processing permutations are not tested consistently, perhaps
because automated or granular testing cannot be performed if the application has a
monolithic design. Design systems to be modular to support testing. Provide instrumentation
or implement probes for testing, mechanisms to debug output, and ways to specify inputs
easily. Design components that have high cohesion and low coupling to allow testability of
components in isolation from the rest of the system.
Lack of test planning. Start testing early during the development life cycle. Use mock objects
during testing, and construct simple, structured test solutions.
Poor test coverage, for both manual and automated tests. Consider how you can automate
user interaction tests, and how you can maximize test and code coverage.
Input and output inconsistencies; for the same input, the output is not the same and the output
does not fully cover the output domain even when all known variations of input are provided.
Consider how to make it easy to specify and understand system inputs and outputs to
facilitate the construction of test cases.
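Mock objects, recommended above for isolating components under test, look like this in
Python's unittest.mock; the checkout function and payment gateway are invented for the example.

    import unittest
    from unittest.mock import Mock

    def checkout(gateway, amount: float) -> str:
        """Component under test: depends on a gateway we want to isolate."""
        return "paid" if gateway.charge(amount) else "declined"

    class TestCheckout(unittest.TestCase):
        def test_declined_charge(self):
            gateway = Mock()
            gateway.charge.return_value = False  # simulate a declined payment
            self.assertEqual(checkout(gateway, 9.99), "declined")
            gateway.charge.assert_called_once_with(9.99)

    if __name__ == "__main__":
        unittest.main()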
The achievement of pupils at school has generally been recorded under a series of headings,
usually subject areas such as Science, English, Maths and Humanities.
A qualitative assessment is generally made, along with a more quantified assessment. These
measures may be derived from a formal test or examination, continuous assessment of
coursework, or a quantified teacher assessment. In practice, the resulting scores are derived from
a whole spectrum of techniques. They range from those which may be regarded as objective and
transferable to those which are simply a more convenient representation of qualitative
judgements. In the past, these have been gathered together to form a traditional school report.
(Table 2.1)
The traditional school report often had an overall mark and grade, a single figure, generally
derived from the mean of the component figures, intended to provide a single measure of
success. In recent years, the assessment of pupils has become considerably more sophisticated
and the model on which the assessment is based has become more complicated. Subjects are
now broken down into skills, each of which is measured and the collective results used to give a
more detailed overall picture. For example, in English, pupils' oral skills are considered
alongside their ability to read; written English is further subdivided into an assessment of style,
content and presentation. The hierarchical model requires another level of sophistication in order
to accommodate the changes (Figure 2.1). Much effort is currently being devoted to producing a
broader-based assessment, and in ensuring that qualitative judgements are as accurate and
consistent as possible. The aim is for every pupil to emerge with a broad-based Record of
Achievement alongside their more traditional examination results.
Table 2.1 A traditional school report

Subject        Teacher's comments    Mark (%)
English
Maths
Science
Humanities
Languages
Technology
OVERALL
A hierarchical model of software quality is based upon a set of quality criteria, each of which has
a set of measures or metrics associated with it. This type of model is illustrated schematically in
Figure 2.2.
Examples of quality criteria typically employed include reliability, security and adaptability.
The issues relating to the criteria of quality are:
What criteria of quality should be employed?
How do they inter-relate?
How may the associated metrics be combined into a meaningful overall measure of quality?
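One common answer to the last question is a weighted aggregation: each criterion is scored from its associated metrics, and the criteria scores are combined using weights that reflect their relative importance. The Python sketch below shows the idea; the weights and metric values are invented for illustration.

# Hypothetical hierarchical model: metrics roll up into criteria, and
# weighted criteria combine into one overall quality figure.
model = {
    "reliability":  {"weight": 0.40, "metrics": [0.9, 0.7, 0.8]},
    "security":     {"weight": 0.35, "metrics": [0.6, 0.8]},
    "adaptability": {"weight": 0.25, "metrics": [0.5, 0.7, 0.6]},
}

def criterion_score(metrics):
    return sum(metrics) / len(metrics)   # mean of the metric scores

overall = sum(c["weight"] * criterion_score(c["metrics"]) for c in model.values())
for name, c in model.items():
    print(f"{name:>12}: {criterion_score(c['metrics']):.2f}")
print(f"{'OVERALL':>12}: {overall:.2f}")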
Product revision
The product revision perspective identifies quality factors that influence the ability to change the
software product. These factors are:
Maintainability, the ability to find and fix a defect.
Flexibility, the ability to make changes required as dictated by the business.
Testability, the ability to validate the software requirements.
Product transition
The product transition perspective identifies quality factors that influence the ability to adapt the
software to new environments:
Portability, the ability to transfer the software from one environment to another.
Reusability, the ease of using existing software components in a different context.
Interoperability, the extent, or ease, to which software components work together.
Product operations
The product operations perspective identifies quality factors that influence the extent to which
the software fulfils its specification:
Correctness, the extent to which the functionality matches the specification.
Reliability, the extent to which the system performs without failure.
Efficiency, system resource (including CPU, disk, memory and network) usage.
Integrity, protection from unauthorized access.
Usability, ease of use.
The McCall model, illustrated in Figure 2.4, identifies three areas of software work: product
operation, product revision and product transition. These are summarized in Table 2.2
McCall's model forms the basis for much quality work even today. For example, the MQ model
published by Watts (1987) is heavily based upon the McCall model. The quality characteristics
in this model are described as follows:
Utility is the ease of use of the software.
Integrity is the protection of the program from unauthorized access.
Efficiency is concerned with the use of resources, e.g. processor time, storage.
It falls into two categories: execution efficiency and storage efficiency.
Correctness is the extent to which a program fulfills its specification.
Reliability is its ability not to fail.
Maintainability is the effort required to locate and fix a fault in the program within its
operating environment.
Flexibility is the ease of making changes required by changes in the operating
environment.
Testability is the ease of testing the program, to ensure that it is error-free and meets its
specification.
Portability is the effort required to transfer a program from one environment to another.
Reusability is the ease of reusing software in a different context.
Interoperability is the effort required to couple the system to another system.
This study was carried out by the National Computer Centre (NCC). The characteristics and sub-characteristics of the McCall model are shown in the following figure.
The idea behind McCall's Quality Model is that the quality factors synthesized should provide a
complete software quality picture. The actual quality metric is achieved by answering yes/no
questions that are then put in relation to each other. That is, if you answer an equal number of
yes and no on the questions measuring a quality criterion, you will achieve 50% on that
criterion. The metrics can then be synthesized per quality criterion, per quality factor, or, if
relevant, per product or service, as the short sketch below illustrates.
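A small Python sketch of this synthesis, with made-up criteria and answers, might look as follows.

# answers[criterion] holds the yes/no answers (True/False) to the checklist
# questions for that criterion; the criterion score is the fraction of "yes".
answers = {
    "traceability": [True, True, False, True],
    "consistency":  [True, False],          # equal yes and no -> 50%
    "completeness": [True, True, True],
}

criterion_scores = {c: sum(a) / len(a) for c, a in answers.items()}

# A quality factor (say, correctness) is synthesized from its criteria.
factor_score = sum(criterion_scores.values()) / len(criterion_scores)

for c, s in criterion_scores.items():
    print(f"{c}: {s:.0%}")
print(f"factor score: {factor_score:.0%}")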
In Boehm's model, portability is the ease of changing software to accommodate a new environment.
These three primary uses had quality factors associated with them, representing the next level of
Boehm's hierarchical model. These quality factors are further broken down into primitive
constructs that can be measured; for example, testability is broken down into accessibility,
communicativeness, structure and self-descriptiveness. As with McCall's Quality Model, the
intention is to be able to measure the lowest level of the model.
Correctness was seen as an umbrella property encompassing other attributes. Two types of
correctness were consistently identified. Developers talked in terms of technical correctness,
which included factors such as reliability, maintainability and the traditional software virtues.
Computer users, however, talked of business correctness, of meeting business needs and criteria
such as timeliness, value for money and ease of transition.
This reinforced the existence of different views of quality. It suggests that these developers
emphasized conformance to specification, while users sought fitness for purpose. There was
remarkable agreement between the different organizations as to some of the basic findings.
In particular:
A basic distinction between business and technical correctness.
A recognition that different aspects of quality would influence each other.
The study confirmed that the relationships were often context and even project dependent.
The studies demonstrated that the relationships were often not commutative. Thus although
property A may reinforce property B, property B may not reinforce property A.
Table 2.4 Software quality criteria elicited from a large manufacturing company

Criteria                 Definition
Technical correctness
User correctness
Reliability              ... without failure.
Efficiency
Integrity
Security
Understandability
Flexibility
Ease of interfacing
Portability
User consultation
Accuracy
Timeliness               The extent to which delivery fits with the deadlines and practices of users.
Time to use
Appeal
User flexibility
Cost/benefit
User friendliness        The time to learn how to use the system and ease of use once learned.
defined in the plan. The quality management team is responsible for building up the primary
design of the plan. To develop this plan, certain steps are followed, which are described below.
Step 1: To define the quality goals for the processes. These goals must be accepted by both the
developer and the customer. These objectives are to be clearly described in the plan, so that both
parties can easily understand the scope of the processes. The developers might also set a
standard to define the goals. If possible, the plan can also describe the quality goals in terms of
measurement, which will ultimately help to grade the performance of the processes.
Step 2: To define the organization and the roles and responsibilities of the participant activities.
It should include the reporting system for the outcome of the quality reviews. The quality team
should know where to submit the reports, directly to the developers or somebody else. In many
cases, the reports are submitted to the project review team, who in turn delivers the report to the
subsequent departments and keeps it in storage for records. Whatever the reporting process is,
it should be well defined in the plan to avoid disputes or complications in the submission process
for reviews and audits.
Step 3: The subsidiary quality assurance plan: It includes the list of other related plans
describing project standards, which have references in any of the process. These subsidiary plans
are related to the quality standards of several business components and how they are related to
each other in achieving the collective qualitative objective. This information also helps to
determine the different types of reviews to be done and how often they will be performed.
Normally, the included referenced plans are identified below.
a. Documentation Plan
b. Measurement Plan
c. Risk Measurement Plan
d. Problem Resolution Plan
e. Configuration Management Plan
f. Product Development Plan
g. Test Plan
h. Subcontractor Management Plan etc.
Step 4: To identify the task and activities of the quality control team. Generally, this will include
following reviews:
a. Reviewing project plans to ensure that the project abides by the defined process.
b. Reviewing the project to ensure performance according to the plans.
c. Endorsement of variation from the standard process.
d. Assessing the improvement of the processes.
It is the responsibility of the quality manager, to fix the schedule for the reviews and audits to
conduct quality control. This schedule is also documented within the plan, so that task control
can be done at an individual level. Thus, the entire process of quality control is documented
within the plan. This helps as a guideline for the reviewers and developers, simultaneously.
Controls include product inspection, where every product is examined visually, often using a
stereo microscope for fine detail, before the product is sold into the external market. Inspectors
will be provided with lists and descriptions of unacceptable product defects such as cracks or
surface blemishes. Quality control emphasizes testing of products to uncover defects
and reporting to management who make the decision to allow or deny product release, whereas
quality assurance attempts to improve and stabilize production (and associated processes) to
avoid, or at least minimize, issues which led to the defect(s) in the first place.
standards, and the action taken when non-conformance is detected.
These activities begin at the start of the software development process with reviews of
requirements, and continue until all application testing is complete.
It is possible to have quality control without quality assurance. For example, a testing team may
be in place to conduct system testing at the end of development.
Following are some of the QC activities:
a. Relates to a specific product or service.
b. Implements the process.
c. Verifies whether specific attributes are present in the product or service.
d. Identifies defects so that they can be corrected.
e. Detects, reports and corrects defects.
f. Is concerned with a specific product.
Quality Control identifies defects for the primary purpose of correcting them, and verifies
whether specific attributes are, or are not, in a specific product or service.
Quality Assurance, by contrast, identifies weaknesses in processes and improves them; it
sets up measurement programs to evaluate processes.
Quality Control is the responsibility of the tester, while Quality Assurance is a management
responsibility, frequently performed by a staff function.
Quality Assurance is sometimes called quality control of quality control, because it evaluates
whether quality control is working; Quality Assurance personnel should never perform
quality control themselves unless it is to validate quality control.
Quality Assurance is preventive in nature, while Quality Control is detective in nature.
2.6 SUMMARY
All the different software development models have their own advantages and disadvantages.
Nevertheless, the contemporary commercial software development world incorporates a fusion
of these methodologies. Timing is very crucial in software development. If a delay happens in
the development phase, the market could be taken over by a competitor. Also, if a bug-filled
product is launched quickly (sooner than the competitors'), it may affect the reputation of the
company. So, there should be a trade-off between the development
time and the quality of the product. Customers don't expect a bug-free product, but they expect a
user-friendly product that they can give a thumbs-up to.
A better understanding of quality can be achieved by studying quality models. The initial
quality models were hierarchical in nature. These hierarchies provide a better perspective on
quality characteristics. The models proposed by McCall and Boehm fall into this category. The
perspectives in the McCall model are Product revision (ability to change), Product transition
(adaptability to new environments) and Product operations (basic operational characteristics). In
total, McCall identified 11 quality factors broken down by the 3 perspectives, as listed above.
For each quality factor McCall defined one or more quality criteria (a way of measurement); in
this way an overall quality assessment can be made of a given software product by evaluating
the criteria for each factor. Boehm's model was defined to provide a set of well-defined, well-differentiated characteristics of software quality.
In Boehm's model the hierarchy is extended, so that quality criteria are subdivided. There are
two levels of quality criteria, the intermediate level being further split into primitive
characteristics, which are amenable to measurement in this model.
Assignment-Module 2
1.
ISO/IEC 9126
c. IEEE
d. ISO 9000
2.
3.
4.
5.
6.
7.
8.
9.
10.
Key - Module 2
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
Place ultimate responsibility for quality with line organizations, and mobilize quality
networks or communities within these organizations.
Make quality a shared responsibility.
Create clear standards and measurements, e.g., "dashboard measurements," which
provide quality status information clearly and quickly.
Make use of existing process measures and checkpoints wherever possible rather than
introduce new measures.
Incorporate and align quality measures and business objectives.
Do not limit interventions to identifying failures to meet standards; require corrective
action plans based on root cause analysis.
Focus on correcting the process that contributed to failure rather than installing short-term fixes to problems.
The main challenge lies in leveraging and incorporating these concepts into the critical
components of an IT quality function. The following approach helps define an IT quality
function.
The quality function should be composed of a small, focused team within the IT community.
The key is to avoid creating a large, bureaucratic entity, but rather employ a small team that
represents an extended community in the business functions.
The IT quality function should be led by an influential executive reporting directly to the CIO or
the chief financial officer. This will ensure that the new function has the required influence and
can manage across the organization effectively. The small team of quality advocates will report
directly to the quality executive.
The IT quality function should focus on broad, cross-functional quality issues that are high
priority and critical in nature to resolve. From an IT perspective, the scope should include such
areas as application development, networking, databases, data centers and end-user support (help
desk). From a business perspective, the function's responsibilities should include virtually the
entire organization because most business areas will likely have some sort of IT infrastructure or
application.
The IT quality leader will work with business executives and the CIO, while the quality
advocates will work with the extended quality community. The leader's key responsibilities are:
Provide overall leadership in achieving IT quality objectives.
Represent an end-to-end perspective of IT quality issues.
Ensure linkage of IT quality and process improvement activities across the organization.
Communicate clearly the function's mission, objectives, issues, measures, etc.
Include IT quality objectives and initiatives in the IT strategy.
The IT quality function calls for a high-powered, extremely talented team of "A" players.
Therefore, the quality leader must be able to build and sustain an excellent executive network.
The leader should consistently demonstrate a high sense of urgency and motivate people to
address issues that concern the entire organization. For their part, quality advocates should be
adept at communicating with superiors and peers, analyzing issues and working in cross-functional teams. The business executives, the CIO and the IT quality leader must agree to a set
of measurements that will track the progress of IT quality initiatives and issues. While
consistency between groups is desirable, it is more important to relate the measures logically to
the activities involved. The quality measures should reflect the items that remain important to
users and those that drive user satisfaction. Each measure should include a target and time frame.
An example of a user-focused measure: User's perception of IT performance (measure), increase
to 75 percent (target), by second quarter 1999 (time frame). User-focused measures should be
based on the user's view of IT quality. However, the IT quality function must also measure the
internal drivers affecting user measures. For example: Number of defects per user (measure),
reduce by 10 percent (target), by fourth quarter 1999 (time frame).
knowledge, productivity, and quality and reduce costs, product development time, and engineering
changes.
Quality Function Deployment was developed by Yoji Akao in Japan in 1966. By 1972 the power
of the approach had been well demonstrated at the Mitsubishi Heavy Industries Kobe Shipyard
and in 1978 the first book on the subject was published in Japanese and then later translated into
English in 1994. In Akao's words, QFD "is a method for developing a design quality aimed at
satisfying the consumer and then translating the consumer's demand into design targets and
major quality assurance points to be used throughout the production phase. [QFD] is a way to
assure the design quality while the product is still in the design stage." As a very important side
benefit he points out that, when appropriately applied, QFD has demonstrated the reduction of
development time by one-half to one-third.
The 3 main goals in implementing QFD are:
i. Prioritize spoken and unspoken customer wants and needs.
ii. Translate these needs into technical characteristics and specifications.
iii. Build and deliver a quality product or service by focusing everybody toward customer
satisfaction.
Since its introduction, Quality Function Deployment has helped to transform the way many
companies:
Plan new products
Design product requirements
Determine process characteristics
Control the manufacturing process
Document already existing product specifications
QFD deployment extends through the following stages:
Product planning
Part development
Process planning
Production planning
Service
Quality function deployment is a team-based management tool in which the customer expectations
are used to drive the product development process. Conflicting characteristics or requirements are
identified early in the QFD process and can be resolved before production. Ultimately the goal of
QFD is to translate often subjective quality criteria into objective ones that can be quantified and
measured and which can then be used to design and manufacture the product. It is a
complementary method for determining how and where priorities are to be assigned in product
development. The intent is to employ objective procedures in increasing detail throughout the
development of the product.
Organizations today use market research to decide on what to produce to satisfy customer
requirements. Some customer requirements adversely affect others, and customers often cannot
explain their expectations. Confusion and misinterpretation are also a problem while a product
moves from marketing to design to engineering to manufacturing. This activity is where the voice
of the customer becomes lost and the voice of the organization adversely enters the product design.
Instead of working on what the customer expects, work is concentrated on fixing what the
customer does not want. In other words, it is not productive to improve something the customer did
not want initially. By implementing QFD, an organization is guaranteed to implement the voice of
the customer in the final product.
Quality function deployment helps identify new quality technology and job functions to carry out
operations. This tool provides a historic reference to enhance future technology and prevent design
errors. QFD is primarily a set of graphically oriented planning matrices that are used as the basis
for decisions affecting any phase of the product development cycle. Results of QFD are measured
based on the number of design and engineering changes, time to market, cost, and quality. It is
considered by many experts to be a perfect blueprint for concurrent engineering. Quality function
deployment enables the design phase to concentrate on the customer requirements, thereby
spending less time on redesign and modifications. The saved time has been estimated at one-third to one-half of the time taken for redesign and modification using traditional means. This
saving means reduced development cost and also additional income because the product enters
the market sooner.
QFD uses some principles from Concurrent Engineering in that cross-functional teams are
involved in all phases of product development. Each of the four phases in a QFD process uses a
matrix to translate customer requirements from initial planning stages through production
control. Each phase, or matrix, represents a more specific aspect of the product's requirements.
Relationships between elements are evaluated for each phase. Only the most important aspects
from each phase are deployed into the next matrix.
Phase 1: Product Planning: Building the House of Quality. Led by the marketing department,
Phase 1, or product planning, is also called The House of Quality. Many organizations only get
through this phase of a QFD process. Phase 1 documents customer requirements, warranty data,
competitive opportunities, product measurements, competing product measures, and the
technical ability of the organization to meet each customer requirement. Getting good data from
the customer in Phase 1 is critical to the success of the entire QFD process.
Phase 2: Product Design: Phase 2 is led by the engineering department. Product design
requires creativity and innovative team ideas. Product concepts are created during this phase and
part specifications are documented. Parts that are determined to be most important to meeting
customer needs are then deployed into process planning, or Phase 3.
Phase 3: Process Planning: Process planning comes next and is led by manufacturing
engineering. During process planning, manufacturing processes are flowcharted and process
parameters (or target values) are documented.
Phase 4: Process Control: And finally, in production planning, performance indicators are
created to monitor the production process, maintenance schedules, and skills training for
operators. Also, in this phase decisions are made as to which process poses the most risk and
controls are put in place to prevent failures. The quality assurance department in concert with
manufacturing leads Phase 4.
U.S. automobile manufacturers of the late 1980s to early 1990s needed an average of five years
to put a product on the market, from drawing board to showroom, whereas Honda could put a
new product on the market in two and a half years and Toyota in three years. Both organizations
credit this reduced time
to the use of QFD. Product quality and, consequently, customer satisfaction improves with QFD
due to numerous factors depicted in Figure 111.
(Figure: how QFD improves customer satisfaction. QFD is customer driven and reduces
implementation time; it promotes teamwork because it is based on consensus, creates
communication at interfaces, identifies actions at interfaces and creates a global view out of
details; and it provides documentation.)
orderly manner to serve future needs. This database also serves as a training tool for new
engineers. Quality function deployment is also very flexible when new information is introduced
or when things have to be changed on the QFD matrix.
There are many different types of customer information and ways that an organization can
collect data, as shown in Figure 3-2. The organization can search (solicited) for the information,
or the information can be volunteered (unsolicited) to the organization. Solicited and unsolicited
information can be further categorized into measurable (quantitative) or subjective (qualitative)
data. Furthermore, qualitative information can be found in a routine (structured) manner or
haphazard (random) manner.
(Figure 3-2: types of customer information. Information may be solicited or unsolicited,
quantitative or qualitative, and structured or random, and it may lead or lag events. Typical
sources include focus groups, complaint reports, organization standards, government regulations,
lawsuits, trade visits, customer visits, consultants, hot lines, surveys, customer tests, trade trials,
preferred customers, OM testing, product purchase surveys, customer audits, the sales force,
training programs, conventions, trade journals, trade shows, vendors, suppliers, academics and
employees.)
Customer information, sources, and ways an organization can collect data can be briefly stated as
follows:
Solicited, measurable, and routine data are typically found by customer surveys, market
surveys, and trade trials, working with preferred customers, analyzing products from other
manufacturers, and buying back products from the field. This information tells an
organization how it is performing in the current market.
Unsolicited, measurable, and routine data tend to take the form of customer complaints or
lawsuits. This information is generally disliked; however, it provides valuable learning
information.
Solicited, subjective, and routine data are usually gathered from focus groups. The object of
these focus groups is to find out the likes, dislikes, trends, and opinions about current and
future products.
Solicited, subjective, and haphazard data are usually gathered from trade visits, customer
visits, and independent consultants. These types of data can be very useful; however, they
can also be misleading, depending on the quantity and frequency of information.
Unsolicited, subjective, and haphazard data are typically obtained from conventions, vendors,
suppliers, and employees. This information is very valuable and often relates the true voice
of the customer.
The goal of QFD is not only to meet as many customer expectations and needs as possible, but
also to exceed customer expectations. Each QFD team must make its product either more
appealing than the existing product or more appealing than the product of a competitor. This
situation implies that the team has to introduce an expectation or need in its product that the
customer is not expecting but would appreciate. For example, cup holders were put into
automobiles as an extra bonus, but customers liked them so well that they are now expected in all
new automobiles.
The affinity diagram is a tool that gathers a large amount of data and subsequently organizes the
data into groupings based on their natural interrelationships. An affinity diagram should be
implemented when
Thoughts are too widely dispersed or numerous to organize.
New solutions are needed to circumvent the more traditional ways of problem solving.
Support for a solution is essential for successful implementation.
This method should not be used when the problem is simple or a quick solution is needed. The
team needed to accomplish this goal effectively should be a multidisciplinary one that has the
needed knowledge to delve into the various areas of the problem. A team of six to eight members
should be adequate to assimilate all of the thoughts. Constructing an affinity diagram requires
four simple steps:
Phrase the objective.
Record all responses.
Group the responses.
Organize groups in an affinity diagram.
The first step is to phrase the objective in a short and concise statement. It is imperative that the
statement be as generalized and vague as possible.
The second step is to organize a brainstorming session, in which responses to this statement are
individually recorded on cards and listed on a pad. It is sometimes helpful to write down a
summary of the discussion on the back of cards so that, in the future when the cards are
reviewed, the session can be briefly explained.
Next, all the cards should be sorted by placing the cards that seem to be related into groups.
Then, a card or word is chosen that best describes each related group, which becomes the
heading for each group of responses. Finally, lines are placed around each group of responses
and related clusters are placed near each other with a connecting line.
(Figure: the House of Quality. Customer requirements (the voice of the customer) form the rows,
and technical descriptors (the voice of the organization) form the columns. The body of the
matrix records the relationships between requirements and descriptors, the roof records the
interrelationships between technical descriptors, and the completed chart yields prioritized
customer requirements and prioritized technical descriptors.)
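The prioritized technical descriptors can be computed directly from the chart: each descriptor's priority is the sum, over all customer requirements, of the requirement's importance rating multiplied by its relationship value with that descriptor. The Python sketch below uses an invented packaging example with the common 9/3/1 (strong/medium/weak) relationship scores.

# Hypothetical House of Quality fragment.
requirements = {"easy to open": 5, "stays fresh": 4, "low cost": 3}
descriptors = ["seal strength", "film thickness", "material cost"]
relationship = {                     # rows: requirement, columns: descriptor
    "easy to open": [9, 3, 0],
    "stays fresh":  [3, 9, 0],
    "low cost":     [0, 3, 9],
}

# Prioritized technical descriptor = sum(importance x relationship value).
priority = [
    sum(requirements[r] * relationship[r][j] for r in requirements)
    for j in range(len(descriptors))
]
for d, p in sorted(zip(descriptors, priority), key=lambda t: -t[1]):
    print(f"{d}: {p}")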
The operation of the SQA team depends on how well their planning is done. In smaller
businesses, planning might not really dictate the flow of SQA, but in larger businesses, SQA
planning takes center stage. Without it, each component or department that works on the
application will be affected and will not function together effectively.
SQA planning tackles almost every aspect of SQA's operation. Through planning, the role of
each member and even non-member of the SQA team is clearly defined. The reason for this is
very simple: when everyone knows their role and boundaries, there is no overlapping of
responsibilities and everyone can concentrate on their roles.
But SQA planning is not only a document that tells who gets to do a specific task. The stages of
SQA are also detailed. The whole SQA team will be very busy once the actual testing starts, but
with SQA planning, everyone's work is clearly laid out. Through planning, the actual state of the
application testing is known.
Again, in smaller businesses the planning may be limited to the phase of the application testing,
but when outlined for corporations the scenario changes, and it is only through planning that
everyone will know where they are and where they are going in terms of SQA.
SQA planning is not just a simple document where objectives are written and stages are clearly
stated. Because of the need to standardize software development while limiting errors, a
scientific approach is recommended in developing an SQA plan, following standards such as
IEEE Std 730 or 983.
In the first phase, the SQA team should write in detail the activities related to the software
requirements. In this stage, the team will be creating steps and stages on how they will analyze
the software requirements. They could refer to additional documents to ensure the plan works
out.
In the second stage of the SQA plan, the SQAP for AD (Architectural Design), the team should
analyze in detail the preparation of the development team for the detailed build-up. This stage
produces a rough representation of the program, but it still has to go through rigorous scrutiny
before it reaches the next stage.
The third phase, which tackles the quality assurance plan for detailed design and the actual
product, is probably the longest of the phases. The SQA team should write in detail the tools and
approach they will be using to ensure that the produced application is written according to plan.
The team should also start planning the transfer phase as well.
The last stage is the QA plan for transfer of technology to the operations. The SQA team should
write their plan on how they will monitor the transfer of technology such as training and support.
ii.
Check sheet: A structured, prepared form for collecting and analyzing data; a generic
tool that can be adapted for a wide variety of purposes.
iii.
Control charts: Graphs used to study how a process changes over time.
iv.
Histogram: The most commonly used graph for showing frequency distributions, or how
often each different value in a set of data occurs.
v.
Pareto chart: Shows on a bar graph which factors are more significant.
vi.
Scatter diagram: Graphs pairs of numerical data, one variable on each axis, to look for a
relationship.
vii.
Stratification: A technique that separates data gathered from a variety of sources so that
patterns can be seen (some lists replace stratification with flowchart or run chart).
i. Cause-and-Effect Diagram
Also called: Fishbone Diagram, Ishikawa Diagram
Variations: cause enumeration diagram, process fishbone, time-delay fishbone, CEDAC (cause-
and-effect diagram with the addition of cards), desired-result fishbone, reverse fishbone diagram
The fishbone diagram identifies many possible causes for an effect or problem. It can be used to
structure a brainstorming session. It immediately sorts ideas into useful categories.
When to Use a Fishbone Diagram
When identifying possible causes for a problem.
Especially when a team's thinking tends to fall into ruts.
Fishbone Diagram Procedure
Materials needed: flipchart or whiteboard, marking pens.
Agree on a problem statement (effect). Write it at the center right of the flipchart or whiteboard.
Draw a box around it and draw a horizontal arrow running to it.
Brainstorm the major categories of causes of the problem. If this is difficult, use generic
headings:
Methods
Machines (equipment)
People (manpower)
Materials
Measurement
Environment
Write the categories of causes as branches from the main arrow.
Brainstorm all the possible causes of the problem. Ask: "Why does this happen?" As each idea is
given, the facilitator writes it as a branch from the appropriate category. Causes can be written in
several places if they relate to several categories.
Again ask "Why does this happen?" about each cause. Write sub-causes branching off the
causes. Continue to ask "Why?" and generate deeper levels of causes. Layers of branches
indicate causal relationships. When the group runs out of ideas, focus attention on places on the
chart where ideas are few.
Fishbone Diagram Example
This fishbone diagram was drawn by a manufacturing team to try to understand the source of
periodic iron contamination. The team used the six generic headings to prompt ideas. Layers of
branches show thorough thinking about the causes of the problem.
For example, under the heading "Machines", the idea "materials of construction" shows four
kinds of equipment and then several specific machine numbers.
Note that some ideas appear in two different places. "Calibration" shows up under "Methods" as
a factor in the analytical procedure, and also under "Measurement" as a cause of lab error. "Iron
tools" can be considered a "Methods" problem when taking samples or a "Manpower" problem
with maintenance personnel.
Control charts for variable data are used in pairs. The top chart monitors the average, or the
centering of the distribution of data from the process. The bottom chart monitors the range, or the
width of the distribution. If your data were shots in target practice, the average is where the shots
are clustering, and the range is how tightly they are clustered. Control charts for attribute data are
used singly.
Out-of-control signals
A single point outside the control limits. In Figure 3-4, point sixteen is above the UCL (upper
control limit).
Two out of three successive points are on the same side of the centerline and farther than 2σ
from it. In Figure 3-4, point 4 sends that signal.
Four out of five successive points are on the same side of the centerline and farther than 1σ from
it. In Figure 3-4, point 11 sends that signal.
A run of eight in a row are on the same side of the centerline. Or 10 out of 11, 12 out of 14 or 16
out of 20. In Figure 3-4, point 21 is eighth in a row above the centerline.
Obvious consistent or persistent patterns that suggest something unusual about your data and
your process.
When you start a new control chart, the process may be out of control. If so, the control limits
calculated from the first 20 points are conditional limits. When you have at least 20 sequential
points from a period when the process is operating in control, recalculate control limits.
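The centerline and limits are straightforward to compute. The rough Python sketch below uses the overall sample standard deviation for simplicity (real individuals charts usually estimate sigma from the moving range) and flags only the single-point-outside-the-limits signal; the data values are invented.

import statistics

data = [5.1, 4.9, 5.0, 5.2, 4.8, 5.1, 5.0, 4.7, 5.3, 5.0,
        5.1, 4.9, 5.2, 5.0, 4.8, 6.4, 5.0, 5.1, 4.9, 5.0]

center = statistics.mean(data)          # centerline
sigma = statistics.stdev(data)          # rough sigma estimate
ucl, lcl = center + 3 * sigma, center - 3 * sigma

print(f"centerline={center:.2f}  UCL={ucl:.2f}  LCL={lcl:.2f}")
for i, x in enumerate(data, start=1):
    if x > ucl or x < lcl:
        print(f"point {i} ({x}) is outside the control limits")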
iv Histogram
A frequency distribution shows how often each different value in a set of data occurs. A
histogram is the most commonly used graph to show frequency distributions. It looks very much
like a bar chart, but there are important differences between them.
When to Use a Histogram
When the data are numerical.
When you want to see the shape of the data's distribution, especially when determining whether
the output of a process is distributed approximately normally.
When analyzing whether a process can meet the customer's requirements.
When analyzing what the output from a supplier's process looks like.
When seeing whether a process change has occurred from one time period to another.
When determining whether the outputs of two or more processes are different.
When you wish to communicate the distribution of data quickly and easily to others.
Histogram Construction
Collect at least 50 consecutive data points from a process.
Use the histogram worksheet to set up the histogram. It will help you determine the number of
bars, the range of numbers that go into each bar and the labels for the bar edges. After calculating
W in step 2 of the worksheet, use your judgment to adjust it to a convenient number. For
example, you might decide to round 0.9 to an even 1.0. The value for W must not have more
decimal places than the numbers you will be graphing.
Draw x- and y-axes on graph paper. Mark and label the y-axis for counting data values. Mark
and label the x-axis with the L values from the worksheet. The spaces between these numbers
will be the bars of the histogram. Do not allow for spaces between bars.
For each data point, mark off one count above the appropriate bar with an X or by shading that
portion of the bar.
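The binning step itself amounts to a few lines of code. In the Python sketch below, W (the bar width) and lo (the left edge of the first bar) stand in for the worksheet results; the data values are invented.

from collections import Counter

data = [9.9, 10.1, 10.3, 9.8, 10.0, 10.2, 10.4, 9.7, 10.0, 10.1,
        10.2, 9.9, 10.0, 10.3, 10.1, 9.8, 10.0, 10.2, 9.9, 10.1]
W, lo = 0.2, 9.7

counts = Counter(int((x - lo) / W) for x in data)   # assign each point to a bar
for b in sorted(counts):
    left = lo + b * W
    print(f"{left:4.1f}-{left + W:4.1f} | {'X' * counts[b]}")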
Histogram Analysis
Before drawing any conclusions from your histogram, satisfy yourself that the process was
operating normally during the time period being studied. If any unusual events affected the
process during the time period of the histogram, your analysis of the histogram shape probably
cannot be generalized to all time periods.
Analyze the meaning of your histogram's shape.
v Pareto Chart
Also called: Pareto diagram, Pareto analysis
Variations: weighted Pareto chart, comparative Pareto charts
A Pareto chart is a bar graph. The lengths of the bars represent frequency or cost (time or
money), and are arranged with longest bars on the left and the shortest to the right. In this way
the chart visually depicts which situations are more significant.
When to Use a Pareto Chart
When there are many problems or causes and you want to focus on the most significant.
When analyzing broad causes by looking at their specific components.
When communicating with others about your data.
Calculate and draw cumulative sums: Add the subtotals for the first and second categories, and
place a dot above the second bar indicating that sum. To that sum add the subtotal for the third
category, and place a dot above the third bar for that new sum. Continue the process for all the
bars. Connect the dots, starting at the top of the first bar. The last dot should reach 100 percent
on the right scale.
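In code, this ordering and accumulation is a sort followed by a running total, as in the following Python sketch with invented defect counts.

defects = {"scratches": 42, "cracks": 23, "stains": 11, "chips": 9, "other": 5}

ordered = sorted(defects.items(), key=lambda kv: kv[1], reverse=True)  # longest bars first
total = sum(defects.values())

running = 0
for category, count in ordered:
    running += count
    print(f"{category:>10}: {count:3d}  cumulative {100 * running / total:5.1f}%")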
vi. Scatter Diagram
When to Use a Scatter Diagram
When determining whether two effects that appear to be related both occur with the same cause.
When testing for autocorrelation before constructing a control chart.
If Q is greater than or equal to the limit, the pattern could have occurred from random chance.
When the data are plotted, the more the diagram resembles a straight line, the stronger the
relationship.
If a line is not clear, statistics (N and Q) determine whether there is reasonable certainty that a
relationship exists. If the statistics say that no relationship exists, the pattern could have occurred
by random chance.
If the scatter diagram shows no relationship between the variables, consider whether the data
might be stratified.
If the diagram shows no relationship, consider whether the independent (x-axis) variable has
been varied widely. Sometimes a relationship is not apparent because the data don't cover a wide
enough range.
Think creatively about how to use scatter diagrams to discover a root cause.
Drawing a scatter diagram is the first step in looking for a relationship between variables.
vii. Stratification
Stratification is a technique used in combination with other data analysis tools. When data from a
variety of sources or categories have been lumped together, the meaning of the data can be
impossible to see. This technique separates the data so that patterns can be seen.
When to Use Stratification
Before collecting data.
When data come from several sources or conditions, such as shifts, days of the week, suppliers
or population groups.
When data analysis may require separating different sources or conditions.
Stratification Procedure
Before collecting data, consider which information about the sources of the data might have an
effect on the results. Set up the data collection so that you collect that information as well.
When plotting or graphing the collected data on a scatter diagram, control chart, histogram or
other analysis tool, use different marks or colors to distinguish data from various sources. Data
that are distinguished in this way are said to be stratified.
Analyze the subsets of stratified data separately. For example, on a scatter diagram where data
are stratified into data from source 1 and data from source 2, draw quadrants, count points and
determine the critical value only for the data from source 1, then only for the data from source 2.
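A sketch of this subset-by-subset analysis in Python follows; the reactor labels and measurements are invented, and a correlation coefficient is used in place of the quadrant-count procedure.

from collections import defaultdict
from statistics import correlation   # available from Python 3.10

samples = [  # (source, purity, iron)
    ("reactor1", 95.0, 0.20), ("reactor1", 94.9, 0.45),
    ("reactor1", 95.3, 0.30), ("reactor1", 95.1, 0.40),
    ("reactor2", 96.0, 0.10), ("reactor2", 94.0, 0.40), ("reactor2", 92.5, 0.60),
    ("reactor3", 95.5, 0.15), ("reactor3", 93.2, 0.50), ("reactor3", 91.8, 0.70),
]

by_source = defaultdict(list)        # stratify the data by its source
for source, purity, iron in samples:
    by_source[source].append((purity, iron))

for source, pairs in sorted(by_source.items()):
    purities = [p for p, _ in pairs]
    irons = [i for _, i in pairs]
    print(f"{source}: r = {correlation(purities, irons):+.2f}")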
Stratification Example
The ZZ400 manufacturing team drew a scatter diagram to test whether product purity and iron
contamination were related, but the plot did not demonstrate a relationship. Then a team member
realized that the data came from three different reactors. The team member redrew the diagram,
using a different symbol for each reactor's data:
Now patterns can be seen. The data from reactor 2 and reactor 3 are circled. Even without doing
any calculations, it is clear that for those two reactors, purity decreases as iron increases.
However, the data from reactor 1, the solid dots that are not circled, do not show that
relationship. Something is different about reactor 1.
Stratification Considerations
Here are examples of different sources that might require data to be stratified:
Equipment
Shifts
Departments
Materials
Suppliers
Day of the week
Time of day
Products
Survey data usually benefit from stratification.
Always consider before collecting data whether stratification might be needed during analysis.
Plan to collect stratification information. After the data are collected it might be too late.
On your graph or chart, include a legend that identifies the marks or colors used.
Publicly-traded corporations typically have an internal auditing department, led by a Chief Audit
Executive ("CAE") who generally reports to the Audit Committee of the Board of Directors,
with administrative reporting to the Chief Executive Officer.
Evaluate organizational structure, staffing, and internal audit approach of the department.
Determine how internal auditing is perceived through interviews and surveys with customers,
including governance personnel.
Examine techniques and methodology for testing controls. Identify ways to enhance the
department's policies and practices.
Evaluate whether the department conforms to The IIA's International Standards for the
Professional Practice of Internal Auditing (ISPPIA).
3.9 SUMMARY
The scope of Software Quality Assurance or SQA starts from the planning of the application
until it is distributed for actual operations. To successfully monitor the application build-up
process, the SQA team also has a written plan. In a regular SQA plan, the team will have
enumerated all the possible functions, tools and metrics that will be expected from the
application. SQA planning will be the basis of everything once the actual SQA starts. Without
SQA planning, the team will never know what the scope of their function is. Through planning,
the client's expectations are detailed, and from that point the SQA team will know how to build
metrics and the development team can start working on the application.
Quality function deployment, specifically the house of quality, is an effective management tool
in which customer expectations are used to drive the design process. QFD forces the entire
organization to be constantly aware of the customer requirements. Every QFD chart is a result of
the original customer requirements, which are not lost through misinterpretation or lack of
communication. Marketing benefits because specific sales points that have been identified by the
customer can be stressed. Most importantly, implementing QFD results in a satisfied customer.
Most of the organizations use quality tools for various purposes related to controlling and
assuring quality. Although there are a good number of quality tools specific to certain domains,
fields, and practices, some of the quality tools can be used across such domains. These quality
tools are quite generic and can be applied to any condition. There are various basic quality tools
used in organizations. These tools can provide much information about problems in the
organization, assisting in deriving solutions for them. A brief training, mostly self-training, is
sufficient for someone to start using the tools.
Auditing is an independent, objective assurance and consulting activity designed to add value
and improve an organization's operations. It helps an organization accomplish its objectives by
bringing a systematic, disciplined approach to evaluate and improve the effectiveness of risk
management, control, and governance processes. Internal auditing is a catalyst for improving an
organization's effectiveness and efficiency by providing insight and recommendations based on
analyses and assessments of data and business processes.
Assignment-Module 3
1.
a.
b.
c.
d.
2.
QFD focuses on
a.
Product Transition
b.
Product operation
c.
d.
3.
Benefits of QFD
a.
Customer satisfaction
b.
Conformance to specification
c.
d.
None of them
4.
a.
Tree diagram
b.
c.
Affinity diagram
d.
None of them
5.
a.
House of Quality
b.
Quality assurance
c.
Customer satisfaction
d.
Quality planning
6.
a.
Bar chart
b.
Ishikawa diagram
c.
None of them
d.
Both of them
7.
A ___________ always has a central line for the average, an upper line for the upper
control limit and a lower line for the lower control limit.
a.
Histogram
b.
Pareto chart
c.
Bar chart
d.
Control chart
8.
a.
Staged
b.
Continuous
c.
Industry
d.
None of them
9.
a.
b.
c.
d.
None of them
10.
a.
Internal auditors
b.
External auditors
c.
Both of them
d.
None of them
Key - Module 3
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
ii.
works as expected;
iii.
iv.
The view of software testing has evolved towards a more constructive one. Testing is no longer
seen as an activity which starts only after the coding phase is complete, with the limited purpose
of detecting failures. Software testing is now seen as an activity which should encompass the
whole development and maintenance process and is itself an important part of the actual product
construction. Indeed, planning for testing should start with the early stages of the requirement
process, and test plans and procedures must be systematically and continuously developed, and
possibly refined, as development proceeds. These test planning and designing activities
themselves constitute useful input for designers in highlighting potential weaknesses (like design
oversights or contradictions, and omissions or ambiguities in the documentation). Software
testing, depending on the testing method employed, can be implemented at any time in the
development process.
Different software development models will focus the test effort at different points in the
development process. Newer development models, such as Agile, often employ test-driven
development and place an increased portion of the testing in the hands of the developer, before it
reaches a formal team of testers. In a more traditional model, most of the test execution occurs
after the requirements have been defined and the coding process has been completed.
A primary purpose of testing is to detect software failures so that defects may be discovered and
corrected. Testing cannot establish that a product functions properly under all conditions but can
only establish that it does not function properly under specific conditions. The scope of software
testing often includes examination of code as well as execution of that code in various
environments and conditions as well as examining the aspects of code: does it do what it is
supposed to do and do what it needs to do. In the current culture of software development, a
testing organization may be separate from the development team. There are various roles for
testing team members. Information derived from software testing may be used to correct the
process by which software is developed.
conformance and nonconformance, he said. Conformance costs are those accrued when an
organization creates and tests quality software and can be broken down into prevention and
appraisal costs.
needed to get the coverage they want. Combinatorial test design enables users to get greater test
coverage with fewer tests.
4.2.3 Economics
A study conducted by NIST in 2002 reported that software bugs cost the U.S. economy $59.5
billion annually. More than a third of this cost could be avoided if better software testing were
performed. It is commonly believed that the earlier a defect is found, the cheaper it is to fix.
4.2.4 Roles
Software testing can be done by software testers. Until the 1980s the term "software tester" was
used generally, but later it was also seen as a separate profession. Regarding the periods and the
different goals in software testing, different roles have been established: manager, test lead, test
designer, tester, automation developer, and test administrator.
4.3.7 Testability
The term software testability has two related but different meanings: on the one hand, it refers
to the degree to which it is easy for software to fulfill a given test coverage criterion, as in
(Bac90); on the other hand, it is defined as the likelihood, possibly measured statistically, that the
software will expose a failure under testing, if it is faulty, as in (Voa95, Ber96a). Both meanings
are important.
Code coverage tools can evaluate the completeness of a test suite that was created with any
method, including black-box testing. This allows the software team to examine parts of a system
that are rarely tested and ensures that the most important function points have been tested. Code
coverage as a software metric can be reported as a percentage for:
Function coverage, which reports on functions executed
Statement coverage, which reports on the number of lines executed to complete the test
100% statement coverage ensures that every statement is executed at least once, though not that
every branch (in terms of control flow) is taken. Such coverage is helpful in ensuring correct
functionality, but not sufficient, since the same code may process different inputs correctly or
incorrectly.
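The insufficiency of full statement coverage is easy to demonstrate. In the Python fragment below, the single assertion executes every statement of average(), so a coverage tool such as coverage.py would report 100 percent statement coverage, yet the same statements fail for a different input.

def average(values):
    return sum(values) / len(values)

assert average([2, 4]) == 3   # executes every statement: 100% coverage

# average([])   # would raise ZeroDivisionError: untested input, same code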
Specification-based testing aims to test the functionality of software according to the applicable
requirements. This level of testing usually requires thorough test cases to be provided to the
tester, who then can simply verify that for a given input, the output value (or behavior), either
"is" or "is not" the same as the expected value specified in the test case. Test cases are built
around specifications and requirements, i.e., what the application is supposed to do. It uses
external descriptions of the software, including specifications, requirements, and designs to
derive test cases. These tests can be functional or non-functional, though usually functional.
Specification-based testing may be necessary to assure correct functionality, but it is insufficient
to guard against complex or high-risk situations. One advantage of the black box technique is
that no programming knowledge is required. Whatever biases the programmers may have had,
the tester likely has a different set and may emphasize different areas of functionality. On the
other hand, black-box testing has been said to be "like a walk in a dark labyrinth without a
flashlight." Because they do not examine the source code, there are situations when a tester
writes many test cases to check something that could have been tested by only one test case, or
leaves some parts of the program untested.
This method of test can be applied to all levels of software testing: unit, integration, system and
acceptance. It typically comprises most if not all testing at higher levels, but can also dominate
unit testing as well.
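A specification-based test suite derived purely from an external description might look like the following Python sketch; the shipping rule is an invented specification, and the tests exercise the boundary values it implies without reference to the implementation.

import unittest

# Hypothetical spec: shipping is free for orders of 50.00 or more,
# otherwise a flat 4.99 applies.
def shipping_cost(order_total):
    return 0.0 if order_total >= 50.0 else 4.99

class ShippingSpecTest(unittest.TestCase):
    def test_below_threshold(self):
        self.assertEqual(shipping_cost(49.99), 4.99)

    def test_at_threshold(self):           # boundary value from the spec
        self.assertEqual(shipping_cost(50.0), 0.0)

    def test_above_threshold(self):
        self.assertEqual(shipping_cost(120.0), 0.0)

if __name__ == "__main__":
    unittest.main()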
changes added late in the release or deemed to be risky, to very shallow, consisting of positive
tests on each feature, if the changes are early in the release or deemed to be of low risk.
A smoke test is used as an acceptance test prior to introducing a new build to the main testing
process.
Acceptance testing performed by the customer, often in their lab environment on their
own hardware, is known as user acceptance testing (UAT). Acceptance testing may be
performed as part of the hand-off process between any two phases of development.
methodologies work from use cases or user stories. Functional tests tend to answer the question
of "can the user do this" or "does this particular feature work."
Non-functional testing refers to aspects of the software that may not be related to a specific
function or user action, such as scalability or other performance, behavior under certain
constraints, or security. Testing will determine the breaking point, the point at which extremes of
scalability or performance lead to unstable execution. Non-functional requirements tend to be
those that reflect the quality of the product, particularly in the context of the suitability
perspective of its users.
radically in size. Stress testing is a way to test reliability under unexpected or rare workloads.
Stability testing (often referred to as load or endurance testing) checks to see if the software can
continuously function well in or above an acceptable period.
There is little agreement on what the specific goals of performance testing are. The terms load
testing, performance testing, reliability testing, and volume testing, are often used
interchangeably.
4.6.12 Accessibility
Accessibility testing might include compliance with:
Americans with Disabilities Act of 1990
Section 508 Amendment to the Rehabilitation Act of 1973
Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C)
Actual translation to human languages must be tested, too. Possible localization failures include:
Software is often localized by translating a list of strings out of context, and the translator
may choose the wrong translation for an ambiguous source string.
Technical terminology may become inconsistent if the project is translated by several
people without proper coordination or if the translator is imprudent.
Literal word-for-word translations may sound inappropriate, artificial or too technical in
the target language.
Untranslated messages in the original language may be left hard coded in the source
code.
Some messages may be created automatically at run time and the resulting string may be
ungrammatical, functionally incorrect, misleading or confusing.
Software may use a keyboard shortcut which has no function on the source language's
keyboard layout, but is used for typing characters in the layout of the target language.
Software may lack support for the character encoding of the target language.
Fonts and font sizes which are appropriate in the source language may be inappropriate in
the target language; for example, CJK characters may become unreadable if the font is
too small.
A string in the target language may be longer than the software can handle. This may
make the string partly invisible to the user or cause the software to crash or malfunction.
Software may lack proper support for reading or writing bi-directional text.
Software may display images with text that was not localized.
Localized operating systems may have differently-named system configuration files and
environment variables and different formats for date and currency.
To avoid these and other localization problems, a tester who knows the target language must run
the program with all the possible use cases for translation to see if the messages are readable,
translated correctly in context and do not cause failures.
Test guides
The testing phases could be guided by various aims, for example: in risk-based testing, which
uses the product risks to prioritize and focus the test strategy; or in scenario-based testing, in
which test cases are defined based on specified software scenarios.
Different testing techniques impose different costs and yield different levels of confidence in product reliability.
Termination
A decision must be made as to how much testing is enough and when a test stage can be terminated. Thoroughness measures, such as achieved code coverage or functional completeness, as well as estimates of fault density or of operational reliability, provide useful support, but are not sufficient in themselves.
Planning
Like any other aspect of project management, testing activities must be planned. Key aspects of
test planning include coordination of personnel, management of available test facilities and
equipment (which may include magnetic media, test plans and procedures), and planning for
possible undesirable outcomes. If more than one baseline of the software is being maintained,
then a major planning consideration is the time and effort needed to ensure that the test
environment is set to the proper configuration.
Test-case generation
Generation of test cases is based on the level of testing to be performed and the particular testing
techniques. Test cases should be under the control of software configuration management and
include the expected results for each test.
Execution
Execution of tests should embody a basic principle of scientific experimentation: everything
done during testing should be performed and documented clearly enough that another person
could replicate the results. Hence, testing should be performed in accordance with documented
procedures using a clearly defined version of the software under test.
Defect tracking
Failures observed during testing are most often due to faults or defects in the software. Such
defects can be analyzed to determine when they were introduced into the software, what kind of
error caused them to be created (poorly defined requirements, incorrect variable declaration,
memory leak, programming syntax error, for example), and when they could have been first
observed in the software. Defect-tracking information is used to determine what aspects of
software engineering need improvement and how effective previous analyses and testing have
been.
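As a rough illustration of the defect-tracking data described above, the following Python sketch records where a defect was introduced, what kind of error caused it, and where it could first have been observed; the field names and phase list are illustrative assumptions, not a standard schema.

from dataclasses import dataclass

@dataclass
class DefectRecord:
    defect_id: str
    introduced_in: str      # phase where the fault was introduced, e.g. "design"
    error_kind: str         # e.g. "incorrect variable declaration", "memory leak"
    first_observable: str   # earliest phase where it could have been observed
    found_in: str           # phase where testing actually observed the failure

def escaped_phases(defect: DefectRecord, phase_order: list) -> int:
    # Number of phases the defect escaped before being found; large values
    # suggest the earlier analyses and testing need improvement.
    return phase_order.index(defect.found_in) - phase_order.index(defect.first_observable)

phases = ["requirements", "design", "coding", "unit test", "system test"]
d = DefectRecord("D-42", "design", "poorly defined requirement", "design", "system test")
print(escaped_phases(d, phases))  # 3 phases escaped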
Design. Activities in the design phase: revise the test plan based on changes; revise test cycle matrices and timelines; verify that the test plan and cases are held in a database or requisite repository; continue to write test cases and add new ones based on changes; develop risk assessment criteria; formalize details for stress and performance testing; finalize test cycles (number of test cases per cycle based on time estimates per test case and priority); finalize the test plan; estimate resources to support development in unit testing.
Construction (Unit Testing Phase). Complete all plans; complete test cycle matrices and timelines; complete all manual test cases; begin stress and performance testing; test the automated testing system and fix bugs; support development in unit testing; run the QA acceptance test suite to certify that the software is ready to turn over to QA.
Test Cycle(s) / Bug Fixes (Re-Testing/System Testing Phase). Run the test cases (front and back end); report bugs; verify fixes; revise and add test cases as required.
Final Testing and Implementation (Code Freeze Phase). Execute all front-end test cases, manual and automated; execute all back-end test cases, manual and automated; execute all stress and performance tests; provide ongoing defect tracking metrics; provide ongoing complexity and design metrics; update estimates for test cases and test plans; document test cycles and regression testing, and update accordingly.
Traceability matrix
A traceability matrix is a table that correlates requirements or design documents to test documents. It is used to update tests when related source documents change, and to select test cases for execution, based on requirement coverage, when planning regression tests.
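To make the idea concrete, here is a minimal Python sketch of a traceability matrix used to select regression tests; the requirement and test-case identifiers are invented examples.

# Traceability matrix: each requirement maps to the test cases that cover it.
traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
}

def tests_for_changed_requirements(matrix, changed):
    # Select the test cases to re-run when the given requirements change.
    selected = set()
    for req in changed:
        selected.update(matrix.get(req, []))
    return sorted(selected)

print(tests_for_changed_requirements(traceability, {"REQ-001"}))  # ['TC-101', 'TC-102']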
Test case
A test case normally consists of a unique identifier, requirement references from a design
specification, preconditions, events, a series of steps (also known as actions) to follow, input,
output, expected result, and actual result. Clinically defined, a test case is an input and an expected result. This can be as pragmatic as 'for condition x your derived result is y', whereas other test cases describe the input scenario and expected results in more detail. A test case can occasionally be a series of steps (but often the steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) with one expected result or expected outcome. The optional fields are a test case ID, test step or order of execution
number, related requirement(s), depth, test category, author, and check boxes for whether the test
is automatable and has been automated. Larger test cases may also contain prerequisite states or
steps, and descriptions. A test case should also contain a place for the actual result. These steps
can be stored in a word processor document, spreadsheet, database, or other common repository.
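As an illustration of the 'input plus expected result' view of a test case, here is a minimal sketch using Python's unittest module; the add() function stands in for a hypothetical unit under test.

import unittest

def add(x, y):          # stand-in for the unit under test
    return x + y

class TestAdd(unittest.TestCase):
    # Test case TC-101: 'for condition x the derived result is y'.
    def test_positive_numbers(self):
        expected = 5            # expected result, taken from the specification
        actual = add(2, 3)      # actual result produced by the software
        self.assertEqual(expected, actual)

if __name__ == "__main__":
    unittest.main()

A collection of such test cases, gathered for instance with unittest.TestSuite, corresponds to the test suite described below.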
Test script
A test script is a procedure, or programming code, that replicates user actions. Initially, the term was derived from the work products created by automated regression test tools. A test case serves as a baseline for creating test scripts using a tool or a program.
Test suite
The most common term for a collection of test cases is a test suite. The test suite often also
contains more detailed instructions or goals for each collection of test cases. It definitely contains
a section where the tester identifies the system configuration used during testing. A group of test
cases may also contain prerequisite states or steps, and descriptions of the following tests.
Test harness
The software, tools, samples of data input and output, and configurations are all referred to
collectively as a test harness.
c. Existing test cases should be enhanced and further test cases should be designed to show that the software does not do anything that it is not specified to do, i.e. negative testing [Suitable techniques - Error guessing, Boundary value analysis, Internal boundary value testing, State-transition testing]
d. Where appropriate, test cases should be designed to address issues such as performance,
safety requirements and security requirements [Suitable techniques - Specification derived
tests]
e. Further test cases can then be added to the unit test specification to achieve specific test
coverage objectives. Once coverage tests have been designed, the test procedure can be
developed and the tests executed [Suitable techniques - Branch testing, Condition testing,
Data definition-use testing, State-transition testing]
User Interface Errors: Missing/wrong functions, doesn't do what the user expects, missing information, misleading or confusing information, wrong content in Help text, inappropriate error messages. Performance issues: poor responsiveness, can't redirect output, inappropriate use of the keyboard.
Error Handling: Inadequate protection against corrupted data, tests of user input, version
control; Ignores overflow, data comparison, Error recovery aborting errors, recovery from
hardware problems.
Boundary related errors: Boundaries in loop, space, time, memory, mishandling of cases
outside boundary.
Calculation errors: Bad Logic, Bad Arithmetic, Outdated constants, Calculation errors,
Incorrect conversion from one data representation to another, Wrong formula, Incorrect
approximation.
Initial and Later states: Failure to set data item to zero, to initialize a loop-control variable, or
re-initialize a pointer, to clear a string or flag, Incorrect initialization.
Control flow errors: Wrong returning state assumed, Exception handling based exits, Stack
underflow/overflow, Failure to block or un-block interrupts, Comparison sometimes yields
wrong result, Missing/wrong default, Data Type errors.
Errors in Handling or Interpreting Data: Un-terminated null strings, Overwriting a file after
an error exit or user abort.
Race Conditions: Assumption that one event or task has finished before another begins, resource races, a task starting before its prerequisites are met, messages that cross or don't arrive in the order sent.
Load Conditions: Required resources are not available, no large memory area available, low-priority tasks not put off, doesn't erase old files from mass storage, doesn't return unused memory.
Hardware: Wrong device, device unavailable, underutilizing device intelligence.
Source, Version and ID Control: No Title or version ID, Failure to update multiple copies of
data or program files.
Testing Errors: Failure to notice/report a problem, Failure to use the most promising test case,
Corrupted data files, Misinterpreted specifications or documentation, Failure to make it clear
how to reproduce the problem, Failure to check for unresolved problems just before release,
Failure to verify fixes, Failure to provide summary report.
Possess people skills and tenacity. Testers can face a lot of resistance from programmers. Being socially smart and diplomatic doesn't mean being indecisive. The best testers are both: socially adept and tenacious where it matters.
Organized. The best testers know very well that they too can make mistakes, and don't take chances. They are well organized and have checklists, and use files, facts and figures to support their findings, which can serve as evidence, and they double-check their findings.
Objective and accurate. They are very objective and know what they report, and so convey impartial and meaningful information that keeps politics and emotions out of the message. Reporting inaccurate information means losing credibility. Good testers make sure their findings are accurate and reproducible.
Defects are valuable. Good testers learn from them. Each defect is an opportunity to learn and
improve. A defect found early substantially costs less when compared to the one found at a later
stage. Defects can cause serious problems if not managed properly. Learning from defects helps
prevention of future problems, track improvements, improve prediction and estimation.
The terms verification and validation are commonly used interchangeably in the industry; it is
also common to see these two terms incorrectly defined.
Within the modeling and simulation community, the definitions of verification and validation are
similar:
Verification is the process of determining that a computer model, simulation, or
federation of models and simulations implementations and their associated data
accurately represents the developer's conceptual description and specifications.
i. Formal methods
ii. Cleanroom method
iii. Structured testing
Software inspections are efficient: projects can detect over 50% of the total number of defects introduced in development by doing them. Software inspections are economical because they result in significant reductions in both the number of defects and the cost of their removal.
Detection of a defect as close as possible to the time of its introduction results in:
an increase in the developers' awareness of the reason for the defect's occurrence, so that the likelihood of a similar defect recurring is reduced;
reduced effort in locating the defect, since no effort is required to diagnose which
component, out of many possible components, contains the defect.
Software inspections are formal processes. They differ from walkthroughs by:
repeating the process until an acceptable defect rate (e.g. number of errors per thousand lines
of code) has been achieved;
analysing the results of the process and feeding them back to improve the production process,
and forward to give early measurements of software quality;
avoiding discussion of solutions;
including rework and follow-up activities.
(b) Organisation
There are five roles in a software inspection:
moderator;
secretary;
reader;
inspector;
author.
The moderator leads the inspection and chairs the inspection meeting. The person should have
implementation skills, but not necessarily be knowledgeable about the item under inspection. He
or she must be impartial and objective. For this reason moderators are often drawn from staff
outside the project. Ideally they should receive some training in inspection procedures.
The secretary is responsible for recording the minutes of inspection meetings, particularly the
details about each defect found.
The reader guides the inspection team through the review items during the inspection meetings.
Inspectors identify and describe defects in the review items under inspection. They should be
selected to represent a variety of viewpoints (e.g. designer, coder and tester).
The author is the person who has produced the items under inspection. The author is present to
answer questions about the items under inspection, and is responsible for all rework.
A person may have one or more of the roles above. In the interests of objectivity, no person may
share the author role with another role.
(c) Input
The inputs to an inspection are the:
review items;
specifications of the review items;
inspection checklist;
standards and guidelines that apply to the review items;
inspection reporting forms;
defect list from a previous inspection.
(d) Activities
A software inspection consists of the following activities:
i.
overview;
ii.
preparation;
iii.
review meeting;
iv.
rework;
v.
follow-up.
(i) Overview
The purpose of the overview is to introduce the review items to the inspection team. The
moderator describes the area being addressed and then the specific area that has been designed in
detail. For a re-inspection, the moderator should flag areas that have been subject to rework since
the previous inspection. The moderator then distributes the inputs to participants.
(ii) Preparation
Moderators, readers and inspectors then familiarize themselves with the inputs. They might
prepare for a code inspection by reading:
design specifications for the code under inspection;
coding standards;
checklists of common coding errors derived from previous inspections;
code to be inspected.
Any defects in the review items should be noted on RID forms and declared at the appropriate
point in the examination. Preparation should be done individually and not in a meeting.
(iii) Review meeting
The moderator checks that all the members have performed the preparatory activities. The
amount of time spent by each member should be reported and noted. The reader then leads the
meeting through the review items. For documents, the reader may summarize the contents of
some sections and cover others line-by-line, as appropriate. For code, the reader covers every
piece of logic, traversing every branch at least once. Data declarations should be summarized.
Inspectors use the checklist to find common errors. Defects discovered during the reading should
be immediately noted by the secretary. The defect list should cover the:
severity (e.g. major, minor);
technical area (e.g. logic error, logic omission, comment error);
location;
description.
Any solutions identified should be noted. The inspection team should avoid searching for
solutions and concentrate on finding defects. At the end of the meeting, the inspection team takes
one of the following decisions:
accept the item when the rework (if any) is completed;
make the moderator responsible for accepting the item when the rework is completed;
reinspect the whole item (usually necessary if more than 5% of the material requires rework).
The secretary should produce the minutes immediately after the review meeting, so that rework
can start without delay.
(iv) Rework
After examination, software authors correct the defects described in the defect list.
(v) Follow-up
After rework, follow-up activities verify that all the defects have been properly corrected and
that no secondary defects have been introduced. The moderator is responsible for follow-up.
Other follow-up activities are the:
updating of the checklist as the frequency of different types of errors change;
analysis of defect statistics, perhaps resulting in the redirection of SVV effort.
(e) Output
The outputs of an inspection are the:
defect list;
defect statistics;
inspection report.
Assertions are placed in the code as comments. Verification is achieved by arguing that the code
complies with the requirements present in the assertions.
A Software Version Control (SVC) system or Source Code Management (SCM) tool should
be used to control software changes and versions.
The ability to return to earlier states in the code should be built into the software change
control system.
Files should be locked while they are being worked on so only one developer may make
changes to specific files at a time. This will prevent overwriting of work.
All files associated with the code must be under version control including software
requirements files.
All developers should have home folders where they can place their own experimental code
outside the main project. This should only be used for building tools not directly required by
the project and will not be allowed to contain project code.
Each software change request should be assigned a unique tracking number; a sketch of such a record appears after this list.
Identify the person(s) who are essential for authorizing changes to software and have only them approve the changes. This will prevent too much bureaucracy and cost.
Automate the change control process as much as possible and use a version control or code management tool that includes change management if possible.
When a software change is committed to the system, the description of the change and the reason for it must be meaningful and useful.
Consider the environment and project phase in your change control process. If a project is
under development and has never gone to production, the change control process should be
simpler. But even in this case some change control is required so the program team is aware
of changes to other code which may impact what they are trying to do.
For production changes have someone with specific knowledge about the project and how
the application works review the changes before deployment.
Stakeholders must be aware of production changes and/or approve the change. The
stakeholder may approve the change before it is made and someone with detailed project
knowledge may approve the change (and communicate it to required management and staff)
when it is made.
Multiple changes to production software should be bundled into a single change when
possible.
When code is in the project development stage, programmers must check their changes in often. The team must be aware of all areas of code being changed and must meet at least weekly.
Code change procedures should encourage frequent code check in. Code validation
procedures should not be an administrative nightmare. Code changes should be committed in
logical sections.
Create a process or tool with contact information about those who should be notified about
changes to each specific project. When projects have code changes applied, be sure those
people are contacted either manually or automatically using a tool.
Codelines should have policies specific to the reason for their existence. A production
codeline that has been released should have policy limiting changes to fixes to specific error
types.
Every codeline must have someone in charge of it to make decisions not covered by policies
or processes.
New codelines should be created only when necessary which includes when a different
codeline policy is required.
Track all changes and track all changes to each branch so code changes may be effectively
and efficiently propagated to code branches.
Implement the change control processes based on the Software change Management Policy
Many experts require a change control board to be used for change approval. However, a change control board may or may not be necessary or efficient. The need for one should depend upon the purpose in having one, the environment (development, QA, production) in which changes are being made, the nature of your organization, considerations for efficiency, and the value added by the additional control. A change control board should be used when it adds
value to the change control process. The objectives of the change control process should be
kept in mind when setting up the process and deciding whether to use a change control board.
Objectives are:
Track changes
Ensure quality
Be sure changes are tested
Be sure a backout plan exists
Inform users
There should be many changes of a similar type, which allows templates to be used during the approval process. If a change control board improves the above objectives and does not significantly reduce efficiency, it should be used. The board, if structured correctly, could be used to help users get ready for, or be aware of, the change.
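To tie several of the guidelines above together, here is a hedged Python sketch of a change request record carrying a unique tracking number, a meaningful description, and environment-dependent approval; the field names and the approval rule are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    tracking_number: str                   # unique per change request
    description: str                       # meaningful reason for the change
    environment: str                       # "development", "QA" or "production"
    approvals: list = field(default_factory=list)

    def approve(self, approver: str):
        self.approvals.append(approver)

    def may_deploy(self) -> bool:
        # Production changes need at least one approval; development ones do not.
        return self.environment != "production" or len(self.approvals) > 0

cr = ChangeRequest("CR-2041", "Fix rounding error in invoice totals", "production")
cr.approve("release-manager")
print(cr.may_deploy())  # True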
Change management is not an isolated process. The project team must be clear on what, when, how, and why to carry it out.
The relationship between change tracking and SCM is at the heart of change management. SCM
standards commonly define change control as a subordinated task after configuration
identification. This has led some developers to see SCM as a way to prevent changes rather than
facilitate them. By emphasizing the change tracking and SCM relationship, change management
focuses on selecting and making the correct changes as efficiently as possible. In this context,
SCM addresses versions, workspaces, builds, and releases.
A change data repository supports any change management process. When tracking changes,
developers, testers, and possibly users enter data on new change items and maintain their status.
SCM draws on the change data to document the versions and releases, also stored in a repository,
and updates the data store to link changes to their implementation. Software change management
is an integral part of project management. The only way for developers to accomplish their
project goals is to change their software.
4.11.2.3 Enhancements
All software projects are a research and development effort to some extent, so you will receive
enhancement ideas. Here is where project management is most significant: the idea could be a
brilliant shortcut to the project goal, or a wrong turn that threatens project success. As with
requirements or design errors, you need to document these types of changes. Adhere to your
development standards when implementing an enhancement to assure future maintainability.
Who will administer and enforce the procedures? Often this becomes a task for SCM or
the release manager, since it directly impacts their efforts.
You don't need to handle all issues at all project stages the same way. Think of the project as
consisting of concentric worlds starting with the development team, expanding to the test team,
the quality team, and finally the customer or user. As your team makes requirements, design, and
software available to wider circles, you need to include these circles in change decisions. For
example, accepting a change to a code module will require retesting the module. You must notify
the test team, who should at least have a say in the scheduling. The standard SCM baselines
represent an agreement between the customer and the project team about the product: initially the
requirements, then the design, and finally the product itself. The customer must approve any
change to the agreed-upon items. The change management process helps you maintain good faith
with the customer and good communication between project members.
A capable SCM tool can link a new version directly to the change request it implements and to tests completed against it.
At the simple and inexpensive end of the tool scale are SCCS (part of most UNIX systems) and
RCS, which define the basics of version control. Various systems build on these, including CVS
and Sun's TeamWare, adding functions such as workspace management, graphical user interface, and (nearly) automatic merging. In the midrange are products such as Microsoft's SourceSafe, Merant's PVCS, MKS Source Integrity, and Continuus/CM, which generally provide features to organize artifacts into sets and projects. Complete SCM environments are represented by Platinum's CCC/Harvest and Rational's ClearCase, giving full triggering and
integration capabilities.
When selecting a tool for your organization, verify that the vendor can support your implementation or recommend a consultant who can.
Defects differ in the kind of issue they address. Some defects address security or database issues while others may refer to functionality or UI issues.
Security Defects: Application security defects generally involve improper handling of data sent
from the user to the application. These defects are the most severe and given highest priority for
a fix.
Examples:
- Authentication: Accepting an invalid username/password
- Authorization: Access to pages granted even though permission was not given
Data Quality/Database Defects: Deals with improper handling of data in the database.
Examples:
- Values not deleted/inserted into the database properly
- Improper/wrong/null values inserted in place of the actual values
Critical Functionality Defects: The occurrence of these bugs hampers the crucial functionality
of the application.
Examples:
- Exceptions
Functionality Defects: These defects affect the functionality of the application.
Examples:
- All Javascript errors
- Buttons like Save, Delete, Cancel not performing their intended functions
- A missing functionality (or) a feature not functioning the way it is intended to
- Continuous execution of loops
User Interface Defects: As the name suggests, these bugs deal with problems related to the UI and are usually considered less severe.
Examples:
- Improper error/warning/UI messages
- Spelling mistakes
- Alignment problems
4.13.3.3 Defect Discovery - Identification and reporting of defects for development team
acknowledgment. A defect is only termed discovered when it has been documented and
acknowledged as a valid defect by the development team member(s) responsible for the
component(s) in error.
4.13.3.4 Defect Resolution - Work by the development team to prioritize, schedule and fix a
defect, and document the resolution. This also includes notification back to the tester to ensure
that the resolution is verified.
4.13.3.5 Process Improvement - Identification and analysis of the process in which a defect
originated to identify ways to improve the process to prevent future occurrences of similar
defects. Also the validation process that should have identified the defect earlier is analyzed to
determine ways to strengthen that process.
Verification confirms that a resolution has not introduced side effects or regressions. Once all affected branches of development have been verified as resolved, the defect can be closed.
4.14.5 Communication - This encompasses automatic generation of defect metrics for
management reporting and process improvement purposes, as well as visibility into the presence
and status of defects across all disciplines of the software development team.
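The life cycle implied by sections 4.13.3.3 through 4.14.5 can be pictured as a small state machine. The following Python sketch is illustrative only; the state names and transitions are assumptions drawn from the text.

from enum import Enum

class DefectState(Enum):
    REPORTED = "reported"        # documented by a tester
    DISCOVERED = "discovered"    # acknowledged as valid by development
    RESOLVED = "resolved"        # fixed and the tester notified
    VERIFIED = "verified"        # fix confirmed, no regressions introduced
    CLOSED = "closed"            # verified on all affected branches

ALLOWED = {
    DefectState.REPORTED:   {DefectState.DISCOVERED},
    DefectState.DISCOVERED: {DefectState.RESOLVED},
    DefectState.RESOLVED:   {DefectState.VERIFIED},
    DefectState.VERIFIED:   {DefectState.CLOSED},
    DefectState.CLOSED:     set(),
}

def advance(current: DefectState, nxt: DefectState) -> DefectState:
    if nxt not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current.value} -> {nxt.value}")
    return nxt

state = advance(DefectState.REPORTED, DefectState.DISCOVERED)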
4.15 SUMMARY
Software testing is the process of testing a software product. Effective software testing will contribute to the delivery of higher quality software products, more satisfied users, lower maintenance costs, and more accurate and reliable results. Ineffective testing will lead to the opposite results: low quality products, unhappy users, increased maintenance costs, and unreliable and inaccurate results. Hence, software testing is a necessary and important activity of the software development process. Good testing involves much more than just running the program a few times to see whether it works. Thorough analysis of a program helps us to test more systematically and more effectively. Change is inevitable in all stages of a software project. Change management will help you direct and coordinate those changes so they can enhance, not hinder, your software. There is a clear need to control software change, and software change management provides guidelines for doing so. Software verification and validation should show that the product conforms to all the requirements. Users will have more confidence in a product that has been through a rigorous verification programme than one subjected to minimal examination and testing before release.
Assignment-Module 4
1.
2.
3.
4. The tester is only aware of what the software is supposed to do, not how it does it.
a. White box testing
b. Black box testing
c. Static testing
d. Dynamic testing
5.
6. … function level.
a. Integration testing
b. System testing
c. Functional Testing
d. Component testing
7.
8.
9. Artifacts include
a. Requirements documentation
b. Coding
c. Both of them
d. None of them
10. Test suite is
a. Collection of test cases
b. Collection of inputs
c. Collection of outputs
d. None of them
11.
12. During validation
a. Process is checked
b. Product is checked
c. Developer's performance is evaluated
d. Customer checks product
13. Verification is
a. Checking the product with respect to customer's expectation.
b. Checking the product with respect to specification.
c. Checking the product with respect to constraints of the project.
d. All of the above
14. Validation is
a. Checking the product with respect to customer's expectation.
b. Checking the product with respect to specification.
c. Checking the product with respect to constraints of the project.
d. All of the above
15.
Key - Module 4
Rather than using metrics such as cyclomatic complexity to indirectly tell us the quality of code,
we rely on actionable, easy to verify defect cases that pinpoint the root cause and exact path to a
software problem. Compare the two approaches here:
Cyclomatic complexity framework
(1) Function foo has too many paths through it.
Coverity framework
(2) Function foo has a memory leak on line 73 that is the result of an allocation on line 34 and the following path decisions on lines 38, 54, and 65.
Our belief is that a metric based on the latter is much more valuable in measuring source code
quality. Today, many open source packages rely on our static source code analysis as a key
indicator of reliability and security. For example, MySQL, PostgreSQL, and Berkeley DB have
certified versions of their software that contain zero Coverity defects.
Product Metrics: Product metrics are also known as quality metrics and are used to measure the properties of the software. Product metrics include reliability metrics, functionality metrics, performance metrics, usability metrics, cost metrics, size metrics, complexity metrics and style metrics. Product metrics help in improving the quality of different system components and allow comparisons between existing systems.
Most of the predictive models rely on estimates of certain variables which are often not
known exactly.
Most of the software development models are probabilistic and empirical.
Token Count:
In this metric, a computer program is considered to be a collection of tokens, which may be classified as either operators or operands. All software science metrics can be defined in terms of these basic symbols, which are called tokens. The basic measures are
n1 = count of unique operators.
n2 = count of unique operands.
N1 = count of total occurrences of operators.
N2 = count of total occurrence of operands.
In terms of the total tokens used, the size of the program can be expressed as N = N1 + N2
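A small Python sketch of these token-count measures follows; the operator and operand lists are a hand-classified, invented program fragment, and the last line also computes the Halstead volume V = N log2 n for illustration.

import math

operators = ["=", "+", "=", "*"]            # N1 = 4 occurrences
operands = ["a", "b", "c", "a", "b", "2"]   # N2 = 6 occurrences

n1 = len(set(operators))   # unique operators -> 3
n2 = len(set(operands))    # unique operands  -> 4
N1 = len(operators)
N2 = len(operands)

N = N1 + N2                          # program length in tokens: 10
V = N * math.log2(n1 + n2)           # Halstead volume, with n = n1 + n2
print(n1, n2, N1, N2, N, round(V, 2))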
Function Count:
The size of a large software product can be estimated better through a larger unit called a module. A module can be defined as a segment of code which may be compiled independently.
For example, let a software product require n modules. It is generally agreed that the size of a module should be about 50-60 lines of code. Therefore the size estimate of this software product is about n x 60 lines of code.
Stetter's Program Complexity Measure: Stetter's metric accounts for the data flow along with the control flow of the program and can be calculated from the source code, which may be viewed as a sequence of declarations and statements. It is given as
P = (d1, d2, ..., dk; s1, s2, ..., sm)
where the d's are declarations, the s's are statements and P is a program.
Here, the notion of a program graph has been extended to the notion of a flow graph. A flow graph of a program P can be defined as a set of nodes and a set of edges. A node represents a declaration or a statement, while an edge represents one of the following:
1. Flow of control from one statement node, say si, to another sj.
2. Flow from a declaration node dj to a statement node si through a write access of a variable or a constant in si which is declared in dj.
3. Flow from a declaration node dj to a statement node si through a read access of a variable or a constant in si which is declared in dj.
The measure is defined as F(P) = e - ns + nt
where e = number of edges, ns = number of entry nodes and nt = number of exit nodes.
The complexity of a module also depends on the procedures included in the module. A procedure contributes complexity due to the following two factors:
1. The complexity of the procedure code itself.
2. The complexity due to the procedure's connections to its environment. The effect of the first factor is captured through the LOC (Lines Of Code) measure. For the quantification of the second factor, Henry and Kafura have defined two terms, namely FAN-IN and FAN-OUT.
FAN-IN of a procedure is the number of local flows into that procedure plus the number of data structures from which the procedure retrieves information.
FAN-OUT is the number of local flows from that procedure plus the number of data structures which the procedure updates.
Procedure Complexity = Length * (FAN-IN * FAN-OUT) ** 2
where the length is taken as LOC and the term FAN-IN * FAN-OUT represents the total number of input-output combinations for the procedure.
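The formula can be applied directly once LOC, FAN-IN and FAN-OUT are known. A minimal Python sketch, using invented procedure data:

def procedure_complexity(loc: int, fan_in: int, fan_out: int) -> int:
    # Complexity = Length * (FAN-IN * FAN-OUT) ** 2
    return loc * (fan_in * fan_out) ** 2

# e.g. a 40-line procedure with 3 flows in and 2 flows out:
print(procedure_complexity(40, 3, 2))   # 40 * (3*2)**2 = 1440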
Metrics, for both process and software, tell us to what extent a desired characteristic is present in
our processes or our software systems. Maintainability is a desired characteristic of a software
component and is referenced in all the main software quality models (including the ISO 9126).
One good measure of maintainability would be time required to fix a fault. This gives us a handle
on maintainability but another measure that would relate more to the cause of poor
maintainability would be code complexity. A method for measuring code complexity was
developed by Thomas McCabe and with this method a quantitative assessment of any piece of
code can be made. Code complexity can be specified and can be known by measurement,
whereas time to repair can only be measured after the software is in support. Both time to repair
and code complexity are software metrics and can both be applied to software process
improvement.
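As a rough illustration of McCabe's idea, the sketch below approximates cyclomatic complexity as one plus the number of decision points found in a function's abstract syntax tree; real measurement tools are more thorough, and the counting rules here are a simplification.

import ast

DECISIONS = (ast.If, ast.For, ast.While, ast.BoolOp, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    # One linear path, plus one for every decision point in the code.
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISIONS) for node in ast.walk(tree))

code = '''
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
'''
print(cyclomatic_complexity(code))  # 3: one path plus two decisions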
A good measure:
- Is quantitative
- Is easy to understand
- Encourages appropriate behaviour
- Is visible
- Is defined and mutually understood: the measure has been defined by and/or agreed to by all key process participants (internally and externally)
- Encompasses both outputs and inputs: the measure integrates factors from all aspects of the process measured
- Measures only what is important: the measure focuses on a key performance indicator that is of real value to managing the process
- Is multidimensional
- Uses economies of effort
- Facilitates trust
Choosing the right metrics is critical to success, but the road to good metrics is fraught with pitfalls. As you endeavour to become more metrics-driven, beware of errors in the design and use of metrics.
Although there may never be a single perfect measure, it is certainly possible to create a measure
or even multiple measures which reflect the performance of your system. If the metrics are
chosen carefully, then, in the process of achieving their metrics, managers and employees will
make the right decisions and take the right actions that enable the organization to maximize its
performance. These guidelines will make sure you pick the right indicators and measure them
well.
i. Don't mistake metrics for what we are actually trying to measure: metrics are proxies, especially if we are trying to measure something abstract like innovation or the quality of universities. So don't get too hung up on your metrics; concentrate on your overall goal.
ii. Align metrics with strategy: no one really wants Twitter followers. You want something else: influence, or interaction, or something that one way or another actually does you some good. The interim steps are important, but don't only measure these. You also need to figure out a way to measure the outcomes of your strategy.
iii. Use multiple measures of success: this follows from the first two points. Most of the things that we really care about are hard to actually measure. If we are going to try, we need to use multiple measures so that we can triangulate on our desired objectives.
Desirable attributes of a metric include:
Valid: clearly related to the feature being measured, e.g. monotonically increases as the feature increases
Comparable: highly correlated with other metrics measuring the same feature
Universal: can be translated into sub-metrics for lower parts of the product or process
Economical: does not consume significant resources for collection; preferably a by-product of other activities
Sustainable: likely to be valid in the future, so that trend forecasts based on the metric will be effective
Cost-effective: benefits from the data obtained justify the cost of gathering that data
Typical objective measures include:
directly recorded data resulting from users' actions, registered by the investigator or by some remote means, such as video or automatic event recording, and
data measured directly from the product on the completion of, or during, the trial.
Many kinds of objective data can be measured when, for instance, all the components of a balanced system are considered. This system is applicable to both working and living contexts in the field. The same fact is often relevant in simulations.
The typical methods used in subjective measurement are:
ranking methods,
rating methods,
questionnaire methods
interviews
checklists.
However, subjective data and preference data must be interpreted with caution. The following points should be considered when evaluating subjective data:
If the subjects in experiments and tests do not fit the user profile compiled during the planning phase, their opinions and preferences may not accurately reflect those of the intended users of the product. Conclusions based on data obtained from inappropriate subjects may not be valid.
Attitude measures and self-reports may be distorted by biasing factors.
Subjects' preferences are affected by events in the recent past.
Collection of both objective and subjective data during experiments and tests whenever
feasible.
Collecting subjective data will add little to the cost of the study, but may provide
significant insights not obtainable by objective methods.
Subjective data may be particularly useful if objective measurements fail to detect any
differences between conditions.
Some metrics aim directly at perceived quality. In this case, these metrics are closer to the human-perceived quality than the data metrics method.
i. Mean: The mean is the most common measure of central tendency. It is simply the sum of the numbers divided by the count of numbers in a set of data. This is also known as the average.
ii. Median: The median is the number in the middle when the numbers in a set of data are arranged in ascending or descending order. If the count of numbers in a data set is even, then the median is the mean of the two middle numbers.
iii. Mode: The mode is the value that occurs most frequently in a set of data.
For example, consider the data set 3, 4, 4, 5, 6, 7, 7, 9, 9.
Step 1: Mean = sum / count = 54 / 9 = 6.
Step 2: The data set in ascending order is 3, 4, 4, 5, 6, 7, 7, 9, 9, so the middle (fifth) value gives a median of 6.
Step 3: The modes are the value(s) that appear most often, so the modes of the data set are 4, 7 and 9.
Step 4: So, the measures of central tendency of the given set of data are mean = 6, median = 6 and modes 4, 7 and 9.
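The worked example can be checked with Python's statistics module (multimode requires Python 3.8 or later):

import statistics

data = [3, 4, 4, 5, 6, 7, 7, 9, 9]
print(statistics.mean(data))       # 6
print(statistics.median(data))     # 6
print(statistics.multimode(data))  # [4, 7, 9]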
The steps in establishing a measurement program include:
i. Define the objectives for the measurement program - how it is to be used. Consider how to implement the four uses of measurement, given the maturity level of the organization. The use of measurement should be tied to the organization's mission, goals and objectives.
ii. … implemented.
iii. Define the measurement hierarchy, which has three levels of quantitative data: measures,
metrics, and a strategic results dashboard (also called key indicators). This measurement
hierarchy maps to a three-level IT organizational tier: staff, line management and senior
management. IT staff collects basic measures, such as product size, cycle time, or defect
count. IT line management uses fundamental metrics, such as variance between actual
and budgeted cost, user satisfaction or defect rates per LOC to manage a project or part of
the IT function. Senior management uses a strategic results dashboard, where the metrics
represent the quantitative data needed to manage the IT function and track to the mission,
vision, or goals. For example, a mission with a customer focus should have a customer
satisfaction metric. A metric of the number of projects completed on time gives insight
into the function's ability to meet short and long-term business goals.
iv. … made in a manner that will facilitate achieving those results. This is particularly critical when the third phase is implemented, as the process results should link to the desired business results.
i. Identify desired business results, beginning with a mission or vision statement. Turn operative phrases in the mission or vision (such as "deliver on time" or "satisfy customer") into specific objectives (such as "all software will be delivered to the customer by the date agreed upon with the customer"), and then rank these objectives in
order of importance. When objectives are written with a subject, action, target value, and
time frame it is much easier to identify the actual metric that will serve as the results
metric or key indicator.
ii. Identify current baselines by determining the current operational status for each of the
desired business results/objectives.
iii. Select a measure or metric for each desired business result or objective, and determine
whether it has been standardized by the IT industry (such as cycle time, which is
measured as elapsed calendar days from the project start date to the project end date). If
not, explore the attributes of the result or objective and define a measure or metric that is
quantitative, valid, reliable, attainable, easy to understand and collect, and a true
representation of the intent. Ideally there should be three to five metrics, with no more
than seven. Convert the business results metrics into a strategic dashboard of key
indicators. Examples of indicators include productivity, customer satisfaction,
motivation, skill sets, and defect rates.
iv. Consider trade-offs between the number one ranked business result and the other desired
results. For example, the #1 result to complete on time will affect other desired results,
such as minimize program size and develop easy-to-read documentation.
v. Based on the baseline and desired business result or objective, determine a goal for each
result metric. Goals typically specify a subject (such as financial, customer, process or
product, or employee) and define an action that is change or control related (such as
improve or reduce, increase or decrease or control or track). If a baseline for on time
projects is 60%, the goal might be to increase to 80% by next year. Benchmarking can
also be useful prior to setting goals, as it allows an understanding of what is possible
given a certain set of circumstances.
i. Develop a matrix of process results and contributors to show which contributors drive
which results. The results should come from the process policy statement. The
contributors can be positive or negative, and involve process, product, or resource
attributes. Process attributes include characteristics such as time, schedule, and
completion. Product attributes include characteristics such as size, correctness, reliability,
usability, and maintainability. Resource attributes include characteristics such as amount,
skill, and attitude. A cause-and-effect diagram is often used to graphically illustrate the
relationship between results and contributors.
ii. Assure process results are aligned to business results. Processes should help people accomplish their organization's mission. Alignment is subjective in many organizations,
but the more objective it is, the greater the chance that processes will drive the mission.
iii. Rank the process results and the contributors from a management perspective. This will
help workers make trade offs and identify where to focus management attention.
iv. Select metrics for both the process results and contributors, and create two tactical process dashboards: one for process results and one for contributors. These dashboards
are used to manage the projects and to control and report project status. Normally results
are measured subjectively and contributors are measured objectively. For example, for a
result of customer satisfaction, contributors might include competent resources, an
available process, and a flexible and correct product. Sometimes, as with customer
satisfaction, factors that contribute to achieving the result can actually be used to develop
the results metric. In other words, first determine what contributes to customer
satisfaction or dissatisfaction and then it can be measured.
The strategies to manage risk typically include transferring the risk to another party, avoiding the
risk, reducing the negative effect or probability of the risk, or even accepting some or all of the
potential or actual consequences of a particular risk.
Certain aspects of many of the risk management standards have come under criticism for having no measurable improvement on risk, even though confidence in estimates and decisions seems to increase.
Risk management is a process for identifying, assessing, and prioritizing risks of different kinds.
Once the risks are identified, the risk manager will create a plan to minimize or eliminate the
impact of negative events. A variety of strategies is available, depending on the type of risk and
the type of business. There are a number of risk management standards, including those developed by the Project Management Institute, the International Organization for Standardization (ISO), and the National Institute of Standards and Technology (NIST).
Budget Risk
Wrong budget estimation.
Cost overruns
Project scope expansion
Operational Risks
Risks of loss due to improper process implementation, a failed system, or some external events.
Causes of Operational risks:
Failure to address priority conflicts
Failure to resolve the responsibilities
Insufficient resources
No proper subject training
No resource planning
No communication in team.
Technical Risks
Technical risks generally lead to failure of functionality and performance.
Programmatic Risks
These are external risks beyond the operational limits; they are uncertain risks outside the control of the program.
These external events can be:
Running out of funds.
Market developments.
Changing customer product strategy and priority.
Government rule changes.
Avoiding software project disasters, including runaway budgets and schedules, defect-ridden software products, and operational failures.
Stimulating a win-win software solution where the customer receives the product they need and the vendor makes the profits they expect.
Assumption Analysis: Project plans are built on assumptions, for example that enough experienced C++ programmers will be hired by the time coding starts. If these assumptions prove to be false, we could have major problems.
Critical Path Analysis: As we perform critical path analysis for our project plan, we must
remain on the alert to identify risks. Any possibility of schedule slippage on the critical path
must be considered a risk because it directly impacts our ability to meet schedule.
Risk Taxonomies: Risk taxonomies are lists of problems that have occurred on other projects
and can be used as checklists to help ensure all potential risks have been considered. An example
of a risk taxonomy can be found in the Software Engineering Institutes Taxonomy -Based Risk
Identification report that covers 13 major risk areas with about 200 questions.
For example, suppose a risk has a 10% probability of occurring, with an estimated loss of $100,000, and that not all of the lost testing time can be made up in overtime (loss estimated at a two-week schedule slippage).
Boehm defines the Risk Exposure equation to help quantitatively establish risk priorities. Risk Exposure measures the impact of a risk in terms of the expected value of the loss. Risk Exposure (RE) is defined as the probability of an undesired outcome times the expected loss if that outcome occurs:
RE = Probability(UO) * Loss(UO), where UO = Undesired Outcome
Given the example above, the Risk Exposure is 10% x $100,000 = $10,000 and 10% x 2 calendar weeks = 0.2 calendar weeks. Comparing the Risk Exposure measurement for various risks can help
identify those risks with the greatest probable negative impact to the project or product and thus
help establish which risks are candidates for further action. The list of risks is then prioritized
based on the results of our risk analysis. Since resource limitations rarely allow the consideration
of all risks, the prioritized list of risks is used to identify risks requiring additional planning and
action. Other risks are documented and tracked for possible future consideration. Based on
changing conditions, additional information, the identification of new risks, or the closure of
existing risks, the list of risks requiring additional planning and action may require periodic
updates.
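A short Python sketch of the Risk Exposure calculation and the prioritization it supports; the risk names, probabilities and loss figures are invented examples.

def risk_exposure(probability: float, loss: float) -> float:
    # RE = Probability(UO) * Loss(UO)
    return probability * loss

risks = [
    ("critical staff member leaves", 0.10, 100_000),
    ("test hardware arrives late", 0.30, 25_000),
    ("requirements churn", 0.50, 40_000),
]

# Rank risks by exposure, highest first, to decide which need action plans.
for name, p, loss in sorted(risks, key=lambda r: risk_exposure(r[1], r[2]), reverse=True):
    print(f"{name}: RE = ${risk_exposure(p, loss):,.0f}")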
Is it too big a risk? If the risk is too big for us to be willing to accept, we can avoid the risk by
changing our project strategies and tactics to choose a less risky alternative, or we may decide not
to do the project at all. For example, if our project has tight schedule constraints and includes
state of the art technology, we may decide to wait until a future project to implement our newly
purchased CASE tools.
Things to remember about avoiding risks include:
Avoiding risks may also mean avoiding opportunities
Not all risks can be avoided
Avoiding a risk in one part of the project may create risks in other parts of the project.
Identify: Before risks can be managed, they must be identified before they adversely affect the project. Establishing an environment that encourages people to raise concerns and issues and conducting quality reviews throughout all phases of a project are common techniques for identifying risks.
Analyze: Analysis is the conversion of risk data into risk decision-making information. It
includes reviewing, prioritizing, and selecting the most critical risks to address. The Software
Risk Evaluation (SRE) Team analyzes each identified risk in terms of its consequence on cost,
schedule, performance, and product quality.
Plan: Planning turns risk information into decisions and actions for both the present and future.
Planning involves developing actions to address individual risks, prioritizing risk actions and
creating a Risk Management Plan. The key to risk action planning is to consider the future
consequences of a decision made today.
Track: Tracking consists of monitoring the status of risks and the actions taken against risks to
mitigate them.
Control: Risk control relies on project management processes to control risk action plans,
correct for variations from plans, respond to triggering events, and improve risk management
processes. Risk control activities are documented in the Risk Management Plan.
Communicate: Communication happens throughout all the functions of risk management.
Without effective communication, no risk management approach can be viable. It is an integral
part of all the other risk management activities.
Each risk identified during a review is represented as a risk indication, which identifies the particular risk, the involved project stakeholder, a timestamp, the identification technique and possible comments. After the identification and analysis, the risk assessment report is generated. It can then be used as input
for risk mitigation related activities. It may also be taken as an input to the next risk review
action. The output of the risk assessment process helps to identify appropriate controls for
reducing or eliminating risk during the risk mitigation process. The risk assessment methodology
encompasses nine primary steps such as System Characterization, Threat Identification,
Vulnerability Identification, Control Analysis, Likelihood Determination, Impact Analysis, Risk
Determination, Control Recommendations, and Results Documentation.
The risk manager may call an active review to summarize the effects of risk identification activities. The risk assessment report is generated at the end of an active review. We assume that the process has the active and
continuous reviews interleaved, their extent (in time) and scope (in terms of inputs and
participants) being controlled by the risk manager. This way we achieve the following benefits:
The communication channel is constantly open.
The identification actions are being planned (active and continuous reviews).
All communicated risk-related information is retained.
The identified risks are periodically reviewed and assessed and the frequency and scope
of those assessments is under control of the risk manager.
The results of the analyses are kept in the form of reports and are available downstream
of the process (can support further identification and analysis).
Identified factor: Context of the identified risk extracted from the risk knowledge base.
5.20 SUMMARY
Metrics should always be seen as indicators, not as absolute truth. It is possible to score well on
all metrics, but still have an unsatisfactory design. The application of simple product metrics to
entire programs can only indicate certain problems but does not relate measurement results back to design principles. It can be very difficult for a developer to decide on the right action to take upon receipt of a particular metric value. Design metrics may be used to relate knowledge about
good design to characteristic structural system properties. Software developers should be able to
infer more about the software they are developing during the design process.
Assignment-Module 5
1.
a. Process metric
b. Product metric
c. Project metric
d. People metric
2.
a. M. Halstead
b. B. Littlewood
c. T. J. McCabe
d. G. Rothermal
3.
a. = 1 + 2
b. = 1 - 2
c. = 1 * 2
d. = 1 / 2
4.
a. Person-months
b. Hours
c.
d. None of them
5. Types of risk
a. Technical risk
b. Operational risk
c. None of them
d. Both of them
6. Fan-In of a procedure is
a. Number of local flows into that procedure plus the number of data structures.
b.
c.
d. None of them
7. Fan-Out of a procedure is
a. Number of local flows from that procedure plus the number of data structures
b.
c.
d. None of them
8.
a. LOC
b. Program length
c. Function count
d. Cyclomatic complexity
9.
a. Vocabulary
b. Level
c. Volume
d. Logic
10.
a. LOC
b. Program length
c. Function count
d. None of them
11.
a. Brainstorming
b. FAST
c. Use Case
d. None of them
12.
a.
b.
c.
d.
13.
a. V = N log2 n
b. V = (N/2) log2 n
c. V = 2N log2 n
d. V = N log2 n + 1
14.
a.
b.
c.
d.
15.
a. Nz = n [0.5772 + ln(n)]
b. Nz = n [0.5772 - ln(n)]
c. Nz = n [0.5772 * ln(n)]
d. Nz = n [0.5772 / ln(n)]
Key - Module 5
6.1.2.2 Disadvantages
costly,
time consuming to document and maintain,
requires employee buy-in
To achieve maximum benefit from ISO 9000 the focus must be on documenting, understanding
and improving your systems and processes.
The ISO 9000 standards require:
Organizations choose the standards to which they want to become registered, based on their
structure, their products, services and their specific function. Selecting the appropriate standards
is an important decision.
Six Sigma builds on earlier quality improvement methodologies, such as quality control, Total Quality Management (TQM), and Zero Defects, based on the work of pioneers such as Shewhart, Deming, Juran, Crosby, Ishikawa, Taguchi, and others.
Like its predecessors, Six Sigma doctrine asserts that:
Continuous efforts to achieve stable and predictable process results (i.e., reduce process
variation) are of vital importance to business success.
Manufacturing and business processes have characteristics that can be measured,
analyzed, improved and controlled.
Achieving sustained quality improvement requires commitment from the entire
organization, particularly from top-level management.
Features that set Six Sigma apart from previous quality improvement initiatives include:
A clear focus on achieving measurable and quantifiable financial returns from any Six
Sigma project.
An increased emphasis on strong and passionate management leadership and support.
A special infrastructure of "Champions", "Master Black Belts", "Black Belts", "Green
Belts", "Red Belts" etc. to lead and implement the Six Sigma approach.
A clear commitment to making decisions on the basis of verifiable data, rather than
assumptions and guesswork.
The term "Six Sigma" comes from a field of statistics known as process capability studies.
Originally, it referred to the ability of manufacturing processes to produce a very high proportion
of output within specification. Processes that operate with "six sigma quality" over the short term
are assumed to produce long-term defect levels below 3.4 defects per million opportunities
(DPMO). Six Sigma's implicit goal is to improve all processes to that level of quality or better.
Six Sigma is a registered service mark and trademark of Motorola Inc. As of 2006 Motorola
reported over US$17 billion in savings from Six Sigma. Other early adopters of Six Sigma who
achieved well-publicized success include Honeywell (previously known as AlliedSignal) and
General Electric, where Jack Welch introduced the method. By the late 1990s, about two-thirds
of the Fortune 500 organizations had begun Six Sigma initiatives with the aim of reducing costs
and improving quality.
In recent years, some practitioners have combined Six Sigma ideas with lean manufacturing to
create a methodology named Lean Six Sigma. The Lean Six Sigma methodology views lean
manufacturing, which addresses process flow and waste issues, and Six Sigma, with its focus on
variation and design, as complementary disciplines aimed at promoting "business and
operational excellence". Companies such as IBM use Lean Six Sigma to focus transformation
efforts not just on efficiency but also on growth. It serves as a foundation for innovation
throughout the organization, from manufacturing and software development to sales and service
delivery functions.
6.2.1 Methods
Six Sigma projects follow two project methodologies inspired by Deming's Plan-Do-Check-Act
cycle. These methodologies, composed of five phases each, bear the acronyms DMAIC and
DMADV.
DMAIC (Define, Measure, Analyze, Improve, Control) is used for projects aimed at
improving an existing business process. DMAIC is pronounced "duh-may-ick".
DMADV (Define, Measure, Analyze, Design, Verify) is used for projects aimed at
creating new product or process designs. DMADV is pronounced "duh-mad-vee".
The final DMAIC phase, Control, is concerned with controlling the future state process to ensure
that any deviations from target are corrected before they result in defects: implement control
systems such as statistical process control, production boards and visual workplaces, and
continuously monitor the process.
Some organizations add a Recognize step at the beginning, which is to recognize the right
problem to work on, thus yielding an RDMAIC methodology.
6.2.2 Quality management tools and methods used in Six Sigma
Pareto analysis
Pareto chart
Axiomatic design
Pick chart
Process capability
Check sheet
Chi-squared test of independence and fits
Regression analysis
Control chart
Correlation
Run charts
Cost-benefit analysis
Scatter diagram
CTQ tree
SIPOC analysis (Suppliers, Inputs, Process, Outputs, Customers)
Design of experiments
Failure mode and effects analysis (FMEA)
General linear model
Histograms
Stratification
Taguchi methods
Taguchi Loss Function
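As a small illustration of the first tool in this list, a Pareto analysis ranks causes by defect count and accumulates their share of the total; the categories and counts below are hypothetical:

# Sketch: Pareto analysis over hypothetical defect categories.
# Ranking by count shows which "vital few" causes dominate.
defects = {"interface": 52, "logic": 31, "data handling": 9,
           "build/config": 5, "documentation": 3}
total = sum(defects.values())
cumulative = 0
for cause, count in sorted(defects.items(), key=lambda kv: -kv[1]):
    cumulative += count
    print(f"{cause:15s} {count:3d}  {100 * cumulative / total:5.1f}% cumulative")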
6.2.3 Implementation roles
Six Sigma identifies several key roles for its successful implementation:
Champions take responsibility for Six Sigma implementation across the organization in
an integrated manner. The Executive Leadership draws them from upper management.
Champions also act as mentors to Black Belts.
Master Black Belts, identified by champions, act as in-house coaches on Six Sigma. They
devote 100% of their time to Six Sigma. They assist champions and guide Black Belts
and Green Belts. Apart from statistical tasks, they spend their time on ensuring consistent
application of Six Sigma across various functions and departments.
Black Belts operate under Master Black Belts to apply Six Sigma methodology to
specific projects. They devote 100% of their time to Six Sigma. They primarily focus on
Six Sigma project execution, whereas Champions and Master Black Belts focus on
identifying projects/functions for Six Sigma.
Green Belts are the employees who take up Six Sigma implementation along with their
other job responsibilities, operating under the guidance of Black Belts.
Some organizations use additional belt colours, such as Yellow Belts, for employees who
have basic training in Six Sigma tools and generally participate in projects, and White
Belts for those locally trained in the concepts but who do not participate in the project team.
6.2.4 Certification
Corporations such as early Six Sigma pioneers General Electric and Motorola developed
certification programs as part of their Six Sigma implementation, verifying individuals'
command of the Six Sigma methods at the relevant skill level (Green Belt, Black Belt etc.).
Following this approach, many organizations in the 1990s started offering Six Sigma
certifications to their employees. Criteria for Green Belt and Black Belt certification vary; some
companies simply require participation in a course and a Six Sigma project. There is no standard
certification body, and different certification services are offered for a fee by various quality
associations and other providers. The American Society for Quality, for example, requires Black
Belt applicants to pass a written exam and to provide a signed affidavit stating that they have
completed two projects, or one project combined with three years' practical experience in the
body of knowledge. The International Quality Federation offers an online certification exam that
organizations can use for their internal certification programs; it is statistically more demanding
than the ASQ certification. Other providers offering certification services include the Juran
Institute, Six Sigma Qualtec, Air Academy Associates and many others.
Capability studies measure the number of standard deviations between the process mean and the
nearest specification limit in sigma units. As process standard deviation goes up, or the mean of
the process moves away from the center of the tolerance, fewer standard deviations will fit
between the mean and the nearest specification limit, decreasing the sigma number and
increasing the likelihood of items outside specification.
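This relationship is easy to express directly; the sketch below uses illustrative process parameters (the numbers are assumptions, not data from the text):

# Sketch: the process "sigma level" is the number of standard
# deviations between the mean and the nearest specification limit;
# the short-term Cpk is that level divided by 3.
def sigma_level(mean: float, sd: float, lsl: float, usl: float) -> float:
    return min(usl - mean, mean - lsl) / sd

def cpk(mean: float, sd: float, lsl: float, usl: float) -> float:
    return sigma_level(mean, sd, lsl, usl) / 3.0

# Centered process: mean 10.0, sd 0.5, limits at 7.0 and 13.0
print(sigma_level(10.0, 0.5, 7.0, 13.0))   # 6.0 -> "six sigma" capability
print(cpk(10.0, 0.5, 7.0, 13.0))           # 2.0
# Drifting the mean to 10.75 eats into the cushion:
print(sigma_level(10.75, 0.5, 7.0, 13.0))  # 4.5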
Graph of the normal distribution, which underlies the statistical assumptions of the Six Sigma
model: the Greek letter σ (sigma) marks the distance on the horizontal axis between the mean,
μ, and the curve's inflection point. The greater this distance, the greater is the spread of values
encountered. For the standard normal curve, μ = 0 and σ = 1. The upper and lower
specification limits (USL and LSL, respectively) are at a distance of 6σ from the mean. Because
of the properties of the normal distribution, values lying that far away from the mean are
extremely unlikely. Even if the mean were to move right or left by 1.5σ at some point in the
future (the 1.5 sigma shift), there would still be a good safety cushion. This is why Six Sigma
aims to have processes where the mean is at least 6σ away from the nearest specification limit.
Control charts help maintain process quality by signaling when quality professionals should
investigate a process to find and eliminate special-cause variation.
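A minimal sketch of that signaling logic, with 3-sigma limits computed from an illustrative in-control baseline (all values are assumed, not from the text):

# Sketch: flag readings outside 3-sigma control limits derived
# from a baseline period; such points suggest special-cause
# variation worth investigating.
from statistics import mean, stdev

baseline = [10.1, 9.9, 10.2, 10.0, 9.8, 10.1, 10.0, 9.9, 10.2, 10.1]
centre, sd = mean(baseline), stdev(baseline)
ucl, lcl = centre + 3 * sd, centre - 3 * sd

new_readings = [10.0, 10.3, 12.9]
flagged = [x for x in new_readings if not (lcl <= x <= ucl)]
print(flagged)  # [12.9] -> investigate this point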
The table below gives long-term DPMO values corresponding to various short-term sigma levels.
These figures assume that the process mean will shift by 1.5 sigma toward the side with the
critical specification limit. In other words, they assume that after the initial study determining the
short-term sigma level, the long-term Cpk value will turn out to be 0.5 less than the short-term
Cpk value. So, for example, the DPMO figure given for 1 sigma assumes that the long-term
process mean will be 0.5 sigma beyond the specification limit (Cpk = -0.17), rather than 1 sigma
within it, as it was in the short-term study (Cpk = 0.33). Note that the defect percentages indicate
only defects exceeding the specification limit to which the process mean is nearest. Defects
beyond the far specification limit are not included in the percentages.
Sigma level   DPMO      Percent defective   Percentage yield   Short-term Cpk   Long-term Cpk
1             691,462   69%                 31%                0.33             -0.17
2             308,538   31%                 69%                0.67             0.17
3             66,807    6.7%                93.3%              1.00             0.5
4             6,210     0.62%               99.38%             1.33             0.83
5             233       0.023%              99.977%            1.67             1.17
6             3.4       0.00034%            99.99966%          2.00             1.5
7             0.019     0.0000019%          99.9999981%        2.33             1.83
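The DPMO column can be reproduced from the standard normal distribution under the 1.5 sigma shift assumption described above; a minimal sketch:

# Sketch: long-term DPMO for a short-term sigma level, assuming the
# conventional 1.5 sigma shift toward the critical specification
# limit; only defects beyond the nearer limit are counted.
from math import erf, sqrt

def phi(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def long_term_dpmo(sigma_level: float, shift: float = 1.5) -> float:
    return (1.0 - phi(sigma_level - shift)) * 1_000_000

for level in range(1, 8):
    print(f"{level} sigma: {long_term_dpmo(level):>14,.4f} DPMO")
# 1 sigma -> ~691,462 DPMO; 6 sigma -> ~3.4 DPMO, matching the table.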
Analysis tools
Arena
ARIS Six Sigma
Bonita Open Solution (BPMN2 standard and KPIs for statistical monitoring)
JMP
Microsoft Visio
Minitab
R language (The R Project for Statistical Computing). Open source software: statistical and
graphic functions from the base installation can be used for Six Sigma projects. Furthermore,
some contributed packages at CRAN contain specific tools for Six Sigma: SixSigma,
qualityTools, qcc and IQCC.
SDI Tools
SigmaXL
Software AG webMethods BPM Suite
SPC XL
Statgraphics
STATISTICA
6.2.9 Application
Six Sigma mostly finds application in large organizations. An important factor in the spread of
Six Sigma was GE's 1998 announcement of $350 million in savings thanks to Six Sigma, a
figure that later grew to more than $1 billion. According to industry consultants, companies with
fewer than 500 employees are less suited to Six Sigma implementation, or need to adapt the
standard approach to make it work for them. This is due both to the infrastructure of Black Belts
that Six Sigma requires, and to the fact that large organizations present more opportunities for
the kinds of improvements Six Sigma is suited to bringing about.
In healthcare
Six Sigma strategies were first applied to the healthcare industry in March 1998, when the
Commonwealth Health Corporation (CHC) became the first health care organization to
implement Six Sigma. Substantial financial benefits were claimed: throughput in its radiology
department improved by 33% and costs per radiology procedure decreased by 21.5%. Six Sigma
has subsequently been adopted in other hospitals around the world.
Critics of Six Sigma believe that while its methods may translate fluidly to a manufacturing
setting, they do not produce the same results in service-oriented businesses such as the health
industry.
6.2.10 Criticism
6.2.10.1 Lack of originality
Noted quality expert Joseph M. Juran has described Six Sigma as "a basic version of quality
improvement", stating that "there is nothing new there. It includes what we used to call
facilitators. They've adopted more flamboyant terms, like belts with different colors. I think that
concept has merit to set apart, to create specialists who can be very helpful. Again, that's not a
new idea. The American Society for Quality long ago established certificates, such as for
reliability engineers."
process)." The summary of the article is that Six Sigma is effective at what it is intended to do,
but that it is "narrowly designed to fix an existing process" and does not help in "coming up with
new products or disruptive technologies." Advocates of Six Sigma have argued that many of
these claims are in error or ill-informed.
A more direct criticism concerns the "rigid" nature of Six Sigma, with its over-reliance on
methods and tools: in most cases, more attention is paid to reducing variation than to developing
robustness (which can altogether eliminate the need for reducing variation).
Articles featuring critics appeared in the November-December 2006 issue of the US Army
Logistician regarding Six Sigma: "The dangers of a single paradigmatic orientation (in this case,
that of technical rationality) can blind us to values associated with double-loop learning and the
learning organization, organization adaptability, workforce creativity and development,
humanizing the workplace, cultural awareness, and strategy making."
A Business Week article says that James McNerney's introduction of Six Sigma at 3M had the
effect of stifling creativity, and reports its removal from the research function. It cites two
Wharton School professors who say that Six Sigma leads to incremental innovation at the
expense of blue-sky research. This phenomenon is further explored in the book Going Lean,
which describes a related approach known as lean dynamics and provides data to show that
Ford's "6 Sigma" program did little to change its fortunes.
The statistical evidence for many published Six Sigma success stories is often only illustrated on
websites and is, at best, sketchy. Such accounts provide no mention of any specific Six
Sigma methods that were used to resolve the problems. It has been argued that by relying on the
Six Sigma criteria, management is lulled into the idea that something is being done about quality,
whereas any resulting improvement is accidental (Latzko 1995). Thus, when looking at the
evidence put forward for Six Sigma success, mostly by consultants and people with vested
interests, the question that begs to be asked is: are we making a true improvement with Six
Sigma methods or just getting skilled at telling stories? Everyone seems to believe that we are
making true improvements, but there is some way to go to document these empirically and
clarify the causal relations.
The 1.5 sigma shift has also become contentious because it results in stated "sigma levels" that
reflect short-term rather than long-term performance: a process that has long-term defect levels
corresponding to 4.5 sigma performance is, by Six Sigma convention, described as a "six sigma
process". The accepted Six Sigma scoring system thus cannot be equated to actual normal
distribution probabilities for the stated number of standard deviations, and this has been a key
bone of contention over how Six Sigma measures are defined. The fact that it is rarely explained
that a "6 sigma" process will have long-term defect rates corresponding to 4.5 sigma
performance rather than actual 6 sigma performance has led several commentators to express the
opinion that Six Sigma is a confidence trick.
6.3 CAPABILITY MATURITY MODEL INTEGRATION (CMMI)
CMMI was developed by a group of experts from industry, government, and the Software
Engineering Institute (SEI) at Carnegie Mellon University. CMMI models provide guidance for
developing or improving processes that meet the business goals of an organization. A CMMI
model may also be used as a framework for appraising the process maturity of the organization.
CMMI originated in software engineering but has been highly generalized over the years to
embrace other areas of interest, such as the development of hardware products, the delivery of all
kinds of services, and the acquisition of products and services. The word "software" does not
appear in definitions of CMMI. This generalization of improvement concepts makes CMMI
extremely abstract. It is not as specific to software engineering as its predecessor, the Software
CMM.
CMMI was developed by the CMMI project, which aimed to improve the usability of maturity
models by integrating many different models into one framework. The project consisted of
members of industry, government and the Carnegie Mellon Software Engineering Institute (SEI).
The main sponsors included the Office of the Secretary of Defense (OSD) and the National
Defense Industrial Association.
CMMI is the successor of the capability maturity model (CMM) or Software CMM. The CMM
was developed from 1987 until 1997. In 2002, CMMI Version 1.1 was released, Version 1.2
followed in August 2006, and CMMI Version 1.3 in November 2010. Major changes in CMMI
V1.3 include support for agile software development, improvements to high-maturity practices,
and alignment of the staged and continuous representations.
It is often stated that only large organizations benefit from CMMI; this view is supported by the
process maturity profile: of the small organizations (fewer than 25 employees), 70.5% are
assessed at level 2 (Managed), while 52.8% of the organizations with 1,001 to 2,000 employees
are rated at the highest level (5: Optimizing).
Interestingly, Turner & Jain (2002) argue that although it is obvious there are large differences
between CMMI and agile methods, both approaches have much in common. They believe
neither way is the 'right' way to develop software, but that there are phases in a project where one
of the two is better suited. They suggest one should combine the different fragments of the
methods into a new hybrid method. Sutherland et al. (2007) assert that a combination of Scrum
and CMMI brings more adaptability and predictability than either one alone. David J. Anderson
(2005) gives hints on how to interpret CMMI in an agile manner. Other viewpoints about using
CMMI and Agile development are available on the SEI website.
CMMI Roadmaps, which are a goal-driven approach to selecting and deploying relevant process
areas from the CMMI-DEV model, can provide guidance and focus for effective CMMI
adoption. There are several CMMI roadmaps for the continuous representation, each with a
specific set of improvement goals. Examples are the CMMI Project Roadmap, CMMI Product
and Product Integration Roadmaps and the CMMI Process and Measurements Roadmaps. These
roadmaps combine the strengths of both the staged and the continuous representations.
The combination of the project management technique earned value management (EVM) with
CMMI has been described (Solomon, 2002). Similarly, Extreme Programming (XP), a software
engineering method, has been evaluated against CMM/CMMI
(Nawrocki et al., 2002). For example, the XP requirements management approach, which relies
on oral communication, was evaluated as not compliant with CMMI.
CMMI can be appraised using two different approaches: staged and continuous. The staged
approach yields appraisal results as one of five maturity levels. The continuous approach yields
one of six capability levels. The differences in these approaches are felt only in the appraisal; the
best practices are equivalent and result in equivalent process improvement results.
6.3.2 Appraisal
An organization cannot be certified in CMMI; instead, an organization is appraised. Depending
on the type of appraisal, the organization can be awarded a maturity level rating (1-5) or a
capability level achievement profile.
Appraisals of organizations using a CMMI model must conform to the requirements defined in
the Appraisal Requirements for CMMI (ARC) document. There are three classes of appraisals,
A, B and C, which focus on identifying improvement opportunities and comparing the
organization's processes to CMMI best practices. Of these, class A appraisal is the most formal
and is the only one that can result in a level rating. Appraisal teams use a CMMI model and
ARC-conformant appraisal method to guide their evaluation of the organization and their
reporting of conclusions. The appraisal results can then be used (e.g., by a process group) to plan
improvements for the organization.
The Standard CMMI Appraisal Method for Process Improvement (SCAMPI) is an appraisal
method that meets all of the ARC requirements. Results of a SCAMPI appraisal may be
published (if the appraised organization approves) on the CMMI website of the SEI: Published
SCAMPI Appraisal Results. SCAMPI also supports the conduct of ISO/IEC 15504 (SPICE:
Software Process Improvement and Capability Determination) assessments.
6.4 SUMMARY
Software quality has been a principal concern. In the early days of trading, acceptable quality
was generally decided by agreement between developers and end users. As technology spread,
more formal means of establishing acceptable quality became important. Standards provide a
basis of understanding; they give a degree of definiteness and precision to new methods, permit
accurate comparison of results, reduce the cost of design and maintenance through the use of
developed, proven products and techniques, and assist in coordinating development by
establishing methods and direction. ISO 9000 is necessary but not sufficient to guarantee
software quality.
Assignment-Module 6
1. a. QMC    b. QMS    c. QA    d. QC
2. a. ISO 9126    b. IEEE    c. ISO 9000    d. None of them
3. a. Six    b. Five    c. Fourteen    d. Nine
4. a. ISO 9000    b. ISO 9001    c. ISO 9004    d. Six Sigma
5. Phases of DMAIC:    a.    b.    c.    d. None of them
6. a.    b.    c. None of them    d. Both of them
7. a. ISO 9000    b. Six Sigma    c. CMM    d. None of them
8. a. One    b. Five    c. Six    d. Seven
9. a.    b.    c.    d.
10. a. Initial    b. Defined    c. Managed    d. Repeatable
Key - Module 6
1.  2.  3.  4.  5.  6.  7.  8.  9.  10.
REFERENCES
1. Software Engineering: A Practitioner's Approach, Pressman, R. S., 6th Ed., Prentice-Hall, 2000.
2. Software Quality Engineering: Testing, Quality Assurance and Quantifiable
Improvement, Jeff Tian, John Wiley & Sons, 2005.
3. CSTE Common Body Of Knowledge, V6.1 and CSQA Common Body Of Knowledge,
V6.2.
4. Managing Quality an Integrative Approach, Foster, S. Thomas, Upper Saddle River:
Prentice Hall, 2001.
5. Quality Function Deployment: A Practitioner's Approach, James L. Brossert,
Milwaukee, Wisc.: ASQC Quality Press, 1991.
6. Software Quality: Concepts and Evidences, Luis Fernández Sanz, Departamento de
Sistemas Informáticos, Universidad Europea de Madrid.
7. Software Testing, Antonia Bertolino, Istituto di Elaborazione della Informazione,
Consiglio Nazionale delle Ricerche, Research Area of S. Cataldo, 56100 Pisa, Italy.
8. The Quality Toolbox, Nancy R. Tague, ASQ Quality Press, Second Edition, 2004.
9. www.acw.mit.edu
10. www.aleanjourney.com
11. www.asq.org
12. www.cnx.org
13. www.cs.colostate.edu
14. www.csbdu.in
15. www.defectmanagement.com
16. www.drdobbs.com
17. www.ehow.com
18. www.exforsys.com
19. www.herkules.oulu.fi
20. www.icoachmath.com
21. www.ipcc-nggip.iges.org.jp
22. www.johnsonandjohnson.com.
23. www.mhhe.com
24. www.mks.com
25. www.msdn.microsoft.com
26. www.msqaa.org
27. www.peart.ucs.indiana.edu
28. www.physicaltherapyjournal.com
29. www.processexcellencenetwork.com
30. www.qualitydigest.com
31. www.selectbs.com
32. www.softwaretestingdiary.com
33. www.sqa.net
34. www.stylusinc.com
35. www.timkastelle.org
36. www.undergraduate.cse.uwa.edu.au
37. www.westfallteam.com
38. www.wikipedia.com
39. www.zarate-consult.de
40. http://dissertations.ub.rug.nl
41. http://msdn.microsoft.com