Abstract
Successful improvement of the development process and product quality assurance should take advantage of the complementary use of both quantitative and qualitative methods. In this paper, the experience of such integrated activities during a students' quality lab is presented.
1. Introduction
One of the recommendations of TQM (Total Quality Management) says: Measure your success! [4] However, the things defined by the business case, or external problems such as cancelled projects, late deliveries, exceeded budgets and useless software, are just the tip of the iceberg of issues in the complex project reality. In order to make improvements effectively, a good understanding of those subtleties is needed. One can argue that quantitative methods alone are not sufficient for successful software quality assurance. Intentional and systematic use of qualitative methods can provide the missing part of the solution.
Quantitative methods: measurement, quantification, computations.
Applications:
- statistical results,
- control and management,
- comparison of similar products,
- integration with other indicators.
Qualitative methods: comments, notes and schemas; discussions and observations; questionnaires, interviews; analysis, reasoning.
Applications:
- problem identification and removal,
- theory generation and improvement,
- understanding of project reality,
- enhancement of the measurement structure,
- cross-case analysis,
- focus on details and complexity,
- non-technical aspects.
The improvements gave satisfactory results. In the second iteration the customer satisfaction level was higher and there were no late deliveries of the system design, but there were still problems with the component design. It was possible to decrease the time of work (note the large deviation in the second iteration); the average results remain high because of one very good group that worked a lot but also delivered excellent products. The list of problems found in the second iteration and the improvement activities for the next iteration are presented in Table 3.
Table 3. Problems collected in the second iteration and improvements in the third iteration

Problem: Is there a possibility to make the process more efficient? Still a large time of work was reported.
Improvement: Simplify the Requirements Specification document: the vision of the system is made during the first week, and non-functional requirements selected to best fit this type of system are delivered together with the analysis phase.

Problem: Late deliveries in the component design phase.
Improvement: Introduce two phases of component design together with component prototyping, implementation and the writing of technical documentation (the second one also includes system integration) instead of the component design phase and the implementation phase.

Problem: Different understanding of the system by the quality expert, the customer and the developer after the system design review and the user interface prototype review.
Improvement: Introduce a meeting of all participants after the system design review and the user interface prototype review to reach a compromise on the design and implementation.
After those improvements there were no late deliveries, the time of work decreased, and the level of customer satisfaction was good. This optimised process made it possible to observe human factors, e.g. the adequacy of assigning persons with certain competencies and personalities to the development tasks.
The presented work describes the activities taken to achieve an optimal process with no late deliveries, a uniform time of work during the whole semester, a good infrastructure supporting the development process, and optimal product templates. Each iteration allowed more detailed problems to be removed. Removal of the most visible problems resulted in the appearance of more hidden ones; e.g. the different views of the expert and the customer could not have been found without the review of the user interface prototype, and without the split into system and component design it would be difficult to locate the problem in component design. It is also interesting that problems indicated by measurements usually have their explanations, e.g. late deliveries in component design are caused by poor knowledge of the implementation environment and the difficulty of imagining how the system works.
Questionnaires for reasoning about quality were designed in order to support reviews. Tests were made by both the quality experts and the customers at the end of the project.
4.2. Results
One of the constraints of this research is that the participants are students, not real customers, developers and quality experts. In order to achieve the most reliable results possible, the ten best projects are analysed.
In Table 4, metrics of diagrams and defects are presented, together with the calculated average value (AV) and deviation (d). Since the analysis consists of the use case model, the class diagram and the sequence diagrams with descriptions, the metrics and defects concern them. For the use case diagram, there are the following metrics:
- number of use cases (# use cases),
- number of actors (# actors),
- number of elements, including use cases, actors and relationships (# elements),
- number of elements incoherent with the Software Requirements Specification (# incoherent),
- number of inadequate elements (# inadequate), which includes missing elements, wrong-scope elements and elements that are not needed,
- number of elements difficult to understand (# dif. to understand), which includes ambiguous and surprising elements and statements.
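The AV and d columns of Table 4 can be reproduced as the arithmetic mean and the mean absolute deviation (this interpretation matches the published values, e.g. 10.9 and 3.12 for the number of use cases). A minimal Python sketch, assuming that reading of d:

```python
def av_and_d(values):
    """Average value (AV) and mean absolute deviation (d), as in Table 4."""
    av = sum(values) / len(values)
    d = sum(abs(v - av) for v in values) / len(values)
    return av, d

# Number of use cases for projects P1-P10 (first row of Table 4).
use_cases = [8, 11, 13, 5, 6, 14, 14, 12, 17, 9]
av, d = av_and_d(use_cases)
print(round(av, 2), round(d, 2))  # prints: 10.9 3.12
```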
Table 4. Metrics of diagrams and defects of the analysis phase for ten projects (P1-P10)

Description                 P1    P2    P3    P4    P5    P6    P7    P8    P9   P10    AV     d
# use cases                  8    11    13     5     6    14    14    12    17     9   10.9  3.12
# actors                     2     2     3     4     1     4     2     3     2     3   2.60  0.80
# elements                  20    30    37    16    17    38    36    31    36    34   29.5  7.10
# incoherent                 2     7     7     6     3     8     1     3     2     2   4.10  2.32
# inadequate                 3     8     4     1     2     9     3     2     2     6   4.00  2.20
# dif. to understand         2     7     3     2     3     4     1     3     3     0   2.80  1.24
% inadequate               15%   27%   11%    6%   12%   24%    8%    6%    6%   18%    13%    6%
% dif. to understand       10%   23%    8%   13%   18%   11%    3%   10%    8%    0%    10%    5%
# classes                   13    11     4    14    10    24    16    13     9     9   12.3  3.70
Size of the class diagr.    30    24     8    35    19    48    41    24    19    17   26.5  9.60
# imprecision                3     8     4     3     4     4     1     4     3     1   3.50  1.30
Precision evaluation         3     3   3.5     5     4     4   3.5     4   3.5     3   3.65  0.48
# inadequate                 2     6     5     2     2     6     4     1     2     3   3.30  1.56
# dif. to understand         2     4     2     0     2     2     1     1     0     2   1.60  0.88
% imprecision              10%   33%   50%    9%   21%    8%    2%   17%   16%    6%    17%   11%
% inadequate                7%   25%   63%    6%   11%   13%   10%    4%   11%   18%    17%   11%
% dif. to understand        7%   17%   25%    0%   11%    4%    2%    4%    0%   12%     8%    6%
# sequence diagr.            8    11     8     5     2    14    10    11    11     1   8.10  3.30
Size (# interactions)       48    49   134    36     6    84    37    20    33    17   46.4  25.9
# incorrect interactions     4     3     3     0     2     3     3     2     2     2   2.40  0.80
# m_exceptions               x     3     5     1     4     4     2     x     x     x      x     x
% incorrect interactions    8%    6%    2%    0%   33%    4%    8%   10%    6%   12%     9%    6%
# inco A-SRS                 3     4     3     0     2     1     0     2     0     0   1.50  1.30
# inco CD-UCD                0     3     5     0     1     0     0     0     2     3   1.40  1.48
# inco CD-SD                 0     3     3     1     0     0     2     1     0     1   1.10  0.94
# inco UCD-SD                2     6     6     1     4     1     3     2     6     5   3.60  1.80
Defect metrics are calculated by dividing the number of inadequate elements by the number of elements (% inadequate), and the number of elements difficult to understand by the number of elements (% dif. to understand). The number of elements stands here for the size of the use case diagram. Similar metrics are collected for the class diagram, but in this case precision is also important and difficult to capture with metrics, so apart from discovering imprecise elements (# imprecision) there is a precision evaluation on the scale [1..5]. The size of the class diagram is defined here as the sum of classes and relationships. For sequence diagrams there are the following metrics:
- number of sequence diagrams (# sequence diagr.),
- size, counted as the number of interactions,
- number of incorrect interactions (# incorrect interactions),
- missing exceptions (# m_exceptions); x is used when many exceptions are missing.
Finally, the coherence between the diagrams is checked:
- incoherence between the analysis diagrams and the requirements specification (# inco A-SRS),
- incoherence between the class diagram and the use case diagram (# inco CD-UCD),
- incoherence between the class diagram and the sequence diagrams (# inco CD-SD), and
- incoherence between the use case diagram and the sequence diagrams (# inco UCD-SD).
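The percentage defect metrics above are plain ratios of defect counts to diagram size; for project P1 in Table 4 (20 elements, 3 inadequate, 2 difficult to understand) they yield the 15% and 10% entries. A minimal sketch:

```python
# Defect density metrics for a use case diagram, as defined above.
# Counts taken from project P1 in Table 4.
n_elements = 20           # size of the use case diagram
n_inadequate = 3          # inadequate elements
n_dif_to_understand = 2   # elements difficult to understand

pct_inadequate = n_inadequate / n_elements         # % inadequate
pct_dif = n_dif_to_understand / n_elements         # % dif. to understand
print(f"{pct_inadequate:.0%} {pct_dif:.0%}")       # prints: 15% 10%
```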
In Table 5, quality metrics for the same projects are presented; they result from reasoning on the basis of the metrics described above, together with evaluations and comments.

Description    Completeness   Adequacy   Precision   Functionality
P9                  5             3          4            4
P10                 4             3          2            3
AV                 3.9           3.4        3.7          4.0
The questionnaires support finding defects and help to integrate information taken from different sources. For more precise reasoning it would be useful to introduce some standard rules. Concrete defects are closely connected with the application domain and are used during the project to improve quality, so considerations other than the classified defect metrics make no sense in this analysis. Statistics of the diagram metrics can be applied for gathering expected values. Let us assume that the average value is the expected value, and that the deviation sets the range of accepted values. In this case, the range of expected values for the number of classes is [8..16], and it is possible to automatically find anomalies in project P3 with too few classes (4) and in project P6 with too many (24).
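This anomaly rule (accept values inside [AV - d, AV + d]) can be sketched as follows; the sketch assumes d is the mean absolute deviation, which reproduces the Table 4 values and flags exactly P3 and P6 for the number of classes:

```python
def expected_range(values):
    """Accepted range [AV - d, AV + d], with d as the mean absolute deviation."""
    av = sum(values) / len(values)
    d = sum(abs(v - av) for v in values) / len(values)
    return av - d, av + d

# Number of classes for projects P1-P10 (Table 4): AV = 12.3, d = 3.70.
classes = [13, 11, 4, 14, 10, 24, 16, 13, 9, 9]
lo, hi = expected_range(classes)
anomalies = [(f"P{i + 1}", v) for i, v in enumerate(classes)
             if not (lo <= v <= hi)]
print(f"[{lo:.1f}..{hi:.1f}]", anomalies)  # prints: [8.6..16.0] [('P3', 4), ('P6', 24)]
```

The paper rounds the range to [8..16]; either way only P3 (4 classes) and P6 (24 classes) fall outside it.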
5. Conclusions
In the paper, the experience of process improvement and product quality assessment in the early phases of software development, with the use of quantitative and qualitative methods, is presented. The defined process and the collected data plan were the basis for making improvements. They made it possible to rely on facts, not opinions, during revisions. Quantitative methods were used to verify concepts, and qualitative ones for problem identification and removal and for deeper analysis of project issues. After the change concerned with the introduction of the new method with reviews and measurements, the development process was optimised over three iterations. Every iteration allowed more and more detailed problems to be identified and removed.
In product quality assessment, qualitative and quantitative methods were used by the quality experts for data collection, reasoning about quality factors and criteria, and generation of recommendations for improvement. They were also used in questionnaire optimisation.
6. References
[1] Bobkowska A., Training on High Quality Software Development with the 3RolesPlaying Method, in SCI98/ISAS98 Conference Proceedings, Orlando, USA, July 1998.
[2] Briand L. C., Differding C. M., Rombach D. H., Practical Guidelines for Measurement-Based Process Improvement, Technical Report of the International Software Engineering Research Network (ISERN-96-05), 1996.
[3] Fenton N. E., Software Metrics. A Rigorous Approach, Chapman & Hall, 1993.
[4] Grudowski P., Kolman R., Meller A., Preihs J., Zarządzanie jakością (Quality Management), Wydawnictwo Politechniki Gdańskiej, Gdańsk, 1996.
[5] Rational Software Corporation, UML Notation Guide, www.rational.com
[6] Rational Software Corporation, Rational Unified Process, www.rational.com
[7] Rumbaugh J., Blaha M., Premerlani W., Eddy F., Lorensen W., Object-Oriented Modelling and Design, Prentice-Hall, 1991.
[8] Seaman C. B., Qualitative Methods in Empirical Studies of Software Engineering, IEEE Transactions on Software Engineering, vol. 25, 1999, pp. 557-572.