Abstract:
This paper presents the results of our survey on metrics used in object-oriented environments. The survey covers a small set of the most well-known and commonly applied traditional software metrics that can be applied to object-oriented programming, together with a set of object-oriented metrics (i.e., those designed specifically for object-oriented programming). Given the central role that software development plays in the delivery and application of information technology, managers are increasingly focusing on process improvement in the software development area. This demand has spurred a number of new and improved approaches to software development, perhaps the most prominent being object-orientation (OO). In addition, the focus on process improvement has increased the demand for software measures, or metrics, with which to manage the process. The need for such metrics is particularly acute when an organization is adopting a new technology for which established practices have yet to be developed. A software metric is a measure of some property of a piece of software or its specifications. Since quantitative measurements are essential in all sciences, there is a continuous effort by theoreticians to bring similar approaches to software development. The goal is to obtain objective, reproducible, and quantifiable measurements, which have numerous valuable applications in schedule and budget planning, cost estimation, quality assurance, testing, software debugging, software performance optimization, and optimal personnel task assignment.
Key words:
Object Oriented, Design, Inheritance, Metric, Measure, Coupling, Cohesion, Test metrics, Size estimation, Effort estimation, Test effectiveness evaluation.
I. INTRODUCTION
Object-Oriented Analysis and Design (OOAD) of software provides many benefits, such as reusability, decomposition of a problem into easily understood objects, and support for future modification and correction of problems, which make it important in software development. But the OOAD software development life cycle is not easier than the typical procedural approach. Therefore, it is necessary to provide dependable guidelines that one may follow to help ensure good OO programming practices and write reliable code. Object-oriented programming metrics are one such aspect to be considered: a set of standards against which one can measure the effectiveness of object-oriented analysis techniques in the design of a system.

Five characteristics of object-oriented metrics are the following:
Localization: operations are used in many classes.
Encapsulation: metrics apply to classes, not modules.
Information hiding: should be measured and improved.
Inheritance: adds complexity and should be measured.
Object abstraction: metrics represent the level of abstraction.

Software metrics are all about measurement, which, in turn, involves numbers: the use of numbers to make things better, to improve the process of developing software, and to improve all aspects of the management of that process.

Importance of metrics:
Metrics are used to improve the quality and productivity of products and services, thus achieving customer satisfaction.
It is easy for management to digest one number and drill down, if required.
Trends in different metrics act as a monitor when the process is going out of control.
Metrics provide improvement for the current process.

Software metrics hold particular importance in the testing phase: software testing should be measured and improved, and metrics act as indicators of software quality and fault proneness. In order to measure actual values such as software size, defect density, verification effectiveness, and productivity, records of these values must be maintained. Ideally, these actual values will be tracked against estimates that are made at the start of a project and updated during project execution.

Software metrics are used to evaluate the software development process and the quality of the resulting product. They aid evaluation of the testing process and the software product by providing objective criteria and measurements for management decision making, and support early detection of problems.

II. TABLE

The following table gives a brief description of the OO metrics, their definitions, formulas, and authors.
TABLE 1.1 LIST OF OO METRICS AND THEIR FORMULA
Name / Formula / Author

Average Service State Dependency (ASSD):
ASSD = (1/n) * Σ_{k=1}^{n} Ck, where Ck = 1 when the state dependency holds for the kth component, otherwise Ck = 0.

Average Service Persistent Dependency (ASPD):
ASPD = (1/n) * Σ_{0<i≤n, 0<j≤m} P(i,j), where n = total number of components, m = number of repositories, and P(i,j) = 1 if service component i participates in persistent data j, otherwise 0.
ASPD shows the average participation times for a service component to tie with other service components indirectly; the lower the ASPD, the looser the coupling may be. One composite component may consist of basic components and composite components recursively, until it reaches a basic component; this is the way a composite service component is composed, either by aggregation or by containment composition.
---
ARSD = (1/n) * Σ_{i=1}^{n} Ri
ASIC = (1/n) * Σ_{i=1}^{n} ICi, where ICi = Wnb * Nnb + Wb * Nb + Wsyn * Nsyn; Nnb = number of non-blocking asynchronous operations, Nb = number of blocking asynchronous operations, Nsyn = number of synchronous operations, and the W terms are the corresponding weights.
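To make the weighted sum concrete, the ASIC computation can be sketched in a few lines of Python; the weights and the per-service operation counts below are invented for illustration, not taken from the cited work:

```python
# ASIC sketch: weighted count of each service's operation kinds, averaged over
# the n services. All weights and counts here are invented example values.
W_NB, W_B, W_SYN = 0.5, 1.0, 1.5          # assumed weights for the three kinds

services = [  # (non-blocking async, blocking async, synchronous) operation counts
    (4, 1, 2),
    (0, 2, 5),
]

# IC_i = W_nb*N_nb + W_b*N_b + W_syn*N_syn for each service i
ics = [W_NB * nnb + W_B * nb + W_SYN * nsyn for nnb, nb, nsyn in services]
asic = sum(ics) / len(ics)                 # ASIC = (1/n) * sum of IC_i
print(ics, asic)
```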
Reuse Ratio (U): the ratio of the number of superclasses to the total number of classes; it should be high. U = number of superclasses / total number of classes.
Specialization Ratio (S): the ratio of the number of subclasses to the total number of superclasses. S = number of subclasses / number of superclasses.
Polymorphism Factor (PF): proposed as a measure of polymorphism; it measures the degree of method overriding in the class inheritance tree.
B. Henderson-Sellers, Object-Oriented Metrics: Measures of Complexity, Prentice Hall, 1996. B. Henderson-Sellers, Object-Oriented Metrics: Measures of Complexity, Prentice Hall, 1996. R. Harrison, S.J. Counsell, and R.V. Nithi, "An Evaluation of the MOOD Set of Object-Oriented Software Metrics," IEEE Trans. Software Engineering.
Number of Attributes per Class (NOA); Number of Methods per Class (NOM)
B. Henderson-Sellers, Object-Oriented Metrics: Measures of Complexity, Prentice Hall, 1996. B. Henderson-Sellers, Object-Oriented Metrics: Measures of Complexity, Prentice Hall, 1996. S.R. Chidamber and C.F. Kemerer, "A Metrics Suite for Object-Oriented Design," IEEE Trans. Software Engineering.
Weighted Methods per Class (WMC): WMC is the sum of the complexities of the methods in a class. WMC = Σ_{i=1}^{n} Ci, where n = number of methods in the class and Ci = complexity of method i.
Response For a Class (RFC): the response set of a class is defined as the set of methods that can potentially be executed in response to a message received by an object of that class. RFC = M ∪ (∪_i {Ri}), where M = set of all methods in the class and {Ri} = set of methods called by method i.
Coupling Between Objects (CBO): CBO for a class is the count of the number of other classes to which it is coupled. Two classes are coupled when methods declared in one class use methods or instance variables defined by the other class.
Data abstraction is a technique of creating new data types suited to an application to be programmed; it provides the ability to create user-defined data types called Abstract Data Types (ADTs).
Message Passing Coupling (MPC): the number of send statements defined in a class.
Coupling Factor (COF): the ratio of the actual number of couplings in the system not
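For illustration, the WMC and RFC formulas can be sketched in Python; the method complexities and call sets below are invented, and methods of other classes appear as hypothetical "owner.method" strings:

```python
# Hypothetical class model for one class: per-method complexity and call sets.
# All names and numbers are invented for this example.

method_complexity = {"open": 2, "read": 4, "close": 1}   # C_i for each method

calls = {  # R_i: methods invoked by each method (here, methods of other classes)
    "open": {"log.write"},
    "read": {"buffer.fill", "log.write"},
    "close": set(),
}

# WMC = sum of C_i over the n methods of the class
wmc = sum(method_complexity.values())

# RFC = |M ∪ (union of R_i)|: the class's own methods plus everything they call
rfc = len(set(calls) | set().union(*calls.values()))

print(wmc, rfc)  # 7 5
```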
S.R. Chidamber and C.F. Kemerer, "A Metrics Suite for Object-Oriented Design," IEEE Trans. Software Engineering.
----
S.R. Chidamber and C.F. Kemerer, "A Metrics Suite for Object-Oriented Design," IEEE Trans. Software Engineering, 1994.
B. Henderson-Sellers, Object-Oriented Metrics: Measures of Complexity, Prentice Hall, 1996. R. Harrison, S.J. Counsell, and R.V. Nithi, "An Evaluation of the MOOD Set of Object-Oriented Software Metrics," IEEE Trans. Software Engineering.
imputable to inheritance to the maximum possible number of couplings in the system.
Lack of Cohesion in Methods (LCOM) measures the dissimilarity of methods in a class by looking at the instance variables or attributes used by the methods: LCOM = |P| − |Q| if |P| > |Q|, and 0 otherwise, where P is the set of method pairs whose attribute-usage sets are disjoint and Q is the set of method pairs that share at least one attribute. A high value of LCOM implies that the class is less cohesive, so a low value of LCOM is desirable.
The measure TCC is defined as the percentage of pairs of public methods of the class with common attribute usage.
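A minimal sketch of the LCOM computation (method pairs with and without shared attributes), using invented attribute-usage sets:

```python
from itertools import combinations

# Hypothetical attribute-usage sets per method (invented for illustration)
attr_use = {
    "m1": {"x", "y"},
    "m2": {"y"},
    "m3": {"z"},
}

p = q = 0
for a, b in combinations(attr_use.values(), 2):
    if a & b:
        q += 1   # the pair shares at least one attribute
    else:
        p += 1   # the pair shares no attributes

# LCOM = |P| - |Q| if positive, else 0
lcom = p - q if p > q else 0
print(lcom)  # (m1,m2) share y; (m1,m3) and (m2,m3) share nothing -> 2 - 1 = 1
```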
L. Briand, J. Daly, and J. Wüst, "A Unified Framework for Coupling Measurement in Object-Oriented Systems," IEEE Transactions on Software Engineering. L. Briand, J. Daly, and J. Wüst, "A Unified Framework for Coupling Measurement in Object-Oriented Systems," IEEE Transactions on Software Engineering.
The measure LCC is defined as the percentage of pairs of public methods of the class which are directly or indirectly connected; it is the same as TCC but also considers indirect connections. ICH for a class is defined as the number of invocations of other methods of the same class, weighted by the number of parameters of the invoked method. The next metric is the same as IFC but counts only method invocations of ancestors of classes.
---
Y. Lee, B. Liang, S. Wu, and F. Wang, "Measuring the Coupling and Cohesion of an Object-Oriented Program Based on Information Flow," 1995. Y. Lee, B. Liang, S. Wu, and F. Wang, "Measuring the Coupling and Cohesion of an Object-Oriented Program Based on Information Flow," 1995.
---
RFC′
Depth of Inheritance Tree (DIT)
Number of Children (NOC)
Same as RFC except that methods indirectly invoked are not included. The depth of a class within the inheritance hierarchy is the maximum number of steps from the class node to the root of the tree and is measured by the number of ancestor classes; in cases involving multiple inheritance, the DIT is the maximum length from the node to the root of the tree. The NOC is the number of immediate subclasses of a class in the hierarchy. MIF is a system-level metric, defined as the ratio of the sum of the inherited methods in all classes of the system to the total number of available methods (inherited plus defined) in all classes.
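As a quick sketch, DIT and NOC can be read off Python's own class machinery for a toy single-inheritance hierarchy (the classes are invented for the example):

```python
# Toy hierarchy: A <- B <- {C, D}
class A: pass
class B(A): pass
class C(B): pass
class D(B): pass

def dit(cls):
    # DIT = number of ancestor classes up to the root
    # (the MRO includes the class itself and `object`, so subtract 2)
    return len(cls.__mro__) - 2

def noc(cls):
    # NOC = number of immediate subclasses only
    return len(cls.__subclasses__())

print(dit(C), noc(B))  # 2 2
```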
S.R. Chidamber and C.F. Kemerer, "A Metrics Suite for Object-Oriented Design," IEEE Trans. Software Engineering, 1994. S.R. Chidamber and C.F. Kemerer, "A Metrics Suite for Object-Oriented Design," IEEE Trans. Software Engineering, 1994.
S.R. Chidamber and C.F. Kemerer, "A Metrics Suite for Object-Oriented Design," IEEE Trans. Software Engineering, 1994. R. Harrison, S.J. Counsell, and R.V. Nithi, "An Evaluation of the MOOD Set of Object-Oriented Software Metrics," IEEE Trans. Software Engineering.
MIF = Σ Mi(Ci) / Σ Ma(Ci), with the sums taken over i = 1 to TC. Ma(Ci) = Mi(Ci) + Md(Ci); TC = total number of classes; Md(Ci) = number of methods declared in class Ci; Mi(Ci) = number of methods inherited in class Ci.
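The MIF ratio above can be illustrated with invented per-class counts of declared and inherited methods:

```python
# Hypothetical system: for each class, (methods declared, methods inherited).
classes = {"A": (5, 0), "B": (2, 4), "C": (1, 6)}

# Numerator: sum of inherited methods Mi(Ci) over all classes
inherited = sum(mi for _, mi in classes.values())

# Denominator: sum of available methods Ma(Ci) = Md(Ci) + Mi(Ci)
available = sum(md + mi for md, mi in classes.values())

mif = inherited / available
print(round(mif, 3))  # 10 / 18 -> 0.556
```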
AIF is defined as the ratio of the sum of inherited attributes in all classes of the system to the total number of available attributes for all classes (the AIF denominator). It is defined in a manner analogous to MIF and
R. Harrison, S.J. Counsell, and R.V. Nithi, "An Evaluation of the MOOD Set of Object-Oriented Software Metrics," IEEE Trans. Software Engineering.
AIF = Σ Ai(Ci) / Σ Aa(Ci), with the sums taken over i = 1 to TC. Aa(Ci) = Ai(Ci) + Ad(Ci); TC = total number of classes; Ad(Ci) = number of attributes declared in class Ci; Ai(Ci) = number of attributes inherited in class Ci.
DAC
provides an indication of the impact of inheritance in the object-oriented software. The MHF metric states the sum of the invisibilities of all methods in all classes; the invisibility of a method is the percentage of the total classes from which the method is hidden. If the value of MHF is high (100%), all methods are private, which indicates very little functionality; thus it is not possible to reuse methods with a high MHF. The AHF metric shows the sum of the invisibilities of all attributes in all classes; the invisibility of an attribute is the percentage of the total classes from which the attribute is hidden. If the value of AHF is high (100%), all attributes are private; an AHF of 0% indicates that all attributes are public. This counts the unique classes used
Md(Ci) = number of methods declared in class Ci; Mv(Ci) = number of methods visible in class Ci; Mh(Ci) = number of methods hidden in class Ci; Md(Ci) = Mv(Ci) + Mh(Ci).
R. Harrison, S.J. Counsell, and R.V. Nithi, "An Evaluation of the MOOD Set of Object-Oriented Software Metrics," IEEE Trans. Software Engineering.
Ad(Ci) = number of attributes declared in class Ci; Av(Ci) = number of attributes visible in class Ci; Ah(Ci) = number of attributes hidden in class Ci; Ad(Ci) = Av(Ci) + Ah(Ci).
R. Harrison, S.J. Counsell, and R.V. Nithi, "An Evaluation of the MOOD Set of Object-Oriented Software Metrics," IEEE Trans. Software Engineering.
S.R. Chidamber and C.F. Kemerer, "A Metrics Suite for Object-Oriented Design," IEEE Trans. Software Engineering, 1994.
It directly measures the number of linearly independent paths through a program's source code. The concept, although not the method, is somewhat similar to that of general text complexity measured by the Flesch-Kincaid Readability Test.
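A rough sketch of cyclomatic complexity over Python source, counting 1 plus the number of decision points; which node types count as decisions varies between tools, so the selection here is an assumption:

```python
import ast

# Node types counted as decision points (an assumption; tools differ on this)
DECISIONS = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def cyclomatic(source: str) -> int:
    """Approximate cyclomatic complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISIONS) for node in ast.walk(tree))

src = """
def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x
"""
print(cyclomatic(src))  # two if-statements -> 1 + 2 = 3
```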
SLOC is used to estimate the total effort that will be needed to develop a program, as well as to calculate approximate productivity. The SLOC metric measures the number of physical lines of active code, that is, excluding blank and comment lines. The CP metric is defined as the number of comment lines of code divided by the number of non-blank lines of code; the comment percentage is calculated as the total number of comment lines divided by the total lines of code less the number of blank lines.
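The SLOC and comment-percentage definitions can be sketched for a '#'-commented language; the line-classification rules here are simplified assumptions:

```python
# Counts physical SLOC (active, non-blank, non-comment lines) and the comment
# percentage CP = comment lines / non-blank lines. Simplified: a line counts as
# a comment only if it starts with '#' after stripping whitespace.
def sloc_and_cp(text: str):
    lines = [ln.strip() for ln in text.splitlines()]
    blank = sum(1 for ln in lines if not ln)
    comment = sum(1 for ln in lines if ln.startswith("#"))
    code = len(lines) - blank - comment
    cp = comment / (len(lines) - blank)
    return code, cp

code, cp = sloc_and_cp("# header\nx = 1\n\ny = 2\n")
print(code, round(cp, 2))  # 3 non-blank lines: 1 comment, 2 code -> 2 0.33
```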
T.J. McCabe (December 1976), "A Complexity Measure," IEEE Transactions on Software Engineering: 308-320.
Lorenz, Mark & Kidd, Jeff: Object-Oriented Software Metrics, Prentice Hall, 1994.
Comment Percent (CP)
Laing, V., Coleman, C.: "Principal Components of Orthogonal Object-Oriented Metrics," Software Assurance Technology Center (SATC), 2001.
The specialisation index (SIX) measures the extent to which subclasses override (replace) behaviour of their superclasses; SIX provides a measure of the quality of subclassing. NOM measures the number of messages sent in a method, segregated by type of message. The types include: unary, messages with no arguments; binary, messages with one argument, separated by a special selector name (concatenation and math functions); keyword, messages with one or more arguments. The conceptual similarity between methods mk ∈ M(C) and mj ∈ M(C), CSM(mk, mj), is computed as the cosine between the vectors vmk and vmj, corresponding to mk and mj in the semantic space constructed by
Lorenz, Mark & Kidd, Jeff: Object-Oriented Software Metrics, Prentice Hall, 1994.
----
Lorenz, Mark & Kidd, Jeff: Object-Oriented Software Metrics, Prentice Hall, 1994.
Denys Poshyvanyk and Andrian Marcus, "The Conceptual Coupling Metrics for Object-Oriented Systems."
LSI. Let ck ∈ C and cj ∈ C be two distinct (ck ≠ cj) classes in the system. Each class has a set of methods M(ck) = {mk1, ..., mkr}, where r = |M(ck)|, and M(cj) = {mj1, ..., mjt}, where t = |M(cj)|. Between every pair of methods (mk, mj) there is a similarity measure CSM(mk, mj); the conceptual similarity between the classes, CSBC(ck, cj), is the average of the similarity measures between all unordered pairs of methods from class ck and class cj. The definition ensures that the conceptual similarity between two classes is symmetrical, as CSBC(ck, cj) = CSBC(cj, ck). Conceptual coupling is measured by the degree to which the methods of a class are conceptually related to the methods of other classes. It measures the number of associations between a class and its
Cj=class j
t = |M(cj)|
Denys Poshyvanyk and Andrian Marcus, "The Conceptual Coupling Metrics for Object-Oriented Systems."
Denys Poshyvanyk and Andrian Marcus, "The Conceptual Coupling Metrics for Object-Oriented Systems."
Cj=class j
r= |M(Ck)|
Denys Poshyvanyk and Andrian Marcus, "The Conceptual Coupling Metrics for Object-Oriented Systems."
Number of Associations (NAS)
peers. AID is the average of the inheritance depth: AID = 1 for parent classes and 0 for classes without parents. CLD is the maximum number of levels in the hierarchy that are below the class. NOP is the number of classes that a given class directly inherits from.
CLD = maximum number of levels in the hierarchy that are below the class
Number of Parents(NOP)
Number of classes that directly or indirectly inherit from this class. Number of classes from which the class inherits (directly or indirectly). Number of methods in a class that the class inherits from its ancestors and does not override. Number of methods neither inherited nor overridden. Number of function members that implement the same operator in ancestors and in the current
J. Bansiya and C.G. Davis, "A Hierarchical Model for Object-Oriented Design Quality Assessment," IEEE Transactions on Software Engineering, Vol. 28, No. 1, 2002. J. Bansiya and C.G. Davis, "A Hierarchical Model for Object-Oriented Design Quality Assessment," IEEE Transactions on Software Engineering, Vol. 28, No. 1, 2002. Li, W. and Henry, S., "Object-oriented metrics that predict maintainability," Journal of Systems and Software, vol. 23, no. 2, 1993. Li, W. and Henry, S., "Object-oriented metrics that predict maintainability," Journal of Systems and Software, vol. 23, no. 2, 1993. Li, W. and Henry, S., "Object-oriented metrics that predict maintainability," Journal of Systems and Software, vol. 23, no. 2, 1993. Li, W. and Henry, S., "Object-oriented metrics that predict maintainability," Journal of Systems and Software, vol. 23, no. 2, 1993. Arisholm, E., Briand, L.C., and Foyen, A., "Dynamic coupling measurement for object-oriented software," IEEE Transactions on Software
class (at compile time) Number of function members that implement the same operator in ancestors and in the current class (at run time). Number of function members that implement the same operator in descendants and in the current class (at compile time). Number of function members that implement the same operator in descendants and in the current class (at run time) It is the sum of static Polymorphism in Ancestors and Static Polymorphism in Descendants It is the sum of dynamic Polymorphism in Ancestors and dynamic Polymorphism in Descendants Count of number of functions with the same name in a class
Engineering. Arisholm, E., Briand, L.C., and Foyen, A., "Dynamic coupling measurement for object-oriented software," IEEE Transactions on Software Engineering.
Arisholm, E., Briand, L.C., and Foyen, A., "Dynamic coupling measurement for object-oriented software," IEEE Transactions on Software Engineering. Arisholm, E., Briand, L.C., and Foyen, A., "Dynamic coupling measurement for object-oriented software," IEEE Transactions on Software Engineering.
SP= SPA+SPD SPA= Static Polymorphism in Ancestors SPD= Static Polymorphism in Descendants
DP= DPA+DPD DPA= Dynamic Polymorphism in Ancestors DPD= Dynamic Polymorphism in Descendants
Arisholm, E., Briand, L.C., and Foyen, A., "Dynamic coupling measurement for object-oriented software," IEEE Transactions on Software Engineering. Arisholm, E., Briand, L.C., and Foyen, A., "Dynamic coupling measurement for object-oriented software," IEEE Transactions on Software Engineering. Arisholm, E., Briand, L.C., and Foyen, A., "Dynamic coupling measurement for object-oriented software," IEEE Transactions on Software Engineering.
Export Object Coupling (EOC) with respect to a scenario x between objects oi and oj is the percentage of the number of messages sent from oi to oj relative to the total number of messages exchanged during the scenario. Import Object Coupling (IOC) with respect to a scenario x between objects oi and oj is the percentage of the number of messages received by oi from oj relative to the total number of messages. Dynamic CBO is a direct translation of the C&K CBO metric, except that it is defined at runtime.
EOC = [Σ {Mx(oi, oj) | oi, oj ∈ O, oi ≠ oj} / MTx] * 100, where Mx(oi, oj) = the number of messages sent from oi to oj and MTx = the total number of messages exchanged during the execution of scenario x. IOC = [Σ {Mx(oi, oj) | oi, oj ∈ O, oi ≠ oj} / MTx] * 100, where Mx(oi, oj) = the number of messages received by oi from oj and MTx = the total number of messages exchanged during the execution of scenario x. Dynamic CBO = number of couples of a class with other classes at run time.
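The EOC formula can be illustrated from a made-up message trace for one scenario; the object names and messages are invented:

```python
# Scenario trace: each entry is (sender, receiver) for one message.
trace = [("ui", "db"), ("ui", "db"), ("db", "log"), ("ui", "log")]

def eoc(oi, oj):
    """EOC(oi, oj): messages sent from oi to oj as a % of all messages."""
    return sum(1 for s, r in trace if (s, r) == (oi, oj)) / len(trace) * 100

print(eoc("ui", "db"))  # 2 of 4 messages -> 50.0
```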
S. Yacoub, H. Ammar, and T. Robinson, "Dynamic Metrics for Object-Oriented Designs," Proc. IEEE 6th International Symposium on Software Metrics (Metrics'99), pp. 50-61. S. Yacoub, H. Ammar, and T. Robinson, "Dynamic Metrics for Object-Oriented Designs," Proc. IEEE 6th International Symposium on Software Metrics (Metrics'99), pp. 50-61. Misook Choi and JongSuk Lee, "A Dynamic Coupling for Reusable and Efficient Software System," 5th ACIS International Conference on Software Engineering Research, Management & Applications. Gurunandha Rao, "Measurement of Dynamic Coupling in an Object-Oriented System," American Journal of Scientific Research.
Degree of dynamic coupling between two classes at run time (DDCR)
Degree of dynamic coupling from a given set of classes
RI
DDCR is the number of times class A accesses methods or instance variables of class B, as a percentage of the total number of methods or instance variables accessed by class A. The set-level metric is an extension of DDCR indicating the level of dynamic coupling within a set of classes. RI is the runtime import coupling between objects.
DDCR = [number of times A accesses methods of class B / total number of methods accessed by class A] * 100
[sum of the number of accesses to methods or instance variables outside each class / sum of the total number of accesses from these classes] * 100. RI = number of classes from which a given class accesses methods or instance variables at runtime.
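A minimal DDCR sketch over an invented runtime access trace:

```python
# Runtime trace for class A: each entry is "OwnerClass.method" for one access.
# The trace is invented for illustration.
accesses_by_A = ["B.get", "B.set", "C.get", "B.get"]

to_B = sum(1 for m in accesses_by_A if m.startswith("B."))
ddcr = to_B / len(accesses_by_A) * 100  # accesses to B as % of all A's accesses
print(ddcr)  # 3 of 4 -> 75.0
```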
Gurunandha Rao, "Measurement of Dynamic Coupling in an Object-Oriented System," American Journal of Scientific Research. Misook Choi and JongSuk Lee, "A Dynamic Coupling for Reusable and Efficient Software System," 5th ACIS International Conference on Software
RDI
RE
Runtime export coupling between objects
RE = number of classes that access methods or instance variables of a given class at runtime
RDE
Engineering Research, Management & Applications. Misook Choi and JongSuk Lee, "A Dynamic Coupling for Reusable and Efficient Software System," 5th ACIS International Conference on Software Engineering Research, Management & Applications. Misook Choi and JongSuk Lee, "A Dynamic Coupling for Reusable and Efficient Software System," 5th ACIS International Conference on Software Engineering Research, Management & Applications. Misook Choi and JongSuk Lee, "A Dynamic Coupling for Reusable and Efficient Software System," 5th ACIS International Conference on Software Engineering Research, Management & Applications. Pham Thi Quynh and Huynh Quyet Thang, "Dynamic Coupling Metric for Service Oriented Software," International Journal on Electronics Engineering, 2009.
CBS is built directly from the CBO metric in the C&K metrics suite. For service A, the CBS metric is calculated from the number of relationships between A and the other services in the system. The next metric measures the instability of a service.
n = the number of services in the system; AiBj = 0 if Ai does not connect to Bj, and AiBj = 1 if Ai connects to Bj.
fan-in = number of functions that call function A; fan-out = number of functions that are called by
Joost Visser, Departamento de Informática, Universidade do Minho, Braga, Portugal, "Structure Metrics for XML Schema."
function A. Degree of Coupling between 2 Services (DC2S): the DC2S metric identifies the relationship between two services to detect the dependency between them, i.e., the level of coupling between two services at runtime. As a dynamic metric, the weight of an edge is the number of requests from the requesting service to the provider service. When the value of this metric is low, the coupling in the system is loose, and vice versa. This metric helps to distinguish between two systems that have the same nodes but differ in the connections between the nodes.
Pham Thi Quynh and Huynh Quyet Thang, "Dynamic Coupling Metric for Service Oriented Software," International Journal on Electronics Engineering, 2009.
Max = K * V * (V − 1); Min = V * (V − 1); d(u, v) = length of the shortest path from u to v; K = the maximum value of the shortest-path length between any two nodes; V = the vertex set of the graph G(U, V).
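The ingredients of the graph-based bounds above (the shortest-path lengths d(u, v), their maximum K, and the Max/Min expressions) can be sketched with a small BFS over an invented service graph:

```python
from collections import deque

# Hypothetical directed service graph: node -> list of services it requests.
graph = {"a": ["b"], "b": ["c"], "c": [], "d": ["a"]}

def shortest(u, v):
    """BFS shortest-path length d(u, v); None if v is unreachable from u."""
    seen, q = {u}, deque([(u, 0)])
    while q:
        node, d = q.popleft()
        if node == v:
            return d
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                q.append((nxt, d + 1))
    return None

V = len(graph)
dists = [shortest(u, v) for u in graph for v in graph if u != v]
K = max(d for d in dists if d is not None)   # longest shortest path
print(K, K * V * (V - 1), V * (V - 1))       # K, Max, Min
```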
Pham Thi Quynh and Huynh Quyet Thang, "Dynamic Coupling Metric for Service Oriented Software," International Journal on Electronics Engineering, 2009.
It comes under the MOOD2 metrics suite. OHEF measures the goodness of the scope settings on class operations (i.e., methods); when OHEF = 1, the scope settings are perfect. AHEF is related to AHF: AHF measures the general level of attribute hiding, whereas AHEF measures how well the
OHEF = Classes that do access operations / Classes that can access operations
Fernando Brito e Abreu: Using OCL to formalize object oriented metrics definitions. Technical Report ES007
AHEF = Classes that do access attributes / Classes that can access attributes
Fernando Brito e Abreu: Using OCL to formalize object oriented metrics definitions. Technical Report ES007
hiding succeeds. Internal Inheritance Factor (IIF): IIF measures the amount of internal inheritance in the system; internal inheritance happens when a class inherits from another class in the same system. If there is no inheritance, IIF = 0. The next metric is simply the percentage of classes that are parameterized; a parameterized class is also called a generic class. Another counts the number of Implements statements; a class may implement either another class (VB Classic) or an interface definition. WMCnp is the same as WMC (Weighted Methods per Class from the C&K metrics) except that private methods are ignored. WMCi is the same as WMC but all inherited methods are also counted. VARS is the number of variables and arrays defined at the class level.
Fernando Brito e Abreu: Using OCL to formalize object oriented metrics definitions. Technical Report ES007
IIF = Classes that inherit a VB class / All classes that inherit something
Fernando Brito e Abreu: Using OCL to formalize object oriented metrics definitions. Technical Report ES007
R. Marinescu. A Multi-Layered System of Metrics for the Measurement of Reuse by Inheritance. Paper submitted to TOOLS USA'99, March 1999.
Yu, Z. and Rajlich, V., "Hidden Dependencies in Program Comprehension and Change Propagation",
WMCi = WMC + inherited methods; WMC = weighted methods per class. VARS = number of variables and arrays defined in the class.
Yu, Z. and Rajlich, V., "Hidden Dependencies in Program Comprehension and Change Propagation", Yu, Z. and Rajlich, V., "Hidden Dependencies in Program Comprehension and
Non-private variables defined by class (VARSnp); Variables defined and inherited by class (VARSi); Events defined by class (EVENT)
Inherited and procedure-level variables are not counted. The same as VARS, but Private variables are excluded. The same as VARS, but inherited variables are included. Number of Event statements (event definitions) in a class. Inherited events and event handlers are not counted. Number of constructors (Sub New) in a class. Class_Initialize in VB Classic is an event handler, and not counted in CTORS. Number of methods + variables defined by class. Measures the size of the class in terms of operations and data. Number of non-private methods + variables defined by class. Measures the size of the interface from other parts of the system to the class. If zero, no OO metrics are meaningful.
Change Propagation",
VARSnp = VARS − private variables; VARSi = VARS + inherited variables; VARS = variables defined by class; EVENT = number of event statements.
Yu, Z. and Rajlich, V., "Hidden Dependencies in Program Comprehension and Change Propagation", Yu, Z. and Rajlich, V., "Hidden Dependencies in Program Comprehension and Change Propagation", Poshyvanyk, D., Marcus, A., Dong, Y., and Sergeyev, A., "IRiSS - A Source Code Exploration Tool", in Industrial and Tool Proceedings of 21st IEEE International Conference on Software Maintenance,2005 Poshyvanyk, D., Marcus, A., Dong, Y., and Sergeyev, A., "IRiSS - A Source Code Exploration Tool", in Industrial and Tool Proceedings of 21st IEEE International Conference on Software Maintenance,2005 Poshyvanyk, D., Marcus, A., Dong, Y., and Sergeyev, A., "IRiSS - A Source Code Exploration Tool", in Industrial and Tool Proceedings of 21st IEEE International Conference on Software Maintenance,2005 Poshyvanyk, D., Marcus, A., Dong, Y., and Sergeyev, A., "IRiSS - A Source Code Exploration Tool", in Industrial and Tool Proceedings of 21st IEEE International Conference on Software Maintenance,2005 Poshyvanyk, D., Marcus, A., Dong, Y., and Sergeyev,
Class Size (CSZ)
CSZ = WMC + VARS. WMC=weighted methods per class VARS=Variables defined by class
CIS = WMCnp + VARSnp. WMCnp= Non-private methods defined by class VARSnp=Non-private variables defined by class
Number of classes (CLS)
Number of abstract classes defined in project
Number of concrete classes defined in project; a concrete class is one that is not abstract.
Number of distinct class hierarchies.
CLSc = CLS CLSa CLS= Number of classes CLSa= Number of abstract classes
Number of .NET Interfaces. Abstract classes are not counted as interfaces even though they can be thought of as interfaces.
A., "IRiSS - A Source Code Exploration Tool", in Industrial and Tool Proceedings of 21st IEEE International Conference on Software Maintenance,2005 Poshyvanyk, D., Marcus, A., Dong, Y., and Sergeyev, A., "IRiSS - A Source Code Exploration Tool", in Industrial and Tool Proceedings of 21st IEEE International Conference on Software Maintenance,2005 Poshyvanyk, D., Marcus, A., Dong, Y., and Sergeyev, A., "IRiSS - A Source Code Exploration Tool", in Industrial and Tool Proceedings of 21st IEEE International Conference on Software Maintenance,2005 Poshyvanyk, D., Marcus, A., Dong, Y., and Sergeyev, A., "IRiSS - A Source Code Exploration Tool", in Industrial and Tool Proceedings of 21st IEEE International Conference on Software Maintenance,2005 Poshyvanyk, D., Marcus, A., Dong, Y., and Sergeyev, A., "IRiSS - A Source Code Exploration Tool", in Industrial and Tool Proceedings of 21st IEEE International Conference on Software Maintenance,2005 Poshyvanyk, D., Marcus, A., Dong, Y., and Sergeyev, A., "IRiSS - A Source Code Exploration Tool", in Industrial and Tool Proceedings of 21st IEEE International Conference on Software Maintenance,2005
maxDIT is the maximum value of the Depth of Inheritance Tree from the C&K metrics; maxDIT should not exceed 6. Afferent coupling (Ca) is the number of classes outside this module that depend on classes inside this module. Efferent coupling (Ce) is the number of classes inside this module that depend on classes outside this module. Instability (I) is the ratio of efferent coupling to the total of efferent and afferent coupling. Module coupling (MC) is the inverse of the sum of the number of input parameters, output parameters, global variables, modules called, and modules calling; 0.5 is low coupling, 0.001 is high coupling. CPD is used to estimate the effort required to develop a system; it is an estimate of how much code a single developer can reasonably expect to own. CPD = number of key classes / number of people.
Ca = number of classes outside module Mi that depend on classes inside module Mi. Ce = number of classes inside module Mi that depend on classes outside module Mi. I = Ce / (Ca + Ce). MC = 1/(Pi + Po + G + Mcalled + Mcalling), where Pi = number of input parameters, Po = number of output parameters, G = number of global variables, Mcalled = number of modules called from the class Ci, and Mcalling = number of modules calling the class Ci.
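The instability ratio I = Ce / (Ca + Ce) is a one-liner; the coupling counts below are invented for illustration:

```python
# Hypothetical module: 3 external classes depend on it (Ca), it depends on 1 (Ce).
ca, ce = 3, 1

instability = ce / (ca + ce)   # I = Ce / (Ca + Ce), in [0, 1]
print(instability)  # 0.25 -> mostly depended-upon, i.e. a stable module
```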
Succi, G., Pedrycz, W., Djokic, S., Zuliani, P., and Russo, B., "An Empirical Exploration of the Distributions of the Chidamber and Kemerer Object-Oriented Metrics Suite", Empirical Software Eng., 10 (1), Jan. 2005, Robert Martin, OO Design Quality Metrics, An Analysis of Dependencies Robert Martin, OO Design Quality Metrics, An Analysis of Dependencies Robert Martin, OO Design Quality Metrics, An Analysis of Dependencies
Module Coupling (MC)
Dhama, H. 1995. Quantitative models of cohesion and coupling in software. In Selected Papers of the Sixth Annual Oregon Workshop on Software Metrics (Silver Falls, Oregon, United States). W. Harrison and R. L. Glass, Eds. Elsevier Science, New York, NY, 65-74. Lorenz, Mark & Kidd Jeff: Object-Oriented Software Metrics, Prentice Hall, 1994. Lorenz, Mark & Kidd Jeff: Object-Oriented Software Metrics, Prentice Hall, 1994.
It is equal to the number of leaves in the tree; a higher BIT means a higher number of methods/attributes reused in the derived classes. MRPIR computes the total number of methods reused per inheritance relation in the inheritance hierarchy; it applies to the whole inheritance hierarchy in the system. A corresponding metric computes the total number of attributes reused per inheritance relation in the inheritance hierarchy. Generality of a class is the measure of its relative abstraction level; the higher the generality of a class, the more likely it is to be reused. Another metric gives the probability of reusing classes in the inheritance hierarchy.
Dr. Kadhim M. Breesam, "Metrics for Object-Oriented Design Focusing on Class Inheritance Metrics," IEEE 2nd International Conference on Dependability of Computer Systems (DepCoS-RELCOMEX'07), 2007. Dr. Kadhim M. Breesam, "Metrics for Object-Oriented Design Focusing on Class Inheritance Metrics," IEEE 2nd International Conference on Dependability of Computer Systems (DepCoS-RELCOMEX'07), 2007.
MIk = no. of methods inherited through the kth inheritance relationship
r = total number of inheritance relationships
AIk = no. of attributes inherited through the kth inheritance relationship
r = total number of inheritance relationships
GC = a / al, where
a = abstraction level of the class
al = total no. of possible abstraction levels
Kadhim M. Breesam, "Metrics for Object-Oriented Design Focusing on Class Inheritance Metrics", IEEE 2nd International Conference on Dependability of Computer Systems (DepCoS-RELCOMEX'07), 2007.
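A minimal sketch of the inheritance-reuse metrics above, assuming (as the variable definitions suggest) that methods reused per inheritance relation is the sum of MIk divided by r; the function names are ours:

```python
def methods_reused_per_relation(methods_inherited):
    """MRPIR sketch: sum of MIk over all r inheritance relationships, divided by r.
    `methods_inherited` holds one MIk count per inheritance relationship."""
    r = len(methods_inherited)
    if r == 0:
        return 0.0  # assumption: no inheritance relationships means no reuse
    return sum(methods_inherited) / r

def generality(a: int, al: int) -> float:
    """GC = a / al: the class's abstraction level relative to the total
    number of possible abstraction levels."""
    return a / al
```

The attribute counterpart is identical in shape, with AIk counts in place of MIk.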
Abstractness (A)
Ni = total number of classes that can be inherited
Nlg = total number of classes that can be inherited but that have the lowest possible generic level
N = total number of classes in the inheritance hierarchy

A = no. of abstract classes in the category / total no. of classes in the category. It ranges from 0 to 1: 0 means completely concrete and 1 means completely abstract.
The NCBC metric counts the percentage of catch blocks in each method of the class. The NCBC denominator represents the maximum number of possible catch blocks for class Cik; this would be the case where every possible exception in Cik has a corresponding catch block to handle it.
K.K.Aggarwal, Yogesh Singh, Arvinder Kaur and Ruchika Malhotra ,Software Design Metrics for ObjectOriented Software, JOURNAL OF OBJECT TECHNOLOGY,2008
n = number of methods in a class
m = number of catch blocks in a method
Cij = jth catch block in the ith method
Cik = kth catch block in the ith method
l = maximum number of possible catch blocks in a method
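The abstractness and catch-block metrics above can be sketched as follows (a simplified reading: NCBC is taken as actual catch blocks over the maximum possible, expressed as a percentage; the function names are ours):

```python
def abstractness(n_abstract: int, n_total: int) -> float:
    """A = abstract classes in the category / total classes in the category.
    0 means completely concrete, 1 means completely abstract."""
    return n_abstract / n_total

def ncbc_percent(catch_blocks_per_method, max_possible_per_method):
    """NCBC sketch: percentage of catch blocks actually present in the class's
    methods relative to the maximum possible number of catch blocks."""
    actual = sum(catch_blocks_per_method)
    possible = sum(max_possible_per_method)
    if possible == 0:
        return 0.0  # assumption: no possible exceptions means nothing to handle
    return 100.0 * actual / possible
```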
The following table gives a brief description of the non-OO metrics: their definitions, formulas and authors.
TABLE 1.2 LIST OF NON-OO METRICS AND THEIR FORMULA
It captures the relation between the number of weighted defects and the size of the product release; the purpose is to deliver a high-quality product. It shows the relation between the number of weighted defects shipped to customers and the size of the product release; the purpose is to deliver a high-quality product.
(WTP + WF) / KCSI
WTP = no. of weighted defects found in the product under test (before official release)
WF = no. of weighted defects found in the product after release
KCSI = no. of new or changed source lines of code, in thousands
Arvinder Kaur, Bharti Suri and Abhilasha Sharma, Software Testing Product Metrics - A Survey. K. K. Aggarwal and Yogesh Singh, Software Engineering: Programs, Documentation, Operating Procedures (Second Edition), New Age International Publishers, 2005.
WF/KCSI
WF = no. of weighted defects found in the product after release
KCSI = no. of new or changed source lines of code, in thousands
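The two defect-density formulas can be sketched directly from their definitions (function names are ours):

```python
def total_defect_density(wtp: int, wf: int, kcsi: float) -> float:
    """(WTP + WF) / KCSI: all weighted defects (pre- and post-release)
    per thousand new or changed source lines."""
    return (wtp + wf) / kcsi

def shipped_defect_density(wf: int, kcsi: float) -> float:
    """WF / KCSI: weighted defects that escaped to customers
    per thousand new or changed source lines."""
    return wf / kcsi
```

For example, a release with 40 pre-release and 10 post-release weighted defects over 25 KCSI has a total density of 2.0 and a shipped density of 0.4.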
Test Improvement (TI)
Test Effectiveness (TE)
It shows the relation between the number of weighted defects detected by the test team during testing and the size of the product release; the purpose is to deliver a high-quality product. It shows the relation between the number of weighted defects detected during testing and the total number of weighted defects in the product; the purpose is to deliver a high-quality product. It shows the relation between time spent on testing and the size of the product release; the purpose is to decrease time-to-market. It shows the relation between time spent on testing and time spent on development; the purpose is to decrease time-to-market.
WTTP/KCSI
KCSI = no. of new or changed source lines of code, in thousands
WTTP = no. of weighted defects found by the test team in the test cycle of the product
Yanping Chen, Robert L. Probert, Kyle Robenson Effective Test Metrics for Test Strategy Evolution
WT/(WTP+WF)*100%
WTP = no. of weighted defects found in the product under test (before official release)
WF = no. of weighted defects found in the product after release
WT = no. of weighted defects found by the test team during the product cycle
P. Dhavachelvan, G. V. Uma, V.S.K. Venkatachalapathy A new approach in development of distributed framework for automated software testing using agents
TT/KCSI
KCSI = no. of new or changed source lines of code, in thousands
TT = no. of business days used for product testing
N. E. Fenton and S. L. Pfleeger, Software Metrics: A Rigorous and Practical Approach, Second Edition, Revised ed., Boston. L. Finkelstein, Theory and Philosophy of Measurement, in Theoretical Fundamentals, Vol. 1, Handbook of Measurement Science, P. H. Sydenham, Ed., Chichester: John Wiley & Sons.
TT/TD*100%
TT = no. of business days used for product testing
TD = no. of business days used for product development
It shows the relation between resources or money spent on testing and the size of the product release; the purpose is to decrease cost-to-market. It shows the relation between the testing cost and the development cost of the product; the purpose is to decrease cost-to-market. It shows the relation between money spent by the test team and the number of weighted defects detected during testing; the purpose is to decrease cost-to-market. It shows the relation between the number of weighted defects detected and the size of the product release. It shows the relation between time spent on testing and the size of the product release.
CT/KCSI
CT = total cost of testing the product, in dollars
KCSI = no. of new or changed source lines of code, in thousands
CT/CD*100%
CT = total cost of testing the product, in dollars
CD = total cost of developing the product, in dollars
S.H. Kan, J. Parrish and D.Manclove In-process metrics for software testing
CT/WT
CT = total cost of testing the product, in dollars
WT = no. of weighted defects found by the test team during the product cycle
Stephen H. Kan Metrics and Models in Software Quality Engineering, Second Edition,
WP/KCSI
WP = no. of weighted defects found in one specific test phase
KCSI = no. of new or changed source lines of code, in thousands
Cem Kaner Software Engineering Metrics: What do they measure and how do we know?
TTP/KCSI
TTP = no. of business days used for a specific test phase
KCSI = no. of new or changed source lines of code, in thousands
N.Nagappan Toward a software Testing and Reliability Early warning Metric Suite
It shows the relation between resources or money spent on the test phase and the size of the product release. It shows the relation between money spent on the test phase and the number of weighted defects detected. It shows the relation between the number of one type of defect detected in one specific test phase and the total number of this type of defect in the product. Software reliability is the probability of failure-free operation of a computer program for a specified time in a specified environment. The goal of the test session efficiency metric is to identify trends in the effectiveness of scheduled test time.
CTP/KCSI
KCSI = no. of new or changed source lines of code, in thousands
CTP = total cost of a specific test phase, in dollars
Arvinder Kaur, Bharti Suri and Abhilasha Sharma, Software Testing Product Metrics - A Survey. E. Osterweil, Strategic Directions in Software Quality.
CTP/WT
CTP = total cost of a specific test phase, in dollars
WT = no. of weighted defects found by the test team during the product cycle
WD/(WD+WN)*100%
WD = no. of weighted defects of this defect type that are detected in the test phase
WN = no. of weighted defects of this defect type (any particular type) that remain uncovered after the test phase (missed defects)
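The per-defect-type detection percentage, WD / (WD + WN) * 100%, can be sketched as (function name is ours):

```python
def detection_percentage(wd: int, wn: int) -> float:
    """WD / (WD + WN) * 100%: the share of all weighted defects of a given
    type that the test phase caught; WN counts defects of that type missed."""
    return 100.0 * wd / (wd + wn)
```

Catching 8 of 10 memory-leak defects in a phase, for example, yields 80%.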
Software reliability
Z(t) = h * exp(-h*t / N)
Z(t) = instantaneous failure rate
h = failure rate prior to the start of testing
N = no. of faults inherent in the program prior to the start of testing

SYSE = Active Test Time / Scheduled Test Time
TE = Total no. of good runs / Total no. of runs
where SYSE is the system efficiency and TE is the tester efficiency.
Linda H. Rosenberg, Theodore F. Hammer and Lenore L. Huffman, Requirements, Testing, and Metrics, NASA GSFC. Norman F. Schneidewind, Measuring and Evaluating Maintenance Process Using Reliability, Risk, and Test Metrics, IEEE Transactions on Software Engineering.
Test focus
The goal is to identify the amount of effort spent finding and fixing real faults versus the effort spent either eliminating false defects or waiting for a hardware fix. The goals are (i) to quantify the relative stabilization of a software subsystem and (ii) to identify any possible over-testing or testing bottlenecks by examining the fault density of the subsystem over time; its three components are T, O and H. The goal of the metric is to examine the efficiency of testing over time. This metric gives the test-case execution productivity, which on further analysis can give a conclusive result. This metric gives the test-case writing productivity, based on which one can make a conclusive remark. This metric determines the number of valid defects that the testing team has identified during execution.
TF = No. of DRs closed with a software fix / Total no. of DRs
DR = Discrepancy Report; TF = Test Focus
Software maturity
T = Total no. of DRs changed to a subsystem / 1000 SLOC (Total Density)
O = No. of currently open subsystem DRs / 1000 SLOC (Open Density)
H = Active test hours per subsystem / 1000 SLOC (Test Hours)
Test coverage
Ramesh Pusala, Operational Excellence through Efficient Software Testing Metrics, Infosys, 2006. George E. Stark, Robert C. Durst and Tammy M. Pelnik, An Evaluation of Software Testing Metrics. Arvinder Kaur, Bharti Suri and Abhilasha Sharma, Software Testing Product Metrics - A Survey. Ljubomir Lazic and Nikos Mastorakis, Cost Effective Software Test Metrics.
Productivity
This metric determines the number of defects rejected during execution. It gives the percentage of invalid defects the testing team has opened, which can be controlled whenever required. A defect whose resolution gives rise to new defect(s) is a bad-fix defect; this metric determines the effectiveness of the defect-resolution process. It gives the percentage of bad defect resolutions, which needs to be controlled. This metric determines the efficiency of the testing team in identifying defects; it also indicates the defects missed during the testing phase that migrated to the next phase. This metric determines the quality of the product under test at the time of release, based on which one can take the decision to release the product, i.e. it indicates the quality. This metric gives the scripting productivity for performance test scripts, which can be trended over a period of time.
Bad Fix Defect % = (Number of Bad Fix Defects / Total Number of Valid Defects) * 100%
Roger S. Pressman, Software Engineering: A Practitioner's Approach, 5th Edition, McGraw Hill, 1997.
Fenton, N. and S. L. Pfleeger, Software Metrics: A Rigorous and Practical Approach, PWS Publishing Co.
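The bad-fix percentage above is a straightforward ratio; a minimal sketch (function name is ours):

```python
def bad_fix_percentage(bad_fix_defects: int, total_valid_defects: int) -> float:
    """Bad Fix Defect % = (bad-fix defects / total valid defects) * 100%,
    where a bad-fix defect is one whose resolution introduced new defect(s)."""
    return 100.0 * bad_fix_defects / total_valid_defects
```

If 3 of 60 valid defect resolutions introduced new defects, the bad-fix rate is 5%, flagging a resolution process worth reviewing.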
Performance Scripting Productivity (PSP)
This metric determines the quality of the performance testing team in meeting the requirements, which can be used as an input for further improvement, if required. Requirements volatility (RV) refers to additions, deletions and modifications of requirements during the systems development life cycle. Ignoring requests for requirement changes can cause system failure due to user rejection, and failure to manage RV can increase development time and cost. The defect rejection ratio is the number of defect reports that were rejected (perhaps because they were not actually bugs) divided by the total number of defects. This shows the ability to keep the product right while fixing defects; the report shows (Closed + Reopened) / Fixed.
Performance Test Efficiency = [(Req met during PT) - (Req not met after sign-off of PT)] / (Req met during PT) * 100%
Requirements Volatility
{(No. of requirements added + No. of requirements deleted + No. of requirements modified) / No. of initial approved requirements} * 100%
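The requirements-volatility formula can be sketched as (function name is ours):

```python
def requirements_volatility(added: int, deleted: int, modified: int,
                            initial_approved: int) -> float:
    """RV = (added + deleted + modified) / initially approved requirements * 100%."""
    return 100.0 * (added + deleted + modified) / initial_approved
```

A project that adds 5, deletes 2 and modifies 3 requirements against 50 initially approved ones has 20% volatility, a signal that change management deserves attention.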
Defect Rejection Ratio
Regression Defect
Defect Validation
Defect Validation Rate = no. of defects validated per week. This is a measure of QA productivity in terms of validating the fixed defects.
(No. of rejected defects / Total no. of defects) * 100%
No. of defects validated per week per person = (Total no. of validated defects * 40) / (Total hours to validate the defects * No. of resources)
DRE = (Defects removed during development phase / Defects latent in the product) * 100%
Defects latent in the product = Defects removed during development phase + Defects found later by the user
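DRE follows directly from the two lines above; a minimal sketch (function name is ours):

```python
def defect_removal_effectiveness(removed_in_development: int,
                                 found_later_by_users: int) -> float:
    """DRE = defects removed during development / defects latent in the product * 100%,
    where latent defects = those removed in development + those users found later."""
    latent = removed_in_development + found_later_by_users
    return 100.0 * removed_in_development / latent
```

Removing 90 defects in development while users later find 10 gives a DRE of 90%.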
Productivity = KLOC / Person-month
Quality = Defects / KLOC
Cost = $ / LOC
Documentation = Pages of documentation / KLOC
where KLOC is the number of lines of code (in thousands), Person-month is the time (in months) taken by the developers to finish the product, and Defects is the total number of errors discovered.
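The size-normalized project metrics above can be sketched together (function names are ours):

```python
def productivity(kloc: float, person_months: float) -> float:
    """Productivity = KLOC delivered per person-month of effort."""
    return kloc / person_months

def quality(defects: int, kloc: float) -> float:
    """Quality = defects discovered per KLOC (lower is better)."""
    return defects / kloc
```

A 12-KLOC product built in 6 person-months with 30 discovered defects scores 2.0 KLOC per person-month and 2.5 defects per KLOC.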
Premal B. Nirpal and K. V. Kale, A Brief Overview of Software Testing Metrics.
reduces future maintenance efforts. Using early quality indicators based on objective empirical evidence is therefore a realistic objective.
IV. REFERENCES
Lorenz, Mark and Kidd, Jeff, Object-Oriented Software Metrics, Prentice Hall, 1994.
Kadhim M. Breesam, "Metrics for Object-Oriented Design Focusing on Class Inheritance Metrics", IEEE 2nd International Conference on Dependability of Computer Systems (DepCoS-RELCOMEX'07), 2007.
S. R. Chidamber and C. F. Kemerer, "A Metrics Suite for Object-Oriented Design", IEEE Trans. Software Engineering, 1994.
L. Briand, W. Daly and J. Wust, "A Unified Framework for Coupling Measurement in Object-Oriented Systems", IEEE Transactions on Software Engineering.
Y. Lee, B. Liang, S. Wu and F. Wang, "Measuring the Coupling and Cohesion of an Object-Oriented Program Based on Information Flow", 1995.
R. Harrison, S. J. Counsell and R. V. Nithi, "An Evaluation of the MOOD Set of Object-Oriented Software Metrics", IEEE Trans. Software Engineering.
T. McCabe, "A Complexity Measure", IEEE Transactions on Software Engineering, December 1976, pp. 308-320.
Laing, V. and Coleman, C., "Principal Components of Orthogonal Object-Oriented Metrics", Software Assurance Technology Center (SATC), 2001.
Denys Poshyvanyk and Andrian Marcus, "The Conceptual Coupling Metrics for Object-Oriented Systems".
B. Henderson-Sellers, Object-Oriented Metrics: Measures of Complexity, Prentice Hall.
Aine Mitchell and James F. Power, "Toward a Definition of Run-Time Object-Oriented Metrics", 2003.
Sencer Sultanoglu and Umit Karakas, "Software Size Estimating", web document, 1998.
David N. Card, Khaled El Emam and Betsy Scalzo, "Measurement of Object-Oriented Software Development Projects", 2001.
Ljubomir Lazic and Nikos Mastorakis, "Cost Effective Software Test Metrics".
Bennett, Ted L. and Paul W. Wennberg, "Eliminating Embedded Software Defects Prior to Integration Test", Dec. 2005.
V. R. Basili, G. Caldiera and H. D. Rombach, "The Goal Question Metric Approach", Encyclopedia of Software Engineering, Volume 1, John Wiley & Sons, 1994, pp. 528-532.
S. M. K. Quadri and Sheikh Umar Farooq, "Notable Metrics in Software Testing".
K. K. Aggarwal, Yogesh Singh, Arvinder Kaur and Ruchika Malhotra, "Software Design Metrics for Object-Oriented Software", Journal of Object Technology, 2008.
Mariano Ceccato and Paolo Tonella, "Measuring the Effects of Software Aspectization", International Conference on Object Technology, 1998.
Dhama, H., "Quantitative Models of Cohesion and Coupling in Software", in Selected Papers of the Sixth Annual Oregon Workshop on Software Metrics (Silver Falls, Oregon, United States), W. Harrison and R. L. Glass, Eds., Elsevier Science, New York, NY, 1995, pp. 65-74.
Robert Martin, "OO Design Quality Metrics: An Analysis of Dependencies".
Poshyvanyk, D., Marcus, A., Dong, Y. and Sergeyev, A., "IRiSS - A Source Code Exploration Tool", in Industrial and Tool Proceedings of the 21st IEEE International Conference on Software Maintenance, 2005.
Premal B. Nirpal and K. V. Kale, "A Brief Overview of Software Testing Metrics".
Yanping Chen, Robert L. Probert and Kyle Robenson, "Effective Test Metrics for Test Strategy Evolution", Proceedings of the 2004 Conference of the Centre for Advanced Studies on Collaborative Research (CASCON'04).
Arvinder Kaur, Bharti Suri and Abhilasha Sharma, "Software Testing Product Metrics - A Survey".
Norman F. Schneidewind, "Measuring and Evaluating Maintenance Process Using Reliability, Risk, and Test Metrics", IEEE Transactions on Software Engineering.
S. H. Kan, J. Parrish and D. Manclove, "In-process Metrics for Software Testing", IBM Systems Journal, Vol. 40, No. 1, 2009.
George E. Stark, Robert C. Durst and Tammy M. Pelnik, "An Evaluation of Software Testing Metrics for NASA's Mission Control Center", 1992.