
A Survey of OO and Non-OO Metrics

Preethika Devi K., Monica S
Dept. of Computer Science, College of Engineering, Guindy

Abstract:
This paper presents the results derived from our survey on metrics used in object-oriented environments. Our survey includes a small set of the most well-known and commonly applied traditional software metrics that can be applied to object-oriented programming, and a set of object-oriented metrics (i.e. those designed specifically for object-oriented programming). Given the central role that software development plays in the delivery and application of information technology, managers are increasingly focusing on process improvement in the software development area. This demand has spurred the provision of a number of new and/or improved approaches to software development, with perhaps the most prominent being object-orientation (OO). In addition, the focus on process improvement has increased the demand for software measures, or metrics, with which to manage the process. The need for such metrics is particularly acute when an organization is adopting a new technology for which established practices have yet to be developed. A software metric is a measure of some property of a piece of software or its specifications. Since quantitative measurements are essential in all sciences, there is a continuous effort by theoreticians to bring similar approaches to software development. The goal is to obtain objective, reproducible and quantifiable measurements, which may have numerous valuable applications in schedule and budget planning, cost estimation, quality assurance testing, software debugging, software performance optimization, and optimal personnel task assignments.

Key words:
Object Oriented, Design, Inheritance, Metric, Measure, Coupling, Cohesion, Test metrics, Size estimation, Effort estimation, Test effectiveness evaluation.

I. INTRODUCTION

Object-Oriented Analysis and Design (OOAD) of software provides many benefits, such as reusability, decomposition of a problem into easily understood objects, and the aiding of future modifications. But the OOAD software development life cycle is not easier than the typical procedural approach. Therefore, it is necessary to provide dependable guidelines that one may follow to help ensure good OO programming practices and write reliable code. Object-oriented programming metrics are one aspect to be considered: a set of standards against which one can measure the effectiveness of object-oriented analysis techniques in the design of a system.

Five characteristics of object-oriented metrics are the following:
- Localization: operations used in many classes
- Encapsulation: metrics for classes, not modules
- Information Hiding: should be measured and improved
- Inheritance: adds complexity, should be measured
- Object Abstraction: metrics represent the level of abstraction

Software metrics are all about measurement, which, in turn, involves numbers: the use of numbers to make things better, to improve the process of developing software, and to improve all aspects of the management of that process.

Importance of metrics:
- Metrics are used to improve the quality and productivity of products and services, thus achieving customer satisfaction.
- It is easy for management to digest one number and drill down, if required.
- Trends in different metrics act as monitors when the process is going out of control.
- Metrics provide improvement for the current process.

Software metrics hold importance in the testing phase, as software testing metrics act as indicators of software quality and fault proneness. In order to measure actual values such as software size, defect density, verification effectiveness and productivity, records of these values must be maintained. Ideally, these actual values will be tracked against estimates that are made at the start of a project and updated during project execution.

Software metrics are used to evaluate the software development process and the quality of the resulting product. They aid evaluation of the testing process and the software product by providing objective criteria and measurements for management decision making. Their association with early detection and correction of problems makes them important in software.

II. TABLE

The following table gives a brief description of each OO metric: its definition, formula and author.
TABLE 1.1 LIST OF OO METRICS AND THEIR FORMULA

Name: Average Service State Dependency (ASSD)
Description: Used in web service testing, where multiple services share the same state, which can be updated and retrieved by these service components. Ck = 1 if component k is stateful, otherwise Ck = 0.
Formula: ASSD = (1/n) × Σ (k = 1 to n) Ck, where n = total no. of components in the domain and Ck indicates whether component k has state.
Author: Kai Qian, Jigang Liu, "Decoupling Metrics for Service Composition".

Name: Average Service Persistent Dependency (ASPD)
Description: ASPD shows the average number of times a service component ties to other service components indirectly through shared persistent data. The lower the ASPD, the looser the coupling may be. A composite component may consist of basic components and composite components recursively, until it reaches a basic component; this is how a composite service component is composed, either by aggregation or by containment composition.
Formula: ASPD = (1/n) × Σ (0 < i ≤ n, 0 < j ≤ m) P(i, j), where n = total no. of components, m = no. of repositories, and P(i, j) = 1 if service component i participates in persistent data j, otherwise 0.
Author: Kai Qian, Jigang Liu, "Decoupling Metrics for Service Composition".

Name: Required Service Dependency (RSD)
Description: The number of required services a service component depends on in order to provide its own services.
Formula: ---
Author: Kai Qian, Jigang Liu, "Decoupling Metrics for Service Composition".
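The counting behind ASSD can be sketched in a few lines. This is a minimal illustration, assuming each component is represented simply by a boolean "stateful" flag; the function name and data layout are ours, not from the cited paper:

```python
def assd(stateful_flags):
    """Average Service State Dependency: the fraction of components
    in the domain that hold shared state (Ck = 1 if stateful, else 0)."""
    n = len(stateful_flags)
    if n == 0:
        return 0.0
    # Sum of Ck over all components, divided by n.
    return sum(1 for stateful in stateful_flags if stateful) / n
```

For example, a domain with four components of which two are stateful yields ASSD = 0.5.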

Name: Average Required Service Dependency (ARSD)
Description: The average of the required service dependency over all components. The lower the ARSD, the looser the coupling will be.
Formula: ARSD = (1/n) × Σ (i = 1 to n) Ri, where n = total no. of components and Ri = no. of required services that service component i requires to provide its services.
Author: Kai Qian, Jigang Liu, "Decoupling Metrics for Service Composition".

Name: Average Service Invocation Coupling (ASIC)
Description: A web service invocation coupling metric. The lower the ASIC, the looser the coupling between service components. This index also measures portability and performance quality attributes.
Formula: ASIC = (1/n) × Σ (i = 1 to n) ICi, where ICi = Wnb × Nnb + Wb × Nb + Wsyn × Nsyn; Nnb = no. of non-blocking asynchronous operations, Nb = no. of blocking asynchronous operations, Nsyn = no. of synchronous operations, and the W's are the corresponding weights.
Author: Kai Qian, Jigang Liu, "Decoupling Metrics for Service Composition".
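The weighted sum behind ASIC can be illustrated as follows. The weight values used here are purely illustrative assumptions (the paper leaves Wnb, Wb and Wsyn to the analyst), as are the function names:

```python
def invocation_coupling(n_nonblocking, n_blocking, n_sync,
                        w_nb=0.1, w_b=0.5, w_syn=1.0):
    """ICi = Wnb*Nnb + Wb*Nb + Wsyn*Nsyn. The default weights are
    illustrative assumptions, not values from the cited paper."""
    return w_nb * n_nonblocking + w_b * n_blocking + w_syn * n_sync

def asic(components):
    """ASIC = (1/n) * sum of ICi over components.
    components: list of (Nnb, Nb, Nsyn) tuples, one per service component."""
    if not components:
        return 0.0
    return sum(invocation_coupling(*c) for c in components) / len(components)
```

With the default weights, a component making only synchronous calls contributes its full call count, while asynchronous non-blocking calls are discounted.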

Name: Reuse Ratio (U)
Description: The ratio of the number of superclasses to the total number of classes. It should be high.
Formula: U = no. of superclasses / total no. of classes.
Author: B. Henderson-Sellers, Object-Oriented Metrics: Measures of Complexity. Prentice Hall, 1996.

Name: Specialisation Ratio (S)
Description: The ratio of the number of subclasses to the total number of superclasses.
Formula: S = no. of subclasses / no. of superclasses.
Author: B. Henderson-Sellers, Object-Oriented Metrics: Measures of Complexity. Prentice Hall, 1996.

Name: Polymorphism Factor (PF)
Description: The PF metric is proposed as a measure of polymorphism. It measures the degree of method overriding in the class inheritance tree.
Formula: PF = Σ Mo(Ci) / Σ [Mn(Ci) × DC(Ci)], where Mo(Ci) = no. of overriding methods, Mn(Ci) = no. of new methods, and DC(Ci) = descendant count of class Ci.
Author: R. Harrison, S. J. Counsell, and R. V. Nithi, "An Evaluation of the MOOD Set of Object-Oriented Software Metrics". IEEE Trans. Software Engineering.
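The two ratios above can be computed from a simple child-to-parent map. In this sketch (the function name and data layout are our assumptions), any class that appears as someone's parent counts as a superclass, and any class with a parent counts as a subclass:

```python
def inheritance_ratios(parents):
    """Compute reuse ratio U and specialisation ratio S.
    parents: dict mapping each class name to its parent (None for roots)."""
    classes = set(parents) | {p for p in parents.values() if p is not None}
    supers = {p for p in parents.values() if p is not None}   # classes that are inherited from
    subs = {c for c, p in parents.items() if p is not None}   # classes that inherit
    u = len(supers) / len(classes) if classes else 0.0
    s = len(subs) / len(supers) if supers else 0.0
    return u, s
```

For a hierarchy A <- B, A <- C there is one superclass and two subclasses out of three classes, so U = 1/3 and S = 2.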


Name: Number of Methods Overridden by a Subclass (NMO)
Description: When a method in a subclass has the same name and type signature as a method in its superclass, the method in the superclass is said to be overridden by the method in the subclass. NMO counts such methods.
Formula: ---
Author: B. Henderson-Sellers, Object-Oriented Metrics: Measures of Complexity. Prentice Hall, 1996.

Name: Number of Attributes per Class (NOA)
Description: The total number of attributes present in a particular class.
Formula: NOA = total no. of attributes present in a class.
Author: B. Henderson-Sellers, Object-Oriented Metrics: Measures of Complexity. Prentice Hall, 1996.

Name: Number of Methods per Class (NOM)
Description: The total number of methods present in a particular class.
Formula: NOM = total no. of methods present in a class.
Author: B. Henderson-Sellers, Object-Oriented Metrics: Measures of Complexity. Prentice Hall, 1996.

Name: Weighted Methods per Class (WMC)
Description: The WMC is the sum of the complexities of all methods in a class.
Formula: WMC = Σ (i = 1 to n) Ci, where n = no. of methods in the class and Ci = complexity of method i.
Author: S. R. Chidamber and C. F. Kemerer, "A Metrics Suite for Object-Oriented Design". IEEE Trans. Software Engineering, 1994.

Name: Response for a Class (RFC)
Description: The response set of a class is defined as the set of methods that can potentially be executed in response to a message received by an object of that class.
Formula: RFC = M ∪ (all i) {Ri}, where M = set of all methods in the class and {Ri} = set of methods called by method Mi.
Author: S. R. Chidamber and C. F. Kemerer, "A Metrics Suite for Object-Oriented Design". IEEE Trans. Software Engineering, 1994.

Name: Coupling Between Objects (CBO)
Description: CBO for a class is a count of the number of other classes to which it is coupled. Two classes are coupled when methods declared in one class use methods or instance variables defined by the other class.
Formula: ---
Author: S. R. Chidamber and C. F. Kemerer, "A Metrics Suite for Object-Oriented Design". IEEE Trans. Software Engineering, 1994.

Name: Data Abstraction Coupling (DAC)
Description: Data abstraction is a technique of creating new data types suited to an application to be programmed. It provides the ability to create user-defined data types called Abstract Data Types (ADTs); DAC counts the ADTs defined in a class.
Formula: DAC = no. of Abstract Data Types (ADTs) defined in the class.
Author: B. Henderson-Sellers, Object-Oriented Metrics: Measures of Complexity. Prentice Hall, 1996.

Name: Message Passing Coupling (MPC)
Description: The number of send statements defined in a class.
Formula: MPC = no. of send statements in a class.
Author: B. Henderson-Sellers, Object-Oriented Metrics: Measures of Complexity. Prentice Hall, 1996.

Name: Coupling Factor (CF)
Description: The ratio of the actual number of couplings in the system not imputable to inheritance to the maximum possible number of couplings.
Formula: CF = Σi Σj is_client(Ci, Cj) / (TC² − TC), where TC = total no. of classes and is_client(Ci, Cj) = 1 if a client relationship exists between Ci and Cj, otherwise 0.
Author: R. Harrison, S. J. Counsell, and R. V. Nithi, "An Evaluation of the MOOD Set of Object-Oriented Software Metrics". IEEE Trans. Software Engineering.

Name: Lack of Cohesion in Methods (LCOM)
Description: LCOM measures the dissimilarity of methods in a class by looking at the instance variables or attributes used by its methods. A high value of LCOM implies that classes are less cohesive, so a low value of LCOM is desirable.
Formula: LCOM = |P| − |Q| if |P| > |Q|, otherwise 0, where P = {(Ii, Ij) : Ii ∩ Ij = ∅}, Q = {(Ii, Ij) : Ii ∩ Ij ≠ ∅}, and Ii = set of all instance variables used by method Mi.
Author: S. R. Chidamber and C. F. Kemerer, "A Metrics Suite for Object-Oriented Design". IEEE Trans. Software Engineering, 1994.

Name: Tight Class Cohesion (TCC)
Description: TCC is defined as the percentage of pairs of public methods of the class with common attribute usage.
Formula: TCC = [no. of method pairs with common attribute usage / total no. of method pairs] × 100.
Author: L. Briand, J. Daly and J. Wüst, "A Unified Framework for Coupling Measurement in Object-Oriented Systems". IEEE Transactions on Software Engineering.
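The LCOM definition above translates directly into code. This is a minimal sketch (the function name and input layout are our assumptions), where a class is represented as a map from method names to the sets of instance variables they use:

```python
from itertools import combinations

def lcom(method_vars):
    """Chidamber & Kemerer LCOM: |P| - |Q| if |P| > |Q|, else 0.
    method_vars: dict mapping method name -> set of instance variables used.
    P counts method pairs sharing no variables; Q counts pairs sharing some."""
    p = q = 0
    for (_, vars1), (_, vars2) in combinations(method_vars.items(), 2):
        if set(vars1) & set(vars2):
            q += 1   # pair with common attribute usage
        else:
            p += 1   # disjoint pair
    return p - q if p > q else 0
```

For three methods where only the first two share a variable, P = 2 and Q = 1, so LCOM = 1; a fully cohesive class yields 0.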

Name: Loose Class Cohesion (LCC)
Description: LCC is defined as the percentage of pairs of public methods of the class which are directly or indirectly connected. It is the same as TCC, but it also considers indirect connections between methods.
Formula: ---
Author: L. Briand, J. Daly and J. Wüst, "A Unified Framework for Coupling Measurement in Object-Oriented Systems". IEEE Transactions on Software Engineering.

Name: Information-Flow-Based Cohesion (IFC/ICH)
Description: ICH for a class is defined as the number of invocations of other methods of the same class, weighted by the number of parameters of the invoked method.
Formula: ---
Author: Y. Lee, B. Liang, S. Wu and F. Wang, "Measuring the Coupling and Cohesion of an Object-Oriented Program Based on Information Flow", 1995.

Name: Information-Flow-Based Inheritance Coupling (IFCIC)
Description: The same as IFC, but it only counts method invocations of ancestors of classes.
Formula: ---
Author: Y. Lee, B. Liang, S. Wu and F. Wang, "Measuring the Coupling and Cohesion of an Object-Oriented Program Based on Information Flow", 1995.

Name: RFC-1
Description: The same as RFC, except that indirectly invoked methods are not included.
Formula: ---
Author: S. R. Chidamber and C. F. Kemerer, "A Metrics Suite for Object-Oriented Design". IEEE Trans. Software Engineering, 1994.

Name: Depth of Inheritance Tree (DIT)
Description: The depth of a class within the inheritance hierarchy is the maximum number of steps from the class node to the root of the tree, measured by the number of ancestor classes. In cases involving multiple inheritance, the DIT is the maximum length from the node to the root of the tree.
Formula: DIT = no. of ancestor classes.
Author: S. R. Chidamber and C. F. Kemerer, "A Metrics Suite for Object-Oriented Design". IEEE Trans. Software Engineering, 1994.

Name: Number of Children (NOC)
Description: The NOC is the number of immediate subclasses of a class in a hierarchy.
Formula: NOC = no. of immediate subclasses.
Author: S. R. Chidamber and C. F. Kemerer, "A Metrics Suite for Object-Oriented Design". IEEE Trans. Software Engineering, 1994.

Name: Method Inheritance Factor (MIF)
Description: A system-level metric. MIF is defined as the ratio of the sum of the inherited methods in all classes of the system to the total number of available methods.
Formula: MIF = Σ Mi(Ci) / Σ Ma(Ci), where Ma(Ci) = Mi(Ci) + Md(Ci), Md(Ci) = no. of methods declared in class Ci, Mi(Ci) = no. of methods inherited in class Ci, and the sums are taken over all TC classes (TC = total no. of classes).
Author: R. Harrison, S. J. Counsell, and R. V. Nithi, "An Evaluation of the MOOD Set of Object-Oriented Software Metrics". IEEE Trans. Software Engineering.
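DIT and NOC are simple to compute from a single-inheritance parent map. A minimal sketch (names and data layout are our assumptions):

```python
def dit(cls, parents):
    """Depth of Inheritance Tree: number of ancestors between cls and the root.
    parents: dict mapping each class to its parent (None or absent for roots)."""
    depth = 0
    parent = parents.get(cls)
    while parent is not None:
        depth += 1
        parent = parents.get(parent)
    return depth

def noc(cls, parents):
    """Number of Children: count of classes whose direct parent is cls."""
    return sum(1 for _, p in parents.items() if p == cls)
```

For the chain A <- B <- C, DIT(C) = 2 and NOC(A) = 1. Under multiple inheritance, DIT would instead take the maximum path length, which this single-parent sketch does not model.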

Name: Attribute Inheritance Factor (AIF)
Description: AIF is defined as the ratio of the sum of inherited attributes in all classes of the system to the total number of available attributes for all classes. It is defined in a manner analogous to MIF and provides an indication of the impact of inheritance in object-oriented software.
Formula: AIF = Σ Ai(Ci) / Σ Aa(Ci), where Aa(Ci) = Ai(Ci) + Ad(Ci), Ad(Ci) = no. of attributes declared in class Ci, Ai(Ci) = no. of attributes inherited in class Ci, and TC = total no. of classes.
Author: R. Harrison, S. J. Counsell, and R. V. Nithi, "An Evaluation of the MOOD Set of Object-Oriented Software Metrics". IEEE Trans. Software Engineering.

Name: Method Hiding Factor (MHF)
Description: The MHF metric states the sum of the invisibilities of all methods in all classes. The invisibility of a method is the percentage of the total classes from which the method is hidden. If the value of MHF is high (100%), all methods are private, which indicates very little functionality; it is not possible to reuse methods with high MHF.
Formula: MHF = Σ Mh(Ci) / Σ Md(Ci), where Md(Ci) = no. of methods declared in class Ci, Mv(Ci) = no. of methods visible, Mh(Ci) = no. of methods hidden, and Md(Ci) = Mv(Ci) + Mh(Ci).
Author: R. Harrison, S. J. Counsell, and R. V. Nithi, "An Evaluation of the MOOD Set of Object-Oriented Software Metrics". IEEE Trans. Software Engineering.

Name: Attribute Hiding Factor (AHF)
Description: The AHF metric shows the sum of the invisibilities of all attributes in all classes. The invisibility of an attribute is the percentage of the total classes from which this attribute is hidden. If the value of AHF is high (100%), all attributes are private; a low value (0%) indicates all attributes are public.
Formula: AHF = Σ Ah(Ci) / Σ Ad(Ci), where Ad(Ci) = no. of attributes declared in class Ci, Av(Ci) = no. of attributes visible, Ah(Ci) = no. of attributes hidden, and Ad(Ci) = Av(Ci) + Ah(Ci).
Author: R. Harrison, S. J. Counsell, and R. V. Nithi, "An Evaluation of the MOOD Set of Object-Oriented Software Metrics". IEEE Trans. Software Engineering.

Name: Data Abstraction Coupling (DAC, class-counting variant)
Description: This counts the unique classes used.
Formula: ---
Author: S. R. Chidamber and C. F. Kemerer, "A Metrics Suite for Object-Oriented Design". IEEE Trans. Software Engineering, 1994.
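A simplified MHF computation, using the hidden/declared ratio given in the formula above, can be sketched as follows. This ignores the per-class visibility-percentage refinement of the full MOOD definition; the function name and input layout are our assumptions:

```python
def mhf(classes):
    """Simplified Method Hiding Factor: sum of hidden methods over sum of
    declared methods, across all classes.
    classes: dict mapping class name -> dict of {method name: is_hidden}."""
    declared = sum(len(methods) for methods in classes.values())
    hidden = sum(
        sum(1 for is_hidden in methods.values() if is_hidden)
        for methods in classes.values()
    )
    return hidden / declared if declared else 0.0
```

AHF would be computed identically over attribute visibility flags instead of method flags.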

Name: McCabe Cyclomatic Complexity (MCC)
Description: It directly measures the number of linearly independent paths through a program's source code. The concept, although not the method, is somewhat similar to that of general text complexity as measured by the Flesch-Kincaid readability test.
Formula: MCC = E − N + 2P, where E = no. of edges, N = no. of nodes and P = no. of connected components of the control-flow graph.
Author: T. J. McCabe, "A Complexity Measure". IEEE Transactions on Software Engineering, December 1976, pp. 308-320.

Name: Source Lines of Code (SLOC)
Description: SLOC is used to estimate the total effort that will be needed to develop a program, as well as to calculate approximate productivity. The SLOC metric measures the number of physical lines of active code, that is, excluding blank and comment lines.
Formula: SLOC = no. of non-blank, non-comment lines of code.
Author: Lorenz, Mark & Kidd, Jeff: Object-Oriented Software Metrics, Prentice Hall, 1994.

Name: Comment Percent (CP)
Description: The CP metric is defined as the number of commented lines of code divided by the number of non-blank lines of code; that is, the total number of comment lines divided by the total lines of code less the number of blank lines.
Formula: CP = no. of commented lines / no. of non-blank lines of code.
Author: Laing V., Coleman C.: "Principal Components of Orthogonal Object-Oriented Metrics", Software Assurance Technology Center (SATC), 2001.
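SLOC and CP are straightforward line counters. The sketch below (our function name) handles only languages whose comments start a line with a single prefix; block comments and trailing comments would need a real parser:

```python
def sloc_and_cp(source, comment_prefix="#"):
    """Return (SLOC, CP) for a source string.
    SLOC = non-blank, non-comment lines; CP = comment lines / non-blank lines."""
    lines = [line.strip() for line in source.splitlines()]
    non_blank = [line for line in lines if line]
    comments = [line for line in non_blank if line.startswith(comment_prefix)]
    sloc = len(non_blank) - len(comments)
    cp = len(comments) / len(non_blank) if non_blank else 0.0
    return sloc, cp
```

A file with two code lines, two comment lines and one blank line yields SLOC = 2 and CP = 0.5.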

Name: Specialisation Index per Class (SIX)
Description: The specialisation index measures the extent to which subclasses override (replace) the behaviour of their superclasses. SIX provides a measure of the quality of subclassing.
Formula: SIX = [no. of overridden methods × class hierarchy nesting level] / total no. of methods.
Author: Lorenz, Mark & Kidd, Jeff: Object-Oriented Software Metrics, Prentice Hall, 1994.

Name: Number of Send Messages (NOM)
Description: NOM measures the number of messages sent in a method, segregated by type of message. The types include: unary, messages with no arguments; binary, messages with one argument, separated by a special selector name (concatenation and math functions); and keyword, messages with one or more arguments.
Formula: ---
Author: Lorenz, Mark & Kidd, Jeff: Object-Oriented Software Metrics, Prentice Hall, 1994.

Name: Conceptual Similarity between Methods (CSM)
Description: The conceptual similarity between methods mk ∈ M(C) and mj ∈ M(C), CSM(mk, mj), is computed as the cosine between the vectors vmk and vmj corresponding to mk and mj in the semantic space constructed by LSI.
Formula: CSM(mk, mj) = cosine of the angle between vmk and vmj.
Author: Denys Poshyvanyk, Andrian Marcus, "The Conceptual Coupling Metrics for Object-Oriented Systems".
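The SIX formula above is a one-line computation once the three counts are available. A minimal sketch (function name assumed by us):

```python
def six(n_overridden, nesting_level, total_methods):
    """Specialisation Index: (overridden methods * class hierarchy
    nesting level) / total methods. Returns 0.0 for a class with no methods."""
    if total_methods == 0:
        return 0.0
    return (n_overridden * nesting_level) / total_methods
```

A class at nesting level 3 that overrides 2 of its 12 methods scores SIX = 0.5; higher values suggest heavier behaviour replacement deep in the hierarchy.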

Name: Conceptual Similarity between a Method and a Class (CSMC)
Description: The conceptual similarity between a method mk and a class cj, computed over the methods M(cj) = {mj1, ..., mjt} of cj, where t = |M(cj)|.
Formula: ---
Author: Denys Poshyvanyk, Andrian Marcus, "The Conceptual Coupling Metrics for Object-Oriented Systems".

Name: Conceptual Similarity between Two Classes (CSBC)
Description: Let ck ∈ C and cj ∈ C be two distinct (ck ≠ cj) classes in the system. Each class has a set of methods, M(ck) = {mk1, ..., mkr} with r = |M(ck)| and M(cj) = {mj1, ..., mjt} with t = |M(cj)|. Between every pair of methods (mk, mj) there is a similarity measure CSM(mk, mj). CSBC is the average of the similarity measures between all unordered pairs of methods from class ck and class cj. The definition ensures that the conceptual similarity between two classes is symmetrical, as CSBC(ck, cj) = CSBC(cj, ck).
Formula: ---
Author: Denys Poshyvanyk, Andrian Marcus, "The Conceptual Coupling Metrics for Object-Oriented Systems".

Name: Conceptual Coupling of a Class (CoCC)
Description: It is measured by the degree to which the methods of a class c are conceptually related to the methods of the other classes di ∈ C, di ≠ c, where n = no. of classes in the system.
Formula: ---
Author: Denys Poshyvanyk, Andrian Marcus, "The Conceptual Coupling Metrics for Object-Oriented Systems".

Name: Number of Associations (NAS)
Description: It measures the number of associations between a class and its peers.
Formula: ---
Author: R. Harrison, S. Counsell, R. Nithi, "Coupling Metrics for OO Design", 1998.

Name: Average Inheritance Depth (AID)
Description: The average of the inheritance depth over all classes. The depth of a class is 1 plus that of its parent classes, and 0 for classes without parents.
Formula: AID = Σ DIT / n, where DIT = inheritance depth of a class and n = no. of classes.
Author: B. Henderson-Sellers, Object-Oriented Metrics: Measures of Complexity. Prentice Hall, 1996.

Name: Class-to-Leaf Depth (CLD)
Description: The maximum number of levels in the hierarchy that are below the class.
Formula: CLD = maximum number of levels in the hierarchy below the class.
Author: J. Bansiya and C. G. Davis, "A Hierarchical Model for Object-Oriented Design Quality Assessment". IEEE Transactions on Software Engineering, Vol. 28, No. 1, 2002.

Name: Number of Parents (NOP)
Description: The number of classes that a given class directly inherits from.
Formula: NOP = no. of parents of a particular class.
Author: J. Bansiya and C. G. Davis, "A Hierarchical Model for Object-Oriented Design Quality Assessment". IEEE Transactions on Software Engineering, Vol. 28, No. 1, 2002.

Name: Number of Descendants (NOD)
Description: The number of classes directly or indirectly inherited from this class.
Formula: ---
Author: Li, W. and Henry, S., "Object-oriented metrics that predict maintainability". Journal of Systems and Software, vol. 23, no. 2, 1993.

Name: Number of Ancestors (NOA)
Description: The number of classes from which the class is inherited, directly or indirectly.
Formula: ---
Author: Li, W. and Henry, S., "Object-oriented metrics that predict maintainability". Journal of Systems and Software, vol. 23, no. 2, 1993.

Name: Number of Methods Inherited (NMinh)
Description: The number of methods in a class that the class inherits from its ancestors and does not override.
Formula: ---
Author: Li, W. and Henry, S., "Object-oriented metrics that predict maintainability". Journal of Systems and Software, vol. 23, no. 2, 1993.

Name: Number of Methods Added (NMA)
Description: The number of methods that are neither inherited nor overridden.
Formula: ---
Author: Li, W. and Henry, S., "Object-oriented metrics that predict maintainability". Journal of Systems and Software, vol. 23, no. 2, 1993.

Name: Static Polymorphism in Ancestors (SPA)
Description: The number of function members that implement the same operator in ancestors and in the current class (at compile time).
Formula: ---
Author: Arisholm, E., Briand, L. C., and Foyen, A., "Dynamic Coupling Measurement for Object-Oriented Software". IEEE Transactions on Software Engineering.

Name: Dynamic Polymorphism in Ancestors (DPA)
Description: The number of function members that implement the same operator in ancestors and in the current class (at run time).
Formula: ---
Author: Arisholm, E., Briand, L. C., and Foyen, A., "Dynamic Coupling Measurement for Object-Oriented Software". IEEE Transactions on Software Engineering.

Name: Static Polymorphism in Descendants (SPD)
Description: The number of function members that implement the same operator in descendants and in the current class (at compile time).
Formula: ---
Author: Arisholm, E., Briand, L. C., and Foyen, A., "Dynamic Coupling Measurement for Object-Oriented Software". IEEE Transactions on Software Engineering.

Name: Dynamic Polymorphism in Descendants (DPD)
Description: The number of function members that implement the same operator in descendants and in the current class (at run time).
Formula: ---
Author: Arisholm, E., Briand, L. C., and Foyen, A., "Dynamic Coupling Measurement for Object-Oriented Software". IEEE Transactions on Software Engineering.

Name: Static Polymorphism (SP)
Description: The sum of Static Polymorphism in Ancestors and Static Polymorphism in Descendants.
Formula: SP = SPA + SPD.
Author: Arisholm, E., Briand, L. C., and Foyen, A., "Dynamic Coupling Measurement for Object-Oriented Software". IEEE Transactions on Software Engineering.

Name: Dynamic Polymorphism (DP)
Description: The sum of Dynamic Polymorphism in Ancestors and Dynamic Polymorphism in Descendants.
Formula: DP = DPA + DPD.
Author: Arisholm, E., Briand, L. C., and Foyen, A., "Dynamic Coupling Measurement for Object-Oriented Software". IEEE Transactions on Software Engineering.

Name: Overloading in Standalone Classes (OVO)
Description: A count of the number of functions with the same name in a class.
Formula: ---
Author: Arisholm, E., Briand, L. C., and Foyen, A., "Dynamic Coupling Measurement for Object-Oriented Software". IEEE Transactions on Software Engineering.

Name: Export Object Coupling (EOC)
Description: EOC with respect to a scenario x, between objects oi and oj, is the percentage of the number of messages sent from oi to oj relative to the total number of messages exchanged.
Formula: EOCx(oi, oj) = [ |Mx(oi, oj)| / MTx ] × 100, where Mx(oi, oj) = the number of messages sent from oi to oj and MTx = the total number of messages exchanged during the execution of scenario x.
Author: S. Yacoub, H. Ammar, and T. Robinson, "Dynamic Metrics for Object-Oriented Designs". Proc. IEEE 6th International Symposium on Software Metrics (Metrics'99), pp. 50-61.

Name: Import Object Coupling (IOC)
Description: IOC with respect to a scenario x, between objects oi and oj, is the percentage of the number of messages received by oi from oj relative to the total number of messages exchanged.
Formula: IOCx(oi, oj) = [ |Mx(oi, oj)| / MTx ] × 100, where Mx(oi, oj) = the number of messages received by oi from oj and MTx = the total number of messages exchanged during the execution of scenario x.
Author: S. Yacoub, H. Ammar, and T. Robinson, "Dynamic Metrics for Object-Oriented Designs". Proc. IEEE 6th International Symposium on Software Metrics (Metrics'99), pp. 50-61.

Name: Dynamic CBO of a Class
Description: This metric is a direct translation of the C&K CBO metric, except that it is defined at run time.
Formula: Dynamic CBO = no. of couplings of a class with other classes at run time.
Author: Misook Choi, JongSuk Lee, "A Dynamic Coupling for Reusable and Efficient Software System". 5th ACIS International Conference on Software Engineering Research, Management & Applications.
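Given a recorded message trace for one scenario, EOC reduces to counting directed messages. A minimal sketch (function name and trace layout assumed by us); IOC would be the same computation with sender and receiver swapped:

```python
def eoc(trace, sender, receiver):
    """Export Object Coupling for one scenario.
    trace: list of (sender, receiver) pairs, one per message exchanged.
    Returns the percentage of messages sent from `sender` to `receiver`."""
    if not trace:
        return 0.0
    sent = sum(1 for s, r in trace if s == sender and r == receiver)
    return 100.0 * sent / len(trace)
```

In a four-message trace where two messages go from object "a" to object "b", EOC(a, b) = 50%.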

Name: Degree of Dynamic Coupling between Two Classes at Run Time (DDCR)
Description: The number of times class A accesses methods or instance variables of class B, as a percentage of the total number of methods or instance variables accessed by class A.
Formula: DDCR = [no. of times A accesses methods of class B / total no. of methods accessed by class A] × 100.
Author: Guru Nandha Rao, "Measurement of Dynamic Coupling in an Object-Oriented System". American Journal of Scientific Research.

Name: Degree of Dynamic Coupling from a Given Set of Classes
Description: This is an extension of the above metric to indicate the level of dynamic coupling within a set of classes.
Formula: [sum of no. of accesses to methods or instance variables outside each class / sum of total no. of accesses from these classes] × 100.
Author: Guru Nandha Rao, "Measurement of Dynamic Coupling in an Object-Oriented System". American Journal of Scientific Research.

Name: Runtime Import Coupling between Objects (RI)
Description: Runtime import coupling between objects.
Formula: RI = no. of classes from which a given class accesses methods or instance variables at run time.
Author: Misook Choi, JongSuk Lee, "A Dynamic Coupling for Reusable and Efficient Software System". 5th ACIS International Conference on Software Engineering Research, Management & Applications.

Name: Runtime Import Degree of Coupling (RDI)
Description: Runtime import degree of coupling.
Formula: RDI = no. of accesses made by a class / total no. of accesses.
Author: Misook Choi, JongSuk Lee, "A Dynamic Coupling for Reusable and Efficient Software System". 5th ACIS International Conference on Software Engineering Research, Management & Applications.

Name: Runtime Export Coupling between Objects (RE)
Description: Runtime export coupling between objects.
Formula: RE = no. of classes which access methods or instance variables from a given class at run time.
Author: Misook Choi, JongSuk Lee, "A Dynamic Coupling for Reusable and Efficient Software System". 5th ACIS International Conference on Software Engineering Research, Management & Applications.

Name: Runtime Export Degree of Coupling (RDE)
Description: Runtime export degree of coupling.
Formula: RDE = no. of accesses made to a class / total no. of accesses.
Author: Misook Choi, JongSuk Lee, "A Dynamic Coupling for Reusable and Efficient Software System". 5th ACIS International Conference on Software Engineering Research, Management & Applications.

Name: Coupling Between Services Metric (CBS)
Description: CBS is built directly from the CBO metric in the C&K metrics suite. For a service A, the CBS metric is calculated based on the number of relationships between A and the other services in the system.
Formula: Computed over all services Bj, where n = the number of services in the system and A→Bj = 0 if A does not connect to Bj, A→Bj = 1 if A connects to Bj.
Author: Pham Thi Quynh, Huynh Quyet Thang, "Dynamic Coupling Metric for Service-Oriented Software". International Journal on Electronics Engineering, 2009.

Name: Instability Metric for Service (IMS)
Description: Measures the instability of a service.
Formula: Defined in terms of fan.in = the number of functions that call function A, and fan.out = the number of functions that are called by function A.
Author: Joost Visser (Departamento de Informática, Universidade do Minho, Braga, Portugal), "Structure Metrics for XML Schema".

Name: Degree of Coupling between 2 Services Metric (DC2S)
Description: The DC2S metric identifies the relationship between two services to detect the dependency between them, i.e. the level of coupling between two services at run time. As a dynamic metric, the weight of an edge is the number of invocations from the requesting service to the providing service. When the value of this metric is low, the coupling in the system is loose, and vice versa. This metric helps to distinguish between two systems which have the same nodes but differ in the connections between the nodes.
Formula: Computed from n = no. of services in the system and N(A, Bi) = no. of connections from service A to Bi.
Author: Pham Thi Quynh, Huynh Quyet Thang, "Dynamic Coupling Metric for Service-Oriented Software". International Journal on Electronics Engineering, 2009.

Name: Degree of Coupling within a Given Set of Services Metric (DCSS)
Description: Measures the degree of coupling within a given set of services, normalised between the extreme values Max and Min.
Formula: Max = K × |V| × (|V| − 1); Min = |V| × (|V| − 1); d(u, v) = length of the shortest path from u to v; K = maximum value of the shortest-path length between any two nodes; V = vertex set of the graph G(U, V).
Author: Pham Thi Quynh, Huynh Quyet Thang, "Dynamic Coupling Metric for Service-Oriented Software". International Journal on Electronics Engineering, 2009.
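DCSS is built on the pairwise shortest-path lengths d(u, v) over the service graph. The d(u, v) computation can be sketched with breadth-first search over an unweighted graph (function name and graph layout are our assumptions):

```python
from collections import deque

def shortest_paths(graph):
    """All-pairs shortest-path lengths d(u, v) by BFS.
    graph: dict mapping node -> iterable of neighbour nodes (unweighted edges).
    Returns a dict keyed by (source, destination) for every reachable pair."""
    dist = {}
    for src in graph:
        seen = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in graph.get(u, ()):
                if v not in seen:
                    seen[v] = seen[u] + 1
                    queue.append(v)
        for dst, d in seen.items():
            if dst != src:
                dist[(src, dst)] = d
    return dist
```

From these distances, K is the maximum value, and the Max/Min bounds above follow from the vertex count; unreachable pairs simply have no entry.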

Name: Operation Hiding Effectiveness Factor (OHEF)
Description: It comes under the MOOD2 metrics suite. OHEF measures the goodness of the scope settings on class operations, i.e. methods. When OHEF = 1, the scope settings are perfect.
Formula: OHEF = classes that do access operations / classes that can access operations.
Author: Fernando Brito e Abreu: "Using OCL to Formalize Object-Oriented Metrics Definitions". Technical Report ES007.

Name: Attribute Hiding Effectiveness Factor (AHEF)
Description: AHEF is related to AHF. AHF measures the general level of attribute hiding, whereas AHEF measures how well the hiding succeeds.
Formula: AHEF = classes that do access attributes / classes that can access attributes.
Author: Fernando Brito e Abreu: "Using OCL to Formalize Object-Oriented Metrics Definitions". Technical Report ES007.

Name: Internal Inheritance Factor (IIF)
Description: IIF measures the amount of internal inheritance in a system. Internal inheritance happens when a class inherits another class in the same system. If there is no inheritance, IIF = 0.
Formula: IIF = classes that inherit a VB class / all classes that inherit something.
Author: Fernando Brito e Abreu: "Using OCL to Formalize Object-Oriented Metrics Definitions". Technical Report ES007.

Name: Parametric Polymorphism Factor (PPF)
Description: This metric is simply the percentage of the classes that are parametrized. A parametrized class is also called a generic class.
Formula: PPF = parametrized classes / all classes.
Author: Fernando Brito e Abreu: "Using OCL to Formalize Object-Oriented Metrics Definitions". Technical Report ES007.

Number of interfaces implemented by class (IMPL)

R. Marinescu. A Multi-Layered System of Metrics for the Measurement of Reuse by Inheritance. Paper submitted to TOOLS USA'99, March 1999.

Non-private methods defined by class(WMCnp)

WMCnp=WMC- private methods WMC=weighted methods per class

Yu, Z. and Rajlich, V., "Hidden Dependencies in Program Comprehension and Change Propagation",

Methods defined and inherited by class (WMCi) Variables defined class(VARS) by

WMCi=WMC+inherited methods WMC=weighted methods per class VARS=no.of variables and arrays defined in the class

Yu, Z. and Rajlich, V., "Hidden Dependencies in Program Comprehension and Change Propagation", Yu, Z. and Rajlich, V., "Hidden Dependencies in Program Comprehension and

Non-private variables defined by class (VARSnp): The same as VARS, but private variables are excluded. Formula: VARSnp = VARS - private variables, where VARS = variables defined by class (inherited and procedure-level variables are not counted).

Variables defined and inherited by class (VARSi): The same as VARS, but inherited variables are included. Formula: VARSi = VARS + inherited variables.

Events defined by class (EVENT): Number of Event statements (event definitions) in a class. Inherited events and event handlers are not counted.

Constructors defined by class (CTORS): Number of constructors (Sub New) in a class. Class_Initialize in VB Classic is an event handler and is not counted in CTORS.

Class size (CSZ): Number of methods plus variables defined by the class; measures the size of the class in terms of operations and data. Formula: CSZ = WMC + VARS, where WMC = weighted methods per class.

Class interface size (CIS): Number of non-private methods plus variables defined by the class; measures the size of the interface from other parts of the system to the class. If zero, no OO metrics are meaningful. Formula: CIS = WMCnp + VARSnp, where WMCnp = non-private methods defined by class.

(References for these rows: Yu, Z. and Rajlich, V., "Hidden Dependencies in Program Comprehension and Change Propagation"; Poshyvanyk, D., Marcus, A., Dong, Y., and Sergeyev, A., "IRiSS - A Source Code Exploration Tool", Industrial and Tool Proceedings of the 21st IEEE International Conference on Software Maintenance, 2005.)
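As a minimal sketch of the two size formulas above (all member counts are invented illustrative values, not data from the survey):

```python
# Hypothetical member counts for one class (illustrative values only).
wmc = 12       # WMC: weighted methods per class (each method weighted 1 here)
wmc_np = 7     # WMCnp: non-private methods defined by the class
vars_all = 9   # VARS: variables defined by the class
vars_np = 2    # VARSnp: non-private variables

csz = wmc + vars_all    # class size: operations + data
cis = wmc_np + vars_np  # class interface size: what the rest of the system sees
print(csz, cis)  # 21 9
```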

Number of classes (CLS): Number of classes defined in the project.

Number of abstract classes (CLSa): Number of abstract classes defined in the project.

Number of concrete classes (CLSc): Number of concrete classes defined in the project; a concrete class is one that is not abstract. Formula: CLSc = CLS - CLSa.

Number of root classes (ROOTS): Number of distinct class hierarchies.

Number of leaf classes (LEAFS): A leaf class is one that other classes don't inherit from.

Number of interfaces (INTERFS): Number of .NET interfaces. Abstract classes are not counted as interfaces even though they can be thought of as interfaces.

(Reference for these rows: Poshyvanyk, D., Marcus, A., Dong, Y., and Sergeyev, A., "IRiSS - A Source Code Exploration Tool", Industrial and Tool Proceedings of the 21st IEEE International Conference on Software Maintenance, 2005.)

Maximum depth of inheritance tree (maxDIT): The maximum value of the Depth of Inheritance Tree (DIT) of the C&K metrics over all classes; maxDIT should not exceed 6. (Succi, G., Pedrycz, W., Djokic, S., Zuliani, P., and Russo, B., "An Empirical Exploration of the Distributions of the Chidamber and Kemerer Object-Oriented Metrics Suite", Empirical Software Eng., 10(1), Jan. 2005.)

Afferent coupling (Ca): The number of classes outside a module Mi that depend on classes inside Mi.

Efferent coupling (Ce): The number of classes inside a module Mi that depend on classes outside Mi.

Instability (I): The ratio of efferent coupling to the total of efferent and afferent coupling: I = Ce / (Ca + Ce). (Robert Martin, "OO Design Quality Metrics: An Analysis of Dependencies".)

Module coupling (MC): The inverse of the sum of the numbers of input parameters, output parameters, global variables, modules called, and modules calling: MC = 1 / (Pi + Po + G + Mcalled + Mcalling), where Pi = number of input parameters, Po = number of output parameters, G = number of global variables, Mcalled = number of modules called from the class Ci, and Mcalling = number of modules calling the class Ci. A value of 0.5 indicates low coupling; 0.001 indicates high coupling. (Dhama, H., "Quantitative Models of Cohesion and Coupling in Software", in Selected Papers of the Sixth Annual Oregon Workshop on Software Metrics (Silver Falls, Oregon), W. Harrison and R. L. Glass, Eds., Elsevier Science, New York, 1995, 65-74.)

Person days per class (PDC): Used to estimate the effort required to develop a system.

Classes per developer (CPD): An estimate of how much code a single developer can reasonably expect to own: CPD = number of key classes / number of people. (Lorenz, Mark and Kidd, Jeff, Object-Oriented Software Metrics, Prentice Hall, 1994.)
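To make the Ca/Ce/I definitions concrete, here is a small illustrative sketch; the dependency map and module name are invented for the example, not taken from the survey:

```python
# Illustrative sketch of Martin's Ca, Ce and I for one module ("lib").
# The dependency map and module name are hypothetical.
deps = {  # class -> classes it depends on
    "app.Main": ["lib.Parser", "lib.Writer"],
    "app.Report": ["lib.Parser"],
    "lib.Parser": ["lib.Token"],
    "lib.Writer": [],
    "lib.Token": [],
}
MODULE = "lib"

def in_module(cls):
    return cls.startswith(MODULE + ".")

# Ca: classes outside the module that depend on classes inside it
ca = sum(1 for c, targets in deps.items()
         if not in_module(c) and any(in_module(t) for t in targets))
# Ce: classes inside the module that depend on classes outside it
ce = sum(1 for c, targets in deps.items()
         if in_module(c) and any(not in_module(t) for t in targets))
instability = ce / (ca + ce) if ca + ce else 0.0
print(ca, ce, instability)  # 2 0 0.0 -- a maximally stable module
```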

Breadth of Inheritance Tree (BIT): Equal to the number of leaves in the inheritance tree; a higher BIT means a higher number of methods/attributes reused in the derived classes. Formula: BIT = number of leaves in the tree. (Kadhim M. Breesam, "Metrics for Object-Oriented Design Focusing on Class Inheritance Metrics", IEEE 2nd International Conference on Dependability of Computer Systems (DepCoS-RELCOMEX'07), 2007, which is also the source for MRPIR, ARPIR, GC and RP below.)

Method Reuse Per Inheritance Relation (MRPIR): Computes the total number of methods reused per inheritance relation in the inheritance hierarchy; it applies to the whole inheritance hierarchy in the system. MRPIR = (sum over k of MIk) / r, where MIk = number of methods inherited through the kth inheritance relationship and r = total number of inheritance relationships.

Attribute Reuse Per Inheritance Relation (ARPIR): Computes the total number of attributes reused per inheritance relation in the inheritance hierarchy. ARPIR = (sum over k of AIk) / r, where AIk = number of attributes inherited through the kth inheritance relationship.

Generality of Class (GC): The generality of a class is a measure of its relative abstraction level; the higher the generality of a class, the more likely it is to be reused. GC = a / al, where a = abstraction level of the class and al = total number of possible abstraction levels.

Reuse Probability (RP): The probability of reusing classes in the inheritance hierarchy, defined in terms of Ni = total number of classes that can be inherited, Nlg = total number of classes that can be inherited but having the lowest possible generic level, and N = total number of classes in the inheritance hierarchy.

Abstractness (A): The ratio of abstract classes in a category to the total number of classes in the category; it ranges from 0 to 1, where 0 means completely concrete and 1 means completely abstract. A = number of abstract classes in the category / total number of classes in the category. (Mariano Ceccato and Paolo Tonella, "Measuring the Effects of Software Aspectization", International Conference on Object Technology, 1998.)

Number of Catch Blocks per Class (NCBC): Counts the percentage of catch blocks in each method of the class, where n = number of methods in a class, m = number of catch blocks in a method, Cij is the jth catch block in the ith method, Cik is the kth catch block in the ith method, and l = maximum number of possible catch blocks in a method. The NCBC denominator represents the maximum number of possible catch blocks for the class, i.e. the case where all possible exceptions have a corresponding catch block to handle them. (K. K. Aggarwal, Yogesh Singh, Arvinder Kaur and Ruchika Malhotra, "Software Design Metrics for Object-Oriented Software", Journal of Object Technology, 2008.)
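A small illustrative sketch of BIT and MRPIR on a toy hierarchy; the class names and inherited-method counts are assumptions made up for the example:

```python
# Toy inheritance hierarchy (names and counts invented for illustration).
parent = {"Square": "Rect", "Circle": "Shape", "Rect": "Shape"}  # child -> parent
inherited_methods = {"Square": 4, "Circle": 3, "Rect": 3}  # per inheritance relation

classes = set(parent) | set(parent.values())
leaves = [c for c in classes if c not in parent.values()]  # nothing inherits from them
bit = len(leaves)                            # BIT = number of leaves (Square, Circle)
r = len(parent)                              # total inheritance relationships
mrpir = sum(inherited_methods.values()) / r  # methods reused per relation
print(bit, round(mrpir, 2))  # 2 3.33
```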

The following table gives a brief description of the non-OO metrics, with each metric's definition, formula, and author.
TABLE 1.2 LIST OF NON-OO METRICS AND THEIR FORMULA

Quality of code (QC): Captures the relation between the number of weighted defects and the size of the product release; the purpose is to deliver a high-quality product. QC = (WTP + WF) / KCSI, where WTP = number of weighted defects found in the product under test (before official release), WF = number of weighted defects found in the product after release, and KCSI = number of new or changed source lines of code in thousands. (Dr. Arvinder Kaur, Mrs. Bharti Suri, Ms. Abhilasha Sharma, "Software Testing Product Metrics - A Survey"; K. K. Aggarwal and Yogesh Singh, Software Engineering: Programs, Documentation, Operating Procedures, 2nd ed., New Age International Publishers, 2005.)

Quality of the product (QP): The relation between the number of weighted defects shipped to customers and the size of the product release; the purpose is to deliver a high-quality product. QP = WF / KCSI.
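A minimal sketch of the two quality ratios above, using invented release data (none of the numbers come from the survey):

```python
# Hypothetical release data (all numbers invented for illustration).
wtp = 42.0   # weighted defects found before official release
wf = 6.0     # weighted defects found after release (shipped to customers)
kcsi = 12.0  # new or changed source lines of code, in thousands

qc = (wtp + wf) / kcsi  # quality of code: all weighted defects per KCSI
qp = wf / kcsi          # quality of the product: escaped defects per KCSI
print(qc, qp)  # 4.0 0.5
```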
Test improvement (TI): The relation between the number of weighted defects detected by the test team during the test cycle and the size of the product release; the purpose is to deliver a high-quality product. TI = WTTP / KCSI, where WTTP = number of weighted defects found by the test team in the test cycle of the product and KCSI = number of new or changed source lines of code in thousands. (Yanping Chen, Robert L. Probert, Kyle Robenson, "Effective Test Metrics for Test Strategy Evolution".)

Test effectiveness (TE): The relation between the number of weighted defects detected during testing and the total number of weighted defects in the product; the purpose is to deliver a high-quality product. TE = WT / (WTP + WF) * 100%, where WT = number of weighted defects found by the test team during the product cycle, WTP = number of weighted defects found before official release, and WF = number of weighted defects found after release. (P. Dhavachelvan, G. V. Uma, V. S. K. Venkatachalapathy, "A New Approach in Development of Distributed Framework for Automated Software Testing Using Agents".)

Test time (TT): The relation between time spent on testing and the size of the product release; the purpose is to decrease time-to-market. TT / KCSI, where TT = number of business days used for product testing. (N. E. Fenton and S. L. Pfleeger, Software Metrics: A Rigorous and Practical Approach, 2nd revised ed., Boston; L. Finkelstein, "Theory and Philosophy of Measurement", in Theoretical Fundamentals, Vol. 1, Handbook of Measurement Science, P. H. Sydenham, Ed., Chichester: John Wiley & Sons.)

Test time over development time (TD): The relation between time spent on testing and time spent on developing; the purpose is to decrease time-to-market. TT / TD * 100%, where TD = number of business days used for product development.
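The two time-based ratios above can be sketched with invented schedule data (all values are illustrative assumptions):

```python
# Hypothetical schedule data (invented for illustration).
tt = 30.0    # business days spent on product testing
td = 120.0   # business days spent on product development
kcsi = 15.0  # new or changed KLOC in the release

tt_per_kcsi = tt / kcsi     # test time normalized to product size
tt_over_td = tt / td * 100  # test time as a percentage of development time
print(tt_per_kcsi, tt_over_td)  # 2.0 25.0
```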
Test cost normalized to product size (TCS): The relation between resources or money spent on testing and the size of the product release; the purpose is to decrease cost-to-market. TCS = CT / KCSI, where CT = total cost of testing the product in dollars and KCSI = number of new or changed source lines of code in thousands. (Paul C. Jorgensen, Software Testing: A Craftsman's Approach, 2nd ed.)

Test cost as a ratio of development cost (TCD): The relation between testing cost and development cost of the product; the purpose is to decrease cost-to-market. TCD = CT / CD * 100%, where CD = total cost of developing the product in dollars. (S. H. Kan, J. Parrish and D. Manclove, "In-Process Metrics for Software Testing".)

Cost per weighted defect unit (CWD): The relation between money spent by the test team and the number of weighted defects detected during testing; the purpose is to decrease cost-to-market. CWD = CT / WT, where WT = number of weighted defects found by the test team during the product cycle. (Stephen H. Kan, Metrics and Models in Software Quality Engineering, 2nd ed.)

Test improvement in product quality: The relation between the number of weighted defects detected in one specific test phase and the size of the product release. WP / KCSI, where WP = number of weighted defects found in one specific test phase. (Cem Kaner, "Software Engineering Metrics: What Do They Measure and How Do We Know?".)

Test time needed normalized to size of product: The relation between time spent on a specific test phase and the size of the product release. TTP / KCSI, where TTP = number of business days used for a specific test phase. (N. Nagappan, "Toward a Software Testing and Reliability Early Warning Metric Suite".)

Test cost normalized to size of product (per test phase): The relation between resources or money spent on a test phase and the size of the product release. CTP / KCSI, where CTP = total cost of a specific test phase in dollars and KCSI = number of new or changed source lines of code in thousands. (Dr. Arvinder Kaur, Mrs. Bharti Suri, Ms. Abhilasha Sharma, "Software Testing Product Metrics - A Survey"; L. Osterweil, "Strategic Directions in Software Quality".)

Cost per weighted defect unit (per test phase): The relation between money spent on a test phase and the number of weighted defects detected. CTP / WT, where WT = number of weighted defects found by the test team during the product cycle.

Test effectiveness for driving out defects in each test phase: The relation between the number of one type of defect detected in a specific test phase and the total number of that defect type in the product. WD / (WD + WN) * 100%, where WD = number of weighted defects of the defect type detected in the test phase and WN = number of weighted defects of that type (any particular type) that remain uncovered after the test phase (missed defects). (Ramesh Pusala, "Operational Excellence through Efficient Software Testing Metrics".)

Software reliability: The probability of failure-free operation of a computer program for a specified time in a specified environment. Z(t) = h * exp(-h*t / N), where Z(t) = instantaneous failure rate, h = failure rate prior to the start of testing, and N = number of faults inherent in the program prior to the start of testing. (Linda H. Rosenberg, Theodore F. Hammer, Lenore L. Huffman, "Requirements, Testing, and Metrics", NASA GSFC.)

Test session efficiency: The goal is to identify trends in the effectiveness of scheduled test times. SYSE = Active Test Time / Scheduled Test Time, where SYSE = system efficiency; TE = total number of good runs / total runs, where TE = tester efficiency. (Norman F. Schneidewind, "Measuring and Evaluating Maintenance Process Using Reliability, Risk, and Test Metrics", IEEE Transactions on Software Engineering.)

Test focus (TF): The goal is to identify the amount of effort spent finding and fixing real faults versus the effort spent either eliminating false defects or waiting for a hardware fix. TF = no. of DRs closed with a software fix / total no. of DRs, where DR = Discrepancy Report. (S. M. K. Quadri and Sheikh Umar Farooq, "Notable Metrics in Software Testing".)

Software maturity: The goals are (i) to quantify the relative stabilization of a software subsystem and (ii) to identify any possible over-testing or testing bottlenecks by examining the fault density of the subsystem over time. Its three components are T = total no. of DRs charged to a subsystem / 1000 SLOC (total density), O = no. of currently open subsystem DRs / 1000 SLOC (open density), and H = active test hours per subsystem / 1000 SLOC (test hours). (Stephen H. Kan, Metrics and Models in Software Quality Engineering.)

Test coverage: The percentage of code branches that have been executed during testing; the goal is to examine the efficiency of testing over time. (Ramesh Pusala, "Operational Excellence through Efficient Software Testing Metrics", Infosys, 2006; George E. Stark, Robert C. Durst, Tammy M. Pelnik, "An Evaluation of Software Testing Metrics"; Dr. Arvinder Kaur, Mrs. Bharti Suri, Ms. Abhilasha Sharma, "Software Testing Product Metrics - A Survey"; Ljubomir Lazic and Nikos Mastorakis, "Cost Effective Software Test Metrics".)
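The exponential failure-rate model Z(t) = h*exp(-h*t/N) above can be sketched numerically; h and N are invented example values, not data from the survey:

```python
import math

# Hypothetical parameters for the failure-rate model (invented values).
h = 0.8    # failure rate prior to the start of testing (failures/day)
n = 100.0  # faults inherent in the program prior to testing

def z(t):
    """Instantaneous failure rate after t days of testing: Z(t) = h*exp(-h*t/N)."""
    return h * math.exp(-h * t / n)

print(z(0), z(50) < z(0))  # 0.8 True -- the rate decays as testing proceeds
```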

Test Execution Productivity (TEP): Gives the test-case execution productivity, which on further analysis can give a conclusive result. TEP = number of test cycles executed / actual effort for testing.

Test Case Productivity (TCP): Gives the test-case writing productivity, based on which one can make a conclusive remark. TCP = Total Raw Test Steps / Efforts (hours), expressed in step(s)/hour.

Defect Acceptance (DA): Determines the number of valid defects that the testing team has identified during execution. DA = (Number of Valid Defects / Total Number of Defects) * 100%.

Defect Rejection (DR): Determines the number of defects rejected during execution; it gives the percentage of invalid defects the testing team has opened, which one can control whenever required. DR = (Number of Defects Rejected / Total Number of Defects) * 100%. (Premal B. Nirpal and K. V. Kale, "A Brief Overview of Software Testing Metrics".)

Bad Fix Defect (B): A defect whose resolution gives rise to new defect(s) is a bad fix defect. This metric determines the effectiveness of the defect resolution process; it gives the percentage of bad defect resolutions, which needs to be controlled. B = (Number of Bad Fix Defects / Total Number of Valid Defects) * 100%. (Roger S. Pressman, Software Engineering: A Practitioner's Approach, 5th ed., McGraw Hill, 1997.)

Test Efficiency (TE): Determines the efficiency of the testing team in identifying defects; it also indicates the defects missed during the testing phase which migrate to the next phase. TE = (DT / (DT + DU)) * 100%, where DT = number of valid defects identified during testing and DU = number of valid defects identified by the user after release of the application. (Fenton, N. and S. L. Pfleeger, Software Metrics: A Rigorous and Practical Approach, PWS Publishing Co.)

Defect Severity Index (DSI): Determines the quality of the product under test and at the time of release, based on which one can take the decision to release the product, i.e. it indicates quality. DSI = Sum(Severity Index * No. of Valid Defects for that severity) / Total Number of Valid Defects. (Premal B. Nirpal and K. V. Kale, "A Brief Overview of Software Testing Metrics".)

Performance Scripting Productivity (PSP): Gives the scripting productivity for performance test scripts, which can be trended over a period of time. PSP = Operations Performed / Efforts (hours), expressed in operation(s)/hour. (Premal B. Nirpal and K. V. Kale, "A Brief Overview of Software Testing Metrics".)
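The Defect Severity Index above reduces to a weighted average; the severity weighting and counts below are invented for illustration:

```python
# Hypothetical valid-defect counts per severity (invented; 4 = most severe).
defects = {4: 2, 3: 5, 2: 10, 1: 3}  # severity index -> no. of valid defects

total = sum(defects.values())
dsi = sum(sev * count for sev, count in defects.items()) / total
print(total, dsi)  # 20 2.3
```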

Performance Test Efficiency (PTE): Determines the quality of the performance testing team in meeting the requirements, which can be used as an input for further improvement if required. PTE = ((Requirements met during PT) - (Requirements not met after sign-off of PT)) / (Requirements met during PT) * 100%. (Premal B. Nirpal and K. V. Kale, "A Brief Overview of Software Testing Metrics", which is also the source for the remaining metrics in this table.)

Requirements Volatility (RV): Refers to additions, deletions and modifications of requirements during the system development life cycle. Ignoring requests for requirement changes can cause system failure due to user rejection, and failure to manage RV can increase development time and cost. RV = ((No. of requirements added + No. of requirements deleted + No. of requirements modified) / No. of initially approved requirements) * 100%.

Defect Rejection Ratio: The ratio of the number of defect reports that were rejected (perhaps because they were not actually bugs) to the total number of defects. Defect Rejection Ratio = (No. of defects rejected / Total no. of defects raised) * 100%.

Regression Defect: Shows the ability to keep the product right while fixing defects. Regression Defect = (No. of regression bugs / Total no. of bugs) * 100%.

Defect Validation: Shows the ratio (Closed + Reopened) / Fixed. Defect Validation = (No. of validated (closed + reopened) defects / No. of fixed defects) * 100%.

Defect Validation Rate: The number of defects validated per week; a measure of QA productivity in terms of validating the fixed defects. No. of defects validated per week per person = (Total no. of validated defects * 40) / (Total hours to validate the defects * No. of resources).
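The weekly validation-rate formula above can be sketched with invented effort data (the 40 is the assumed 40-hour work week from the formula):

```python
# Hypothetical validation effort data (invented for illustration).
validated = 90  # total no. of validated defects in the period
hours = 120.0   # total hours spent validating those defects
testers = 3     # no. of resources doing validation

per_week_per_person = validated * 40 / (hours * testers)  # 40-hour work week
print(per_week_per_person)  # 10.0
```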
Defect Removal Effectiveness (DRE): DRE = (Defects removed during the development phase / Defects latent in the product) * 100%, where defects latent in the product = defects removed during the development phase + defects found later by the user. (Premal B. Nirpal and K. V. Kale, "A Brief Overview of Software Testing Metrics".)

Efficiency of Testing Process (with size defined in KLOC, FP, or requirements): Testing Efficiency = Size of Software Tested / Resources Used.

Lines of Code (LOC): A direct estimation approach that requires a higher level of detail by means of decomposition and partitioning. Once the expected value for the estimation variable has been determined, historical LOC or FP data are applied, and person-months, costs, etc. are calculated using formulas such as: Productivity = KLOC / person-month; Quality = defects / KLOC; Cost = $ / LOC; Documentation = pages of documentation / KLOC; where KLOC = number of lines of code (in thousands), person-month = the time (in months) taken by developers to finish the product, and defects = total number of errors discovered. (Premal B. Nirpal and K. V. Kale, "A Brief Overview of Software Testing Metrics".)
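The DRE formula above can be illustrated with invented defect counts (not data from any cited study):

```python
# Hypothetical defect counts (invented for illustration).
removed_in_dev = 180  # defects removed during the development phase
found_by_users = 20   # defects found later by the user

latent = removed_in_dev + found_by_users  # defects latent in the product
dre = removed_in_dev / latent * 100       # defect removal effectiveness, %
print(dre)  # 90.0
```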

III.CONCLUSION AND FUTURE WORK


This paper introduced a basic metric suite for object-oriented and non-object-oriented design. The need for such metrics is particularly acute when an organization is adopting a new technology for which established practices have yet to be developed. Metric data provide quick feedback for software designers and managers, and collecting and analyzing these data can help predict design quality. Appropriately used, metrics can lead to a significant reduction in the cost of the overall implementation and to improvements in the quality of the final product. The improved quality, in turn, reduces future maintenance effort. Using early quality indicators based on objective empirical evidence is therefore a realistic objective.

IV.REFERENCES
Lorenz, Mark and Kidd, Jeff, Object-Oriented Software Metrics, Prentice Hall, 1994.
Kadhim M. Breesam, "Metrics for Object-Oriented Design Focusing on Class Inheritance Metrics", IEEE 2nd International Conference on Dependability of Computer Systems (DepCoS-RELCOMEX'07), 2007.
S. R. Chidamber and C. F. Kemerer, "A Metrics Suite for Object-Oriented Design", IEEE Trans. Software Engineering, 1994.
L. Briand, W. Daly and J. Wust, "A Unified Framework for Coupling Measurement in Object-Oriented Systems", IEEE Transactions on Software Engineering.
Y. Lee, B. Liang, S. Wu and F. Wang, "Measuring the Coupling and Cohesion of an Object-Oriented Program Based on Information Flow", 1995.
R. Harrison, S. J. Counsell, and R. V. Nithi, "An Evaluation of the MOOD Set of Object-Oriented Software Metrics", IEEE Trans. Software Engineering.
T. McCabe, "A Complexity Measure", IEEE Transactions on Software Engineering, December 1976, 308-320.
Laing, V. and Coleman, C., "Principal Components of Orthogonal Object-Oriented Metrics", Software Assurance Technology Center (SATC), 2001.
Denys Poshyvanyk and Andrian Marcus, "The Conceptual Coupling Metrics for Object-Oriented Systems".
B. Henderson-Sellers, Object-Oriented Metrics: Measures of Complexity, Prentice Hall.
Aine Mitchell and James F. Power, "Toward a Definition of Run-Time Object-Oriented Metrics", 2003.
Sencer Sultanoğlu and Ümit Karakaş, "Software Size Estimating", Web Document, 1998.
David N. Card, Khaled El Emam and Betsy Scalzo, "Measurement of Object-Oriented Software Development Projects", 2001.
Ljubomir Lazic and Nikos Mastorakis, "Cost Effective Software Test Metrics".
Bennett, Ted L. and Paul W. Wennberg, "Eliminating Embedded Software Defects Prior to Integration Test", Dec. 2005.
V. R. Basili, G. Caldiera and H. D. Rombach, "The Goal Question Metric Approach", Encyclopedia of Software Engineering, Vol. 1, John Wiley & Sons, 1994, pp. 528-532.
S. M. K. Quadri and Sheikh Umar Farooq, "Notable Metrics in Software Testing".
K. K. Aggarwal, Yogesh Singh, Arvinder Kaur and Ruchika Malhotra, "Software Design Metrics for Object-Oriented Software", Journal of Object Technology, 2008.
Mariano Ceccato and Paolo Tonella, "Measuring the Effects of Software Aspectization", International Conference on Object Technology, 1998.
Dhama, H., "Quantitative Models of Cohesion and Coupling in Software", in Selected Papers of the Sixth Annual Oregon Workshop on Software Metrics (Silver Falls, Oregon), W. Harrison and R. L. Glass, Eds., Elsevier Science, New York, 1995, 65-74.
Robert Martin, "OO Design Quality Metrics: An Analysis of Dependencies".
Poshyvanyk, D., Marcus, A., Dong, Y., and Sergeyev, A., "IRiSS - A Source Code Exploration Tool", Industrial and Tool Proceedings of the 21st IEEE International Conference on Software Maintenance, 2005.
Premal B. Nirpal and K. V. Kale, "A Brief Overview of Software Testing Metrics".
Yanping Chen, Robert L. Probert and Kyle Robenson, "Effective Test Metrics for Test Strategy Evolution", Proceedings of the 2004 Conference of the Centre for Advanced Studies on Collaborative Research (CASCON '04).
Arvinder Kaur, Bharti Suri and Abhilasha Sharma, "Software Testing Product Metrics - A Survey".
Norman F. Schneidewind, "Measuring and Evaluating Maintenance Process Using Reliability, Risk, and Test Metrics", IEEE Transactions on Software Engineering.
S. H. Kan, J. Parrish and D. Manclove, "In-Process Metrics for Software Testing", IBM Systems Journal, Vol. 40, No. 1, 2009.
George E. Stark, Robert C. Durst and Tammy M. Pelnik, "An Evaluation of Software Testing Metrics for NASA's Mission Control Center", 1992.
Paul C. Jorgensen, Software Testing: A Craftsman's Approach, 2nd ed.