Transcript Lecture 10

Software Metrics
Engineering
• Engineering is a discipline based on
quantitative approaches
– All branches of engineering rely upon the use of
measurements
• Software Engineering is not an exception
– SE uses measurements to evaluate the quality of
software products
Software Metrics
• Software metrics are an attempt to quantify
the quality of the software
• Measures are expressed in terms of
computable formulas or expressions
• Elements of these expressions are derived
from source code, design and other artifacts
used in software development
Quality Measurement
• Direct measurement
– Values calculated directly from known sources
(source code, design diagrams, etc.)
– E.g. number of software failures over some time
period
• Indirect measurement
– Cannot be defined as precisely, nor measured
directly
– E.g. reliability, maintainability, usability
Advantages of Metrics
• Metrics are a tool for comparing certain
aspects of software
• Sometimes metrics provide guidance for
evaluating complexity, time and cost of a
future product
• Metrics can be used to evaluate software
engineers
• Metrics can even be applied to the creation of
development teams
Example
A general rule in allocating project time:
• 25%: Analysis & Design
• 50%: Coding & Unit Test
• 25%: Integration/System/Acceptance Test
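For example (the overall duration is assumed for illustration, not taken from the lecture): on a 12-week project this rule would allocate roughly 3 weeks to analysis and design, 6 weeks to coding and unit testing, and 3 weeks to integration, system and acceptance testing.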
Weyuker’s Properties of Metrics
• Two programs for the same application need
not have the same metrics values
• If a program P consists of modules Q and R,
the complexity of P is generally not the sum of
the complexities of Q and R
GOM (Goal Oriented Metrics)
• General metrics equation
– F = c1×m1 + c2×m2 + … + cn×mn
– Each ci is a regression coefficient and each mi is a primitive
metric (one that is directly observable from
software artifacts)
• Metric F is found by computing all the mi and
then evaluating the expression for the predefined
ci
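As a rough sketch, the weighted sum can be computed as follows (the coefficient and primitive-metric values are invented for illustration; in practice the ci would be fitted by regression over historical project data):

def gom_metric(coefficients, primitives):
    """Evaluate F = c1*m1 + c2*m2 + ... + cn*mn."""
    if len(coefficients) != len(primitives):
        raise ValueError("need one coefficient per primitive metric")
    return sum(c * m for c, m in zip(coefficients, primitives))

# Hypothetical primitives: m1 = lines of code, m2 = number of modules, m3 = defects found
coefficients = [0.02, 1.5, 4.0]   # assumed regression coefficients
primitives = [1200, 14, 9]        # assumed measurements from the artifacts
print(gom_metric(coefficients, primitives))   # 81.0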
Some Quality Factors
• Accuracy
– Accuracy refers to the precision of computations and control
• Completeness
– Completeness is the degree to which full implementation of required
functionality has been achieved
• Consistency
– Consistency refers to the degree to which requirements are continued
(i.e., not changed) throughout the development process
• Expandability
– Expandability is the degree to which a product can be extended to
include additional features
• Hardware Independence
– Hardware independence is the degree to which a product does not
depend on the underlying hardware
ISO 9126 Quality Factors
• Reliability
– Reliability refers to the amount of time software is
available for use
– Reliability can be measured by the frequency of
failure, the severity of failure, the accuracy of output
results, the ability to recover from failure, and mean-time-to-failure
• Portability
– Portability is the degree to which software can be
ported to another platform or environment
– Portability is measured by adaptability, installability,
conformance, and hardware and software dependencies
What makes a software metric effective?
• It must be easy to derive, calculate and
understand
• It must be useful for improving some aspect of
software engineering
• It should be independent of any programming
language
• It should be platform and environment
independent
Function-point Metrics
• Function-point metrics are intended to measure
the size and complexity of a software system
• They can be evaluated using a data flow diagram,
and counting any of the following:
– The number of user inputs
– The number of outputs to the user
– The number of user inquiries
– The number of files or data stores
– The number of external interfaces
Sample formula for function-point metrics
• FP = (total of primitive metrics) × (0.65 + 0.01 × Fj)
Where Fj is a value to be chosen by the
developer to indicate the complexity of this
product with regard to other competitive
products. Typical values for Fj range from 1 to
50
Function-point Example
Function Points: Application requirements are
examined to determine project/code size:
• Count number of inputs, outputs, inquiries,
master files, interfaces the program will
require.
• Measures product size from the user's point of
view
• Feature Points: Includes # of complex
algorithms
Example Continued
Step 1: To estimate Function Points, count the
number of inputs, outputs, inquiries, master files,
and interfaces.
Example Continued
Step 2: Multiply each category count by its weight:
UnadjustedFunctionPoints (UFP) =
Σ over all attribute types i of (#AttributeType_i × ComplexityFactor_i) = ________
Example Continued
Step 3: Calculate the Degree of Influence:
DegreeOfInfluence (DI): Determine complexity of 14
factors:
Each factor is rated on a scale from 0 (no influence) to 5
(high influence)
Example Continued
Sum the Degree of Influence factors and evaluate
the equation below:
Function Points = UFP × (0.65 + 0.01 × DegreeOfInfluence)
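A small Python sketch of this calculation (the category counts, weights and influence ratings below are invented for illustration; the weights are common "average-complexity" values, which the lecture does not fix):

# Assumed average-complexity weights for the five attribute types
WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4, "files": 10, "interfaces": 7}

def unadjusted_fp(counts):
    """Step 2: UFP = sum of (count x weight) over all attribute types."""
    return sum(counts[name] * WEIGHTS[name] for name in WEIGHTS)

def function_points(counts, influence_ratings):
    """Steps 3-4: FP = UFP x (0.65 + 0.01 x DegreeOfInfluence)."""
    di = sum(influence_ratings)             # 14 factors, each rated 0..5
    return unadjusted_fp(counts) * (0.65 + 0.01 * di)

counts = {"inputs": 20, "outputs": 12, "inquiries": 8, "files": 4, "interfaces": 2}
ratings = [3] * 14                           # assumed: every factor has average influence
print(round(function_points(counts, ratings), 2))   # UFP = 226, DI = 42, FP = 241.82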
A metric for specification quality
• Specificity is the opposite of ambiguity
Specificity = nui / nr
Where nui refers to the number of functional requirements
that are interpreted identically by all reviewers and nr refers
to the total number of requirements (both functional and
non-functional) in the requirements document

• Completeness
can be measured as follows:
Completeness = nu / (nj × ns)
Where nu refers to the number of unique functional
requirements that do not depend on any other
requirement, nj refers to the number of external inputs and
ns to the number of states of the system
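As a worked illustration of both ratios (the counts below are assumed, not taken from the lecture):

def specificity(n_ui, n_r):
    """Fraction of requirements that all reviewers interpret identically."""
    return n_ui / n_r

def completeness(n_u, n_j, n_s):
    """Unique, independent functional requirements relative to inputs x states."""
    return n_u / (n_j * n_s)

# Assumed example: 18 of 25 requirements are read identically by all reviewers;
# 30 independent functional requirements, 8 external inputs, 5 system states.
print(specificity(18, 25))        # 0.72
print(completeness(30, 8, 5))     # 0.75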

A metric for measuring coupling between
two modules in the imperative paradigm
• Coupling = k / M
Where
M = di + (a×ci) + do + (b×co) + gd + (c×gc) + w + r
di = the number of input data parameters
ci = the number of input control parameters
do = the number of output data parameters
co = the number of output control parameters
gd = the number of global data variables
gc = the number of global control variables
w = the number of modules called
r = the number of modules calling this module
k = 1, a = b = c = 2 (all of these can be adjusted)
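A minimal Python sketch of this coupling computation (the module counts in the example are invented):

def coupling(di, ci, do, co, gd, gc, w, r, k=1, a=2, b=2, c=2):
    """Coupling = k / M, where M weights control and global coupling more heavily."""
    m = di + a * ci + do + b * co + gd + c * gc + w + r
    return k / m

# Assumed example: 3 data params in, 1 control param in, 2 data params out,
# no control params out, 1 global data variable, no global control variables,
# calls 2 modules and is called by 1 module.
print(round(coupling(di=3, ci=1, do=2, co=0, gd=1, gc=0, w=2, r=1), 3))  # 1/11 ≈ 0.091

Note that with this definition a larger M (more connections) yields a smaller Coupling value, so values near 1 indicate a module with few external connections.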
Metrics for O-O Systems
• Like testing, metric equations for the procedural
paradigm have been found to be inadequate
for the O-O paradigm
• Several OO metrics have been published in the
literature:
C&K metrics (1994), Kim's metrics (1994, 1996),
MOOD metrics (1995), Liu's metrics (1999)
Metrics for O-O Systems
• DOR(C) = r(C) / (t − tr)
Where r(C) denotes the number of subclasses
of C
t denotes the total number of classes in the
program
tr = Σ(k=1..t) r(Ck) denotes the sum of all subclasses in the
program
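A sketch in Python, assuming the reading of the garbled slide equation given above, DOR(C) = r(C) / (t − tr); both that reading and the class counts below are assumptions for illustration:

def dor(subclasses_of_c, total_classes, total_subclasses):
    """Assumed reading: DOR(C) = r(C) / (t - tr)."""
    return subclasses_of_c / (total_classes - total_subclasses)

# Assumed example: C has 3 subclasses; the program has 20 classes,
# 12 of which are subclasses of some other class.
print(dor(3, 20, 12))   # 0.375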
Object Oriented Class Estimation
A. Estimate the number of classes in the code to be developed/modified: ________
B. Categorize the type of user interface and assign a weight:
No UI: 2.0
Simple text-based UI: 2.25
Simple GUI: 2.5
Complex GUI: 3.0
C. Multiply # Classes by UI Weight: ________
D. Add A + C to get an estimate of the total number of classes: ________
E. Multiply D (total # classes) by # person-days/class (15-20): ________
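A small Python sketch of steps A-E (the class count and the choice of 17 person-days per class are assumed for illustration; the lecture only gives the 15-20 range):

UI_WEIGHTS = {"none": 2.0, "text": 2.25, "simple_gui": 2.5, "complex_gui": 3.0}

def estimate_effort(num_classes, ui_type, person_days_per_class=17):
    """Steps A-E: scale the class count by the UI weight, then by effort per class."""
    ui_classes = num_classes * UI_WEIGHTS[ui_type]        # step C
    total_classes = num_classes + ui_classes              # step D
    return total_classes * person_days_per_class          # step E, in person-days

# Assumed example: 30 domain classes and a simple GUI.
print(estimate_effort(30, "simple_gui"))   # (30 + 75) * 17 = 1785.0 person-days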
Calculation of ‘Lack of Cohesion’
metrics
Lack of Cohesion (LCOM) for a class is defined as follows:
• Let ‘C’ denote the class for which LCOM is computed.
• Let ‘m’ denote the number of methods in C.
• Let ‘a’ denote the number of attributes in C.
• Let ‘m(a)’ denote the number of methods of C that access
a given attribute ‘a’ of C.
• Let ‘Sum(m(a))’ denote the sum of ‘m(a)’ over all
the attributes in C.
• Now, LCOM(C) is defined as (m – Sum(m(a))/a) / (m – 1)
Cohesion Metrics
• If class C has only one method or no methods,
LCOM is undefined. If there are no attributes in C,
LCOM is also undefined. In both situations,
for computational purposes, LCOM is set to zero.
• Normally, the LCOM value is expected to be between
0 and 2. A value greater than 1 is an indication
that the class should be redesigned. The higher
the value of LCOM, the lower the cohesion and
hence the greater the need for redesign.
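A minimal Python sketch of this LCOM calculation, including the two undefined cases that are set to zero (the attribute-to-method mapping in the example reproduces the ‘Account with constructor’ numbers used in the calculations below):

def lcom(num_methods, methods_per_attribute):
    """LCOM = (m - Sum(m(a))/a) / (m - 1); undefined cases are set to 0."""
    m = num_methods
    a = len(methods_per_attribute)
    if m <= 1 or a == 0:          # one/no methods, or no attributes: undefined -> 0
        return 0.0
    avg_access = sum(methods_per_attribute.values()) / a
    return (m - avg_access) / (m - 1)

# 'Account' class with constructor: 5 methods; attribute -> number of methods accessing it
account = {"account_number": 2, "user_id": 2, "balance": 3}
print(round(lcom(5, account), 3))   # 0.667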
Example
Calculation
The class ‘Account’ (without constructor):
m = 4, a = 3
m(Account#) = 1
m(UserID) = 1
m(Balance) = 3
sum(m(a)) = 5
LCOM(Account) = (4 – 5/3) / (4 – 1) = 0.778
The class ‘Account’ (with constructor):
m = 5, a = 3
m(Account#) = 2
m(UserID) = 2
m(Balance) = 3
sum(m(a)) = 7
LCOM(Account) = (5 – 7/3) / (5 – 1) = 0.667
Calculation
The class ‘LoginAccount’ (without constructor):
m = 1, a = 2
m(UserID) = 0
m(Password) = 1
sum(m(a)) = 1
LCOM(LoginAccount) = (1 – 1/2) / (1 – 1)
= 0 (set to zero because m = 1)
The class ‘LoginAccount’ (with constructor):
m = 2, a = 2
m(UserID) = 1
m(Password) = 2
sum(m(a)) = 3
LCOM(LoginAccount) = (2 – 3/2) / (2 – 1) = 0.5
Calculation
The class ‘AccountsDatabase’ (without constructor):
m = 4, a = 1
m(Accounts) = 4
sum(m(a)) = 4
LCOM(AccountsDatabase) = (4 – 4/1) / (4 – 1) = 0/3 = 0
The class ‘AccountsDatabase’ (with constructor):
m = 5, a = 1
m(Accounts) = 5
sum(m(a)) = 5
LCOM(AccountsDatabase) = (5 – 5/1) / (5 – 1) = 0
Calculation
The class ‘LoginAccountsDatabase’
The calculation is the same as for ‘AccountsDatabase’, and hence
LCOM(LoginAccountsDatabase) = 0, both with the
constructor and without the constructor.
MOOD's Metric for Measure of Polymorphism
• Polymorphism factor =
Σ(i=1..TC) Mo(Ci) / Σ(i=1..TC) [Mn(Ci) × DC(Ci)]
Where TC denotes the number of classes in the
system
DC(Ci) denotes the number of subclasses of Ci
Mo(Ci) denotes the number of overriding methods
in Ci
Mn(Ci) denotes the number of new methods in Ci
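A minimal Python sketch of this polymorphism factor under the formula above (the per-class counts are invented for illustration):

def polymorphism_factor(classes):
    """POF = sum of overriding methods / sum of (new methods x number of subclasses)."""
    overriding = sum(c["overriding"] for c in classes)
    potential = sum(c["new"] * c["subclasses"] for c in classes)
    return overriding / potential if potential else 0.0

# Assumed example: three classes with their new methods, overriding methods and subclasses
classes = [
    {"new": 5, "overriding": 0, "subclasses": 2},
    {"new": 3, "overriding": 2, "subclasses": 1},
    {"new": 2, "overriding": 1, "subclasses": 0},
]
print(round(polymorphism_factor(classes), 3))   # 3 / 13 ≈ 0.231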
MOOD's Metric on Degree of Coupling
between Classes
• Coupling factor =
Σ(i=1..TC) Σ(j=1..TC) Is_client(Ci, Cj) / [TC² − TC − 2 × Σ(i=1..TC) DC(Ci)]
Where
Is_client(Ci, Cj) = 1 if client Ci has at least one
non-inheritance reference to supplier class
Cj; otherwise, it is 0
2 × Σ(i=1..TC) DC(Ci) denotes the maximum number of
couplings due to inheritance
Problems with metrics
• Defining metrics formulas is difficult
– No standards and no guidelines: each equation
defines a metric for a particular application
• The correctness of the metrics formulas must be
established before using them
• Metrics measure a finished product, so they come
too late to predict the complexity of that product