Note 7. Software Metrics - University of Wisconsin


Computer Science and Software Engineering
University of Wisconsin - Platteville
Note 7. Software Metrics
Yan Shi
Lecture Notes for SE 3730 / CS 5730
Some of the slides are adapted from Pearson’s slides
Outline
 What is a software metric?
 Software size metrics
— KLOC
— Function Points
 Software quality metrics
— process metrics
— product metrics
 Software complexity metrics
— Halstead’s metrics
 Defect Cost Analysis
What is a Metric?
 A metric is a measurable indication of some
quantitative aspect of a system with the following characteristics:
— Measurable
— Independent of human influence
— Accountable: save the raw data
— Precise
 E.g., number of errors found per person-hour.
 A metric can be a “result” or a “predictor”.
— determine the quality
— predict and improve quality
Software Size Metrics
 KLOC – CoCoMo (Constructive Cost Model):
— Real-time embedded systems: 40-160 LOC/man-month
— System programs: 150-400 LOC/man-month
— Commercial applications: 200-800 LOC/man-month
— KLOC depends on the programming language or development tool.
(A rough sketch of bracketing an effort estimate from these figures follows below.)
 Function Point:
— measures the project size by the functionality specified for that system.
Result or predictor?
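A minimal Python sketch (not from the original slides) of how the productivity ranges above can bracket a first effort estimate; the 20,000-LOC project is invented, only the LOC/man-month figures come from the list.

```python
# Rough effort bracket from program size using the COCOMO-style
# productivity ranges listed above (LOC per man-month).
PRODUCTIVITY = {
    "real-time embedded": (40, 160),
    "system programs": (150, 400),
    "commercial applications": (200, 800),
}

def effort_range_man_months(loc, system_type):
    """Return (pessimistic, optimistic) effort in man-months."""
    low, high = PRODUCTIVITY[system_type]
    return loc / low, loc / high

# A hypothetical 20,000-LOC commercial application:
print(effort_range_man_months(20_000, "commercial applications"))  # (100.0, 25.0)
```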
Function Point Method
 Step1: Compute crude function points (CFP)
 Step2: Compute the relative complexity
adjustment factor (RCAF) for the project.
— RCAF varies between 0 and 70.
 Step3: Compute the number of function points (FP):
FP = CFP × (0.65 + 0.01 × RCAF)
Function Point Method Step1
 Step1: Compute crude function points (CFP)
— identify functional components
— evaluate each component as simple, average or complex
— apply weighting factors to the components
— CFP is the sum of the weighted values
Function Point Method Step1
(Illustration: the CFP worksheet annotated with the four steps: identify functional components, evaluate each component, apply weighting factors, and sum the weighted values into CFP.)
Function Point Method Step2
 Step2: Compute the relative complexity
adjustment factor (RCAF) for the project.
— assign grades (0-5) to the 14 subjects that substantially affect the development effort
— RCAF = sum of all grades
FP Example: Attend Master System
Data Flow Diagram
FP Example: Attend Master System
Software system      |      Simple        |      Average       |      Complex       | Total
components           | Count Weight Points| Count Weight Points| Count Weight Points| points
                     |  A     B     C=A×B |  D     E     F=D×E |  G     H     I=G×H |
User inputs          |  1     3      3    | ---    4     ---   |  1     6      6    |   9
User outputs         | ---    4     ---   |  2     5      10   |  1     7      7    |  17
User online queries  |  1     3      3    |  1     4      4    |  1     6      6    |  13
Logical files        |  1     7      7    | ---   10     ---   |  1    15      15   |  22
External interfaces  | ---    5     ---   | ---    7     ---   |  2    10      20   |  20
Total CFP            |                    |                    |                    |  81
FP Example: Attend Master System
No.  Subject                                                                    Grade
 1   Requirement for reliable backup and recovery                               0 1 2 3 4 5
 2   Requirement for data communication                                         0 1 2 3 4 5
 3   Extent of distributed processing                                           0 1 2 3 4 5
 4   Performance requirements                                                   0 1 2 3 4 5
 5   Expected operational environment                                           0 1 2 3 4 5
 6   Extent of online data entries                                              0 1 2 3 4 5
 7   Extent of multi-screen or multi-operation online data input                0 1 2 3 4 5
 8   Extent of online updating of master files                                  0 1 2 3 4 5
 9   Extent of complex inputs, outputs, online queries and files                0 1 2 3 4 5
10   Extent of complex data processing                                          0 1 2 3 4 5
11   Extent that currently developed code can be designed for reuse             0 1 2 3 4 5
12   Extent of conversion and installation included in the design               0 1 2 3 4 5
13   Extent of multiple installations in an organization and variety of
     customer organizations                                                     0 1 2 3 4 5
14   Extent of change and focus on ease of use                                  0 1 2 3 4 5
     Total = RCAF                                                               41
(Each subject is graded 0-5; the grades chosen for the Attend Master example sum to RCAF = 41.)
FP Example: Attend Master System
FP = CFP x (0.65 + 0.01 x RCAF)
FP = 81 x (0.65 + 0.01 x 41) = 85.86
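The whole computation can be reproduced in a short Python sketch; the counts, weights, and RCAF below are taken from the tables above, everything else is just illustrative scaffolding.

```python
# Attend Master example: crude function points, RCAF, and FP.
# Each component lists (count, weight) for simple, average, complex;
# None marks a complexity level with no components.
components = {
    "User inputs":         [(1, 3), None,   (1, 6)],
    "User outputs":        [None,   (2, 5), (1, 7)],
    "User online queries": [(1, 3), (1, 4), (1, 6)],
    "Logical files":       [(1, 7), None,   (1, 15)],
    "External interfaces": [None,   None,   (2, 10)],
}

cfp = 0
for levels in components.values():
    for entry in levels:
        if entry is not None:
            count, weight = entry
            cfp += count * weight       # sum of weighted component counts

rcaf = 41                               # sum of the 14 subject grades
fp = cfp * (0.65 + 0.01 * rcaf)
print(cfp, round(fp, 2))                # 81 85.86
```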
FP: Benefits and Drawbacks
 FP can be used as a predictor to estimate the project size/
resources needed during project planning!
— as long as we have a requirements specification
However,
 it is very subjective
— depends on the estimators’ expert knowledge
— cannot be counted automatically
 Not universally applicable – best for data processing systems
 A detailed requirements specification may not be available during the planning phase!
 Estimating LOC based on FP and language:
QSM Function Point Programming Language Table
Software Quality Metrics
 Quality metrics are grouped into:
— process metrics (development): quality, timetable, effectiveness, productivity
— product metrics (maintenance)
Software Process Quality Metrics
 Errors counted:
— NCE: number of code errors
— WCE: weighted number of code errors
— NDE: number of development errors (design + code)
— WDE: weighted number of development errors
 How to decide the weights?
— Critical: blocks other tests and the alpha release, weight 9
— Severe: blocks other tests and the beta release, weight 6-8
— Moderate: a testing workaround is possible, but blocks the final release, weight ~3
— Very minor: fix before the “Sun burns out”, weight 1
 Product size: KLOC or FP
Error Density Metrics
Code    Name                                              Calculation formula
CED     Code Error Density                                CED = NCE / KLOC
DED     Development Error Density                         DED = NDE / KLOC
WCED    Weighted Code Error Density                       WCED = WCE / KLOC
WDED    Weighted Development Error Density                WDED = WDE / KLOC
WCEF    Weighted Code Errors per Function Point           WCEF = WCE / NFP
WDEF    Weighted Development Errors per Function Point    WDEF = WDE / NFP

NFP = number of function points.
Error Severity Metrics
Code    Name                                      Calculation formula
ASCE    Average Severity of Code Errors           ASCE = WCE / NCE
ASDE    Average Severity of Development Errors    ASDE = WDE / NDE
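A small Python sketch of the density and severity formulas above; the error counts and product size are invented, only the formulas come from the tables.

```python
# Invented example counts.
NCE, WCE = 42, 310     # code errors and their weighted sum
NDE, WDE = 65, 540     # development (design + code) errors and weighted sum
KLOC, NFP = 40, 120    # product size in KLOC and function points

CED  = NCE / KLOC      # code error density
DED  = NDE / KLOC      # development error density
WCED = WCE / KLOC      # weighted code error density
WDED = WDE / KLOC      # weighted development error density
WCEF = WCE / NFP       # weighted code errors per function point
WDEF = WDE / NFP       # weighted development errors per function point
ASCE = WCE / NCE       # average severity of code errors
ASDE = WDE / NDE       # average severity of development errors
```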
Software Process Timetable Metrics
Code    Name                                      Calculation formula
TTO     Time Table Observance                     TTO = MSOT / MS
ADMC    Average Delay of Milestone Completion     ADMC = TCDAM / MS

MSOT = number of milestones completed on time.
MS = total number of milestones.
TCDAM = total completion delay (days, weeks, etc.) for all milestones.
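A short Python sketch of the timetable metrics with invented milestone data.

```python
MS = 20       # total number of milestones
MSOT = 16     # milestones completed on time
TCDAM = 45    # total completion delay over all milestones, in days

TTO = MSOT / MS      # 0.8  -> 80% of milestones met on time
ADMC = TCDAM / MS    # 2.25 -> average delay of 2.25 days per milestone
```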
Error Removal Effectiveness Metrics
Code     Name                                                 Calculation formula
DERE     Development Errors Removal Effectiveness             DERE = NDE / (NDE + NYF)
DWERE    Development Weighted Errors Removal Effectiveness    DWERE = WDE / (WDE + WYF)

NYF = number of software failures detected during a year of maintenance service.
WYF = weighted number of software failures detected during a year of maintenance service.
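A short Python sketch of the removal effectiveness metrics with invented defect counts.

```python
NDE, WDE = 65, 540   # development errors found before release (count, weighted)
NYF, WYF = 8, 70     # failures found during a year of maintenance (count, weighted)

DERE = NDE / (NDE + NYF)     # ~0.89: share of defects removed before release
DWERE = WDE / (WDE + WYF)    # ~0.89: the same share, weighted by severity
```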
Software Process Productivity Metrics
Code     Name                                       Calculation formula
DevP     Development Productivity                   DevP = DevH / KLOC
FDevP    Function Point Development Productivity    FDevP = DevH / NFP
CRe      Code Reuse                                 CRe = ReKLOC / KLOC
DocRe    Documentation Reuse                        DocRe = ReDoc / NDoc

DevH = total working hours invested in the development of the software system.
ReKLOC = number of thousands of reused lines of code.
ReDoc = number of reused pages of documentation.
NDoc = number of pages of documentation.
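A short Python sketch of the productivity and reuse metrics with invented project data.

```python
DevH = 6000            # total development hours
KLOC = 40              # thousands of lines of code
NFP = 120              # function points
ReKLOC = 10            # thousands of reused lines of code
ReDoc, NDoc = 30, 200  # reused pages / total pages of documentation

DevP = DevH / KLOC     # 150 development hours per KLOC
FDevP = DevH / NFP     # 50 development hours per function point
CRe = ReKLOC / KLOC    # 0.25 code reuse ratio
DocRe = ReDoc / NDoc   # 0.15 documentation reuse ratio
```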
Software Product Metrics
 HD quality metrics:
— HD (help desk) calls density metrics - measured by the number of calls.
— HD calls severity metrics - the severity of the HD issues raised.
— HD success metrics – the level of success in responding to HD
calls.
 HD productivity metrics.
 HD effectiveness metrics.
 Corrective maintenance quality metrics.
— software system failures density metrics
— software system failures severity metrics
— failures of maintenance services metrics
— software system availability metrics
 Corrective maintenance productivity and effectiveness
metrics.
Define New Software Quality Metrics
Complexity Metrics
 Halstead’s metrics
Complexity of a piece of code depends on:
— n1: number of unique operators
— n2: number of unique operands
— N1: total number of occurrences of operators
— N2: total number of occurrences of operands
 McCabe’s metrics
— graph complexity based on control flow graphs
Halstead’s Metrics [1977]
 Program length: N = N1 + N2
 Program vocabulary: n = n1 + n2
 Estimated length: Nˆ = n1 log2 n1 + n2 log2 n2
— a close estimate of the length for well-structured programs
 Purity ratio: PR = Nˆ / N
— code optimization: the higher the ratio above 1.0, the more optimized the code.
 Program volume: V = N log2 n
— the number of bits needed to provide a unique designator for each of the n items in the program vocabulary.
 Difficulty: D = (n1 / 2) × (N2 / n2)
 Program effort: E = D × V
— a good measure of program understandability
Exercise: Halstead’s Metrics
 Distinct operators:
if ( ) { } > < = * ;
 Distinct operands: k 5 2 x
 Code fragment:
if (k < 5)
{
    if (k > 2)
        x = x*k;
}
 n1: number of unique operators = 10
 n2: number of unique operands = 4
 N1: total number of occurrences of operators = 13
 N2: total number of occurrences of operands = 7
 Calculate Halstead’s metrics!
Exercise: Halstead’s Metrics
 Program length: N = N1 + N2 = 20
 Program vocabulary: n = n1 + n2 = 14
 Estimated length: Nˆ = n1 log2 n1 + n2 log2 n2 = 41.2
 Purity ratio: PR = Nˆ /N = 2.1
 Program volume: V = N log2 n = 76.1
 Difficulty: D = (n1 / 2) × (N2 / n2) = (10 / 2) × (7 / 4) = 8.75
 Program effort: E = D*V = 665.9
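The exercise can be reproduced in a few lines of Python; the counts are the ones above, and the difficulty formula is the standard Halstead definition used in the slide.

```python
import math

n1, n2 = 10, 4   # unique operators / operands
N1, N2 = 13, 7   # total occurrences of operators / operands

N = N1 + N2                                        # program length: 20
n = n1 + n2                                        # program vocabulary: 14
N_hat = n1 * math.log2(n1) + n2 * math.log2(n2)    # estimated length: ~41.2
PR = N_hat / N                                     # purity ratio: ~2.1
V = N * math.log2(n)                               # program volume: ~76.1
D = (n1 / 2) * (N2 / n2)                           # difficulty: 8.75
E = D * V                                          # effort: ~666 (665.9 with V rounded to 76.1)
```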
Object Oriented Metrics
 A way to characterize the “object-orientedness” of a design (Shyam Chidamber and Chris Kemerer, 1994):
— weighted methods per class
— depth of inheritance tree
— number of children
— coupling between object classes
— response for a class
— lack of cohesion in methods
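As a rough illustration (not part of the original slides), two of these metrics are easy to approximate with Python introspection; the class hierarchy below is made up.

```python
class Shape: pass
class Polygon(Shape): pass
class Triangle(Polygon): pass
class Circle(Shape): pass

def depth_of_inheritance(cls):
    """DIT: length of the inheritance path from the class to the hierarchy root."""
    return len(cls.__mro__) - 2   # exclude the class itself and 'object'

def number_of_children(cls):
    """NOC: number of immediate subclasses."""
    return len(cls.__subclasses__())

print(depth_of_inheritance(Triangle))   # 2 (Triangle -> Polygon -> Shape)
print(number_of_children(Shape))        # 2 (Polygon and Circle)
```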
Defect Cost Analysis
 Defect injection point:
— In what stage of the development cycle was the defect
put into the system?
 Defect detection point:
— In what stage of the development cycle was the defect
discovered?
 The latency between injection and detection point of a
defect:
— the longer, the more expensive to fix.
 This analysis helps evolve a process that prevents defects and reduces the latency.
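A minimal Python sketch of latency analysis, assuming a hypothetical ordering of life-cycle stages and invented defect records.

```python
STAGES = ["requirements", "design", "coding", "testing", "operation"]

defects = [
    {"id": 1, "injected": "design",       "detected": "testing"},
    {"id": 2, "injected": "requirements", "detected": "operation"},
    {"id": 3, "injected": "coding",       "detected": "coding"},
]

for d in defects:
    # Latency = number of life-cycle stages between injection and detection;
    # the larger it is, the more expensive the defect typically is to fix.
    latency = STAGES.index(d["detected"]) - STAGES.index(d["injected"])
    print(d["id"], latency)   # 1 -> 2, 2 -> 4, 3 -> 0
```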
Summary
 What is a software metric?
 Software size metrics
— KLOC
— Function Points
 Software quality metrics
— process metrics: development quality, timetable, effectiveness, productivity
— product metrics: maintenance
 Software complexity metrics
— Halstead’s metrics: n1, n2, N1, N2
 Defect Cost Analysis