SEA Side
Software Engineering Annotations – Annotation 11:
Software Metrics
A one-hour presentation to inform you of new techniques and practices in software development.
Professor Sara Stoecklin, Director of Software Engineering – Panama City
Florida State University – Computer Science
[email protected]
850-522-2091 / 850-522-2023 Ext. 182
Express in Numbers
Measurement provides a mechanism for objective evaluation.
Software Crisis
• According to American Programmer, 31.1% of computer software projects are canceled before they are completed
• 52.7% of projects overrun their initial cost estimates by 189%
• 94% of project start-ups are restarts of previously failed projects
Solution?
A systematic approach to software development and measurement.
Software Metrics
• Software metrics refers to a broad range of quantitative measurements for computer software that enable us to:
– improve the software process continuously
– assist in quality control and productivity
– assess the quality of technical products
– assist in tactical decision-making
Measure, Metrics, Indicators
• Measure – provides a quantitative indication of the extent, amount, dimension, capacity, or size of some attribute of a product or process
• Metric – relates the individual measures in some way
• Indicator – a combination of metrics that provides insight into the software process, project, or product itself
What Should Be Measured?
(Diagram: the process is measured to yield process metrics and project metrics; the product is measured to yield product metrics.)
What do we use as a basis?
• size?
• function?
Metrics of Process Improvement
• Focus on a manageable, repeatable process
• Use of statistical SQA on the process
• Defect removal efficiency
Statistical Software Process Improvement
• All errors and defects are categorized by origin
• The cost to correct each error and defect is recorded
• The number of errors and defects in each category is counted and ranked in descending order
• The overall cost in each category is computed
• Resultant data are analyzed and the "culprit" category is uncovered
• Plans are developed to eliminate the errors
Causes and Origin of Defects
Specification 25%, Logic 20%, User Interface 12%, Error Checking 11%, Data Handling 11%, Hardware Interface 8%, Standards 7%, Software Interface 6%
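A minimal sketch of the count-and-rank step, using the defect-origin percentages above as stand-ins for category counts (a real analysis would also weigh in the correction cost per category):

```python
# Minimal Pareto-style ranking of defect origins, using the percentages from
# the chart above. In a real SSPI effort the counts and correction costs per
# category would come from the defect-tracking system.
defect_share = {
    "Specification": 25, "Logic": 20, "User Interface": 12,
    "Error Checking": 11, "Data Handling": 11, "Hardware Interface": 8,
    "Standards": 7, "Software Interface": 6,
}

ranked = sorted(defect_share.items(), key=lambda kv: kv[1], reverse=True)

cumulative = 0
for category, share in ranked:
    cumulative += share
    print(f"{category:20s} {share:3d}%  (cumulative {cumulative}%)")
    if cumulative >= 80:          # the "vital few" that deserve attention first
        break
```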
Metrics of Project Management
• Budget
• Schedule / resource management
• Risk management
• Project goals met or exceeded
• Customer satisfaction
Metrics of the Software Product
• Focus on deliverable quality
• Analysis products
• Design product complexity – algorithmic, architectural, data flow
• Code products
• Production system
How Is Quality Measured?
• Analysis Metrics
– Function-based metrics: Function Points (Albrecht), Feature Points (C. Jones)
– Bang metric (DeMarco): functional primitives, data elements, objects, relationships, states, transitions, external manual primitives, input data elements, output data elements, persistent data elements, data tokens, relationship connections
Source Lines of Code (SLOC)
• Measures the number of physical lines of active code
• In general, the higher the SLOC in a module, the less understandable and maintainable the module is
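As a rough illustration, a minimal SLOC counter, assuming "active code" means lines that are neither blank nor whole-line comments (the comment prefix and file name are assumptions; see Step 7 later in the deck for why counting rules must be agreed on):

```python
# Minimal SLOC sketch: count physical lines that are neither blank nor pure
# comments. The comment prefix ("#") is an assumption; adjust per language.
def sloc(path: str, comment_prefix: str = "#") -> int:
    count = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            stripped = line.strip()
            if stripped and not stripped.startswith(comment_prefix):
                count += 1
    return count

# Example usage (hypothetical file name):
# print(sloc("module_under_review.py"))
```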
Function-Oriented Metric: Function Points
• Function points measure "how big" the program is, independently of its actual physical size
• They are a weighted count of several features of the program
• Critics claim function points make no sense with respect to the representational theory of measurement
• Nevertheless, some firms and institutions take them very seriously
Analyzing the Information Domain

measurement parameter           weighting factor (simple / average / complex)
number of user inputs           x 3 / 4 / 6    (Inputs x Wi)
number of user outputs          x 4 / 5 / 7    (Outputs x Wo)
number of user inquiries        x 3 / 4 / 6    (Inquiries x Win)
number of files                 x 7 / 10 / 15  (Internal Files x Wif)
number of ext. interfaces       x 5 / 7 / 10   (External Interfaces x Wei)

Each count is multiplied by the weighting factor for its complexity level, and the weighted counts are summed to give the count total.
Taking Complexity into Account

Factors are rated on a scale of 0 (not important) to 5 (very important): data communications, distributed functions, heavily used configuration, transaction rate, on-line data entry, end-user efficiency, on-line update, complex processing, installation ease, operational ease, multiple sites, facilitate change.

Formula: FP = count total x (0.65 + 0.01 x sum of Fi), where the Fi are the complexity factor ratings above and the parenthesized term is the complexity multiplier.
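Combining the information-domain counts with the complexity adjustment, a minimal sketch of the FP calculation; the counts and factor ratings below are hypothetical, and the weights are taken from the table above:

```python
# Weighting factors (simple, average, complex) from the information-domain table.
WEIGHTS = {
    "inputs":     (3, 4, 6),
    "outputs":    (4, 5, 7),
    "inquiries":  (3, 4, 6),
    "files":      (7, 10, 15),
    "interfaces": (5, 7, 10),
}

# Hypothetical counts for a small system, all weighted as "average" (index 1).
counts = {"inputs": 12, "outputs": 8, "inquiries": 5, "files": 4, "interfaces": 2}
count_total = sum(counts[k] * WEIGHTS[k][1] for k in counts)

# Hypothetical complexity-adjustment ratings, each on the 0..5 scale above.
factors = [3, 2, 4, 3, 5, 4, 3, 2, 3, 3, 1, 4]

fp = count_total * (0.65 + 0.01 * sum(factors))
print(f"count total = {count_total}, adjusted FP = {fp:.1f}")
```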
Typical Function-Oriented Metrics
• errors per FP
• defects per FP
• $ per FP
• pages of documentation per FP
• FP per person-month
LOC vs. FP
• The relationship between lines of code and function points depends upon the programming language that is used to implement the software and the quality of the design
• Empirical studies show an approximate relationship between LOC and FP
LOC/FP (average)
Assembly language              320
C                              128
COBOL, FORTRAN                 106
C++                             64
Visual Basic                    32
Smalltalk                       22
SQL                             12
Graphical languages (icons)      4
How Is Quality Measured?
• Design Metrics
– Structural complexity: fan-in, fan-out, morphology
– System complexity
– Data complexity
– Component metrics: size, modularity, localization, encapsulation, information hiding, inheritance, abstraction, complexity, coupling, cohesion, polymorphism
• Implementation Metrics: size, complexity, efficiency, etc.
Comment Percentage (CP)
• Number of commented lines of code divided by the number of non-blank lines of code
• Usually, 20% indicates adequate commenting for C or Fortran code
• The higher the CP value, the more maintainable the module is
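A minimal sketch of the CP calculation; it assumes whole-line "//" comments, so block comments and inline comments would need extra handling:

```python
# Minimal Comment Percentage sketch. Only full-line "//" comments are counted
# here; block comments (/* ... */) would need additional handling.
def comment_percentage(lines: list[str]) -> float:
    non_blank = [ln.strip() for ln in lines if ln.strip()]
    comments = [ln for ln in non_blank if ln.startswith("//")]
    return 100.0 * len(comments) / len(non_blank) if non_blank else 0.0

sample = [
    "// open the device",
    "fd = open(path);",
    "",
    "if (fd < 0) {",
    "    // report and bail out",
    "    return -1;",
    "}",
]
print(f"CP = {comment_percentage(sample):.0f}%")   # 2 of 6 non-blank lines -> 33%
```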
Size Oriented Metric - Fan In and Fan Out
• The Fan In of a module is the amount of information that "enters" the module
• The Fan Out of a module is the amount of information that "exits" the module
• All pieces of information are assumed to be the same size
• Fan In and Fan Out can be computed for functions, modules, objects, and also non-code components
• Goal: low Fan Out, for ease of maintenance
Size Oriented Metric - Halstead Software Science
Primitive measures:
• number of distinct operators (n1)
• number of distinct operands (n2)
• total number of operator occurrences (N1)
• total number of operand occurrences (N2)

Used to derive:
• maintenance effort of software
• testing time required for software
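The four primitives have standard derived quantities; a minimal sketch computing Halstead volume, difficulty, and effort, assuming the source has already been tokenized into operators and operands (the token lists below are hypothetical):

```python
import math

# Hypothetical token streams for a small fragment, already classified.
operators = ["=", "+", "=", "*", "return"]        # occurrences of operators
operands  = ["a", "b", "c", "a", "2", "a"]        # occurrences of operands

n1, n2 = len(set(operators)), len(set(operands))  # distinct operators / operands
N1, N2 = len(operators), len(operands)            # total occurrences

vocabulary = n1 + n2
length = N1 + N2
volume = length * math.log2(vocabulary)           # Halstead volume
difficulty = (n1 / 2) * (N2 / n2)                 # Halstead difficulty
effort = difficulty * volume                      # proxy for maintenance/testing effort

print(f"n1={n1} n2={n2} N1={N1} N2={N2} V={volume:.1f} E={effort:.1f}")
```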
Example: if (a) { X(); } else { Y(); }

(Flow graph: the predicate node a branches to nodes X and Y, which rejoin at a common exit node.)

V(G) = E - N + 2

where E = number of edges and N = number of nodes
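For the fragment above, the flow graph has four nodes (the predicate a, the branch nodes X and Y, and the node where the branches rejoin) and four edges, so V(G) = 4 - 4 + 2 = 2. Equivalently, V(G) equals the number of predicate nodes plus one: 1 + 1 = 2.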
McCabe's Metric
• The smaller the V(G), the simpler the module
• Modules with a V(G) larger than 10 are somewhat unmanageable
• A high cyclomatic complexity indicates that the code may be of low quality and difficult to test and maintain
Chidamber and Kemerer Metrics
• Weighted methods per class (WMC)
• Depth of inheritance tree (DIT)
• Number of children (NOC)
• Coupling between object classes (CBO)
• Response for class (RFC)
• Lack of cohesion metric (LCOM)
Weighted methods per class (WMC)

WMC = c1 + c2 + ... + cn, where ci is the complexity of method Mi of the class.

• Often, only public methods are considered
• Complexity may be the McCabe complexity of the method
• Smaller values are better
• Perhaps the average complexity per method is a better metric?

The number of methods and the complexity of the methods involved is a direct predictor of how much time and effort is required to develop and maintain the class.
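For illustration only, a rough sketch that approximates WMC for a Python class by weighting each method with an approximate McCabe complexity (1 + decision points); the Stack class is a made-up example, and a real measurement would normally use a dedicated metrics tool:

```python
import ast

# Decision-point node types used as a rough proxy for McCabe complexity.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def method_complexity(func: ast.FunctionDef) -> int:
    """Approximate V(G): 1 + number of decision points in the method body."""
    return 1 + sum(isinstance(n, BRANCH_NODES) for n in ast.walk(func))

def wmc(source: str, class_name: str) -> int:
    """Weighted Methods per Class, with complexity as the weight."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ClassDef) and node.name == class_name:
            return sum(method_complexity(m) for m in node.body
                       if isinstance(m, ast.FunctionDef))
    raise ValueError(f"class {class_name!r} not found")

sample = '''
class Stack:
    def push(self, x):
        self.items.append(x)
    def pop(self):
        if not self.items:
            raise IndexError("empty stack")
        return self.items.pop()
'''
print(wmc(sample, "Stack"))   # push -> 1, pop -> 2, so WMC = 3
```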
Depth of inheritance tree (DIT)
• For the system under examination, consider the hierarchy of classes
• DIT is the length of the maximum path from the node to the root of the tree
• Relates to the scope of the properties – how many ancestor classes can potentially affect a class
• Smaller values are better
Number of children (NOC)
• For any class in the inheritance tree, NOC is the number of immediate children of the class – the number of direct subclasses
• How would you interpret this number?
• A moderate value indicates scope for reuse and high values may indicate an inappropriate abstraction in the design
Coupling between object classes (CBO)
• For a class C, the CBO metric is the number of other classes to which the class is coupled
• A class X is coupled to class C if X operates on (affects) C, or C operates on X
• Excessive coupling indicates weakness of class encapsulation and may inhibit reuse
• High coupling also indicates that more faults may be introduced due to inter-class activities
Response for class (RFC)
• RFC = Mc1 + Mc2 + ... + Mcn, where Mci is the number of methods called in response to a message that invokes method Mi (the fully nested set of calls)
• Smaller numbers are better – larger numbers indicate increased complexity and debugging difficulties
• If a large number of methods can be invoked in response to a message, the testing and debugging of the class becomes more complicated
Lack of cohesion metric (LCOM)
• Number of methods in a class that reference a specific instance variable
• A measure of the "tightness" of the code
• If a method references many instance variables, then it is more complex and less cohesive
• The larger the number of similar methods in a class, the more cohesive the class is

Cohesiveness of methods within a class is desirable, since it promotes encapsulation
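One common way to make this concrete is the Chidamber-Kemerer formulation (often called LCOM1): compare pairs of methods that share no instance variables (P) with pairs that share at least one (Q), and take LCOM = max(P - Q, 0). A minimal sketch with hypothetical data:

```python
from itertools import combinations

# Hypothetical data: instance variables referenced by each method of a class.
uses = {
    "deposit":  {"balance"},
    "withdraw": {"balance"},
    "history":  {"log"},
}

p = q = 0
for m1, m2 in combinations(uses, 2):
    if uses[m1] & uses[m2]:
        q += 1        # pair shares at least one instance variable
    else:
        p += 1        # pair references disjoint instance variables

lcom = max(p - q, 0)  # Chidamber-Kemerer LCOM1
print(lcom)           # 2 disjoint pairs - 1 sharing pair = 1
```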
Testing Metrics
• Metrics that predict the likely number of tests required during various testing phases
• Metrics that focus on test coverage for a given component
Views on SE Measurement
12 Steps to Useful Software Metrics
Step 1 - Identify Metrics Customers
Step 2 - Target Goals
Step 3 - Ask Questions
Step 4 - Select Metrics
Step 5 - Standardize Definitions
Step 6 - Choose a Model
Step 7 - Establish Counting Criteria
Step 8 - Decide On Decision Criteria
Step 9 - Define Reporting Mechanisms
Step 10 - Determine Additional Qualifiers
Step 11 - Collect Data
Step 12 - Consider Human Factors
Step 1 - Identify Metrics Customers
Who needs the information?
Who’s going to use the metrics?
If the metric does not have a customer - do not use it.
Step 2 - Target Goals
Organizational goals
– Be the low-cost provider
– Meet projected revenue targets

Project goals
– Deliver the product by June 1st
– Finish the project within budget

Task goals (entry & exit criteria)
– Effectively inspect software module ABC
– Obtain 100% statement coverage during testing
Step 3 - Ask Questions
Goal: Maintain a high level of customer satisfaction
• What is our current level of customer satisfaction?
• What attributes of our products and services are most important to our customers?
• How do we compare with our competition?
Step 4 - Select Metrics
Select metrics that provide information to help answer the questions
• Be practical, realistic, pragmatic
• Consider the current engineering environment
• Start with the possible

Metrics don't solve problems – people solve problems. Metrics provide information so people can make better decisions.
Selecting Metrics
Goal: Ensure all known defects are corrected before shipment
Metrics Objective Statement Template
To (understand / evaluate / control / predict) the (attribute) of the (entity) in order to (goal(s)).

Example – Metric: % defects corrected

To evaluate the % of defects found & corrected during testing in order to ensure all known defects are corrected before shipment.
Step 5 - Standardize Definitions
Step 6 - Choose a Model

Models for code inspection metrics
• Primitive measurements:
– Lines of Code Inspected = loc
– Hours Spent Preparing = prep_hrs
– Hours Spent Inspecting = in_hrs
– Discovered Defects = defects
• Other measurements:
– Preparation Rate = loc / prep_hrs
– Inspection Rate = loc / in_hrs
– Defect Detection Rate = defects / (prep_hrs + in_hrs)
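A minimal sketch of this inspection model; the variable names follow the slide and the numeric values are hypothetical:

```python
# Code-inspection metrics from the primitive measurements above.
loc = 400          # lines of code inspected
prep_hrs = 5.0     # hours spent preparing
in_hrs = 2.5       # hours spent inspecting
defects = 12       # defects discovered

preparation_rate = loc / prep_hrs                      # LOC prepared per hour
inspection_rate = loc / in_hrs                         # LOC inspected per hour
defect_detection_rate = defects / (prep_hrs + in_hrs)  # defects per hour of effort

print(f"prep rate: {preparation_rate:.0f} LOC/hr, "
      f"inspection rate: {inspection_rate:.0f} LOC/hr, "
      f"detection rate: {defect_detection_rate:.1f} defects/hr")
```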
Step 7 - Establish Counting Criteria
Lines of Code
• Variations in counting
• No industry-accepted standard
• SEI guideline – check sheets for criteria
• Advice: use a tool
Counting Criteria - Effort
What is a software project?
• When does it start / stop?
• What activities does it include?
• Who works on it?
Step 8 - Decide On Decision Criteria
• Establish baselines
– Current value: problem report backlog, defect-prone modules
– Statistical analysis (mean & distribution): defect density, fix response time, cycle time, variance from budget (e.g., cost, schedule) – see the sketch below
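As an illustration of a statistical decision criterion, a minimal sketch that flags modules whose defect density lies more than one standard deviation above the baseline mean (the module names, densities, and the one-sigma cut-off are all assumptions):

```python
import statistics

# Hypothetical defect densities (defects per KLOC) from the current baseline.
density = {"parser": 2.1, "ui": 3.4, "db": 2.8, "network": 9.7, "report": 2.5}

mean = statistics.mean(density.values())
stdev = statistics.pstdev(density.values())
threshold = mean + stdev            # simple decision criterion; the cut-off is a policy choice

flagged = [m for m, d in density.items() if d > threshold]
print(f"mean={mean:.2f}, stdev={stdev:.2f}, defect-prone modules: {flagged}")
```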
Step 9 - Define Reporting Mechanisms
(Example report: a table of monthly problem-report counts – Open, Fixed, and Resolved – for Jan-97 through Apr-97, together with sample quarterly and monthly trend charts.)
Step 10 - Determine Additional Qualifiers
A good metric is a generic metric.
Additional qualifiers:
• Provide demographic information
• Allow detailed analysis at multiple levels
• Define additional data requirements
Additional Qualifier Example
Metric: software defect arrival rate
• Release / product / product line
• Module / program / subsystem
• Reporting customer / customer group
• Root cause
• Phase found / phase introduced
• Severity
Step 11 – Collect Data
What data to collect?
• Metric primitives
• Additional qualifiers

Who should collect the data?
• The data owner
– Direct access to source of data
– Responsible for generating data
– Owners more likely to detect anomalies
– Eliminates double data entry
Examples of Data Ownership
Owner – Examples of Data Owned
• Management – schedule; budget
• Engineers – time spent per task; inspection data, including defects found; root cause of defects
• Testers – test cases planned / executed / passed; problems; test coverage
• Configuration management specialists – lines of code; modules changed
• Users – problems; operation hours
Step 12 – Consider Human Factors
The People Side of the Metrics Equation
• How measures affect people
• How people affect measures

"Don't underestimate the intelligence of your engineers. For any one metric you can come up with, they will find at least two ways to beat it." [unknown]
Don't
• Measure individuals
• Use metrics as a "stick"
• Use only one metric (quality, cost, schedule)
• Ignore the data
Do
• Select metrics based on goals – each goal leads to questions, and each question leads to metrics [Basili-88]
• Provide feedback to the data providers on the metrics derived from their data
• Obtain "buy-in"
• Focus on processes, products & services
References
• Chidamber, S. R. and Kemerer, C. F., "A Metrics Suite for Object Oriented Design", IEEE Transactions on Software Engineering, Vol. 20, No. 6, June 1994.
• Hitz, M. and Montazeri, B., "Chidamber and Kemerer's Metrics Suite: A Measurement Theory Perspective", IEEE Transactions on Software Engineering, Vol. 22, No. 4, April 1996.
• Lacovara, R. C. and Stark, G. E., "A Short Guide to Complexity Analysis, Interpretation and Application", May 17, 1994. http://members.aol.com/GEShome/complexity/Comp.html
• Tang, M., Kao, M., and Chen, M., "An Empirical Study on Object-Oriented Metrics", IEEE, 1999.
• Tegarden, D., Sheetz, S., and Monarchi, D., "Effectiveness of Traditional Software Metrics for Object-Oriented Systems", Proceedings: 25th Hawaii International Conference on System Sciences, January 1992, pp. 359-368.
• "Principal Components of Orthogonal Object-Oriented Metrics", http://satc.gsfc.nasa.gov/support/OSMASAS_SEP01/Principal_Components_of_Orthogonal_Object_Oriented_Metrics.htm
• Behforooz, A. and Hudson, F., Software Engineering Fundamentals, Oxford University Press, 1996, Chapter 18: Software Quality and Quality Assurance.
• Pressman, R., Software Engineering: A Practitioner's Approach, McGraw-Hill, 1997.
• IEEE Standard for a Software Quality Metrics Methodology (IEEE Std 1061).
• Henderson-Sellers, B., Object-Oriented Metrics, Prentice-Hall, 1996.