COCOMO Estimation at CCD

IV&V Work Review
• "The value-based V&V approach holds great appeal – a more intensive and focused V&V process, since items are prioritized and rated by importance and likelihood of containing errors. This is meant to let you allocate your time according to how likely errors are to occur in an artifact and how much damage they could do. By choosing to review those areas that have changed or are directly impacted by changes in the other documents, I believe I can spend more quality time reviewing the changes and give greater emphasis to the changes and their impacts."
• "The Top 10 issue list gives a centralized location for showing the issues, as opposed to spreading them across several documents. Additionally, by prioritizing the significance of each issue, it gives document authors a better picture of which issues they should spend more time resolving and lets them know which ones are more important to resolve. Previously, they would have tackled the issues in any particular order and may not have spent the necessary time or detail to ensure proper resolution. Focusing on a top 10 list helps me look at the bigger picture instead of worrying about as many minor problems, which results in documents with fewer big problems."
• "For the review of the Draft FC Package, the Value-based IIV&V Process will be used. This review process was selected because of the time constraint of this review: there is only one weekend to review all seven Draft FC Package documents. The value-based review will allow me to prioritize the documents based on importance, quality risk, dependencies, and reviewing cost. The documents will be reviewed based on their identified priority. This allows documents more critical to the success of the project to be reviewed first and given more time."
There is no end to improvement
Feedback from IC
• Bugzilla
– Make it easier to search by fix agent, bug type, etc.
– Higher criticality threshold for Bugzilla entries
– Improve procedures
• Too lock-step; too slow
• Took a very long time to enter a bug and to close it
– Consider alternatives to Bugzilla
• JIRA, Redmine, Trac, GForge, FusionForge
Feedback from IC
• IV&V very helpful
• Exit criteria could be improved
– Only helpful early
– Go value-based early
• Clarify and reinforce IV&V roles and activities
• Tendency of IV&Vers to destabilize design (sometimes good)
• More up-front participation
• Tasks for the two IV&Vers are not clearly defined
• Consider peer review as part of the artifact submission process
Suggestions to IV&Vers and Developers
• Prioritize tasks or artifacts
• Differentiate the severity and priority of defects
• Prioritize defects based on their severity and priority
• Take any chance to improve communication & process
Software Test
Qi Li
University of Southern California-CSSE
Outline
• Software Test in General
• Value-based Software Test
Most Common Software Problems
• Incorrect calculation
• Incorrect data edits & ineffective data edits
• Incorrect matching and merging of data
• Data searches that yield incorrect results
• Incorrect processing of data relationships
• Incorrect coding / implementation of business rules
• Inadequate software performance
• Confusing or misleading data
• Software usability by end users & obsolete software
• Inconsistent processing
• Unreliable results or performance
• Inadequate support of business needs
• Incorrect or inadequate interfaces with other systems
• Inadequate performance and security controls
• Incorrect file handling
Cost to fix faults
Relative cost to fix a fault: 1x during Definition, 1.5x to 6x during Development, and 60x to 100x Post-Release.
Objectives of testing
• Executing a program with the intent of finding an error.
• To check whether the system meets its requirements and can be executed successfully in the intended environment.
• To check whether the system is "fit for purpose".
• To check whether the system does what it is expected to do.
Objectives of testing
• A good test case is one that has a high probability of finding an as-yet-undiscovered error.
• A successful test is one that uncovers an as-yet-undiscovered error.
• A good test is not redundant.
• A good test should be "best of breed".
• A good test should be neither too simple nor too complex.
Objective of a Software Tester
• Find bugs as early as possible and make sure they get fixed.
• Understand the application well.
• Study the functionality in detail to find where bugs are likely to occur.
• Study the code to ensure that each and every line of code is tested.
• Create test cases in such a way that testing uncovers hidden bugs and also ensures that the software is usable and reliable.
Static and Dynamic Verification
• Software inspections and walkthroughs - concerned with analysis of the static system representation to discover problems (static verification)
• Software testing - concerned with exercising and observing product behaviour (dynamic verification)
– The system is executed with test data and its operational behaviour is observed
Inspections and testing
• Inspections and testing are complementary and not opposing verification techniques
• Both should be used during the V&V process
• Inspections can check conformance with a specification but not conformance with the customer's real requirements
• Inspections cannot check non-functional characteristics such as performance, usability, etc.
[Figure: percentages of E_Re%, E_PA%, E_Re%+E_PA%, and E_Test% (0% to 60%) across versions V2.5, V2.6, V3.0, and V3.1.]
Qi Li, Fengdi Shu, Barry W. Boehm, Qing Wang: Improving the ROI of Software Quality Assurance Activities: An Empirical Study. ICSP 2010: 357-368.
Test data and test cases
• Test data: inputs which have been devised to test the system
• Test cases: inputs to test the system and the predicted outputs from these inputs if the system operates according to its specification
Methods of testing
• Test to specification:
– Black box
– Data driven
– Functional testing
– Code is ignored: only use the specification document to develop test cases
• Test to code:
– Glass box / white box
– Logic-driven testing
– Ignore the specification and only examine the code
Black-box testing
• An approach to testing where the program is considered as a 'black box'
• The program test cases are based on the system specification
• Test planning can begin early in the software process
Black-box testing
[Figure: black-box testing model. Input test data, including the inputs causing anomalous behaviour, is fed to the system; the output test results include the outputs which reveal the presence of defects.]
Paring down test cases
• Use methods that take advantage of
symmetries, data equivalencies, and
independencies to reduce the number of
necessary test cases.
– Equivalence Testing
– Boundary Value Analysis
• Determine the ranges of working system
• Develop equivalence classes of test cases
• Examine the boundaries of these classes
carefully
Equivalence partitioning
• Input data and output results often fall into
different classes where all members of a
class are related
• Each of these classes is an equivalence
partition where the program behaves in an
equivalent way for each class member
• Test cases should be chosen from each
partition
Equivalence partitioning
[Figure: sets of invalid and valid inputs feed into the system, which produces outputs.]
Boundary value testing
• Partition system inputs and outputs into 'equivalence sets'
– If the input is a 5-digit integer between 10,000 and 99,999, the equivalence partitions are < 10,000, 10,000 to 99,999, and > 99,999
• Choose test cases at the boundaries of these sets
– 00000, 09999, 10000, 99999, 100000 (a short test sketch follows this list)
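To make the two techniques above concrete, here is a minimal Python sketch; `is_valid_order_id` is a hypothetical validator invented for illustration, and the partitions and boundary values mirror the 5-digit example on this slide.

```python
# Minimal sketch of equivalence partitioning and boundary value analysis.
# `is_valid_order_id` is a hypothetical function used only for illustration.

def is_valid_order_id(value: int) -> bool:
    """Accept 5-digit integers in the range 10,000..99,999 (assumed spec)."""
    return 10_000 <= value <= 99_999

# One representative per equivalence partition: below, inside, above the range.
partition_cases = {
    5_000: False,    # partition: < 10,000
    50_000: True,    # partition: 10,000..99,999
    150_000: False,  # partition: > 99,999
}

# Boundary values: just outside and just inside each edge of the valid range.
boundary_cases = {
    9_999: False,
    10_000: True,
    99_999: True,
    100_000: False,
}

def run_cases(cases: dict) -> None:
    for value, expected in cases.items():
        actual = is_valid_order_id(value)
        assert actual == expected, f"{value}: expected {expected}, got {actual}"

if __name__ == "__main__":
    run_cases(partition_cases)
    run_cases(boundary_cases)
    print("All equivalence and boundary cases passed.")
```

Each partition contributes one representative input, and each boundary contributes the values on either side of it, which keeps the test set small without losing the interesting cases.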
Equivalence partitions
[Figure: two number lines of equivalence partitions. Number of input values: less than 4, between 4 and 10, more than 10, with marked points 3, 4, 7, 10, 11. Input values: less than 10000, between 10000 and 99999, more than 99999, with marked points 9999, 10000, 50000, 99999, 100000.]
Testing Levels
• Unit testing
• Integration testing
• System testing
• Acceptance testing
Unit testing
• The most ‘micro’ scale of testing.
• Tests done on particular functions or code modules.
• Requires knowledge of the internal program design and code.
• Done by programmers (not by testers); a minimal example is sketched below.
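As a minimal illustration of this level, the sketch below unit-tests a single hypothetical function with Python's built-in `unittest`; the function and its expected behavior are invented for the example.

```python
import unittest

def apply_discount(price: float, rate: float) -> float:
    """Hypothetical unit under test: apply a fractional discount to a price."""
    if not 0.0 <= rate <= 1.0:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1.0 - rate), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 0.25), 75.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(19.99, 0.0), 19.99)

    def test_invalid_rate_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 1.5)

if __name__ == "__main__":
    unittest.main()
```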
Integration Testing
– Testing of combined parts of an application to determine their functional correctness.
– 'Parts' can be:
• code modules
• individual applications
• client/server applications on a network
Systems Testing
 To test the co-existence of products and
applications that are required to perform
together in the production-like operational
environment (hardware, software, network)
 To ensure that the system functions
together with all the components of its
environment as a total system
 To ensure that the system releases can be
deployed in the current environment
Acceptance Testing
Objectives: To verify that the system meets the user requirements
When: After System Testing
Input: Business Needs & Detailed Requirements; Master Test Plan; User Acceptance Test Plan
Output: User Acceptance Test report
Load testing
– Testing an application under heavy loads.
– E.g., testing a web site under a range of loads to determine when the system response time degrades or the system fails (a rough sketch follows below).
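A rough sketch of the idea (not any particular load-testing tool): it steps up the number of concurrent users against a placeholder URL and reports response times. The URL is an assumption for illustration, and the third-party `requests` package is assumed to be installed.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party HTTP client, assumed available

TARGET_URL = "http://example.com/"  # placeholder target, not a real system under test

def timed_request(_: int) -> float:
    """Issue one GET request and return its elapsed time in seconds."""
    start = time.perf_counter()
    requests.get(TARGET_URL, timeout=10)
    return time.perf_counter() - start

def run_load(concurrent_users: int, requests_per_user: int) -> None:
    """Fire a batch of requests with a fixed number of concurrent workers."""
    total = concurrent_users * requests_per_user
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        durations = list(pool.map(timed_request, range(total)))
    print(f"{concurrent_users} users, {total} requests: "
          f"avg {sum(durations) / len(durations):.3f}s, max {max(durations):.3f}s")

if __name__ == "__main__":
    # Step the load up to see where response time starts to degrade.
    for users in (1, 5, 10):
        run_load(users, requests_per_user=5)
```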
Stress Testing
– Testing under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database, etc.
– The term is often used interchangeably with 'load' and 'performance' testing.
Performance testing
– Testing how well an application complies with its performance requirements.
Alpha testing
• Testing done when development is nearing completion; minor design changes may still be made as a result of such testing.
Beta testing
• Testing when development and testing are essentially completed and final bugs and problems need to be found before release.
Good Test Plans
• Developed and Reviewed early.
• Clear, Complete and Specific
• Specifies tangible deliverables that can be inspected.
• Staff knows what to expect and when to expect it.
Good Test Plans
• Realistic quality levels for goals
• Includes time for planning
• Can be monitored and updated
• Includes user responsibilities
• Based on past experience
• Recognizes learning curves
Test Case Contents
– Test plan reference id
– Test case
– Test condition
– Expected behavior (a minimal record sketch follows below)
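A minimal sketch of such a record as a Python dataclass; the field names mirror the list above, and the identifiers in the example are made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class TestCaseRecord:
    """One row of a test case log; field names mirror the slide's contents list."""
    test_plan_reference_id: str   # which test plan this case traces back to
    test_case: str                # short identifier or name of the case
    test_condition: str           # the condition / input being exercised
    expected_behavior: str        # the observable result that counts as a pass

# Example record, using made-up identifiers purely for illustration.
login_case = TestCaseRecord(
    test_plan_reference_id="TP-01",
    test_case="TC-LOGIN-003",
    test_condition="Password field left empty on the login form",
    expected_behavior="Form is rejected with a 'password required' message",
)

if __name__ == "__main__":
    print(login_case)
```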
Good Test Cases
Find Defects
• Have high probability of finding a new defect.
• Unambiguous tangible result that can be inspected.
• Repeatable and predictable.
Good Test Cases
• Traceable to requirements or design documents
• Push the system to its limits
• Execution and tracking can be automated
• Do not mislead
• Feasible
Outline
• Software Test in General
• Value-based Software Test
Pareto 80-20 distribution of test case value
[Bullock, 2000]
[Figure: % of value for correct customer billing (0 to 100) plotted against customer type (5, 10, 15), comparing "actual business value" with an "automated test generation tool - all tests have equal value*" baseline. *Usual SwE assumption for all requirements, objects, defects, ...]
Business Case for Value-Based
Testing
[Figure: return on investment (ROI), from -1 to 2, plotted against % of tests run (0 to 100), comparing Pareto testing with ATG testing.]
Value-based Software Testing Framework: Feature Prioritization
How much testing is enough?
Li, Q., Yang, Y., Li, M., Wang, Q., Boehm, B. W. and Hu, C., Improving software testing process: feature prioritization to make winners of success-critical stakeholders. Journal of Software Maintenance and Evolution: Research and Practice. doi: 10.1002/smr.512
Value-based Test Case Prioritization
[Figure: test case status state machine. A test case starts as Not-Tested-Yet and becomes Ready-to-Test when it has no dependencies or all test cases in its Dependencies Set have passed; it then ends as Passed or Failed. When a test case fails, the status of all test cases that depend on the failed test case is changed to NA.]
Value-based Test Order Logic
• Value first: test the case with the highest value.
• Dependency second: if the test case with the highest value is not "Ready-to-Test", at least one of the test cases in its Dependencies Set is "Not-Tested-Yet". In that situation, prioritize the "Not-Tested-Yet" test cases in this Dependencies Set according to "Value first" and test them until all test cases in the Dependencies Set are "Passed"; the test case with the highest value is then "Ready-to-Test".
• Shrink the prioritization set ASAP: exclude tested cases from the prioritization set (a sketch of this logic follows below).
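A minimal Python sketch of this ordering logic, assuming each test case carries a value, a set of prerequisite case names, and one of the statuses from the state diagram; the names `TestCase`, `pick_next`, and `record_result` are illustrative, not taken from the slides, and dependencies are assumed to be acyclic.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """Illustrative test case record: a value and the names of its prerequisites."""
    name: str
    value: float
    depends_on: set = field(default_factory=set)
    status: str = "Not-Tested-Yet"   # Not-Tested-Yet, Passed, Failed, or NA

def pick_next(cases):
    """Value first, dependency second: return the next case to execute, or None."""
    untested = [c for c in cases.values() if c.status == "Not-Tested-Yet"]
    if not untested:
        return None
    # Value first: start from the highest-value untested case.
    candidate = max(untested, key=lambda c: c.value)
    # Dependency second: while prerequisites are untested, descend to the
    # highest-value untested prerequisite (dependency graph assumed acyclic).
    while True:
        pending = [cases[d] for d in candidate.depends_on
                   if d in cases and cases[d].status == "Not-Tested-Yet"]
        if not pending:
            return candidate
        candidate = max(pending, key=lambda c: c.value)

def record_result(cases, case, passed):
    """Shrink the prioritization set: passed cases drop out; a failed case also
    marks its untested direct dependents as NA (transitive handling omitted)."""
    case.status = "Passed" if passed else "Failed"
    if not passed:
        for other in cases.values():
            if case.name in other.depends_on and other.status == "Not-Tested-Yet":
                other.status = "NA"

# Tiny usage example with made-up cases and values.
cases = {c.name: c for c in [
    TestCase("TC-A", value=10, depends_on={"TC-B"}),
    TestCase("TC-B", value=2),
    TestCase("TC-C", value=7),
]}
while (nxt := pick_next(cases)) is not None:
    record_result(cases, nxt, passed=True)   # pretend every case passes
    print("executed", nxt.name)              # order: TC-B, TC-A, TC-C
```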
Value-based Test Order Logic
Value-based Prioritization for One Regression Testing Round
[Flowchart: in the whole set, pick the test case with the highest value; if it has dependencies that have not all passed, prioritize within its Dependencies Set until it is Ready-to-Test, then start to test. If the test passes, exclude the "Passed" case from prioritization; if it fails, exclude the "Failed" case and the "NA" cases that depend on it. Repeat over multiple regression test rounds until all test cases are "Passed".]
Test Case Dependency Tree
[Figure: test case dependency tree rooted at Start; nodes are annotated with numeric pairs: TC1.1.1 (8, 8), TC1.1.2 (1, 1), TC1.1.3 (1, 4.5), TC1.2.1 (8, 8), TC1.2.2 (1, 4.5), TC1.2.3 (1, 1), TC1.2.4 (1, 4.5), TC2.1.1 (12, 9.3), TC3.1.1 (16, 11), TC3.2.1 (12, 11.2), TC3.2.1 (4, 10), TC3.3.1 (4, 9.6), TC4.1.1 (15, 11.8), TC4.2.1 (10, 11.5), TC5.1.1 (2, 6), TC5.2.1 (2, 10.1).]
Accumulated Cost-Effectiveness (ACE) of Test
[Figure: accumulated cost-effectiveness, ranging from 0 to 70, over 15 test steps, comparing Test Case Prioritization with Feature Prioritization; an illustrative calculation follows below.]
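The data behind this chart is not reproducible from the transcript. As an illustration only, the sketch below tabulates cumulative value against cumulative cost when a few test cases are run in descending value-per-cost order; treating the dependency-tree pairs as (cost, value) and using this simple ordering rule are assumptions, not the ACE metric from the cited study.

```python
# Illustrative only: cumulative value vs. cumulative cost for a prioritized
# test order. The (cost, value) pairs reuse a few numbers from the
# dependency-tree slide under the assumption that the first element is cost
# and the second is value.
test_cases = [
    ("TC4.1.1", 15, 11.8),
    ("TC3.1.1", 16, 11.0),
    ("TC2.1.1", 12, 9.3),
    ("TC1.1.1", 8, 8.0),
    ("TC5.1.1", 2, 6.0),
]

# Execute in descending value-per-cost order (one simple prioritization rule).
ordered = sorted(test_cases, key=lambda tc: tc[2] / tc[1], reverse=True)

cum_cost = cum_value = 0.0
print(f"{'test case':<10}{'cum cost':>10}{'cum value':>11}")
for name, cost, value in ordered:
    cum_cost += cost
    cum_value += value
    print(f"{name:<10}{cum_cost:>10.1f}{cum_value:>11.1f}")
```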
Other information
• CSCI 599 Program Analysis & Software Testing, Professor William Halfond
• Testing tools:
– http://www.opensourcetesting.org/functional.php
– http://www.softwareqatest.com/qatweb1.html