
The Software Development
Life Cycle: An Overview
Presented by
Maxwell Drew
and
Dan Kaiser
Southwest State University
Computer Science Program
Last Time
- Introduction to the Principles of Testing
- The Testing Process
- Schwan's Development Standards
- MSF Implementation and Testing
- RUP Implementation and Testing
Session 7:
Testing and Deployment
- Brief review of the testing process
- Dynamic Testing Methods
- Static Testing Methods
- Deployment in MSF
- Deployment in RUP
Overall Goal
Our overall goal is still to validate and verify the software.

- Validation: "Are we building the right product?"
- Verification: "Are we building the product right?"
The V-model of Development
[Figure: The V-model of development. Development flows down the left side: requirements specification -> system specification -> system design -> detailed design -> module and unit code and test. Each left-side stage yields a test plan (acceptance test plan, system integration test plan, sub-system integration test plan) that drives the corresponding right-side activity: sub-system integration test -> system integration test -> acceptance test -> service.]
Acceptance Tests
- Pilot test: install on an experimental basis
- Alpha test: in-house test
- Beta test: customer pilot
- Parallel testing: new system operates in parallel with the old system
Integration Testing Strategies
Strategies covered:

- Top-down testing
- Bottom-up testing
- Thread testing
- Stress testing
- Back-to-back testing
Dynamic vs Static Verification
- Dynamic verification: concerned with exercising and observing product behavior (testing)
- Static verification: concerned with analysis of the static system representation to discover problems
Static and Dynamic V&V
[Figure: Static verification applies across the requirements specification, high-level design, formal specification, detailed design, and program; dynamic validation applies to the prototype and the program.]
Methods of Dynamic V&V
- Black-box testing
- Structural testing (AKA white-box or glass-box testing)
Defect testing
- The objective of defect testing is to discover defects in programs
- A successful defect test is a test which causes a program to behave in an anomalous way
- Tests show the presence, not the absence, of defects
Test data and test cases
- Test data: inputs which have been devised to test the system
- Test cases: inputs to test the system and the predicted outputs from these inputs if the system operates according to its specification
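The distinction can be sketched in code (a hypothetical illustration, not from the slides): test data is just a devised input, while a test case pairs the input with the output predicted by the specification.

```cpp
#include <vector>

// Hypothetical example: a test case couples test data (the input)
// with the output the specification predicts for that input.
struct TestCase {
    std::vector<int> input;   // test data
    int expected;             // predicted output per the specification
};

// Example test cases for a routine specified to return the largest element.
std::vector<TestCase> largest_element_cases() {
    return {
        { {3, 1, 2},  3 },
        { {7},        7 },
        { {-5, -9},  -5 },
    };
}
```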
The defect testing process
[Figure: The defect testing process: design test cases -> test cases; prepare test data -> test data; run program with test data -> test results; compare results to test cases -> test reports.]
Black-box testing
- Approach to testing where the program is considered as a 'black box'
- The program test cases are based on the system specification
- Test planning can begin early in the software process
Black-box testing
[Figure: Black-box testing: input test data I enters the system; the subset Ie contains inputs causing anomalous behaviour; the output test results O include the subset Oe, outputs which reveal the presence of defects.]
Equivalence partitioning
- Partition system inputs and outputs into 'equivalence sets'
  - If the input is a 5-digit integer between 10,000 and 99,999, then the equivalence partitions are <10,000, 10,000-99,999, and >99,999
- Choose test cases at the boundaries of these sets
  - 00000, 09999, 10000, 99999, 100000
Equivalence partitions
[Figure: Equivalence partitions. Number of input values: less than 4, between 4 and 10, and more than 10, with test values 3, 4, 7, 10, 11. Input values: less than 10000, between 10000 and 99999, and more than 99999, with test values 9999, 10000, 50000, 99999, 100000.]
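The 5-digit-integer partitioning can be sketched as a small classifier (hypothetical code, not from the slides); the boundary values 09999, 10000, 99999, and 100000 are then the natural test inputs.

```cpp
#include <string>

// Classifies an input against the three equivalence partitions of the
// 5-digit-integer example: <10,000, 10,000-99,999, and >99,999.
std::string partition_of(int value) {
    if (value < 10000) return "invalid: < 10,000";
    if (value > 99999) return "invalid: > 99,999";
    return "valid: 10,000-99,999";
}
```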
Search routine specification
procedure Search (Key : ELEM; T : ELEM_ARRAY; Found : BOOLEAN; L : ELEM_INDEX);

Pre-condition
-- the array has at least one element
T[FIRST] <= T[LAST]

Post-condition
-- the element is found and is referenced by L
( Found and T[L] = Key )
or
-- the element is not in the array
( not Found and not (exists i, FIRST <= i <= LAST, T[i] = Key) )
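A minimal linear search satisfying this spec might look as follows (a sketch: the C++ signature with a vector and reference parameters is our choice, and L is 1-based to match the slides' test cases, which report the single-element match at position 1).

```cpp
#include <vector>

// Linear search meeting the Search spec: on success, found is true and
// L references the key's (1-based) position; on failure, found is false
// and L is unspecified (set to 0 here).
void search(int key, const std::vector<int>& T, bool& found, int& L) {
    found = false;
    L = 0;                                    // unspecified when not found
    for (std::size_t i = 0; i < T.size(); ++i) {
        if (T[i] == key) {
            found = true;
            L = static_cast<int>(i) + 1;      // 1-based position
            return;
        }
    }
}
```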
Search Routine - Input Partitions
- Inputs which conform to the pre-conditions
- Inputs where a pre-condition does not hold
- Inputs where the key element is a member of the array
- Inputs where the key element is not a member of the array
Testing Guidelines (Arrays)
- Test software with arrays which have only a single value
- Use arrays of different sizes in different tests
- Derive tests so that the first, middle and last elements of the array are accessed
- Test with arrays of zero length (if allowed by the programming language)
Search routine - input partitions
Array                  Element
Single value           In array
Single value           Not in array
More than 1 value      First element in array
More than 1 value      Last element in array
More than 1 value      Middle element in array
More than 1 value      Not in array
Search Routine - Test Cases
Input array (T)               Key    Output (Found, L)
17                            17     true, 1
17                            0      false, ??
17, 29, 21, 23                17     true, 1
41, 18, 9, 31, 30, 16, 45     45     true, 7
17, 18, 21, 23, 29, 41, 38    23     true, 4
21, 23, 29, 33, 38            25     false, ??
Structural testing
- Sometimes called white-box or glass-box testing
- Derivation of test cases according to program structure; knowledge of the program is used to identify additional test cases
- Objective is to exercise all program statements (not all path combinations)
White-box testing
[Figure: White-box testing: test data derives the tests, which execute against the component code to produce test outputs.]
Binary Search
void Binary_search (elem key, elem T[], int size, bool &found, int &L)
{
    int bott, top, mid;
    bott = 0;  top = size - 1;
    L = (top + bott) / 2;
    if (T[L] == key)
        found = true;
    else
        found = false;
    while (bott <= top && !found) {
        mid = (top + bott) / 2;   // parenthesized: top + bott / 2 would divide only bott
        if (T[mid] == key) {
            found = true;
            L = mid;
        }
        else if (T[mid] < key)
            bott = mid + 1;
        else
            top = mid - 1;
    }  // end while
}  // binary_search
Binary Search - Equiv. Partitions
- Pre-conditions satisfied, key element in array
- Pre-conditions satisfied, key element not in array
- Pre-conditions unsatisfied, key element in array
- Pre-conditions unsatisfied, key element not in array
- Input array has a single value
- Input array has an even number of values
- Input array has an odd number of values
Binary search equiv. partitions
[Figure: Binary search equivalence partitions: the equivalence class boundaries split a sorted array around its mid-point into elements < mid, the mid-point itself, and elements > mid.]
Binary search - test cases
Input array (T)               Key    Output (Found, L)
17                            17     true, 1
17                            0      false, ??
17, 21, 23, 29                17     true, 1
9, 16, 18, 30, 31, 41, 45     45     true, 7
17, 18, 21, 23, 29, 38, 41    23     true, 4
17, 18, 21, 23, 29, 33, 38    21     true, 3
12, 18, 21, 23, 32            23     true, 4
21, 23, 29, 33, 38            25     false, ??
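A driver running these cases can be sketched as follows (the harness and its names are ours, not the slides'; the table reports 1-based positions while the C code uses 0-based indices, hence the +1).

```cpp
#include <vector>

typedef int elem;

// The slides' binary search, with the mid computation parenthesized.
void Binary_search(elem key, elem T[], int size, bool &found, int &L) {
    int bott = 0, top = size - 1, mid;
    L = (top + bott) / 2;
    found = (T[L] == key);
    while (bott <= top && !found) {
        mid = (top + bott) / 2;
        if (T[mid] == key) { found = true; L = mid; }
        else if (T[mid] < key) bott = mid + 1;
        else top = mid - 1;
    }
}

// Runs one row of the test-case table; exp_L is the expected 1-based
// position (ignored when exp_found is false, matching the table's "??").
bool run_case(std::vector<elem> T, elem key, bool exp_found, int exp_L) {
    bool found; int L;
    Binary_search(key, T.data(), static_cast<int>(T.size()), found, L);
    if (found != exp_found) return false;
    return !exp_found || (L + 1 == exp_L);
}
```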
Binary Search
void Binary_search (elem key, elem T[], int size, bool &found, int &L)
{
    int bott, top, mid;                 // 1
    bott = 0;  top = size - 1;          // 1
    L = (top + bott) / 2;               // 1
    found = (T[L] == key);              // 1
    while (bott <= top) {               // 2  (while Bott <= Top loop)
        if (found)                      // 3  (if not Found then ...)
            break;                      // 4  (Found: leave the loop)
        mid = (top + bott) / 2;         // 5
        if (T[mid] == key) {            // 5  (if T(mid) = Key then ...)
            found = true;               // 6
            L = mid;                    // 6
        }
        else if (T[mid] < key)          // 7  (if T(mid) < Key then ...)
            bott = mid + 1;             // 8
        else
            top = mid - 1;              // 9
                                        // 10 (end if)
    }                                   // 11 (end loop, back to node 2)
                                        // 12 (after the loop)
}                                       // 13 (exit)
Binary Search Flow Graph
[Figure: Binary search flow graph, nodes 1-13, as annotated in the code. Node 2 is the loop test (while Bott <= Top), node 3 tests not Found, node 5 tests T(mid) = Key, and node 7 tests T(mid) < Key; node 11 closes the loop back to node 2, and nodes 12-13 follow the loop exit.]
Independent paths
- 1, 2, 3, 4, 12, 13
- 1, 2, 3, 5, 6, 11, 2, 12, 13
- 1, 2, 3, 5, 7, 8, 10, 11, 2, 12, 13
- 1, 2, 3, 5, 7, 9, 10, 11, 2, 12, 13
- Test cases should be derived so that all of these paths are executed
- A dynamic program analyzer may be used to check that the paths have been executed
Cyclomatic complexity
- The number of tests necessary to test all control statements equals the cyclomatic complexity
- Cyclomatic complexity equals the number of conditions in a program + 1
- Can be calculated from the number of nodes (N) and the number of edges (E) in the flow graph:

  Complexity = E - N + 2
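Both formulas agree on simple graphs: a single if-else, for instance, has 4 nodes, 4 edges, and one condition, giving complexity 2 either way. A minimal sketch (function names are ours):

```cpp
// Cyclomatic complexity from the flow graph: E - N + 2.
int cyclomatic_from_graph(int edges, int nodes) {
    return edges - nodes + 2;
}

// Cyclomatic complexity from the source: number of conditions + 1.
int cyclomatic_from_conditions(int conditions) {
    return conditions + 1;
}
```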
Static Verification
Verifying the conformance of a software system to its specification without executing the code.
Static Verification
- Involves analyses of source text by humans or software
- Can be carried out on ANY documents produced as part of the software process
- Discovers errors early in the software process
- Usually more cost-effective than testing for defect detection at the unit and module level
- Allows defect detection to be combined with other quality checks
Static Verification Effectiveness
- More than 60% of program errors can be detected by informal program inspections
- More than 90% of program errors may be detectable using more rigorous mathematical program verification
- The error detection process is not confused by the existence of previous errors
Program Inspections
- Formalized approach to document reviews
- Intended explicitly for defect DETECTION (not correction)
- Defects may be logical errors, anomalies in the code that might indicate an erroneous condition (e.g. an uninitialized variable), or non-compliance with standards
Fagan’s Inspection Pre-conditions
- A precise specification must be available
- Team members must be familiar with the organization's standards
- Syntactically correct code must be available
- An error checklist should be prepared
- Management must accept that inspection will increase costs early in the software process
- Management must not use inspections for staff appraisal
The inspection process
[Figure: The inspection process: planning -> overview -> individual preparation -> inspection meeting -> rework -> follow-up.]
Inspection procedure
- System overview presented to the inspection team
- Code and associated documents are distributed to the inspection team in advance
- Inspection takes place and discovered errors are noted
- Modifications are made to repair discovered errors
- Re-inspection may or may not be required
Inspection Teams
Made up of at least 4 members:

- Author of the code being inspected
- Reader who reads the code to the team
- Inspector who finds errors, omissions, and inconsistencies
- Moderator who chairs the meeting and notes discovered errors

Other roles are Scribe and Chief moderator.
Inspection rate
- 500 statements/hour during overview
- 125 source statements/hour during individual preparation
- 90-125 statements/hour can be inspected during the meeting
- Inspection is therefore an expensive process
- Inspecting 500 lines costs about 40 staff-hours (effort = $$$$$)
Inspection checklists
- A checklist of common errors should be used to drive the inspection
- The error checklist is programming-language dependent
- The 'weaker' the type checking, the larger the checklist
- Examples: initialization, constant naming, loop termination, array bounds, etc.
Table 8.2. Typical inspection preparation and meeting times.

Development artifact       Preparation time             Meeting time
Requirements document      25 pages per hour            12 pages per hour
Functional specification   45 pages per hour            15 pages per hour
Logic specification        50 pages per hour            20 pages per hour
Source code                150 lines of code per hour   75 lines of code per hour
User documents             35 pages per hour            20 pages per hour
Table 8.3. Faults found during discovery activities.

Discovery activity     Faults found per thousand lines of code
Requirements review    2.5
Design review          5.0
Code inspection        10.0
Integration test       3.0
Acceptance test        2.0
Mathematically-based Verification
- Verification is based on mathematical arguments which demonstrate that a program is consistent with its specification
- Programming language semantics must be formally defined
- The program must be formally specified
Program Proving
- Rigorous mathematical proofs that a program meets its specification are long and difficult to produce
- Some programs cannot be proved because they use constructs such as interrupts; these may be necessary for real-time performance
- The cost of developing a program proof is so high that it is not practical to use this technique in the vast majority of software projects
Program Verification Arguments
- Less formal mathematical arguments can increase confidence in a program's conformance to its specification
- Must demonstrate that the program conforms to its specification
- Must demonstrate that the program will terminate
Axiomatic approach
- Define pre- and post-conditions for the program or routine
- Demonstrate by logical argument that the application of the code logically leads from the pre-condition to the post-condition
- Demonstrate that the program will always terminate
Cleanroom
- The name is derived from the 'cleanroom' process in semiconductor fabrication; the philosophy is defect avoidance rather than defect removal
- Software development process based on:
  - Incremental development
  - Formal specification
  - Static verification using correctness arguments
  - Statistical testing to determine program reliability
The Cleanroom Process
[Figure: The Cleanroom process: formally specify system -> define software increments; each increment is then constructed as a structured program, formally verified (with error rework), while an operational profile is developed and statistical tests are designed; finally, integrate the increment and test the integrated system.]
Testing Effectiveness
- Experimentation found black-box testing to be more effective than structural testing in discovering defects
- Static code reviewing was less expensive and more effective in discovering program faults
Table 8.5. Fault discovery percentages by fault origin.

Discovery technique    Requirements   Design   Coding   Documentation
Prototyping            40             35       35       15
Requirements review    40             15       0        5
Design review          15             55       0        15
Code inspection        20             40       65       25
Unit testing           1              5        20       0
Table 8.6. Effectiveness of fault discovery techniques. (Jones 1991)

Fault type             Reviews     Prototypes       Testing   Correctness proofs
Requirements faults    Fair        Good             Poor      Poor
Design faults          Excellent   Fair             Poor      Poor
Code faults            Excellent   Fair             Good      Fair
Documentation faults   Good        Not applicable   Fair      Fair
When to Stop Testing
- Fault seeding
- Assumption:

  detected seeded faults / total seeded faults = detected non-seeded faults / total non-seeded faults
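Rearranging the assumption gives an estimate of the total number of real (non-seeded) faults; for example, if 10 of 20 seeded faults were found along with 15 real faults, roughly 30 real faults are estimated to exist. A sketch (names are ours, not the slides'):

```cpp
// Fault-seeding estimate, from the assumption above:
//   total real faults ~ detected real * total seeded / detected seeded
int estimated_total_real_faults(int detected_seeded, int total_seeded,
                                int detected_real) {
    return detected_real * total_seeded / detected_seeded;
}
```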
When to Stop Testing
Confidence in the software:

  C = confidence
  S = number of seeded faults
  N = number of faults claimed (0 = no faults)
  n = number of actual faults discovered

  C = 1                   if n > N
  C = S / (S + N + 1)     if n <= N
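For example, claiming the software is fault-free (N = 0) after all S = 9 seeded faults were recovered and no real faults were found gives C = 9/10 = 0.9. A sketch of the formula (function name is ours):

```cpp
// Confidence that the program contains at most N real faults, given
// that all S seeded faults were found and n real faults were discovered.
double confidence(int S, int N, int n) {
    if (n > N) return 1.0;
    return static_cast<double>(S) / (S + N + 1);
}
```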
Questions?