Transcript Document

ECE 355: Software Engineering
CHAPTER 11
Part II
1
Course outline
• Unit 1: Software Engineering Basics
• Unit 2: Process Models and Software Life Cycles
• Unit 3: Software Requirements
• Unit 4: Unified Modeling Language (UML)
• Unit 5: Design Basics and Software Architecture
• Unit 6: OO Analysis and Design
• Unit 7: Design Patterns
• Unit 8: Testing and Reliability
• Unit 9: Software Engineering Management and Economics
2
Course Outline
• Introduction to software engineering
• Requirements Engineering
• Design Basics
• Traditional Design
• OO Design
• Design Patterns
• Software Architecture
• Design Documentation
• Verification & Validation
• Software Process Management and Economics
3
• These slides are based on:
– Lecture slides by Ian Sommerville, see
http://www.comp.lancs.ac.uk/computing/resources/ser/
– ECE355 Lecture slides by Sagar Naik
– Lecture Notes from Bernd Bruegge, Allen H.
Dutoit “Object-Oriented Software Engineering
– Using UML, Patterns and Java”
4
Overview
Basics of Testing
• Testing & Debugging Activities
• Testing Strategies
– Black-Box Testing
– White-Box Testing
• Testing in the Development Process
– Unit Test
– Integration Test
– System Test
– Acceptance Test
– Regression Test
• Practical Considerations
5
Static and dynamic V&V
[Figure: Requirements specification → Architecture → Detailed design →
Implementation. Static verification applies to all of these work products;
dynamic V&V applies to executable artifacts such as the implementation and
prototypes.]
Special cases where a specification itself can be exercised dynamically:
• Executable specifications
• Animation of formal specs
6
Program testing
• Can reveal the presence of errors NOT their
absence
– Only exhaustive testing can show a program is
free from defects. However, exhaustive testing
for any but trivial programs is impossible
• A successful test is a test which discovers one or
more errors
• Should be used in conjunction with static
verification
• Run all tests after modifying a system
7
©Ian Sommerville 1995
Testing in the V-Model
[Figure: the V-model. Development side: Requirements → Architectural design
→ Detailed design → Module implementation. Testing side: the acceptance
test (on the customer side of the customer/developer boundary) validates
the requirements; the system test validates the architectural design; the
integration test validates the detailed design; the unit test validates the
module implementation. Unit tests are both functional (black-box, BB) and
structural (white-box, WB).]
8
Testing stages
• Unit testing
– Testing of individual components
• Integration testing
– Testing to expose problems arising from the
combination of components
• System testing
– Testing the complete system prior to delivery
• Acceptance testing
– Testing by users to check that the system satisfies
requirements. Sometimes called alpha testing
9
Types of testing
• Statistical testing
– Tests designed to reflect the frequency of user
inputs. Used for reliability estimation.
– Covered later in section on Software
reliability.
• Defect testing
– Tests designed to discover system defects.
– A successful defect test is one which reveals
the presence of defects in a system.
10
©Ian Sommerville 1995
Some Terminology
– Failure
• A failure is said to occur whenever the external
behavior does not conform to system spec.
– Error
• An error is a state of the system which, in the
absence of any corrective action, could lead to a
failure.
– Fault
• An adjudged cause of an error.
11
Some Terminology
It is there in the program: the fault (bug, defect).
[Figure: a fault in the program may corrupt the program state (an error);
when the error becomes externally observable, it is a failure.]
12
Testing Activities
[Figure: for each subsystem, subsystem code goes through a unit test to
become a tested subsystem. The tested subsystems, guided by the system
design document, go through the integration test to form integrated
subsystems; these are checked against the requirements analysis document
and the user manual in the functional test, yielding a functioning system.
All of these tests are performed by the developer.]
13
Testing Activities continued
[Figure, continued: the functioning system is checked against the global
requirements in the performance test (by the developer), giving a
validated system; against the client's understanding of the requirements
in the acceptance test (by the client), giving an accepted system; and
installed in the user environment for the installation test, giving a
usable system. Tests against the user's understanding are performed (if at
all) by the user, resulting in a system in use.]
14
Overview
• Basics of Testing
Testing & Debugging Activities
• Testing Strategies
– Black-Box Testing
– White-Box Testing
• Testing in the Development Process
– Unit Test
– Integration Test
– System Test
– Acceptance Test
– Regression Test
• Practical Considerations
15
Testing and debugging
• Defect testing and debugging are distinct
processes
• Defect testing is concerned with confirming the
presence of errors
• Debugging is concerned with locating and
repairing these errors
• Debugging involves formulating a hypothesis
about program behaviour then testing these
hypotheses to find the system error
16
©Ian Sommerville 1995
Debugging Activities
Locate error & fault → Design fault repair → Repair fault → Re-test program
17
©Ian Sommerville 1995
Testing Activities
Identify: test conditions ("what"), i.e., an item or event to be verified
Design: how the "what" can be tested (realization)
Build: build test cases (implementation scripts, data)
Execute: run the system
Compare: test case outcome with expected outcome → test result
18
Testing Activities
• Test condition
– What: Descriptions of circumstances that
could be examined (event or item).
– Categories: functionality, performance, stress,
robustness…
– Derive
• Using testing techniques (to be discussed)
• (Refer to the V-Model)
19
Testing Activities
• Design test cases: the details
– Input values
– Expected outcomes
• Things created (output)
• Things changed/updated (database?)
• Things deleted
• Timing
• …
– Expected outcomes
• Known
• Unknown (examine the first actual outcome)
– Environment prerequisites: file, net connection …
20
Testing Activities
• Build test cases (implement)
– Implement the preconditions (set up the environment)
– Prepare test scripts (may use test automation tools)
Structure of a test case:
• Simple: (I, EO)
• Linear: {(I1, EO1), (I2, EO2), …}
• Tree: a single input I with alternative expected outcomes EO1, EO2, …
21
Testing Activities
• Scripts contain data and instructions for testing
– Comparison information
– What screen data to capture
– When/where to read input
– Control information
• Repeat a set of inputs
• Make a decision based on output
– Testing concurrent activities
22
Testing Activities
• Compare (test outcomes, expected
outcomes)
– Simple/complex (known differences)
– Different types of outcomes
• Variable values (in memory)
• Disk-based (textual, non-textual, database, binary)
• Screen-based (characters, GUI, images)
• Others (multimedia, communicating apps)
23
Testing Activities
• Compare:
actual output == expected output??
– Yes
• Pass (Assumption: Test case was “instrumented.”)
– No
• Fail (Assumption: No error in test case,
preconditions)
24
Overview
• Basics of Testing
• Testing & Debugging Activities
Testing Strategies
– Black-Box Testing
– White-Box Testing
• Testing in the Development Process
– Unit Test
– Integration Test
– System Test
– Acceptance Test
– Regression Test
• Practical Considerations
25
Goodness of test cases
• Execution of a test case against a program P
– Covers certain requirements of P;
– Covers certain parts of P's functionality;
– Covers certain parts of P's internal logic.
 The idea of coverage guides test case selection.
26
Overview
• Basics of Testing
• Testing & Debugging Activities
• Testing Strategies
Black-Box Testing
– White-Box Testing
• Testing in the Development Process
– Unit Test
– Integration Test
– System Test
– Acceptance Test
– Regression Test
• Practical Considerations
27
Black-box Testing
• Focus: I/O behavior. If for any given input, we can predict
the output, then the module passes the test.
– Almost always impossible to generate all possible inputs ("test
cases")
• Goal: Reduce number of test cases by equivalence
partitioning:
– Divide input conditions into equivalence classes
– Choose test cases for each equivalence class. (Example: If an object
is supposed to accept a negative number, testing one negative
number is enough)
28
Black-box Testing (Continued)
• Selection of equivalence classes (No rules, only guidelines):
– Input is valid across range of values. Select test cases from 3
equivalence classes:
• Below the range
• Within the range
• Above the range
– Input is valid if it is from a discrete set. Select test cases from 2
equivalence classes:
• Valid discrete value
• Invalid discrete value
• Another solution to select only a limited number of test cases:
– Get knowledge about the inner workings of the unit being tested =>
white-box testing
29
Equivalence partitioning
[Figure: invalid inputs and valid inputs are partitioned; test cases
sample each partition and the system maps them to outputs.]
©Ian Sommerville 1995
30
Equivalence partitioning
• Partition system inputs and outputs into 'equivalence sets'
– If the input is a 5-digit integer between 10000 and 99999,
equivalence partitions are <10000, 10000-99999, and >99999
– If you can predict that a certain set of inputs will be treated
differently in processing from another one, put them into separate
partitions, e.g., Canadian and US addresses (will validate
state/province) vs. addresses in other countries (no state validation)
• Two test case selection strategies
– Choose one random test case from each partition
– Choose test cases at the boundary of the partitions
• 09999, 10000, 99999, 100000
31
©Ian Sommerville 1995
Equivalence partitions
[Figure: number line showing the three partitions: less than 10000
(e.g., 09999); between 10000 and 99999 (e.g., 10000, 50000, 99999);
more than 99999 (e.g., 100000).]
32
©Ian Sommerville 1995
Search routine specification
procedure Search (Key: ELEM; T: ELEM_ARRAY;
                  Found: in out BOOLEAN; L: in out ELEM_INDEX);
Pre-condition
-- the array has at least one element
T'FIRST <= T'LAST
Post-condition
-- the element is found and is referenced by L
( Found and T (L) = Key)
or
-- the element is not in the array
( not Found and
  not (exists i, T'FIRST <= i <= T'LAST, T (i) = Key ))
33
©Ian Sommerville 1995
Strategy for input partitions (I)
(General)
• Inputs which conform to the pre-conditions
• Inputs where a pre-condition does not hold
• Inputs derived from the post-condition
– Inputs where the key element is a member of
the array
– Inputs where the key element is not a member
of the array
34
©Ian Sommerville 1995 [modified]
Strategy for input partitions (II)
(Arrays)
• Test software with arrays which have only
a single value
• Test with arrays of zero length (if allowed
by programming language)
• Use arrays of different sizes in different
tests
• Derive tests so that the first, middle and
last elements of the array are accessed
35
©Ian Sommerville 1995
Search routine - input partitions
Array                  Element
Single value           In array
Single value           Not in array
More than 1 value      First element in array
More than 1 value      Last element in array
More than 1 value      Middle element in array
More than 1 value      Not in array
No value               Not in array
36
©Ian Sommerville 1995 [modified]
Search routine - test cases
Input array (T)              Key    Output (Found, L)
17                           17     true, 1
17                           0      false, ??
17, 29, 21, 23               17     true, 1
41, 18, 9, 31, 30, 16, 45    45     true, 7
17, 18, 21, 23, 29, 41, 38   23     true, 4
21, 23, 29, 33, 38           25     false, ??
()                           1      ??, ??
37
©Ian Sommerville 1995 [modified]
Overview
• Basics of Testing
• Testing & Debugging Activities
• Testing Strategies
– Black-Box Testing
White-Box Testing
• Testing in the Development Process
– Unit Test
– Integration Test
– System Test
– Acceptance Test
– Regression Test
• Practical Considerations
38
White-box Testing
• Statement Testing (Algebraic Testing): Test single statements
(Choice of operators in polynomials, etc)
• Loop Testing:
– Cause execution of the loop to be skipped completely. (Exception:
Repeat loops)
– Loop to be executed exactly once
– Loop to be executed more than once
• Path testing:
– Make sure all paths in the program are executed
• Branch Testing (Conditional Testing): Make sure that each
possible outcome from a condition is tested at least once
if (i == TRUE) printf("YES\n"); else printf("NO\n");
Test cases: 1) i = TRUE; 2) i = FALSE
39
Binary search (Ada)
procedure Binary_search (Key: ELEM; T: ELEM_ARRAY;
    Found: in out BOOLEAN; L: in out ELEM_INDEX) is
-- Preconditions
-- T'FIRST <= T'LAST and
-- forall i: T'FIRST..T'LAST-1, T(i) <= T(i+1)
    Bott: ELEM_INDEX := T'FIRST;
    Top:  ELEM_INDEX := T'LAST;
    Mid:  ELEM_INDEX;
begin
    L := (T'FIRST + T'LAST) / 2;
    Found := T(L) = Key;
    while Bott <= Top and not Found loop
        Mid := (Top + Bott) / 2;
        if T(Mid) = Key then
            Found := true;
            L := Mid;
        elsif T(Mid) < Key then
            Bott := Mid + 1;
        else
            Top := Mid - 1;
        end if;
    end loop;
end Binary_search;
©Ian Sommerville 1995
40
Binary search (C++)
void Binary_search (elem key, elem* T, int size,
                    boolean &found, int &L)
{
    int bott, top, mid;
    bott = 0; top = size - 1;
    L = (top + bott) / 2;
    if (T[L] == key)
        found = true;
    else
        found = false;
    while (bott <= top && !found)
    {
        mid = (top + bott) / 2;  // parentheses needed: "top + bott / 2" is a bug
        if (T[mid] == key)
        {
            found = true;
            L = mid;
        }
        else if (T[mid] < key)
            bott = mid + 1;
        else
            top = mid - 1;
    } // while
} // binary_search
41
©Ian Sommerville 1995
Binary search - equiv. partitions
• Pre-conditions satisfied, key element in array
• Pre-conditions satisfied, key element not in
array
• Pre-conditions unsatisfied, key element in array
• Pre-conditions unsatisfied, key element not in
array
• Input array has a single value
• Input array has an even number of values
• Input array has an odd number of values
42
©Ian Sommerville 1995
Binary search equiv. partitions
[Figure: equivalence class boundaries: elements < Mid, the mid-point,
and elements > Mid.]
43
©Ian Sommerville 1995
Binary search - test cases
Input array (T)              Key    Output (Found, L)
17                           17     true, 1
17                           0      false, ??
17, 21, 23, 29               17     true, 1
9, 16, 18, 30, 31, 41, 45    45     true, 7
17, 18, 21, 23, 29, 38, 41   23     true, 4
17, 18, 21, 23, 29, 33, 38   21     true, 3
12, 18, 21, 23, 32           23     true, 4
21, 23, 29, 33, 38           25     false, ??
44
©Ian Sommerville 1995
Code Coverage
• Statement coverage
– Elementary statements: assignment, I/O, call
– Select a test set T such that by executing P for
each case in T, each statement of P is executed
at least once.
– Example:
read(x); read(y);
if x > 0 then write("1")
else write("2");
if y > 0 then write("3")
else write("4");
– T: {<x = -13, y = 51>, <x = 2, y = -3>}
45
White-box Testing: Determining the Paths
FindMean (FILE ScoreFile)
{  float SumOfScores = 0.0;
   int NumberOfScores = 0;
   float Mean = 0.0; float Score;
   Read(ScoreFile, Score);                      /* 1 */
   while (!EOF(ScoreFile)) {                    /* 2 */
      if (Score > 0.0) {                        /* 3 */
         SumOfScores = SumOfScores + Score;     /* 4 */
         NumberOfScores++;
      }                                         /* 5 */
      Read(ScoreFile, Score);                   /* 6 */
   }
   /* Compute the mean and print the result */
   if (NumberOfScores > 0) {                    /* 7 */
      Mean = SumOfScores / NumberOfScores;      /* 8 */
      printf("The mean score is %f\n", Mean);
   } else                                       /* 9 */
      printf("No scores found in file\n");
}
46
Constructing the Logic Flow Diagram
[Figure: logic flow diagram for FindMean, one node per numbered statement
above. Start → 1 → 2; from 2, T enters the loop body (3, then 4 or
directly 5, then 6, back to 2) and F falls through to 7; from 7, T → 8
and F → 9; 8 and 9 → Exit.]
47
Code Coverage
- Construct a control-flow graph of a program (module)
[Figure: basic building blocks: a single node for an assignment, I/O, or
call; branching subgraphs for "if c then S1", "if c then S1 else S2", and
"while c do S1", with edges labeled c and ~c.]
- Edge coverage
- Select a test set T such that, by executing P for each member in T,
each edge of P's control-flow graph is traversed at least once.
48
Code Coverage
• Condition coverage
– Edge coverage plus
– All possible values of the constituents of compound
conditions are exercised at least once.
(constituents: atomic formulas, i.e., relational or Boolean variables)
– Example:
• while (~found) and counter <= NumOfItems
49
Code Coverage
• Path coverage
– Path coverage focuses on executing distinct paths rather
than just edges or statements
– Paths through a program can be feasible or infeasible
(e.g., due to contradicting conditions)
– Ideally, one would like to cover all feasible paths, but
the number of feasible paths usually explodes very
quickly with the size of the program
– Therefore, there are various path coverage strategies
targeting specific subsets of the feasible paths
50
Code Coverage
- Path coverage
- Select a test set T such that, by executing P for each member of T,
all paths leading from the initial node to the final node of P's
control-flow graph are traversed.
- Control-flow graph
[Figure: building blocks for a sequence, "if c then S1 else S2",
"while c do S", and "do S until c".]
- Note: separate node for each member of a compound condition
- How many paths?
51
Code Coverage
• Path testing based on cyclomatic complexity
– (McCabe's) cyclomatic complexity
• V(G) = E – N + 2, where E = # of edges, N = # of nodes
• V(G) = p + 1, where p = # of predicate nodes
• V(G) = # of regions (areas bounded by nodes/edges, including the outer region)
– V(G): upper bound on the # of independent paths
• Independent path: a path with at least one new node/edge
– Example:
• V(G) = E – N + 2 = 17 – 13 + 2 = 6
• V(G) = p + 1 = 5 + 1 = 6
• V(G) = # of regions = 6
– Advantage: the number of test cases is proportional to the size of the program
52
Code Coverage
- Path coverage: example (each member of a compound condition gets its own node)
1      I = 1; TI = TV = 0; sum = 0;
2,3    DO WHILE (value[I] <> -999 and TI < 100)
4         TI++;
5,6       if (value[I] >= min and value[I] <= max)
7            then { TV++; sum = sum + value[I]; }
          endif
8         I++;
9      ENDDO
10     IF TV > 0
11        THEN av = sum/TV;
12        ELSE av = -999;
       endif
13     (final node)
[Figure: the corresponding flow graph; its regions R1–R6, the sixth being
the outer region, give V(G) = 6.]
53
V(G) as An Upper Bound
[Figure: a small flow graph with V(G) = 3; at most three independent
paths need to be tested.]
V(G) = 3
54
V(G)-Preserving Transformation
[Figure: a SWITCH statement and the equivalent nested IFs both have
V(G) = 3; the transformation preserves cyclomatic complexity.]
55
Code Coverage
• More complete path testing strategies target
loops, e.g.,
– 0 and 1 passes through every loop are tested
– Multiple passes exercising different paths
through the body of a loop
– n passes (max = n)
• Trying to be more complete on loop testing
quickly becomes infeasible
56
Code Coverage
- Data flow testing (D = definition, U = use)
- DU chain:
- Assign a unique number to each statement
- DEF(S): set of all variables defined in statement number S
- USE(S): set of all variables used in statement number S
- Live variable: the def. of var X at S is live at S' if
there is a path from S to S' that contains no other def of X
- DU chain: [X, S, S'] where
- X is in DEF(S) and in USE(S'), and
- the def of X at S is live at S'
- DU coverage
- Every DU chain must be covered at least once.
57
Overview
• Basics of Testing
• Testing & Debugging Activities
• Testing Strategies
– Black-Box Testing
– White-Box Testing
• Testing in the Development Process
Unit Test
– Integration Test
– System Test
– Acceptance Test
– Regression Test
• Practical Considerations
58
Unit Testing
Objective: find differences between specified units and their implementations
Unit: a component (module, function, class, object, …)
Unit test environment:
[Figure: a driver supplies test cases to the unit under test; stubs (dummy
modules) stand in for the units it calls; the test result is collected.]
Effectiveness is judged by:
• Partitioning
• Code coverage
59
Overview
• Basics of Testing
• Testing & Debugging Activities
• Testing Strategies
– Black-Box Testing
– White-Box Testing
• Testing in the Development Process
– Unit Test
Integration Test
– System Test
– Acceptance Test
– Regression Test
• Practical Considerations
60
Integration Testing
• Objectives:
• To expose problems arising from the combination
• To quickly obtain a working solution from components
• Problem areas
– Internal: between components
• Invocation: call/message passing/…
• Parameters: type, number, order, value
• Invocation return: identity (who?), type, sequence
– External:
• Interrupts (wrong handler?)
• I/O timing
– Interaction
61
Integration Testing
• Types of integration
– Structural
• "Big bang": no error localization
• Bottom-up: start from terminal modules; a driver simulates each
module's caller (driver → module)
• Top-down: start from the top module; stubs simulate called modules
(stub → module); allows an early demo
– Behavioral
• (next slide)
62
Integration Testing
(Behavioral: Path-Based)
[Figure: modules A, B, C exchanging calls/messages.]
MM-path: interleaved sequence of module execution paths and messages
Module execution path: an entry-exit path in the same module
Atomic System Function (ASF): port input, … {MM-paths}, … port output
Test cases: exercise ASFs
63
Overview
• Basics of Testing
• Testing & Debugging Activities
• Testing Strategies
– Black-Box Testing
– White-Box Testing
• Testing in the Development Process
– Unit Test
– Integration Test
System Test
– Acceptance Test
– Regression Test
• Practical Considerations
64
System Testing
• Concerned with the application's externals
• Much more than functional testing:
– Load/stress testing
– Usability testing
– Performance testing
– Resource testing
65
System Testing
• Functional testing
– Objective: Assess whether the app does what it
is supposed to do
– Basis: Behavioral/functional specification
– Test case: A sequence of ASFs (thread)
• (Refer to pages 22-24 of ECE 355 PBX Project
Desc.)
66
System Testing
• Functional testing: coverage
• Event-based coverage
– PI1: each port input event occurs
– PI2: common sequences of port input events occur
– PI3: each port input event occurs in every relevant data context
– PI4: for a given context, all possible input events occur
– PO1: each port output event occurs
– PO2: each port output event occurs for each cause
• Data-based
– DM1: Exercise cardinality of every relationship
– DM2: Exercise (functional) dependencies among relationships
67
System Testing
• Stress testing: push the system to its limit and beyond
[Figure: a growing volume of users drives the application (system);
observe the response rate. Resources stressed: physical + logical.]
68
System Testing
• Performance testing
– Performance seen by
• Users: delay, throughput
• System owner: memory, CPU, communication
– Performance
• Explicitly specified, or expected to do well
• Unspecified: find the limit
• Usability testing
– Human element in system operation
• GUI, messages, reports, …
69
Test Stopping Criteria
• Meet deadline, exhaust budget, … (management criteria)
• Achieved desired coverage
• Achieved desired level of failure intensity
70
Overview
• Basics of Testing
• Testing & Debugging Activities
• Testing Strategies
– Black-Box Testing
– White-Box Testing
• Testing in the Development Process
– Unit Test
– Integration Test
– System Test
Acceptance Test
– Regression Test
• Practical Considerations
71
Acceptance Testing
• Purpose: ensure that end users are satisfied
• Basis: user expectations (documented or not)
• Environment: real
• Performed: for and by end users (commissioned projects)
• Test cases:
– May reuse from system test
– Designed by end users
72
Overview
• Basics of Testing
• Testing & Debugging Activities
• Testing Strategies
– Black-Box Testing
– White-Box Testing
• Testing in the Development Process
– Unit Test
– Integration Test
– System Test
– Acceptance Test
Regression Test
• Practical Considerations
73
Regression Testing
• Whenever a system is modified (fixing a bug,
adding functionality, etc.), the entire test suite
needs to be rerun
– Make sure that features that already worked are not
affected by the change
• Automatic re-testing before checking in changes
into a code repository
• Incremental testing strategies for big systems
74
Overview
• Basics of Testing
• Testing & Debugging Activities
• Testing Strategies
– Black-Box Testing
– White-Box Testing
• Testing in the Development Process
– Unit Test
– Integration Test
– System Test
– Acceptance Test
– Regression Test
Practical Considerations
75
How to Design Practical Test Cases
(Experience of Hitachi Software Engineering)
Bug detection during a development phase
[Figure: bar chart (0-100%) of bugs detected per phase: desk check, unit
test, integration test, system test; only 0.02% remain as field failures.]
10 engineers × 12 months → 100,000 LOC
1000 bugs → fewer than 1 bug appears at the customer's site
76
Testing effectiveness
• In an experiment, black-box testing was
found to be more effective than structural
testing in discovering defects
• Static code reviewing was less expensive
and more effective in discovering program
faults
77
Practical test cases
• Document test cases
– A new viewpoint on the functional spec
– Validation of test cases
• Well-balanced: normal, abnormal, boundary
• Correctness of test cases
– Estimating quality
• If 100 of 1000 test cases reveal 4 bugs, extrapolate to about 40 bugs
78
Practical test cases
• Schedule (rough idea)
– 10 engineers, 12 months, 100,000 LOC in C
– Apportioning 12 months
• SRS: 2; Design: 3; Coding: 2
• Debugging (U + I + S tests): 3
• QA testing: 2 (shippable??, redesign tests—no reuse)
• How many test cases (30-year empirical study)
– 1 test case per 10-15 LOC
– 100,000 LOC → 1000 test cases
– 2 weeks to design 1000 test cases
– 2 months to execute 1000 test cases (25/day)
79
Comparison of White-box & Black-box Testing
• White-box Testing:
– Potentially infinite number of paths
have to be tested
– White-box testing often tests what
is done, instead of what should be
done
– Cannot detect missing use cases
• Black-box Testing:
– Potential combinatorial explosion
of test cases (valid & invalid data)
– Often not clear whether the selected
test cases uncover a particular error
– Does not discover extraneous use
cases ("features")
• Both types of testing are needed
• White-box testing and black box
testing are the extreme ends of a
testing continuum.
• Any choice of test case lies in
between and depends on the
following:
– Number of possible logical paths
– Nature of input data
– Amount of computation
– Complexity of algorithms and data structures
80
The 4 Testing Steps
1. Select what has to be
measured
– Analysis: Completeness of
requirements
– Design: tested for cohesion
– Implementation: Code tests
2. Decide how the testing is
done
– Code inspection
– Proofs (Design by Contract)
– Black-box, white-box
– Select integration testing
strategy (big bang, bottom up,
top down, sandwich)
3. Develop test cases
– A test case is a set of test data
or situations that will be used
to exercise the unit (code,
module, system) being tested
or about the attribute being
measured
4. Create the test oracle
– An oracle contains the predicted
results for a set of test cases
– The test oracle has to be
written down before the actual
testing takes place
81
Guidance for Test Case Selection
• Use analysis knowledge
about functional
requirements (black-box
testing):
– Use cases
– Expected input data
– Invalid input data
• Use design knowledge about
system structure, algorithms,
data structures (white-box
testing):
– Control structures
• Test branches, loops, ...
– Data structures
• Test record fields, arrays, ...
• Use implementation
knowledge about algorithms,
for example:
– Force division by zero
– Use a sequence of test cases for
the interrupt handler
82
Unit-testing Heuristics
1. Create unit tests as soon as object
design is completed:
– Black-box test: Test the use
cases & functional model
– White-box test: Test the
dynamic model
– Data-structure test: Test the
object model
2. Develop the test cases
– Goal: Find the minimal
number of test cases to cover
as many paths as possible
3. Cross-check the test cases to
eliminate duplicates
– Don't waste your time!
4. Desk check your source code
– Reduces testing time
5. Create a test harness
– Test drivers and test stubs are
needed for integration testing
6. Describe the test oracle
– Often the result of the first
successfully executed test
7. Execute the test cases
– Don’t forget regression testing
– Re-execute test cases every time a
change is made.
8. Compare the results of the test with the
test oracle
– Automate as much as possible
83
Practical test cases
• Designing test cases (contd.)
– Distribution of test cases
• Normal: 60%
• Boundary: 10%
• Error: 15%
• Environmental (platform + performance): 15%
– Finishing touch
• 48-hour continuous operation test (basic functions) to expose
memory leakage, deadlock, connection time-out
84
Overview
• Basics of Testing
• Testing & Debugging Activities
• Testing Strategies
– Black-Box Testing
– White-Box Testing
• Testing in the Development Process
– Unit Test
– Integration Test
– System Test
– Acceptance Test
– Regression Test
• Practical Considerations
85