Software Testing Strategies
based on
Chapter 13 - Software Engineering: A Practitioner’s Approach, 6/e
copyright © 1996, 2001, 2005
R.S. Pressman & Associates, Inc.
For University Use Only
May be reproduced ONLY for student use at the university level
when used in conjunction with Software Engineering: A Practitioner's Approach.
Any other reproduction or use is expressly prohibited.
Software Testing
Testing is the process of exercising a program with the specific intent of finding errors prior to delivery to the end user.
Testing reveals errors, requirements conformance, and performance, which together give an indication of quality.
Who Tests the Software?
The developer:
• understands the system
• but will test "gently"
• and is driven by "delivery"
The independent tester:
• must learn about the system
• but will attempt to break it
• and is driven by quality
Levels of Testing
• Unit testing
• Integration testing
• Validation testing: focus is on software requirements
• System testing: focus is on system integration
• Alpha/beta testing: focus is on customer usage
• Recovery testing: forces the software to fail in a variety of ways and verifies that recovery is properly performed
• Security testing: verifies that protection mechanisms built into a system will, in fact, protect it from improper penetration
• Stress testing: executes a system in a manner that demands resources in abnormal quantity, frequency, or volume
• Performance testing: tests the run-time performance of software within the context of an integrated system
Unit Testing
[Diagram: the software engineer applies test cases to the module to be tested and examines the results.]
Unit testing focuses on:
• interface
• local data structures
• boundary conditions
• independent paths
• error handling paths
Integration Testing Strategies
Options:
• the "big bang" approach
• an incremental construction strategy: top-down, bottom-up, or sandwich integration
OOT Strategy
• Class testing is the equivalent of unit testing (…if there is no nesting of classes):
  • operations within the class are tested
  • the state behavior of the class is examined
• Integration applies three different strategies/levels of abstraction:
  • thread-based testing: integrates the set of classes required to respond to one input or event
  • use-based testing: integrates the set of classes required to respond to one use case (…this is pushing…)
  • cluster testing: integrates the set of classes required to demonstrate one collaboration
Debugging: A Diagnostic Process
[Diagram: test cases produce results; debugging of the results yields suspected causes, which are narrowed to identified causes; corrections are then verified with regression tests and new test cases.]
Software Testing Techniques
based on
Chapter 14 - Software Engineering: A Practitioner’s Approach, 6/e
copyright © 1996, 2001, 2005
R.S. Pressman & Associates, Inc.
What is a “Good” Test?
A good test:
• has a high probability of finding an error
• is not redundant
• is neither too simple nor too complex
"Bugs lurk in corners and congregate at boundaries ..." (Boris Beizer)
OBJECTIVE: to uncover errors
CRITERIA: in a complete manner
CONSTRAINT: with a minimum of effort and time
White-Box or Black-Box?
Exhaustive Testing
[Flow graph containing a loop executed up to 20 times.]
There are approx. 10^14 possible paths! If we execute one test per millisecond, it would take 3,170 years to test this program!!
Where does 10^14 come from?
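The 3,170-year figure above can be checked directly; a minimal sketch of the arithmetic, assuming one test per millisecond as the slide states:

```python
# Check the exhaustive-testing arithmetic: ~10^14 distinct paths,
# executed at a rate of one test per millisecond.
paths = 10 ** 14
tests_per_second = 1_000                 # one test per millisecond
seconds = paths / tests_per_second       # total wall-clock seconds
years = seconds / (60 * 60 * 24 * 365)   # convert to years

print(f"{years:,.0f} years")             # roughly 3,170 years
```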
RE in the V Model
[Diagram: the V model plots level of abstraction against time. Development activities descend the left side and are paired with testing activities on the right side at the same level: system requirements with system integration, software requirements with acceptance test, preliminary design with software integration, detailed design with component test, and code & debug with unit test.] [Chung]
Software Testing
White-Box Testing: "... our goal is to ensure that all statements and conditions have been executed at least once ..."
Black-Box Testing: driven by requirements; inputs and events are applied and the outputs are checked.
White-Box Testing
Basis Path Testing
First, we compute the cyclomatic complexity:
  number of simple decisions + 1
or
  number of enclosed areas + 1
In this case, V(G) = 4
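Cyclomatic complexity can also be computed from the flow graph itself as edges - nodes + 2, which agrees with both formulas above. A sketch for the slide's graph, with the edge list reconstructed from the four basis paths shown on the next slide (an assumption, since the figure itself is not in the transcript):

```python
# V(G) via the standard graph formulation: edges - nodes + 2.
# Edge list reconstructed from the basis paths 1,2,4,7,8 / 1,2,3,5,7,8 /
# 1,2,3,6,7,8 / 1,2,4,7,2,...,7,8 (the 7->2 edge is the loop back).
edges = [
    (1, 2), (2, 3), (2, 4), (3, 5), (3, 6),
    (4, 7), (5, 7), (6, 7), (7, 2), (7, 8),
]
nodes = {n for edge in edges for n in edge}

v_of_g = len(edges) - len(nodes) + 2
print(v_of_g)  # 4, matching "number of simple decisions + 1"
```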
White-Box Testing
Basis Path Testing
Next, we derive the independent paths. Since V(G) = 4, there are four paths:
Path 1: 1,2,4,7,8
Path 2: 1,2,3,5,7,8
Path 3: 1,2,3,6,7,8
Path 4: 1,2,4,7,2,4,...7,8
Finally, we derive test cases to exercise these paths.
A number of industry studies have indicated that the higher V(G), the higher the probability of errors.
White-Box Testing
Loop Testing
Loop structures to test:
• Simple loops
• Nested loops
• Concatenated loops
• Unstructured loops
Why is loop testing important?
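For a simple loop, the classic heuristic (Beizer's, which Pressman also presents) is to exercise zero passes, one pass, two passes, a typical number of passes, and the counts at and around the maximum. A small sketch; the function name is ours, not the book's:

```python
# Standard iteration counts worth exercising for a simple loop
# with at most n passes: 0, 1, 2, a typical m < n, n-1, n, n+1.
def simple_loop_test_counts(n, typical=None):
    """Return the iteration counts to test for a simple loop (max n passes)."""
    m = typical if typical is not None else n // 2
    return [0, 1, 2, m, n - 1, n, n + 1]

print(simple_loop_test_counts(20))  # [0, 1, 2, 10, 19, 20, 21]
```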
Black-Box Testing
Equivalence Partitioning & Boundary Value Analysis
• If x = 5 then …
• If x > -5 and x < 5 then …
What would be the equivalence classes?
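For the second condition, a sketch of the answer: three equivalence classes (below the range, inside it, above it), with boundary value analysis adding the values at and adjacent to -5 and 5.

```python
# Equivalence classes and boundary values for "if x > -5 and x < 5".
def in_range(x):
    return x > -5 and x < 5

# One representative per equivalence class:
assert in_range(-100) is False   # class 1: x <= -5
assert in_range(0) is True       # class 2: -5 < x < 5
assert in_range(100) is False    # class 3: x >= 5

# Boundary value analysis around each boundary:
assert [in_range(v) for v in (-6, -5, -4)] == [False, False, True]
assert [in_range(v) for v in (4, 5, 6)] == [True, False, False]
```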
Black-Box Testing
Comparison Testing
• Used only in situations in which the reliability of software is absolutely critical (e.g., human-rated systems)
• Separate software engineering teams develop independent versions of an application using the same specification
• Each version can be tested with the same test data to ensure that all provide identical output
• Then all versions are executed in parallel with real-time comparison of results to ensure consistency
OOT—Test Case Design
Berard [BER93] proposes the following approach:
1. Each test case should be uniquely identified and should be explicitly associated with the class to be tested.
2. A list of testing steps should be developed for each test and should contain [BER94]:
   a. a list of specified states for the object that is to be tested
   b. a list of messages and operations that will be exercised as a consequence of the test (how can this be done?)
   c. a list of exceptions that may occur as the object is tested
   d. a list of external conditions (i.e., changes in the environment external to the software that must exist in order to properly conduct the test) {people, machine, time of operation, etc.}
OOT Methods: Behavior Testing
The tests to be designed should achieve all state coverage [KIR94]. That is, the operation sequences should cause the Account class to make transitions through all allowable states.
[Figure 14.3: State diagram for the Account class (adapted from [KIR94]). "open" leads to the empty acct state; "setup Accnt" leads to the set up acct state; "deposit (initial)" leads to the working acct state, where deposit, withdraw, balance, credit, and accntInfo operations apply; "withdrawal (final)" leads to the nonworking acct state; "close" leads to the dead acct state.]
Is the set of initial input data enough?
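A minimal sketch of what all-state coverage means for Figure 14.3. The Account implementation below is hypothetical; only the state names and transitions are taken from the diagram, and a single operation sequence drives the object through every state:

```python
# Hypothetical Account class modeling the states of Figure 14.3.
class Account:
    def __init__(self):            # "open"
        self.state = "empty acct"
        self.balance = 0

    def setup(self):               # "setup Accnt"
        self.state = "set up acct"

    def deposit(self, amount):     # "deposit"
        self.balance += amount
        self.state = "working acct"

    def withdraw(self, amount):    # "withdraw" / "withdrawal (final)"
        self.balance -= amount
        if self.balance <= 0:
            self.state = "nonworking acct"

    def close(self):               # "close"
        self.state = "dead acct"

# One operation sequence achieving all-state coverage:
acct = Account()
visited = [acct.state]
for op in (lambda: acct.setup(),
           lambda: acct.deposit(100),
           lambda: acct.withdraw(100),
           lambda: acct.close()):
    op()
    visited.append(acct.state)

assert visited == ["empty acct", "set up acct", "working acct",
                   "nonworking acct", "dead acct"]
```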
Omitted Slides
Testability
• Operability—it operates cleanly
• Observability—the results of each test case are readily observed
• Controllability—the degree to which testing can be automated and optimized
• Decomposability—testing can be targeted
• Simplicity—reduce complex architecture and logic to simplify tests
• Stability—few changes are requested during testing
• Understandability—of the design
Strategic Issues
• Understand the users of the software and develop a profile for each user category.
• Develop a testing plan that emphasizes “rapid cycle testing.”
• Use effective formal technical reviews as a filter prior to testing.
• Conduct formal technical reviews to assess the test strategy and test cases themselves.
NFRs: Reliability
[Chung, RE Lecture Notes]
Counting Bugs
• Sometimes reliability requirements take the form: "The software shall have no more than X bugs/1K LOC." But how do we measure bugs at delivery time?
• Bebugging Process—based on a Monte Carlo technique for statistical analysis of random events:
  1. Before testing, a known number of bugs (seeded bugs) are secretly inserted.
  2. Estimate the number of bugs in the system.
  3. Remove (both known and new) bugs.
  (# of detected seeded bugs) / (# of seeded bugs) = (# of detected bugs) / (# of bugs in the system)
  # of bugs in the system = (# of seeded bugs) x (# of detected bugs) / (# of detected seeded bugs)
Example: secretly seed 10 bugs; an independent test team detects 120 bugs (6 of them seeded).
# of bugs in the system = 10 x 120 / 6 = 200
# of bugs in the system after removal = 200 - 120 - 4 = 76 (the 4 undetected seeded bugs are also removed, since their locations are known)
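The worked example above can be sketched as a short calculation; the function name is ours, and the arithmetic follows the proportionality assumption that seeded and real bugs are equally detectable:

```python
# Bebugging estimate: if seeded bugs are detected at the same rate
# as real bugs, then total = seeded * detected / detected_seeded.
def estimate_total_bugs(seeded, detected, detected_seeded):
    return seeded * detected // detected_seeded

total = estimate_total_bugs(seeded=10, detected=120, detected_seeded=6)
remaining = total - 120 - (10 - 6)   # remove detected bugs and leftover seeds
print(total, remaining)              # 200 76
```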
• But deadly bugs vs. insignificant ones; not all bugs are equally detectable. Suggestion [Musa87]:
"No more than X bugs/1K LOC may be detected during testing"
"No more than X bugs/1K LOC may remain after delivery, as calculated by the Monte Carlo seeding technique"
White-Box Testing
Cyclomatic Complexity
A number of industry studies have indicated that the higher V(G), the higher the probability of errors.
[Plot: number of modules vs. V(G); modules in the high-V(G) range are more error prone.]