Software Verification


Advanced Software Engineering:
Software Testing
COMP 3702(L2)
Anneliese Andrews
Sada Narayanappa
Thomas Thelin
Carina Andersson
News & Project
News
- Updated course program
- Reading instructions
- The book, deadline 23/3

Project Option 1
- IMPORTANT to read the project description thoroughly
- Schedule, deadlines, activities
- Requirements (7-10 papers), project areas
- Report, template, presentation

Project Option 2: date!
Lecture
Some more testing fundamentals
Chapter 4 (Lab 1)
- Black-box testing techniques
Chapter 12 (Lab 2)
- Statistical testing
- Usage modelling
- Reliability
Terminology
- Unit testing: testing a procedure, function, or class.
- Integration testing: testing the connections between units and components.
- System testing: testing the entire system.
- Acceptance testing: testing to decide whether to purchase the software.
Terminology (2)
- Alpha testing: system testing by a user group within the developing organization.
- Beta testing: system testing by select customers.
- Regression testing: retesting after a software modification.
Test Scaffolding
Allows us to test incomplete systems.
- Test drivers: test components.
- Stubs: test a system when some components it uses are not yet implemented. Often a short, dummy program, e.g. a method with an empty body.
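A minimal sketch in Python (names and values are illustrative, not from the course) of both kinds of scaffolding: a stub stands in for an unimplemented component, and a driver exercises the unit under test.

# The unit under test depends on a tax-rate component that is not yet implemented.

def lookup_tax_rate_stub(region):
    """Stub: a short, dummy stand-in for the missing component."""
    return 0.25  # fixed dummy answer, just enough to exercise the caller

def net_price(gross, lookup_tax_rate, region="SE"):
    """Unit under test: depends on the tax-rate component."""
    return gross * (1.0 - lookup_tax_rate(region))

def driver():
    """Test driver: supplies inputs to the unit and checks the results."""
    assert net_price(100.0, lookup_tax_rate_stub) == 75.0
    print("unit test passed")

if __name__ == "__main__":
    driver()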
Test Oracles
- Determine whether a test run completed with or without errors.
- Often a person who monitors output. Not a reliable method.
- Automatic oracles check output using another program. Requires some kind of executable specification.
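A sketch of an automatic oracle in Python, assuming the executable specification is a slow but trusted reference implementation; the program P, spec S, and test inputs below are invented for illustration.

def P(x):
    """The (possibly faulty) implementation under test."""
    return sorted(x)

def S(x):
    """Executable specification: a slow but trusted reference sort."""
    result = list(x)
    for i in range(len(result)):
        for j in range(i + 1, len(result)):
            if result[j] < result[i]:
                result[i], result[j] = result[j], result[i]
    return result

def oracle(test_inputs):
    """Automatic oracle: flags any input where P disagrees with S."""
    for i in test_inputs:
        if P(i) != S(i):
            print("FAIL on input", i)
            return False
    print("all tests passed")
    return True

oracle([[3, 1, 2], [], [5, 5, 1]])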
Testing Strategies:
Black Box Testing
- Test data derived solely from specifications.
- Also called “functional testing”.
- Statistical testing: used for reliability measurement and prediction.
Testing Theory:
Why Is Testing So Difficult?
Theory often tells us what we can’t do.
The main result of testing theory: perfect testing is impossible.
An Abstract View of Testing
Let program P be a function with an input domain D (e.g., the set of all integers).
We seek test data T: selected inputs of type D.
- T is a subset of D.
- T must be of finite size.
Why?
We Need a Test Oracle
Assume the best possible oracle: the specification S, which is a function with input domain D.
On a single test input i, our program passes the test when

   P(i) = S(i)

or, if we think of a spec as a Boolean function that compares the input to the output: S(i, P(i)).
Requirement For Perfect Testing
[Howden 76]
1. If all of our tests pass, then the program is correct:

   ∀x [x ∈ T → P(x) = S(x)]  ⇒  ∀y [y ∈ D → P(y) = S(y)]

If for all tests t in the test set T, P(t) = S(t), then we are sure that the program works correctly for all elements of D. If any test fails, we look for a bug.
Requirement For Perfect Testing (2)
2. We can tell whether the program will eventually halt and give a result for any t in our test set T:

   ∀x [x ∈ T → ∃ “a computable procedure for determining if P halts on input x”]
But Both Requirements Are Impossible to Satisfy
- The 1st requirement can be satisfied only if T = D, i.e. we test all elements of the input domain.
- The 2nd requirement depends on a solution to the halting problem, which has no solution.
We can demonstrate the problem with Requirement 1 [Howden 78].
Other Undecidable Testing Problems
- Is a control path feasible? Can I find data to execute a given program control path?
- Is some specified code reachable by any input data?
These questions cannot, in general, be answered.
Software Testing Limitations
There is no perfect software testing.
Testing can show defects, but can never show
correctness.
We may never find all of the program errors
during testing.
Why test techniques?
Exhaustive testing (use of all possible inputs and conditions) is impractical
- we must use a subset of all possible test cases
- we want a high probability of detecting faults
Need processes that help us select test cases
- different people should have an equal probability of detecting faults
Effective testing: detect more faults
- focus attention on specific types of fault
- know you’re testing the right thing
Efficient testing: detect faults with less effort
- avoid duplication
- systematic techniques are measurable
Dimensions of testing
Testing combines techniques that focus on:
- Testers: who does the testing
- Coverage: what gets tested
- Potential problems: why you're testing (risks / quality)
- Activities: how you test
- Evaluation: how to tell whether the test passed or failed
All testing should involve all five dimensions.
- Testing standards (e.g. IEEE)
Black-box testing
[Figure: input test data I, including the subset Ie of inputs causing anomalous behaviour, flows into the System; output test results O include the subset Oe of outputs which reveal the presence of defects]
Equivalence partitioning
Partitioning is based on input conditions.
[Figure: input categories: mouse picks on menu, user queries, numerical data, output format requests, responses to prompts, command key input]
Equivalence partitioning
[Figure: valid and invalid input classes feeding the system, plus output classes]
If an input condition:
- is a range: one valid and two invalid classes are defined
- requires a specific value: one valid and two invalid classes are defined
- is a boolean: one valid and one invalid class are defined
Test Cases
Which test cases have the best chance of successfully uncovering faults?
- values as near to the mid-point of the partition as possible
- the boundaries of the partition
The mid-point of a partition typically represents the “typical” values; boundary values represent the atypical or unusual values.
Equivalence partitions are usually identified based on specs and experience.
Equivalence Partitioning Example
Consider a system specification which states that a program will accept between 4 and 10 input values (inclusive), where the input values must be 5-digit integers greater than or equal to 10000.
What are the equivalence partitions?
Example Equivalence Partitions

Number of input values:
- less than 4 (e.g. 3)
- between 4 and 10 (e.g. 4, 7, 10)
- more than 10 (e.g. 11)

Input values:
- less than 10000 (e.g. 9999)
- between 10000 and 99999 (e.g. 10000, 50000, 99999)
- more than 99999 (e.g. 100000)
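As a sketch, the partitions translate into one representative test case each; the validator accept below is a hypothetical implementation of the stated specification.

def accept(values):
    """True iff there are 4..10 values, each a 5-digit integer >= 10000."""
    return 4 <= len(values) <= 10 and all(10000 <= v <= 99999 for v in values)

# One test value per partition (mid-points used as representatives):
assert not accept([12345] * 3)    # fewer than 4 input values
assert accept([12345] * 7)        # 4..10 values, all in range
assert not accept([12345] * 11)   # more than 10 input values
assert not accept([9999] * 7)     # input value below 10000
assert not accept([100000] * 7)   # input value above 99999
print("one test per partition passed")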
Boundary value analysis
[Figure: the same input categories (mouse picks on menu, user queries, numerical data, output format requests, responses to prompts, command key input), now with the output domain included]
Boundary value analysis
- Range a..b: test a, b, just above a, just below b
- Number of values: max, min, just below min, just above max
- Output bounds should be checked
- Boundaries of externally visible data structures should be checked (e.g. arrays)
[Figure: a component with input ranges 0..99 (boundary tests -1, 0, 99, 100) and -9..499 (boundary tests -10, -9, 499, 500)]
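A small illustrative Python helper that enumerates boundary tests for a range a..b, using the common convention of testing on and just outside each boundary (matching the figure's values).

def boundary_values(a, b):
    """Candidate tests at and around the limits of the range a..b."""
    return [a - 1, a, a + 1, b - 1, b, b + 1]

print(boundary_values(0, 99))    # [-1, 0, 1, 98, 99, 100], cf. the 0..99 range
print(boundary_values(-9, 499))  # [-10, -9, -8, 498, 499, 500], cf. -9..499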
Some other black-box techniques
Risk-based testing, random testing
Stress testing, performance testing
Cause-and-effect graphing
State-transition testing
Black Box Testing:
Random Testing
- Generate tests randomly.
- “Poorest methodology of all” [Myers].
- Promoted by others.
- Statistical testing:
  - Test inputs come from an operational profile.
  - Based on reliability theory.
  - Adds discipline to random testing.
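A sketch of that discipline in Python: instead of drawing inputs uniformly at random, statistical testing samples operations according to an operational profile. The profile and operation names below are invented for illustration.

import random

operational_profile = {        # operation -> probability of use in the field
    "open_file":  0.60,
    "save_file":  0.30,
    "export_pdf": 0.10,
}

def generate_test(length, rng=random):
    """Draw a random operation sequence weighted by the profile."""
    ops, weights = zip(*operational_profile.items())
    return rng.choices(ops, weights=weights, k=length)

print(generate_test(5))   # e.g. ['open_file', 'save_file', 'open_file', ...]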
Black Box Testing:
Cause-Effect Analysis
- Rely on pre-conditions and post-conditions and dream up cases.
- Identify impossible combinations.
- Construct a decision table relating input and output conditions.
- Each column corresponds to a test case.
Error guessing
Exploratory testing, happy testing, ...
- Always worth including
- Can detect some failures that systematic techniques miss
Consider:
- Past failures (fault models)
- Intuition
- Experience
- Brainstorming
- “What is the craziest thing we can do?”
- Lists in the literature
Black Box Testing:
Error Guessing
“Some people have a knack for ‘smelling out’ errors” [Myers].
- Enumerate a list of possible errors or error-prone situations.
- Write test cases based on the list.
- Depends upon having fault models: theories on the causes and effects of program faults.
Usability testing
Characteristics:
- Accessibility
- Responsiveness
- Efficiency
- Comprehensibility
Environments:
- Free-form tasks
- Procedure scripts
- Paper screens
- Mock-ups
- Field trial
Specification-based testing
- Formal method
- Test cases derived from a (formal) specification (requirements or design)
[Figure: specification → model (state chart) → test case generation → test execution]
Model-based Testing
[Figure: V-model: requirements (validation, usage model) → specification → top-level design (integration) → detailed design (unit test) → coding, with a test phase attached to each level]
Need For Test Models
- Testing is a search problem: search for the specific input & state combinations that cause failures.
- These combinations are rare, so brute force cannot be effective; brute force can actually lead to overconfidence.
- We must target & test specific combinations, with targets based on fault models.
- Testing is automated to ensure repeatable coverage of targets: “target coverage”.
Model-Based Coverage
- We cannot enumerate all state/input combinations for the implementation under test (IUT).
- We can enumerate these combinations for a model.
- Models allow automated target testing.
“Automated testing replaces the tester’s slingshot with a machine gun.”
“The model paints the target & casts the bullets.”
Test Model Elements
- Subject: the IUT.
- Perspective: focus on aspects likely to be buggy, based on a fault model.
- Representation: often graphs (one format is UML) or checklists.
- Technique: a method for developing a model and generating tests to cover model targets.
The Role of Models in Testing
- Model validation: does it model the right thing?
- Verification: is the implementation correct?
  - Informal: checklist
  - Formal: proof
- Consistency checking: is the representation an instance of the meta-model?
  - Meta-model: UML, graphs, etc. plus a technique
  - Instance model: the representation constructed
Models’ Roles in Testing
- Responsibility-based testing: does behavior conform to the model representation?
- Implementation-based testing: does behavior conform to a model of the implementation?
- Product validation: does behavior conform to a requirements model, for example Use Case models?
Models That Support Testability
- Represent all features to be tested.
- Limit detail to reduce testing costs, while preserving essential detail.
- Represent all state events so that we can generate them.
- Represent all states & state actions so that we can observe them.
Model Types
- Combinational models: model combinations of input conditions; based on decision tables.
- State machines: output depends upon current & past inputs; based on finite state machines.
- UML models: model OO structures.
Combinational Model - Spin Lock [Binder Fig. 6.1]
[Figure: a spin-lock dial with the digits 0-9]
Combinational Model: Use a Decision Table
- One of several responses is selected based on distinct input variables.
- Cases can be modeled as mutually exclusive Boolean expressions of the input variables.
- The response does not depend upon prior input or output.
Combinational Models: Decision Table Construction
1. Identify decision variables & conditions.
2. Identify the resultant actions to be selected.
3. Identify the actions to be produced in response to condition combinations.
4. Derive the logic function.
Combinational Models: Auto Insurance Model
- Renewal decision table: Table 6.1.
- Column-wise format: Table 6.2.
- Decision tree: Fig. 6.2.
- Truth table format: Fig. 6.3.
Tractability note:
- A decision table with n conditions has a maximum of 2^n variants.
- Some variants are implausible.
Insurance Renewal Decision Table [Binder Table 6.1]

Variant   # Claims   Age   Prem. Incr.($)   Send Warning   Cancel
   1      0          <26        50              no           no
   2      0          >25        25              no           no
   3      1          <26       100              yes          no
   4      1          >25        50              no           no
   5      2 to 4     <26       400              yes          no
   6      2 to 4     >25       200              yes          no
   7      5 or >     Any         0              yes          yes

(# Claims and Age form the condition section; Prem. Incr., Send Warning, and Cancel form the action section.)
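As a sketch, the table can be turned directly into executable logic and each column checked as a test case (the encoding below is illustrative; the actions follow Table 6.1).

def renewal_action(claims, age):
    """Return (premium_increase, send_warning, cancel) per the table."""
    if claims >= 5:
        return (0, True, True)             # variant 7: cancel at any age
    young = age < 26
    if claims == 0:
        return (50, False, False) if young else (25, False, False)   # variants 1, 2
    if claims == 1:
        return (100, True, False) if young else (50, False, False)   # variants 3, 4
    return (400, True, False) if young else (200, True, False)       # variants 5, 6

# One check per column of the decision table:
assert renewal_action(0, 30) == (25, False, False)   # variant 2
assert renewal_action(1, 22) == (100, True, False)   # variant 3
assert renewal_action(3, 40) == (200, True, False)   # variant 6
assert renewal_action(6, 40) == (0, True, True)      # variant 7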
Insurance Renewal Decision Table - Column-wise Format [Binder Table 6.2]

Variant               1              2            3              4            5              6            7
Number of claims      0              0            1              1            2 to 4         2 to 4       5 or more
Insured age           25 or younger  26 or older  25 or younger  26 or older  25 or younger  26 or older  Any
Premium increase($)   50             25           100            50           400            200          0
Send warning          No             No           Yes            No           Yes            Yes          No
Cancel                No             No           No             No           No             No           Yes

(Number of claims and Insured age form the condition section; the remaining rows form the action section.)
Implicit Variants
Variants that can be inferred, but are not given explicitly.
- In the insurance example, we don’t care about age if there are five or more claims: the action is to cancel for any age.
- Other implicit variants are those that cannot happen: one cannot be both under 25 years old and over 25.
Test Strategies
Cover all explicit variants at least once
- adequate when decision boundaries are systematically exercised
- weak if “can’t happen” conditions or undefined boundaries result in implicit variants
Non-binary Variable Domain Analysis
- Exactly 0 claims, age 16-25
- Exactly 0 claims, age 26-85
- Exactly 1 claim, age 16-25
- Exactly 1 claim, age 26-85
- 2, 3, or 4 claims, age 16-25
- 2, 3, or 4 claims, age 26-85
- 5 to 10 claims, age 16-85
Test Cases - Variant 1 Boundaries
[Table: boundary test cases for variant 1. Number of Claims (== 0): on point 0, off points 1 (just above) and -1 (just below), typical in point 0. Insured Age (>= 16, <= 25): on points 16 and 25, off points 15 and 26, typical value 20. Each test case lists the expected result (accept variant 1, accept variant 2 or 3, or reject) together with the expected premium increase (25, 50, or 100), send-warning, and cancel actions.]
Additional Heuristics
Vary order of input variables
Scramble test order
Purposely corrupt inputs
Statistical testing / Usage-based testing
[Figure: process flow. A usage specification drives test case generation; generated test cases are event sequences (e.g. Up, Down, Select over menu items 1-3, which may be Hidden or Shown). Test execution is followed by failure logging, which produces a failure report (e.g. output failure at step 1.1.4, call #13) used for reliability estimation.]
Usage specification models
- Algorithmic models
- Grammar models
- State hierarchy models

Example grammar model:

<test_case>   ::= <no_commands> @ <command> <select>;
<no_commands> ::= ( <unif_int>(0,2) [prob(0.9)]
                  | <unif_int>(3,5) [prob(0.1)] );
<command>     ::= ( <up> [prob(0.5)] | <down> [prob(0.5)] );
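A sketch of sampling test cases from this grammar in Python, reading "<no_commands> @ <command>" as "repeat <command> that many times" (an assumption about the notation).

import random

def no_commands():
    """<unif_int>(0,2) with prob 0.9, otherwise <unif_int>(3,5)."""
    return random.randint(0, 2) if random.random() < 0.9 else random.randint(3, 5)

def command():
    """<up> with prob 0.5, <down> with prob 0.5."""
    return random.choice(["up", "down"])

def test_case():
    """A random number of up/down commands followed by a select."""
    return [command() for _ in range(no_commands())] + ["select"]

print(test_case())   # e.g. ['down', 'up', 'select']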
[Figure: state hierarchy model: usage is decomposed into user types, subtypes, users, and the services they use]
Usage specification models (cont.)
[Figure: a Markov model of menu usage, with states such as Item 1-3 (Hidden/Shown) and transitions (Select menu, Select, Up, Down, Invoke) labelled with probabilities (e.g. 40%, 30%, 25%, 10%); domain-based models; an operational profile weighting usage (e.g. normal use 90%, split across function x 50% and function y 25%, versus system mode Z)]
Operational profiles
[Figure: example operational profiles]
Statistical testing / Usage-based testing
[Figure: a usage model is randomly sampled to produce test cases that are run against the code]
Usage Modelling
[Figure: usage model of a GUI with states Main Window and Dialog Box; transitions include Invoke, Right-click, Move, Resize, Click on OK with non-valid hour, CANCEL or OK with valid hour, Close Window, and Terminate]
- Each transition corresponds to an external event
- Probabilities are set according to the future use of the system
- The model supports reliability prediction
Markov model
- System states, seen as nodes
- Probabilities on the transitions between nodes
[Figure: four nodes N1-N4 connected by transitions with probabilities Pij]

Conditions for a Markov model:
- Probabilities are constants
- No memory of the past

Transition matrix (rows: from node; columns: to node):

       N1    N2    N3    N4
N1     P11   P12   P13   P14
N2     P21   P22   0     P24
N3     P31   0     P33   P34
N4     P41   0     0     P44
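A sketch of how such a model drives test generation: a random walk over the transition matrix yields a usage sequence. The probability values below are placeholders (each row sums to 1, and the zero entries match the matrix above).

import random

states = ["N1", "N2", "N3", "N4"]
P = [                            # P[i][j]: probability of moving from i to j
    [0.10, 0.40, 0.30, 0.20],    # N1
    [0.50, 0.30, 0.00, 0.20],    # N2 (no transition N2 -> N3)
    [0.60, 0.00, 0.10, 0.30],    # N3 (no transition N3 -> N2)
    [0.70, 0.00, 0.00, 0.30],    # N4
]

def walk(start=0, steps=8):
    """Generate a state sequence by sampling transitions from P."""
    i, path = start, [states[start]]
    for _ in range(steps):
        i = random.choices(range(len(states)), weights=P[i])[0]
        path.append(states[i])
    return path

print(walk())   # e.g. ['N1', 'N2', 'N1', 'N3', ...]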
Model of a program
- The program is seen as a graph
- One entry node (invoke) and one exit node (terminate)
- Every transition from node Ni to node Nj has a probability Pij
- If there is no connection between Ni and Nj, then Pij = 0
[Figure: graph from Input through nodes N1-N4 to Output, with transition probabilities Pij]
Clock Software
[Figure: Vaporware Clock, version 1.0. Main window showing the date 24 Aug 1997, with an Options menu: Analog, Digital, Clock Only, Seconds, Date, Change Time/Date…, Info…, Exit. A Change Time/Date dialog (current time: 11:10:27a, new time; current date: Sat 24 Aug 1996, new date; OK button) and an Info dialog (OK button).]
Input Domain - Subpopulations
- Human users: keystrokes, mouse clicks
- System clock: time/date input
- Combination usage: time/date changes from the OS while the clock is executing
Create one Markov chain to model the input from the user.
Operation modes of the clock
Window = {main window, change window, info window}
Setting = {analog, digital}
Display = {all, clock only}
Cursor = {time, date, none}
State of the system
A state of the system under test is an element of the set S, where S is the cross product of the operational modes.

States of the clock:
{main window, analog, all, none}
{main window, analog, clock-only, none}
{main window, digital, all, none}
{main window, digital, clock-only, none}
{change window, analog, all, time}
{change window, analog, all, date}
{change window, digital, all, time}
{change window, digital, all, date}
{info window, analog, all, none}
Top Level Markov Chain
The Window operational mode is chosen as the primary modeling mode.
[Figure: not invoked → main window via invoke (prob=1); main window → info window via options.info (prob=1/3), returning via ok (prob=1); main window → change window via options.change (prob=1/3), returning via end (prob=1); main window → terminated via options.exit (prob=1/3)]

Rules for Markov chains:
- Each arc is assigned a probability between 0 and 1 inclusive
- The sum of the exit arc probabilities from each state is exactly 1
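A sketch of this chain as a test-case generator: each test case is a random walk from "not invoked" to "terminated", emitting arc labels as test inputs (states, arcs, and probabilities are taken from the figure above).

import random

chain = {   # state -> list of (arc label, next state, probability)
    "not invoked":   [("invoke", "main window", 1.0)],
    "main window":   [("options.info",   "info window",   1 / 3),
                      ("options.change", "change window", 1 / 3),
                      ("options.exit",   "terminated",    1 / 3)],
    "info window":   [("ok",  "main window", 1.0)],
    "change window": [("end", "main window", 1.0)],
}

def generate_test_case():
    """Walk the chain and collect the inputs to apply."""
    state, inputs = "not invoked", []
    while state != "terminated":
        arcs = chain[state]
        label, state, _ = random.choices(arcs, weights=[p for _, _, p in arcs])[0]
        inputs.append(label)
    return inputs

print(generate_test_case())  # e.g. ['invoke', 'options.info', 'ok', 'options.exit']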
Top Level Model - Data Dictionary

invoke
  Input: invoke the clock software.
  Notes: main window displayed in full. Tester should verify window appearance, setting, and that it accepts no illegal input.

options.change
  Input: select the “Change Time/Date...” item from the “Options” menu.
  Notes: all window features must be displayed in order to execute this command. The change window should appear and be given the focus. Tester should verify window appearance and modality and ensure that it accepts no illegal input.

options.info
  Input: select the “Info...” item from the “Options” menu.
  Notes: the title bar must be on to apply this input. The info window should appear and be given the focus. Tester should verify window appearance and modality and ensure that it accepts no illegal input.

options.exit
  Input: select the “Exit” option from the “Options” menu.
  Notes: the software will terminate; end of test case.

end
  Input: choose any action and return to the main window.
  Notes: the change window will disappear and the main window will be given the focus.

ok
  Input: press the ok button on the info window.
  Notes: the info window will disappear and the main window will be given the focus.
Level 2 Markov Chain - Submodel for the Main Window
[Figure: states analog/all, analog/clock only, digital/all, digital/clock only; arcs: invoke, options.analog, options.digital, options.clock-only, double-click, options.seconds, options.date, plus options.change (returning via end), options.info (returning via ok), and options.exit]
Data Dictionary - Level 2

invoke
  Input: invoke the clock software.
  Notes: main window displayed in full. Invocation may require that the software be calibrated by issuing either an options.analog or an options.digital input. Tester should verify window appearance, setting, and ensure that it accepts no illegal input.

options.change
  Input: select the “Change Time/Date...” item from the “Options” menu.
  Notes: all window features must be displayed in order to execute this command. The change window should appear and be given the focus. Tester should verify window appearance and modality and ensure that it accepts no illegal input.

options.info
  Input: select the “Info...” item from the “Options” menu.
  Notes: the title bar must be on to apply this input. The info window should appear and be given the focus. Tester should verify window appearance and modality and ensure that it accepts no illegal input.

options.exit
  Input: select the “Exit” option from the “Options” menu.
  Notes: the software will terminate; end of test case.

end
  Input: choose any action (cancel or change the time/date) and return to the main window.
  Notes: the change window will disappear and the main window will be given the focus. This action may require that the software be calibrated by issuing either an options.analog or an options.digital input.
Data Dictionary - Level 2 (cont.)

ok
  Input: press the ok button on the info window.
  Notes: the info window will disappear and the main window will be given the focus. This action may require that the software be calibrated by issuing either an options.analog or an options.digital input.

options.analog
  Input: select the “Analog” item from the “Options” menu.
  Notes: the digital display should be replaced by an analog display.

options.digital
  Input: select the “Digital” item from the “Options” menu.
  Notes: the analog display should be replaced by a digital display.

options.clock-only
  Input: select the “Clock Only” item from the “Options” menu.
  Notes: the clock window should be replaced by a display containing only the face of the clock, without a title, menu, or border.

options.seconds
  Input: select the “Seconds” item from the “Options” menu.
  Notes: the second hand/counter should be toggled either on or off depending on its current status.

options.date
  Input: select the “Date” item from the “Options” menu.
  Notes: the date should be toggled either on or off depending on its current status.

double-click
  Input: double-click, using the left mouse button, on the face of the clock.
  Notes: the clock face should be replaced by the entire clock window.
Level 2 Markov Chain - Submodel for the Change Window
[Figure: entry via options.change into the time-field state; move switches between the time and date field states; edit time and edit date loop on their respective states; end returns to the main window]
Data Dictionary

options.change
  Input: select the “Change Time/Date...” item from the “Options” menu.
  Notes: all window features must be displayed in order to execute this command. The change window should appear and be given the focus. Tester should verify window appearance and modality and ensure that it accepts no illegal input.

end
  Input: choose either the “Ok” button or hit the cancel icon and return to the main window.
  Notes: the change window will disappear and the main window will be given the focus. This action may require that the software be calibrated by issuing either an options.analog or an options.digital input.

move
  Input: hit the tab key to move the cursor to the other input field, or use the mouse to select the other field.
  Notes: tester should verify cursor movement and also verify both options for moving the cursor.

edit time
  Input: change the time in the “new time” field or enter an invalid time.
  Notes: the valid input format is shown on the screen.

edit date
  Input: change the date in the “new date” field or enter an invalid date.
  Notes: the valid input format is shown on the screen.
Software Reliability Techniques
- Markov models
- Reliability growth models
[Figure: failure rate decreasing over time]
Dimensions of dependability
- Availability: the ability of the system to deliver services when requested
- Reliability: the ability of the system to deliver services as specified
- Safety: the ability of the system to operate without catastrophic failure
- Security: the ability of the system to protect itself against accidental or deliberate intrusion
Costs of increasing dependability
[Figure: cost rises steeply as dependability moves from low through medium, high, and very high to ultra-high]
Availability and reliability
- Reliability: the probability of failure-free system operation over a specified time in a given environment for a given purpose
- Availability: the probability that a system, at a point in time, will be operational and able to deliver the requested services
Both of these attributes can be expressed as probabilities.
Reliability terminology
- System failure: an event that occurs at some point in time when the system does not deliver a service as expected by its users
- System error: erroneous system behaviour, where the behaviour of the system does not conform to its specification
- System fault: an incorrect system state, i.e. a system state that is unexpected by the designers of the system
- Human error or mistake: human behaviour that results in the introduction of faults into a system
Usage profiles / Reliability
Removing X% of the faults in a system will not necessarily improve the reliability by X%!
[Figure: the set of possible inputs, with the erroneous inputs overlapping the input subsets exercised by users 1, 2, and 3 to different degrees]
Reliability achievement
Fault avoidance
- Minimise the possibility of mistakes
- Trap mistakes
Fault detection and removal
- Increase the probability of detecting and correcting faults
Fault tolerance
- Run-time techniques
Reliability quantities
Execution time
- the CPU time actually spent by the computer in executing the software
Calendar time
- the time people normally experience, in terms of years, months, weeks, etc.
Clock time
- the elapsed time from start to end of computer execution in running the software
Reliability metrics
[Figure: reliability metrics]
Nonhomogeneous Poisson Process (NHPP) Models
N(t) follows a Poisson distribution. The probability that N(t) equals a given integer n is:

   P{N(t) = n} = [m(t)]^n / n! · e^(−m(t)),   n = 0, 1, 2, ...

m(t) = μ(t) is called the mean value function; it describes the expected cumulative number of failures in [0, t).
The Goel-Okumoto (GO) model
Assumptions:
- The cumulative number of failures detected at time t follows a Poisson distribution
- All failures are independent and have the same chance of being detected
- All detected faults are removed immediately and no new faults are introduced
The failure process is modelled by an NHPP model with mean value function μ(t) given by:

   μ(t) = a(1 − e^(−bt)),   a > 0, b > 0
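A small numeric sketch of these formulas in Python (the parameter values a and b are purely illustrative).

import math

def m(t, a=100.0, b=0.05):
    """GO mean value function: expected cumulative failures in [0, t)."""
    return a * (1.0 - math.exp(-b * t))

def prob_n_failures(n, t, a=100.0, b=0.05):
    """NHPP probability P{N(t) = n} = m(t)^n / n! * exp(-m(t))."""
    mt = m(t, a, b)
    return mt ** n / math.factorial(n) * math.exp(-mt)

print(m(10.0))                   # ~39.35 expected failures by t = 10
print(prob_n_failures(39, 10.0)) # probability of exactly 39 failures by t = 10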
Goel-Okumoto
[Figure: the shape of the mean value function μ(t) and the intensity function λ(t) of the GO model]
S-shaped NHPP model

   μ(t) = a[1 − (1 + bt)e^(−bt)],   b > 0

[Figure: S-shaped curve of μ(t) versus t]

The failure intensity is the derivative of the mean value function:

   λ(t) = dμ(t)/dt = ab(1 + bt)e^(−bt) − ab·e^(−bt) = ab²t·e^(−bt)
The Jelinski-Moranda (JM) model
Assumptions:
1. Times between failures are independent, exponentially distributed random quantities
2. The number of initial faults is an unknown but fixed constant
3. A detected fault is removed immediately and no new fault is introduced
4. All remaining faults contribute the same amount to the software failure intensity
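A sketch of what assumptions 1-4 imply: with N initial faults and a per-fault intensity phi, the i-th time between failures is exponentially distributed with rate phi * (N - (i - 1)). The parameter values below are invented for illustration.

import random

def simulate_jm(N=20, phi=0.01, rng=random):
    """Simulate the successive inter-failure times of the JM model."""
    gaps = []
    for i in range(1, N + 1):
        rate = phi * (N - (i - 1))   # intensity drops as faults are removed
        gaps.append(rng.expovariate(rate))
    return gaps

gaps = simulate_jm()
print(gaps[:3])   # early gaps are short; later gaps grow on average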
Next weeks
Next week (April 11):
- Lab 1: Black-box testing