New Methods of Software Reliability: Estimations and Projections


Rivier College Mathematics & Computer Science Lecture Series
March 23, 2006
Software Reliability Estimates/Projections, Cumulative & Instantaneous
Presented by Dave Dwyer
With help from: Ann Marie Neufelder, John D. Musa, Martin Trachtenberg, Thomas Downs, Ernest O. Codier, and the faculty of the Rivier College Graduate School of Mathematics and Computer Science
Martin Trachtenberg (1985):
• Simulation shows that, with respect to the number of detected errors:
– Testing the functions of the software system in a random or round-robin order gives linearly decaying system error rates.
– Testing each function exhaustively, one at a time, gives flat system error rates.
– Testing different functions at widely different frequencies gives exponentially decaying system error rates [Operational Profile Testing], and
– Testing strategies which result in linearly decaying error rates tend to require the fewest tests to detect a given number of errors.
Thomas Downs (1985):
“In this paper, an approach to the modeling of
software testing is described. A major aim of this
approach is to allow the assessment of the effects
of different testing (and debugging) strategies in
different situations. It is shown how the techniques
developed can be used to estimate, prior to the
commencement of testing, the optimum allocation
of test effort for software which is to be
nonuniformly executed in its operational phase.”
There are Two Basic Types of Software Reliability Models
• Predictors - predict the reliability of software at some future time. The prediction is made prior to development or test, as early as the concept phase, and is normally based on historical data.
• Estimators - estimate the reliability of software at some present or future time based on data collected from the current development and/or test. Normally used later in the life cycle than predictors.
A Pure Approach Reflects the
True Nature of Software
• The execution of software takes the form of the execution of a sequence of paths, drawn from M total paths.
• The actual number of paths affected by an
arbitrary fault is unknown and can be treated as a
random variable, c.
• Not all paths are equally likely to be executed in a
randomly selected execution profile.
[Figure: fault/path diagram. 'M' total paths, 'N' total faults initially, 'c' paths affected by an arbitrary fault; e.g., 2 paths affected by fault x1, 1 path affected by fault x2]
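To make the path/fault picture above concrete, here is a minimal Python sketch (not from the lecture) that simulates it under the simplest possible assumptions: M paths, N initial faults, each fault affecting c randomly chosen paths, and paths selected uniformly at random for execution. All numeric values are illustrative.

import random

# Toy simulation of the path/fault model sketched above (illustrative numbers,
# not from the lecture): M total paths, N initial faults, each fault affecting
# c randomly chosen paths.  Paths are selected uniformly at random for execution.
M, N, c = 1000, 50, 5          # total paths, initial faults, paths per fault
random.seed(1)

# Map each fault to the set of paths it affects.
fault_paths = [set(random.sample(range(M), c)) for _ in range(N)]

remaining = set(range(N))       # indices of faults not yet detected/removed
detections = []                 # (execution number, cumulative faults found)

for t in range(1, 20001):       # each iteration = one path execution
    path = random.randrange(M)
    hit = {f for f in remaining if path in fault_paths[f]}
    if hit:
        remaining -= hit        # assume faults on an executed path are found and fixed
        detections.append((t, N - len(remaining)))

for t, found in detections[:10]:
    print(f"execution {t}: {found} faults detected")

Run repeatedly, the detection count climbs quickly at first and then flattens as faults are removed, which is the behavior the models on the following slides describe analytically.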
Further...
• In the operational phase of many large software systems, some sections of code are executed much more frequently than others.
• Faults located in heavily used sections of code are much more likely to be detected early.
Downs (IEEE Trans. on SW Eng., April 1985) showed that approximations can be made:
• Each time a path is selected for testing, all paths are equally likely to be selected.
• The actual number of paths affected by an arbitrary fault is a constant.
My Data Assumptions
• Cumulative 8-hour test shifts are recorded vs. the number of errors.
• Each first instance of an error is plotted.
• The last data point is placed at the end of the test time, even though there was no error there, because a long interval without error is more significant than an interval with an error.
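As a concrete illustration of these assumptions, the following Python sketch turns a hypothetical log of first-instance errors per cumulative 8-hour shift into the (failures/8 hours, cumulative failures) points used in the plots later in the talk; the shift counts are made up for illustration.

# A minimal sketch of the data assumptions above (hypothetical shift counts):
# first-instance errors are tallied per cumulative 8-hour test shift, and each
# point pairs the cumulative failure rate (failures per 8-hour shift) with the
# cumulative failure count.  The final point is taken at the end of test time
# even if that last interval contained no error.
errors_per_shift = [6, 5, 4, 4, 3, 2, 2, 1, 1, 0]   # hypothetical first instances per shift

points = []          # (lambda_c = failures per 8-hr shift, cumulative failures)
cum = 0
for shift, n in enumerate(errors_per_shift, start=1):
    cum += n
    points.append((cum / shift, cum))               # the no-error final shift still gets a point

for rate, total in points:
    print(f"lambda_c = {rate:.3f} failures/shift, cumulative failures = {total}")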
Other Assumptions
• Only integration & system test data are used.
• Problems will be designated as priority 1, 2 or 3 (ref. DoD-STD-2167A), where:
– Priority 1: “Prevents mission essential capability”
– Priority 2: “Adversely affects mission essential capability with no alternative workaround”
– Priority 3: “Adversely affects mission essential capability with alternative workaround”
Downs Showed: λ ~ faults/path
• λ_j = φ(N - j), where:
– N = the total number of faults,
– j = the number of corrected faults,
– φ = -r log(1 - c/M), where:
• r = the number of paths executed per unit time,
• c = the average number of paths affected by each fault, and
• M = the total number of paths.
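A minimal Python sketch of these formulas, with illustrative values for r, c, M, N and j (none of them taken from the lecture data):

import math

# Downs' per-fault detection rate and failure intensity, using the formulas above.
r = 100.0     # paths executed per unit time
c = 5.0       # average number of paths affected by each fault
M = 1000.0    # total number of paths
N = 50        # total initial number of faults

phi = -r * math.log(1.0 - c / M)      # phi = -r*log(1 - c/M), roughly r*c/M for small c/M

for j in (0, 10, 25, 40):             # j = faults already corrected
    lam_j = phi * (N - j)             # lambda_j = phi * (N - j)
    print(f"j = {j:2d}: lambda_j = {lam_j:.3f} failures per unit time")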
Failure Rate is Proportional to Failure Number
Downs: λ_j ≈ (N - j)·r·(c/M)
Given:
N = total initial number of faults
λ(0) = initial failure intensity, with 0 errors detected/corrected (start of testing)
λ_j = cumulative failure intensity after some number of faults, j, has been detected
j = the number of faults removed over time
λ_i = instantaneous failure intensity
T = time
λ_c = j/T
[Figure: failure rate plotted against failure number for a range of non-uniform testing profiles, with M1, M2 paths and N1, N2 initial faults in those paths (logarithmic scale?)]
Imagine two main segments
[Figure: code divided into Segment 1 and Segment 2]
After testing segment 1, someone asks:
• Given 10 faults found, what's the reliability of the code?
• Responses:
– We don't know how many other faults remain in segment 1, let alone how many are in segment 2.
– We don't know how often segments 1 and 2 are used.
– Did we plot failure intensity vs. faults?
– Why didn't we test to the operational profile?
By reference to Duane’s derivation
for hardware reliability,
(Ref. E. O. Codier, RAMS - 1968)
c  F / T
 kT (  m )
F  kT (1 m)
i  F / T
 k (1  m)T (  m )
i  (1  m)c
m  slope
Failure Intensity (Instantaneous Failure Rate) Derivation - Hardware & Software

Duane's instantaneous λ for HW:
λ_c = F/T = kT^(-m)
F = kT^(1-m)
λ_i = dF/dT = k(1 - m)T^(-m)
λ_i = (1 - m)λ_c
m = slope

Dave's instantaneous λ for SW ('failure intensity'):
λ_j = φ(N - j)
λ_c = j/T
j = φT(N - j)
λ_i = dj/dT
dj/dT = φ(N - j) - φT(dj/dT)
λ_i = λ_j - φT·λ_i
λ_i(1 + φT) = λ_j
λ_i = λ_j/(1 + φT)
φ = 1/(-slope)

Similar result.
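The parallel between the two results can be shown in a short Python sketch. The two helper functions below simply encode λ_i = (1 - m)·λ_c for hardware and λ_i = λ_c/(1 + φT) for software; the hardware inputs are illustrative, while the software inputs are the values that appear in the worked example later in the talk (slope -338.12, T = 839.6, λ_c = 0.050).

# Side-by-side sketch of the two corrections derived above.  Both take the
# cumulative failure intensity observed at time T and return the instantaneous value.

def duane_instantaneous(lambda_c: float, m: float) -> float:
    """Hardware (Duane/Codier): lambda_i = (1 - m) * lambda_c, m = Duane plot slope."""
    return (1.0 - m) * lambda_c

def software_instantaneous(lambda_c: float, phi: float, T: float) -> float:
    """Software (Downs-based): lambda_i = lambda_c / (1 + phi*T), phi = 1/(-slope)."""
    return lambda_c / (1.0 + phi * T)

print(duane_instantaneous(lambda_c=0.05, m=0.4))                      # HW, illustrative: 0.03
print(software_instantaneous(lambda_c=0.05, phi=1/338.12, T=839.6))   # SW, worked example: ~0.014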
Priority 1 Data Plotted
[Plot: Sum Failures (n) vs. Failures/8 Hours; linear fit y = -43.964x + 38.803]
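For readers who want to reproduce this kind of plot, here is a minimal sketch of the underlying straight-line fit in Python with NumPy. The data points are hypothetical (the actual Priority 1 fit was y = -43.964x + 38.803); the point is that the y-intercept estimates the total number of faults N, and the slope gives φ = 1/(-slope), as used in the derivation above.

import numpy as np

# Cumulative failures (y) against cumulative failure rate in failures per
# 8-hour shift (x).  The data points below are hypothetical.
rate = np.array([0.55, 0.50, 0.46, 0.42, 0.38, 0.33, 0.28, 0.24])   # failures/8 hr
cum_failures = np.array([14, 17, 20, 23, 26, 29, 32, 35])           # hypothetical

slope, intercept = np.polyfit(rate, cum_failures, 1)   # least-squares straight line

N_est = intercept          # y-intercept estimates the total number of faults, N
phi = 1.0 / (-slope)       # phi = 1/(-slope), as in the derivation above
print(f"slope = {slope:.2f}, N ~ {N_est:.1f}, phi ~ {phi:.4f} per 8-hr shift")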
Priority 1 and 2 Data Plotted
[Plot: Sum Failures vs. Failures/8 Hours, three data series with linear fits y = -176.83x + 349.85 and y = -179.88x + 288.61]
Point Estimates vs Instantaneous (FRP Censored)
[Plot: Critical Failure Number vs. Failure Rate, two data series with linear fits y = -676.24x + 58.829 (R² = 0.9193) and y = -338.12x + 58.829 (R² = 0.9193)]
From the previous graph, top curve: j = -338.12·λ_c + 58.829

j        λ_c      T = j/λ_c
41.98    0.050    839.6
40.29    0.055    732.55
43.67    0.045    970.44

Point estimate of λ_i over a range including a few points:
λ_i ≈ Δj/ΔT = (43.67 - 40.29)/(970.44 - 732.55) = 3.38/237.89 = 0.014

From the formula for instantaneous failure intensity:
λ_i = λ_c/(1 + φT)
φ = 1/338.12 (-1/slope of the curve)
T = j/λ_c = 839.6
λ_i = 0.050/(1 + 839/338) = 0.050/(1 + 2.48) = 0.050/3.48 = 0.014
A much better estimate, using all points up to λ_c = 0.050.
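The arithmetic on that slide can be checked with a few lines of Python; the numbers below are exactly those read off the fitted top curve j = -338.12·λ_c + 58.829.

# Reproducing the arithmetic above: a two-point ("point") estimate of the
# instantaneous failure intensity versus the estimate from
# lambda_i = lambda_c / (1 + phi*T).

# (j, lambda_c) pairs read from the fit, with T = j / lambda_c:
data = [(41.98, 0.050), (40.29, 0.055), (43.67, 0.045)]
T = [j / lc for j, lc in data]            # 839.6, 732.55, 970.44

# Point estimate over a couple of neighbouring points: delta j / delta T
point_est = (43.67 - 40.29) / (T[2] - T[1])
print(f"point estimate:        {point_est:.3f}")   # ~0.014

# Formula estimate using all points up to lambda_c = 0.050
phi = 1.0 / 338.12                        # phi = 1/(-slope)
lam_c, T0 = 0.050, 41.98 / 0.050          # T = j / lambda_c = 839.6
lam_i = lam_c / (1.0 + phi * T0)
print(f"instantaneous formula: {lam_i:.3f}")       # ~0.014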
For a copy of the paper, e-mail a request to:
[email protected]