Causes of faults during development
Chapter 9
Lecturer: Dr. AJ Bieszczad
System testing process
• Function testing: does the integrated
system perform as promised by the
requirements specification?
• Performance testing: are the non-functional
requirements met?
• Acceptance testing: is the system what the
customer expects?
• Installation testing: does the system run at
the customer site(s)?
Steps in testing process
Techniques used in system testing
• Build or spin plan for gradual testing
• Configuration management
– versions and releases
– production system vs. development system
– deltas, separate files and conditional
compilation
– change control
• Regression testing
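For illustration (not on the original slide), a minimal regression-testing sketch using Python and pytest: the discount() function and its saved cases are invented; the point is that after every fix or new build the same stored test cases are re-run to confirm that behavior that previously worked still works.

import pytest

def discount(price, percent):
    """Toy function under test (invented for this sketch)."""
    return round(price * (1 - percent / 100), 2)

# Saved regression cases: inputs plus the outputs accepted in the previous release.
REGRESSION_CASES = [
    ((100.0, 10), 90.0),
    ((19.99, 0), 19.99),
    ((50.0, 50), 25.0),
]

@pytest.mark.parametrize("args, expected", REGRESSION_CASES)
def test_discount_regression(args, expected):
    assert discount(*args) == expected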
Test team
• Professional testers: organize and run the
tests
• Analysts: who created the requirements
• System designers: understand the proposed
solution
• Configuration management specialists: to
help control fixes
• Users: to evaluate issues that arise
Function testing
• A test should:
– have a high probability of detecting faults
– use a test team independent of the designers and
programmers
– know the expected actions and output
– test both valid and invalid input
– never modify the system to make testing easier
– have stopping criteria
Cause-and-effect graphs (1)
INPUT: The syntax of the function is LEVEL(A,B)
where A is the height in meters of the water behind the dam,
and B is the number of centimeters of rain in the last 24-hour period.
PROCESSING: The function calculates whether the water level
is within a safe range, is too high, or is too low.
OUTPUT: The screen shows one of the following messages:
1. “LEVEL = SAFE” when the result is safe or low.
2. “LEVEL = HIGH” when the result is high.
3. “INVALID SYNTAX”
depending on the result of the calculation.
Cause-and-effect graphs (2)
Causes:
1. The first five characters of the command are “LEVEL.”
2. The command contains exactly two parameters separated by a comma and enclosed in parentheses.
3. The parameters A and B are real numbers such that the water level is calculated to be LOW.
4. The parameters A and B are real numbers such that the water level is calculated to be SAFE.
5. The parameters A and B are real numbers such that the water level is calculated to be HIGH.
Cause-and-effect graphs (3)
Effects:
1. The message “LEVEL = SAFE” is displayed on the screen.
2. The message “LEVEL = HIGH” is displayed on the screen.
3. The message “INVALID SYNTAX” is printed out.
Intermediate nodes:
1. The command is syntactically valid.
2. The operands are syntactically valid.
Notation for cause-and-effect graph
Additional graph notation
Cause-and-effect graph
Decision table for cause-and-effect graph

            Test 1   Test 2   Test 3   Test 4   Test 5
Cause 1     I        I        I        S        I
Cause 2     I        I        I        X        S
Cause 3     I        S        S        X        X
Cause 4     S        I        S        X        X
Cause 5     S        S        I        X        X
Effect 1    P        P        A        A        A
Effect 2    A        A        P        A        A
Effect 3    A        A        A        P        P
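For illustration (not from the original slides), the five decision-table columns can be written as concrete function tests. In the Python sketch below, the command strings and numeric values are invented, and level() stands for whatever entry point the implementation actually exposes; only the expected messages come from the LEVEL specification above.

# Hypothetical test cases derived from the decision table; the water/rain
# values are assumed to produce the LOW, SAFE, and HIGH classifications.
TEST_CASES = [
    ("LEVEL(10,0)",   "LEVEL = SAFE"),    # Test 1: valid command, level computed LOW
    ("LEVEL(50,10)",  "LEVEL = SAFE"),    # Test 2: valid command, level computed SAFE
    ("LEVEL(200,90)", "LEVEL = HIGH"),    # Test 3: valid command, level computed HIGH
    ("DEPTH(50,10)",  "INVALID SYNTAX"),  # Test 4: command word is not LEVEL
    ("LEVEL(50;10)",  "INVALID SYNTAX"),  # Test 5: command word OK, operands malformed
]

def run_tests(level):
    """Run each decision-table test against a candidate level(command) function."""
    for command, expected in TEST_CASES:
        actual = level(command)
        status = "pass" if actual == expected else "FAIL"
        print(f"{status}: {command!r} -> {actual!r} (expected {expected!r})")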
Performance tests
• Stress tests
• Volume tests
• Configuration tests
• Compatibility tests
• Regression tests
• Security tests
• Timing tests
• Environmental tests
• Quality tests
• Recovery tests
• Maintenance tests
• Documentation tests
• Human factors (usability) tests
Software Reliability, Availability and
Maintainability
• Reliability
– the probability that the system will operate without failure under given
conditions for a given time interval
• Availability
– the probability that a system is operating successfully according to
specifications at a given point in time
• Maintainability
– the probability that, for a given condition of use, a maintenance activity
can be carried out within a stated time interval and using stated resources
and procedures
Example: Inter-failure times (read left to right, in rows)

3     30    113   81    115   9     2     91    112   15
138   50    77    24    108   88    670   120   26    114
325   55    242   68    422   180   10    1146  600   15
36    4     0     8     227   65    176   58    457   300
97    263   452   255   197   193   6     79    816   1351
148   21    233   134   357   193   236   31    369   748
0     232   330   365   1222  543   10    16    529   379
44    129   810   290   300   529   281   160   828   1011
445   296   1755  1064  1783  860   983   707   33    868
724   2323  2930  1461  843   12    261   1800  865   1435
30    143   108   0     3110  1247  943   700   875   245
729   1897  447   386   446   122   990   948   1082  22
75    482   5509  100   10    1071  371   790   6150  3321
1045  648   5485  1160  1864  4116
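For illustration (not on the original slide), the table can also be summarized programmatically; the Python sketch below lists the 136 inter-failure times and reports their count and mean, the mean of observed inter-failure times being one common estimate of MTTF (defined later in this chapter).

# Inter-failure times from the table above, read left to right, in rows.
INTER_FAILURE_TIMES = [
    3, 30, 113, 81, 115, 9, 2, 91, 112, 15,
    138, 50, 77, 24, 108, 88, 670, 120, 26, 114,
    325, 55, 242, 68, 422, 180, 10, 1146, 600, 15,
    36, 4, 0, 8, 227, 65, 176, 58, 457, 300,
    97, 263, 452, 255, 197, 193, 6, 79, 816, 1351,
    148, 21, 233, 134, 357, 193, 236, 31, 369, 748,
    0, 232, 330, 365, 1222, 543, 10, 16, 529, 379,
    44, 129, 810, 290, 300, 529, 281, 160, 828, 1011,
    445, 296, 1755, 1064, 1783, 860, 983, 707, 33, 868,
    724, 2323, 2930, 1461, 843, 12, 261, 1800, 865, 1435,
    30, 143, 108, 0, 3110, 1247, 943, 700, 875, 245,
    729, 1897, 447, 386, 446, 122, 990, 948, 1082, 22,
    75, 482, 5509, 100, 10, 1071, 371, 790, 6150, 3321,
    1045, 648, 5485, 1160, 1864, 4116,
]

n = len(INTER_FAILURE_TIMES)
mean_time = sum(INTER_FAILURE_TIMES) / n
print(f"{n} observations, mean inter-failure time = {mean_time:.1f}")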
Example: Inter-failure graph
Measuring Reliability, Availability
and Maintainability
• Mean Time to Failure (MTTF)
R = MTTF/(1 + MTTF)
• Mean Time to Repair (MTTR)
M = 1/(1+MTTR)
• Mean Time between Failures (MTBF)
MTBF = MTTF + MTTR
A = MTBF/(1+MTBF)
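A quick numeric sketch of these formulas (the MTTF and MTTR values below are assumptions chosen only for illustration; the formulas are those on the slide):

# Reliability, maintainability, and availability from assumed MTTF/MTTR values.
MTTF = 1000.0   # mean time to failure (assumed, in hours)
MTTR = 2.0      # mean time to repair (assumed, in hours)

MTBF = MTTF + MTTR       # mean time between failures
R = MTTF / (1 + MTTF)    # reliability
M = 1 / (1 + MTTR)       # maintainability
A = MTBF / (1 + MTBF)    # availability

print(f"MTBF = {MTBF}, R = {R:.4f}, M = {M:.4f}, A = {A:.4f}")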
Reliability function
R(t) = 1 − F(t)
F(t) = ∫ f(t) dt, integrated from 0 to t
where
• f(t) is the failure probability density function (from statistics)
• F(t) is the failure distribution function
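To make this concrete, a small sketch (assuming the uniform density on [0, T] shown on the next slide; T and the evaluation points are arbitrary choices) that integrates f numerically and checks R(t) = 1 − F(t) against the closed form 1 − t/T:

# Uniform failure density on [0, T]: f(t) = 1/T, so F(t) = t/T and R(t) = 1 - t/T.
T = 100.0

def f(t):
    """Failure probability density function (uniform on [0, T])."""
    return 1.0 / T if 0 <= t <= T else 0.0

def F(t, steps=10_000):
    """Failure distribution function: numerical integral of f from 0 to t."""
    dt = t / steps
    return sum(f((i + 0.5) * dt) for i in range(steps)) * dt

def R(t):
    """Reliability function."""
    return 1.0 - F(t)

for t in (10, 50, 90):
    print(f"t = {t}: R(t) ~= {R(t):.3f}   closed form: {1 - t / T:.3f}")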
Uniform probability density function
Example: Inter-failure times (read left to right, in rows) for reliability prediction

3     30    113   81    115   9     2     91    112   15
138   50    77    24    108   88    670   120   26    114
325   55    242   68    422   180   10    1146  600   15
36    4     0     8     227   65    176   58    457   300
97    263   452   255   197   193   6     79    816   1351
148   21    233   134   357   193   236   31    369   748
0     232   330   365   1222  543   10    16    529   379
44    129   810   290   300   529   281   160   828   1011
445   296   1755  1064  1783  860   983   707   33    868
724   2323  2930  1461  843   12    261   1800  865   1435
30    143   108   0     3110  1247  943   700   875   245
729   1897  447   386   446   122   990   948   1082  22
75    482   5509  100   10    1071  371   790   6150  3321
1045  648   5485  1160  1864  4116

Predictions:
t4=71.5, because t2=30 and t3=113
t5=97, because t3=113 and t4=81
etc.
Reliability prediction graph
Averaging previous two failure times to predict the third.
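A minimal sketch of this rule (the data are the first row of the inter-failure table; the rule itself, averaging the two most recent observed times to predict the next, is the one stated above):

# Predict each inter-failure time as the mean of the previous two observations.
times = [3, 30, 113, 81, 115, 9, 2, 91, 112, 15]   # t1..t10 from the table

for i in range(2, len(times)):
    predicted = (times[i - 2] + times[i - 1]) / 2
    print(f"t{i + 1}: predicted {predicted}, observed {times[i]}")

For example, this reproduces the slide's predictions of 71.5 for t4 and 97 for t5.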
Acceptance tests
• Benchmark test: testing on pre-defined sets
of typical (“important”) test cases
• Pilot test: install on experimental basis
• Alpha test: in-house test
• Beta test: customer pilot
• Parallel testing: new system operates in
parallel with old system
Test documentation
• Test plan: describes system and plan for
exercising all functions and characteristics
• Test specification and evaluation: details
each test and defines criteria for evaluating
each feature
• Test description: test data and procedures
for each test
• Test analysis report: results of each test
Documents produced during testing
Parts of a test plan
EXAMPLE: Test SORT
INPUT DATA:
Input data are to be provided by the LIST program. The program randomly generates a list of N words of
alphanumeric characters; each word is of length M. The program is invoked by calling
RUN LIST(N,M)
in your test driver. The output is placed in global data area LISTBUF. The test datasets to be used for this test are as
follows:
Case 1: Use LIST with N=5, M=5
Case 2: Use LIST with N=10, M=5
Case 3: Use LIST with N=15, M=5
Case 4: Use LIST with N=50, M=10
Case 5: Use LIST with N=100, M=10
Case 6: Use LIST with N=150, M=10
INPUT COMMANDS:
The SORT routine is invoked by using the command
RUN SORT (INBUF,OUTBUF) or
RUN SORT (INBUF)
OUTPUT DATA:
If two parameters are used, the sorted list is placed in OUTBUF. Otherwise, it is placed in INBUF.
SYSTEM MESSAGES:
During the sorting process, the following message is displayed:
“Sorting ... please wait ...”
Upon completion, SORT displays the following message on the screen:
“Sorting completed”
To halt or terminate the test before the completion message is displayed, press CONTROL-C on the keyboard.
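A hedged Python stand-in for the driver described above. LIST and SORT here are simple local fakes standing in for the real RUN LIST and RUN SORT commands (which belong to the system under test), so only the shape of the six test cases is taken from the description:

import random
import string

def LIST(n, m):
    """Stand-in for RUN LIST(N,M): n random alphanumeric words of length m."""
    alphabet = string.ascii_uppercase + string.digits
    return ["".join(random.choices(alphabet, k=m)) for _ in range(n)]

def SORT(inbuf, outbuf=None):
    """Stand-in for RUN SORT: sort into outbuf if given, otherwise in place."""
    if outbuf is None:
        inbuf.sort()
        return inbuf
    outbuf[:] = sorted(inbuf)
    return outbuf

# The six test cases listed under INPUT DATA.
for n, m in [(5, 5), (10, 5), (15, 5), (50, 10), (100, 10), (150, 10)]:
    listbuf = LIST(n, m)
    assert SORT(listbuf, []) == sorted(listbuf), f"SORT failed for N={n}, M={m}"
print("Sorting completed")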
Example: Test script
Step N:     Press function key 4: Access data file.
Step N+1:   Screen will ask for the name of the data file. Type ‘sys:test.txt’
Step N+2:   Menu will appear, reading
                * delete file
                * modify file
                * rename file
            Place cursor next to ‘modify file’ and press RETURN key.
Step N+3:   Screen will ask for record number. Type ‘4017’.
Step N+4:   Screen will fill with data fields for record 4017:
                Record number: 4017
                X: 0042 Y: 0036
                Soil type: clay
                Percolation: 4 mtrs/hr
                Vegetation: kudzu
                Canopy height: 25 mtrs
                Water table: 12 mtrs
                Construct: outhouse
                Maintenance code: 3T/4F/9R
Step N+5:   Press function key 9: modify
Step N+6:   Entries on screen will be highlighted. Move cursor to VEGETATION field.
            Type ‘grass’ over ‘kudzu’ and press RETURN key.
Step N+7:   Entries on screen will no longer be highlighted. VEGETATION field should
            now read ‘grass’.
Step N+8:   Press function key 16: Return to previous screen.
Step N+9:   Menu will appear, reading
                * delete file
                * modify file
                * rename file
            To verify that the modification has been recorded, place cursor next to
            ‘modify file’ and press RETURN key.
Step N+10:  Screen will ask for record number. Type ‘4017’.
Step N+11:  Screen will fill with data fields for record 4017:
                Record number: 4017
                X: 0042 Y: 0036
                Soil type: clay
                Percolation: 4 mtrs/hr
                Vegetation: grass
                Canopy height: 25 mtrs
                Water table: 12 mtrs
                Construct: outhouse
                Maintenance code: 3T/4F/9R
Problem report forms
• Location
• Timing
• Symptom
• End result
• Mechanism
• Cause
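For illustration only (not part of any slide), the six fields above could be captured as a simple record type; the class name and field types below are assumptions:

from dataclasses import dataclass

@dataclass
class ProblemReport:
    location: str    # where the problem was observed
    timing: str      # when it occurred and how long it lasted
    symptom: str     # what behavior was observed
    end_result: str  # the consequence of the failure
    mechanism: str   # how the failure came about
    cause: str       # why the underlying fault arose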
Testing safety-critical systems
• Design diversity: use different kinds of
designs, designers
• Software safety cases: make explicit the
ways the software addresses possible
problems
– failure modes and effects analysis
– hazard and operability studies
• Cleanroom: certifying software with respect
to the specification
Perspectives for safety analysis
                 Known cause                           Unknown cause
Known effect     Description of system behavior        Deductive analysis, including
                                                       fault tree analysis
Unknown effect   Inductive analysis, including         Exploratory analysis, including
                 failure modes and effects analysis    hazard and operability studies
Cleanroom
• The cleanroom approach addresses two
fundamental principles:
– to certify with respect to the specifications,
rather than wait for unit testing to find faults
– to produce zero-fault or near-zero-fault
software
• Blending of numerous techniques