
CS 501: Software Engineering
Lecture 23
Reliability III
Administration
Static and Dynamic Verification
Static verification: techniques of verification that do not involve executing the software.
• May be manual or may use computer tools.

Dynamic verification: techniques of verification that execute the software.
• Testing the software with trial data.
• Debugging to remove errors.
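As a minimal illustration in Python (the function and test are hypothetical): a static type checker such as mypy could flag a mistyped call such as discounted_price("100", 25) without running the program, while the unit test below verifies behavior dynamically, by executing the code with trial data:

    import unittest

    def discounted_price(price: float, percent: float) -> float:
        """Return price reduced by the given percentage."""
        return price * (1 - percent / 100)

    class TestDiscount(unittest.TestCase):
        # Dynamic verification: execute the code and check the result.
        def test_typical_value(self):
            self.assertAlmostEqual(discounted_price(100.0, 25.0), 75.0)

    if __name__ == "__main__":
        unittest.main()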
Test Design
Testing can never prove that a system is correct. It can only show (a) that a system behaves correctly in specific cases, or (b) that it has a fault.
• The objective of testing is to find faults.
• Testing is never comprehensive.
• Testing is expensive.
Testing and Debugging
Testing is most effective if divided into stages (a small sketch of the first two stages follows this list):
• Unit testing at various levels of granularity
    tests by the developer
    emphasis is on accuracy of actual code
• System and sub-system testing
    uses trial data
    emphasis is on integration and interfaces
• Acceptance testing
    uses real data in realistic situations
    emphasis is on meeting requirements
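A minimal sketch of the first two stages, with hypothetical functions: a developer-written unit test checks the accuracy of a single function, while a sub-system test pushes trial data through the interface between two units:

    import unittest

    def parse_amount(text: str) -> int:
        """Unit under test: convert a string like '12.34' to cents."""
        dollars, _, cents = text.partition(".")
        return int(dollars) * 100 + int(cents or 0)

    def apply_payment(balance_cents: int, text: str) -> int:
        """Sub-system: combines parsing with the balance update."""
        return balance_cents - parse_amount(text)

    class UnitLevel(unittest.TestCase):
        # Developer-written test: emphasis on accuracy of the actual code.
        def test_parse(self):
            self.assertEqual(parse_amount("12.34"), 1234)

    class SubSystemLevel(unittest.TestCase):
        # Trial data through the interface between the two units.
        def test_payment(self):
            self.assertEqual(apply_payment(5000, "12.34"), 3766)

    if __name__ == "__main__":
        unittest.main()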
Acceptance Testing
Alpha Testing: Clients operate the system in a realistic but non-production environment.
Beta Testing: Clients operate the system in a carefully monitored production environment.
Parallel Testing: Clients operate the new system alongside the old production system with the same data and compare the results.
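A minimal sketch of the parallel-testing idea, with hypothetical old_payroll and new_payroll implementations; both systems receive the same data, and any mismatch is flagged for investigation:

    def old_payroll(hours: float, rate: float) -> float:
        """Existing production calculation (hypothetical)."""
        return round(hours * rate, 2)

    def new_payroll(hours: float, rate: float) -> float:
        """Replacement system under test (hypothetical): adds overtime pay."""
        base = min(hours, 40) * rate
        overtime = max(hours - 40, 0) * rate * 1.5
        return round(base + overtime, 2)

    # Feed the same production data through both systems and compare results.
    records = [(38.0, 21.50), (40.0, 18.00), (45.5, 22.25)]
    for hours, rate in records:
        old, new = old_payroll(hours, rate), new_payroll(hours, rate)
        if old != new:
            print(f"MISMATCH hours={hours} rate={rate}: old={old} new={new}")

Here the overtime record would be flagged; deciding whether each mismatch is a fault in the new system or an intended change is the point of the comparison.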
The Testing Process
System and Acceptance Testing is a major part of a software project.
• It requires time on the schedule.
• It may require substantial investment in datasets, equipment, and test software.
• Good testing requires good people!
• Management and client reports are important parts of testing.
What is the definition of "done"?
Testing Strategies
• Bottom-up testing. Each unit is tested with its own test environment.
• Top-down testing. Large components are tested with dummy stubs (see the sketch after this list).
    user interfaces
    work-flow
    client and management demonstrations
• Stress testing. Tests the system at and beyond its limits.
    real-time systems
    transaction processing
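A minimal sketch of top-down testing, with hypothetical names: the order work-flow at the top is demonstrated while the unfinished inventory component below it is replaced by a dummy stub that returns canned answers:

    class InventoryStub:
        """Dummy stub standing in for the real inventory component."""
        def reserve(self, item: str, qty: int) -> bool:
            return True  # canned answer: every reservation succeeds

    def place_order(inventory, item: str, qty: int) -> str:
        # Top-level work-flow under test.
        if qty <= 0:
            return "rejected"
        return "confirmed" if inventory.reserve(item, qty) else "backordered"

    # Demonstrate the work-flow to clients before the lower layers exist.
    assert place_order(InventoryStub(), "widget", 3) == "confirmed"
    assert place_order(InventoryStub(), "widget", 0) == "rejected"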
Test Cases
Test cases are specific tests that are chosen because they are likely to find faults.
Test cases are chosen to balance expense against the chance of finding serious faults.
• Cases chosen by the development team are effective in testing known vulnerable areas.
• Cases chosen by experienced outsiders and clients will be effective in finding gaps left by the developers.
• Cases chosen by inexperienced users will find other faults.
Test Case Selection: Coverage of Inputs
Objective is to test all classes of input (a small enumeration sketch follows this list).
• Classes of data -- major categories of transaction and data inputs.
    Cornell example: (undergraduate, graduate, transfer, ...) by (college, school, program, ...) by (standing) by (...)
• Ranges of data -- typical values, extremes.
• Invalid data, reversals, and special cases.
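As an illustrative sketch (the categories and ranges are invented), the classes of input can be enumerated as a cross product, with typical values, extremes, and invalid data added as separate cases:

    import itertools

    # Classes of data: major categories of input (Cornell-style example).
    statuses  = ["undergraduate", "graduate", "transfer"]
    colleges  = ["engineering", "arts", "agriculture"]
    standings = ["good", "probation"]

    # One test case per combination of input classes.
    class_cases = list(itertools.product(statuses, colleges, standings))

    # Ranges of data: typical value, extremes; plus invalid and missing data.
    credit_cases = [12, 0, 30, -1, None]

    print(len(class_cases), "class combinations;", len(credit_cases), "credit cases")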
Test Case Selection: Program
Objective is to test all functions of each computer program (a path-coverage sketch follows this list).
• Paths through the computer programs
    Program flow graph
    Check that every path is executed at least once
• Dynamic program analyzers
    Count the number of times each path is executed
    Highlight or color source code
    Cannot be used with time-critical software
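A minimal path-coverage sketch with a hypothetical function: its two decisions define four paths through the flow graph, of which three are feasible, and one test exercises each; a dynamic analyzer such as coverage.py could then confirm that every branch was executed:

    import unittest

    def classify(balance: int) -> str:
        # Two decisions; a negative balance cannot also exceed 10000,
        # so three of the four paths through the flow graph are feasible.
        if balance < 0:
            status = "overdrawn"
        else:
            status = "ok"
        if balance > 10000:
            status += ", flag for review"
        return status

    class PathCoverage(unittest.TestCase):
        # One test case per feasible path.
        def test_paths(self):
            self.assertEqual(classify(-5), "overdrawn")
            self.assertEqual(classify(50), "ok")
            self.assertEqual(classify(20000), "ok, flag for review")

    if __name__ == "__main__":
        unittest.main()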
Program Flow Graph
[Figure: flow graphs for the if-then-else and loop-while constructs]
Statistical Testing
• Determine the operational profile of the software.
• Select or generate a profile of test data (see the sketch after this list).
• Apply test data to the system; record failure patterns.
• Compute statistical values of metrics under test conditions.
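A minimal sketch of the four steps, assuming a hypothetical transaction mix and a stubbed-out system under test with one injected fault:

    import random

    def process(kind: str) -> None:
        """Hypothetical system under test; replace with the real entry point."""
        if kind == "transfer":  # injected fault so the sketch records failures
            raise ValueError("transfer not implemented")

    random.seed(42)  # reproducible run

    # Step 1: operational profile -- relative frequency of each transaction type.
    profile = {"deposit": 0.60, "withdraw": 0.35, "transfer": 0.05}

    # Steps 2-3: generate matching test data, apply it, record failure patterns.
    failures = []
    for i in range(10_000):
        kind = random.choices(list(profile), weights=list(profile.values()))[0]
        try:
            process(kind)
        except Exception as exc:
            failures.append((i, kind, exc))

    # Step 4: compute metrics under test conditions (here, the failure rate).
    print(f"failure rate: {len(failures) / 10_000:.2%}")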
Statistical Testing
Advantages:
• Can test with very large numbers of transactions.
• Can test with extreme cases (high loads, restarts, disruptions).
• Can repeat after system modifications.

Disadvantages:
• Uncertainty in the operational profile (unlikely inputs).
• Expensive.
• Can never prove high reliability.
Regression Testing
Regression testing is one of the key techniques of software engineering.

It is applied to modified software to provide confidence that modifications behave as intended and do not adversely affect the behavior of unmodified code.
• The basic technique is to repeat the entire testing process after every change, however small.
Regression Testing: Program Testing
1. Collect a suite of test cases, each with its expected behavior.
2. Create scripts to run all test cases and compare with expected behavior. (Scripts may be automatic or have human interaction; see the sketch below.)
3. When a change is made, however small (e.g., a bug is fixed), add a new test case that illustrates the change (e.g., a test case that revealed the bug).
4. Before releasing the changed code, rerun the entire test suite.
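A minimal harness illustrating these steps, with a hypothetical function under maintenance and a suite stored as (input, expected) pairs:

    import unittest

    def normalize(name: str) -> str:
        """Hypothetical function under maintenance."""
        return " ".join(name.split()).title()

    # Step 1: a suite of test cases, each with its expected behavior.
    SUITE = [
        ("ada lovelace", "Ada Lovelace"),
        ("  alan   turing ", "Alan Turing"),
        # Step 3: case added when a bug involving tabs was fixed.
        ("grace\thopper", "Grace Hopper"),
    ]

    class RegressionSuite(unittest.TestCase):
        # Step 2: script that runs every case and compares with expectations.
        def test_all_cases(self):
            for given, expected in SUITE:
                with self.subTest(given=given):
                    self.assertEqual(normalize(given), expected)

    if __name__ == "__main__":
        unittest.main()  # Step 4: rerun the whole suite before release.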
Discussion of Pfleeger, Chapter 9
Format:
    State a question.
    Ask a member of the class to answer.
    (Sorry if I pronounce your name wrongly.)
    Provide an opportunity for others to comment.
When answering:
    Stand up.
    Give your name or NetID. Make sure the TA hears it.
    Speak clearly so that the whole class can hear.
Question 1: Configuration Management
(a) Explain the problem of configuration management.
(b) What techniques are used to avoid software faults in
configuration management?
Question 2: Deltas
(a) Explain the term delta in system management.
(b) What is the alternative?
(c) What do you think are the benefits of the two
approaches?
Question 3: Function Testing
(a) What is function testing?
(b) What is not tested during the function testing phase?
Question 4: Reliability Testing
(a) Why is testing software reliability different from testing
hardware reliability?
(b) Why does six-sigma testing not apply to software?
(c) There is a trade-off between expenditure and software
reliability:
(i) What are the implications before release of the
software?
(ii) What are the implications after release of the
software?
Question 5: Documentation of Testing
Explain the purpose of each of the following:
(a) Test plan
(b) Test specification and evaluation
(c) Test description
(d) Test analysis report
Question 6: Safety Critical Software
A software system fails and several lives are lost. An inquiry discovers that the test plan did not consider the case that caused the failure. Who is responsible?
(a) The testers, for not noticing the missing cases?
(b) The test planners, for not writing a complete test plan?
(c) The managers, for not having checked the test plan?
(d) The customer, for not having done a thorough acceptance test?