Verification and Validation


Software Testing:
Finding Software Faults
Dr. Pedro Mejia Alvarez
Software Testing
Slide 1
Topics covered
• Introduction to testing
• Sources of errors
• Why do we need testing?
• Verification and validation planning
• Software inspections
• Static and dynamic verification
• Automated static analysis
Dr. Pedro Mejia Alvarez
Software Testing
Slide 2
Failures in Production Software
• NASA’s Mars lander, September 1999, crashed due to a units integration fault: over $50 million US!
• Huge losses due to web application failures:
  • Financial services: $6.5 million per hour
  • Credit card sales applications: $2.4 million per hour
• In Dec 2006, amazon.com’s BOGO offer turned into a double discount
• NIST report, “The Economic Impacts of Inadequate Infrastructure for Software Testing” (2002):
  – Inadequate software testing costs the US alone between $22 and $59 billion annually
  – Better approaches could cut this amount in half
• Stronger testing could solve most of these problems
Thanks to Dr. Sreedevi Sampath
Dr. Pedro Mejia Alvarez
Software Testing
Slide 3
Airbus 319 Safety-Critical Software Control
• Loss of autopilot
• Loss of most flight deck lighting and intercom
• Loss of both the commander’s and the co-pilot’s primary flight and navigation displays
Dr. Pedro Mejia Alvarez
Software Testing
Slide 4
Northeast Blackout of 2003
• 508 generating units and 256 power plants shut down
• Affected 10 million people in Ontario, Canada
• Affected 40 million people in 8 US states
• Financial losses of $6 billion USD
The alarm system in the energy management system failed due to a software error, and operators were not informed of the power overload in the system.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 5
Testing Results
(Results figure omitted.)
— Vasileios Papadimitriou, Master’s thesis, Automating Bypass Testing for Web Applications, GMU 2006
Dr. Pedro Mejia Alvarez
Software Testing
Slide 6
Software is a Skin that Surrounds Our Civilization
Dr. Pedro Mejia Alvarez
Software Testing
Slide 7
Testing in the 21st Century
• We are going through a time of change
• Software defines behavior: network routers, finance, switching networks, other infrastructure
• Today’s software market:
  • is much bigger
  • is more competitive
  • has more users
• Industry is going through a revolution in what testing means to the success of software products
• Agile processes put increased pressure on testers
• Embedded control applications are everywhere:
  – PDAs, memory seats, DVD players, garage door openers, cell phones
  – airplanes, air traffic control, spaceships, watches, ovens, remote controllers
Dr. Pedro Mejia Alvarez
Software Testing
Slide 8
Testing in the 21st Century
• More safety-critical, real-time software
• Enterprise applications mean bigger programs, more users
• Embedded software is ubiquitous … check your pockets
• Paradoxically, free software increases our expectations!
• Security is now all about software faults
  • Secure software is reliable software
• The web offers a new deployment platform
  • Very competitive and very available to more users
  • Web apps are distributed
  • Web apps must be highly reliable
Dr. Pedro Mejia Alvarez
Software Testing
Slide 9
Central themes
SE is concerned with:
• BIG programs
• complexity is an issue
• software evolves
• development must be efficient
• you’re doing it together
• software must effectively support users
• it involves different disciplines
• SE is a balancing act
Dr. Pedro Mejia Alvarez
Software Testing
Slide 10
How do we get here?
Operating system growth in size, in millions of lines of code (chart omitted).
From Frans Kaashoek and Jerome Saltzer, Topics in the Engineering of Computer Systems.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 11
Where do we get our faults from: operating systems?
Dr. Pedro Mejia Alvarez
Software Testing
Slide 12
Global distribution of effort
• Integration 10%
• Engineering 10%
• Coding 20%
• Design 15%
• Specification 10%
• Testing 45%
Do we follow a methodology when building software?
Dr. Pedro Mejia Alvarez
Software Testing
Slide 13
Here! Test This!
My first “professional” job, Jan/2007: MicroSteff, a big software system for the Mac, V.1.5.1, delivered on a 1.44 MB floppy disk.
A stack of computer printouts, and no documentation.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 14
Cost of Testing
You’re going to spend at least half of your development budget on testing, whether you want to or not.
• In the real world, testing is the principal post-design activity
• Restricting early testing usually increases cost
• Extensive hardware-software integration requires more testing
Dr. Pedro Mejia Alvarez
Software Testing
Slide 15
Cost of Late Testing
(Chart omitted; axes: Fault origin (%), Fault detection (%), Unit cost (X). Assume $1000 unit cost per fault, 100 faults.)
Source: Software Engineering Institute, Carnegie Mellon University; Handbook CMU/SEI-96-HB-002
Dr. Pedro Mejia Alvarez
Software Testing
Slide 16
Cost of Not Testing
Program managers often say: “Testing is too expensive.”
• Not testing is even more expensive
• Planning for testing after development is prohibitively expensive
• A test station for circuit boards costs half a million dollars …
• Software test tools cost less than $10,000!
Dr. Pedro Mejia Alvarez
Software Testing
Slide 17
Lessons
• Many errors are made in the early phases
• These errors are discovered late
• Repairing those errors is costly
• It pays off to start testing really early
Dr. Pedro Mejia Alvarez
Software Testing
Slide 18
Summary: Why Do We Test Software?
A tester’s goal is to eliminate faults as early as possible:
• Improve quality
• Reduce cost
• Preserve customer satisfaction
Dr. Pedro Mejia Alvarez
Software Testing
Slide 19
How then to proceed?
• Exhaustive testing is most often not feasible
• Random statistical testing does not work either if you want to find errors
• Therefore, we look for systematic ways to proceed during testing
Dr. Pedro Mejia Alvarez
Software Testing
Slide 20
State-of-the-Art on Faults
• 30-85 errors are made per 1000 lines of source code
• Extensively tested software contains 0.5-3 errors per 1000 lines of source code
• Testing is postponed; as a consequence, the later an error is discovered, the more it costs to fix it
• Error distribution: 60% design, 40% implementation; 66% of the design errors are not discovered until the software has become operational
Dr. Pedro Mejia Alvarez
Software Testing
Slide 21
How Faults are introduced
• In software development projects, a “software fault” can be introduced at any stage during development.
• Faults are a consequence of human factors during development. They arise from oversights or mutual misunderstandings made by a software team during specification, design, coding, data entry and documentation.
• More complex faults can arise from unintended interactions between different parts of the software, the operating system and the computer network.
• 66% of the design errors are not discovered until the software has become operational.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 22
Most Common Software problems
• Incorrect calculations
• Incorrect or ineffective data edits
• Incorrect matching and merging of data
• Data searches that yield incorrect results
• Incorrect processing of data relationships
• Incorrect coding / implementation of business rules
• Inadequate software performance
Dr. Pedro Mejia Alvarez
Software Testing
Slide 23
Most Common Software problems
• Confusing or misleading data
• Poor software usability for end users and obsolete software
• Inconsistent processing
• Unreliable results or performance
• Inadequate support of business needs
• Incorrect or inadequate interfaces with other systems
• Inadequate performance and security controls
• Incorrect file handling
Dr. Pedro Mejia Alvarez
Software Testing
Slide 24
Types of bugs
Arithmetic bugs
• Division by zero.
• Arithmetic overflow or underflow.
• Loss of arithmetic precision due to rounding or numerically unstable algorithms.
Resource bugs
• Null pointer dereference.
• Using an uninitialized variable.
• Using an otherwise valid instruction on the wrong data type (see packed decimal / binary-coded decimal).
• Access violations.
• Resource leaks, where a finite system resource (such as memory or file handles) becomes exhausted.
• Buffer overflow.
• Excessive recursion.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 25
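As an aside (not from the original slides), here is a minimal Java sketch of two of the bug classes listed above: silent integer overflow, and a file-handle leak avoided with try-with-resources. The file name data.txt is made up for illustration.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class BugExamples {
    public static void main(String[] args) throws IOException {
        // Arithmetic bug: int addition overflows and wraps around silently.
        int large = Integer.MAX_VALUE;
        System.out.println(large + 1); // prints -2147483648, no error raised

        // Math.addExact makes the overflow visible by throwing instead of wrapping.
        try {
            Math.addExact(large, 1);
        } catch (ArithmeticException e) {
            System.out.println("overflow detected: " + e.getMessage());
        }

        // Resource bug: forgetting to close a reader leaks a file handle.
        // try-with-resources closes it even if readLine() throws.
        try (BufferedReader in = new BufferedReader(new FileReader("data.txt"))) {
            System.out.println(in.readLine());
        }
    }
}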
Types of bugs
Logic bugs
• Infinite loops and infinite recursion.
• Off-by-one errors: counting one too many or too few when looping.
Syntax bugs: those found by the compiler.
Multi-threading programming bugs
• Deadlock.
• Race conditions.
• Concurrency errors.
Interfacing bugs.
Incorrect hardware handling.
Performance bugs.
Teamworking bugs.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 26
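For illustration only (not from the slides), a small Java sketch of the race-condition bug named above: two threads increment a shared counter without synchronization, so updates are lost.

public class RaceConditionDemo {
    static int counter = 0; // shared, unsynchronized state

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++; // read-modify-write is not atomic, so updates can be lost
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Expected 200000; usually prints less because of the race.
        System.out.println("counter = " + counter);
    }
}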
Some preliminary questions
• What exactly is an error?
• What does the testing process look like?
• When is test technique A superior to test technique B?
• What do we want to achieve during testing?
• When do we stop testing?
Dr. Pedro Mejia Alvarez
Software Testing
Slide 27
Software Faults, Errors & Failures
• Software Fault: a static defect in the software
• Software Error: an incorrect internal state that is the manifestation of some fault
• Software Failure: external, incorrect behavior with respect to the requirements or other description of the expected behavior
(From: Introduction to Software Testing)
Dr. Pedro Mejia Alvarez
Software Testing
Slide 28
A Concrete Example

public static int numZero (int [ ] arr)
{ // Effects: If arr is null throw NullPointerException
  // else return the number of occurrences of 0 in arr
  int count = 0;
  for (int i = 1; i < arr.length; i++)
  {
    if (arr [ i ] == 0)
    {
      count++;
    }
  }
  return count;
}

Fault: the loop should start searching at index 0, not 1.

Test 1: arr = [ 2, 7, 0 ]
  Expected: 1   Actual: 1
  Error: i is 1, not 0, on the first iteration
  Failure: none

Test 2: arr = [ 0, 2, 7 ]
  Expected: 1   Actual: 0
  Error: i is 1, not 0; the error propagates to the variable count
  Failure: count is 0 at the return statement
Dr. Pedro Mejia Alvarez
Software Testing
Slide 29
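To make the example executable, here is a sketch (mine, not from the original slides) of the corrected method together with a tiny harness that runs the two tests above; the class name NumZeroDemo is invented for illustration.

public class NumZeroDemo {
    // Corrected version: the loop starts at index 0.
    public static int numZero(int[] arr) {
        // Effects: if arr is null, throw NullPointerException,
        // else return the number of occurrences of 0 in arr.
        int count = 0;
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] == 0) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // Test 1 from the slide: the fault is executed but causes no failure in the faulty version.
        System.out.println(numZero(new int[] {2, 7, 0})); // expected 1
        // Test 2 from the slide: the faulty version returned 0 here.
        System.out.println(numZero(new int[] {0, 2, 7})); // expected 1
    }
}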
Dynamics of Faults
• Fault Detection: waiting for (or causing, or finding) the error or failure to occur.
• Fault Location: finding where the fault(s) occurred, its causes and its consequences.
• Fault Recovery: fixing the fault without introducing others.
• Regression Testing: testing the software again with the same data that caused the original fault.
(From: Introduction to Software Testing)
Dr. Pedro Mejia Alvarez
Software Testing
Slide 30
Verification and Validation
• Verification: evaluate a product to see whether it satisfies the conditions specified at the start.
  Have we built the system right?
• Validation: evaluate a product to see whether it does what we think it should do.
  Have we built the right system?
Dr. Pedro Mejia Alvarez
Software Testing
Slide 31
What is our goal during testing?
• Objective 1: find as many faults as possible.
• Objective 2: make you feel confident that the software works OK.
• Objective 3: make sure that the software fulfills its specification.
• Objective 4: make sure that the customer approves the software.
Do we know what to test and how to test it?
Dr. Pedro Mejia Alvarez
Software Testing
Slide 32
The structure of a software test plan
• The testing process
• Requirements traceability
• Tested items
• Testing schedule
• Test recording procedures
• Hardware and software requirements
• Constraints
Dr. Pedro Mejia Alvarez
Software Testing
Slide 33
Static and dynamic verification
• Software inspections: concerned with the analysis of the static system representation to discover problems (static verification).
  • May be supplemented by tool-based document and code analysis.
• Software testing: concerned with exercising and observing product behaviour (dynamic verification).
  • The system is executed with test data and its operational behaviour is observed.
  • Used to detect faults.
• Software debugging: concerned with finding and removing the faults.
• Fault tolerance: if faults still occur during operation, detect and remove them and recover the system from their effects.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 34
Software inspections
• These involve people examining the source representation with the aim of discovering anomalies and defects.
• Inspections do not require execution of a system, so they may be used before implementation.
• They may be applied to any representation of the system (requirements, design, configuration data, test data, etc.).
• They have been shown to be an effective technique for discovering program errors.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 35
Inspection checklists
• A checklist of common errors should be used to drive the inspection.
• Error checklists are programming-language dependent and reflect the characteristic errors that are likely to arise in the language.
• In general, the ‘weaker’ the type checking, the larger the checklist.
• Examples: initialisation, constant naming, loop termination, array bounds, etc.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 36
Inspection checks 1

Data faults
  Are all program variables initialised before their values are used?
  Have all constants been named?
  Should the upper bound of arrays be equal to the size of the array or Size - 1?
  If character strings are used, is a delimiter explicitly assigned?
  Is there any possibility of buffer overflow?

Control faults
  For each conditional statement, is the condition correct?
  Is each loop certain to terminate?
  Are compound statements correctly bracketed?
  In case statements, are all possible cases accounted for?
  If a break is required after each case in case statements, has it been included?

Input/output faults
  Are all input variables used?
  Are all output variables assigned a value before they are output?
  Can unexpected inputs cause corruption?
Dr. Pedro Mejia Alvarez
Software Testing
Slide 37
Inspection checks 2

Interface faults
  Do all function and method calls have the correct number of parameters?
  Do formal and actual parameter types match?
  Are the parameters in the right order?
  If components access shared memory, do they have the same model of the shared memory structure?

Storage management faults
  If a linked structure is modified, have all links been correctly reassigned?
  If dynamic storage is used, has space been allocated correctly?
  Is space explicitly de-allocated after it is no longer required?

Exception management faults
  Have all possible error conditions been taken into account?
Dr. Pedro Mejia Alvarez
Software Testing
Slide 38
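As an illustration (not part of the original checklist), a short Java fragment seeded with the kinds of defects these checklist questions are meant to catch: an out-of-bounds loop bound and a missing break in a case statement.

public class ChecklistExamples {

    // Data fault: the loop bound should be i < arr.length; using <= reads past the array.
    static int sum(int[] arr) {
        int total = 0;
        for (int i = 0; i <= arr.length; i++) {   // throws ArrayIndexOutOfBoundsException
            total += arr[i];
        }
        return total;
    }

    // Control fault: the missing break after case 1 makes execution fall through into case 2.
    static String describe(int code) {
        String label = "unknown";
        switch (code) {
            case 1:
                label = "one";
                // break;   <-- checklist question: has a break been included after each case?
            case 2:
                label = "two";
                break;
        }
        return label;
    }

    public static void main(String[] args) {
        System.out.println(describe(1));               // prints "two" because of the fall-through
        System.out.println(sum(new int[] {1, 2, 3}));  // fails on the out-of-bounds read
    }
}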
Stages of static analysis
• Control flow analysis: checks for loops with multiple exit or entry points, finds unreachable code, etc.
• Data use analysis: detects uninitialised variables, variables written twice without an intervening assignment, variables which are declared but never used, etc.
• Interface analysis: checks the consistency of routine and procedure declarations and their use.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 39
Stages of static analysis
• Information flow analysis: identifies the dependencies of output variables. Does not detect anomalies itself, but highlights information for code inspection or review.
• Path analysis: identifies paths through the program and sets out the statements executed in each path. Again, potentially useful in the review process.
Both of these stages generate vast amounts of information, so they must be used with care.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 40
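For illustration only (not from the slides), a Java fragment containing the kinds of anomalies a static analyser would flag: a variable that is never used, a value written twice without an intervening read, and unreachable code (commented out here because the Java compiler rejects it outright).

public class StaticAnalysisExample {

    static int example(int x) {
        int unused = 42;        // declared but never used
        int y = x * 2;
        y = x * 3;              // y written twice with no intervening read
        if (x > 0) {
            return y;
        } else {
            return -y;
        }
        // return 0;            // unreachable: every path above already returns
    }

    public static void main(String[] args) {
        System.out.println(example(5));
    }
}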
The Software Testing Process
Dr. Pedro Mejia Alvarez
Software Testing
Slide 41
Testing process goals
• Validation testing
  • To demonstrate to the developer and the system customer that the software meets its requirements.
  • A successful test shows that the system operates as intended.
• Defect testing
  • To discover faults or defects in the software where its behaviour is incorrect or not in conformance with its specification.
  • A successful test is a test that makes the system perform incorrectly and so exposes a defect in the system.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 42
The testing phases
• Component (unit) testing
  • Testing of individual program components.
  • Usually the responsibility of the component developer (except sometimes for critical systems).
  • Tests are derived from the developer’s experience.
• System testing
  • Testing of groups of components integrated to create a system or sub-system.
  • The responsibility of an independent testing team.
  • Tests are based on a system specification.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 43
Testing phases
Dr. Pedro Mejia Alvarez
Software Testing
Slide 44
Component testing
• Component or unit testing is the process of testing individual components in isolation.
• It is a defect testing process.
• Components may be:
  • individual functions or methods within an object;
  • object classes with several attributes and methods;
  • composite components with defined interfaces used to access their functionality.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 45
System testing
• Involves integrating components to create a system or sub-system.
• May involve testing an increment to be delivered to the customer.
• Two phases:
  • Integration testing: the test team has access to the system source code. The system is tested as components are integrated.
  • Release testing: the test team tests the complete system to be delivered as a black box.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 46
Integration testing
• Involves building a system from its components and testing it for problems that arise from component interactions.
• Top-down integration
  • Develop the skeleton of the system and populate it with components.
• Bottom-up integration
  • Integrate infrastructure components, then add functional components.
• To simplify error localisation, systems should be incrementally integrated.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 47
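A minimal Java sketch (not from the slides) of the top-down idea: the system skeleton depends on an interface, and a stub stands in for a component that has not been integrated yet. All names here are invented for illustration.

// Skeleton of the system under top-down integration.
interface PaymentService {
    boolean charge(String account, double amount);
}

// Stub used until the real payment component is integrated.
class PaymentServiceStub implements PaymentService {
    @Override
    public boolean charge(String account, double amount) {
        System.out.println("stub: pretending to charge " + amount + " to " + account);
        return true; // canned answer so the rest of the system can be exercised
    }
}

class OrderProcessor {
    private final PaymentService payments;

    OrderProcessor(PaymentService payments) {
        this.payments = payments;
    }

    String placeOrder(String account, double amount) {
        return payments.charge(account, amount) ? "order accepted" : "payment declined";
    }
}

public class TopDownIntegrationDemo {
    public static void main(String[] args) {
        // Integration test of the skeleton with the stubbed component.
        OrderProcessor processor = new OrderProcessor(new PaymentServiceStub());
        System.out.println(processor.placeOrder("acct-1", 25.0));
    }
}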
Incremental integration testing
(Diagram omitted: components A and B are first tested with tests T1–T3 in test sequence 1; adding component C repeats those tests and adds T4 in test sequence 2; adding component D adds T5 in test sequence 3.)
Dr. Pedro Mejia Alvarez
Software Testing
Slide 48
Interface testing
• Objectives are to detect faults due to interface errors or invalid assumptions about interfaces.
• Particularly important for object-oriented development, as objects are defined by their interfaces.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 49
Interface testing
Dr. Pedro Mejia Alvarez
Software Testing
Slide 50
Interface types
• Parameter interfaces
  • Data passed from one procedure to another.
• Shared memory interfaces
  • A block of memory is shared between procedures or functions.
• Procedural interfaces
  • A sub-system encapsulates a set of procedures to be called by other sub-systems.
• Message passing interfaces
  • Sub-systems request services from other sub-systems.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 51
Interface errors
• Interface misuse
  • A calling component calls another component and makes an error in its use of its interface, e.g. parameters in the wrong order.
• Interface misunderstanding
  • A calling component embeds assumptions about the behaviour of the called component which are incorrect.
• Timing errors
  • The called and the calling component operate at different speeds and out-of-date information is accessed.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 52
Interface testing guidelines
• Design tests so that parameters to a called procedure are at the extreme ends of their ranges.
• Always test pointer parameters with null pointers.
• Design tests which cause the component to fail.
• Use stress testing in message passing systems.
• In shared memory systems, vary the order in which components are activated.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 53
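A small JUnit 5 sketch (mine, not from the slides) of the first two guidelines. The component MathUtils.findMax is a made-up stand-in, included here so the example is self-contained.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class InterfaceGuidelineTests {

    // Stand-in component so the example compiles on its own.
    static final class MathUtils {
        static int findMax(int[] values) {
            int max = values[0];            // dereferencing null input throws NullPointerException
            for (int v : values) {
                if (v > max) {
                    max = v;
                }
            }
            return max;
        }
    }

    @Test
    void parametersAtExtremeEndsOfTheirRanges() {
        assertEquals(Integer.MAX_VALUE,
                MathUtils.findMax(new int[] {Integer.MIN_VALUE, 0, Integer.MAX_VALUE}));
        assertEquals(7, MathUtils.findMax(new int[] {7})); // smallest legal array
    }

    @Test
    void nullParameterIsTestedExplicitly() {
        assertThrows(NullPointerException.class, () -> MathUtils.findMax(null));
    }
}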
Performance testing
• Part of release testing may involve testing the emergent properties of a system, such as performance and reliability.
• Performance tests usually involve planning a series of tests where the load is steadily increased until the system performance becomes unacceptable.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 54
Stress testing
• Exercises the system beyond its maximum design load. Stressing the system often causes defects to come to light.
• Stressing the system tests its failure behaviour. Systems should not fail catastrophically; stress testing checks for unacceptable loss of service or data.
• Stress testing is particularly relevant to distributed systems, which can exhibit severe degradation as a network becomes overloaded.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 55
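A rough, deliberately simplified Java sketch (not from the slides) of the load-ramping idea behind performance and stress testing: the request count is doubled step by step until the average response time crosses a threshold. The URL, the threshold and the sequential request loop are all assumptions made for illustration.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LoadRampSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:8080/ping")).build();
        long thresholdMillis = 500; // made-up acceptability threshold

        // Steadily increase the number of requests per step.
        for (int load = 10; load <= 1000; load *= 2) {
            long start = System.nanoTime();
            for (int i = 0; i < load; i++) {
                client.send(request, HttpResponse.BodyHandlers.discarding());
            }
            long avgMillis = (System.nanoTime() - start) / 1_000_000 / load;
            System.out.println("load " + load + ": avg " + avgMillis + " ms per request");
            if (avgMillis > thresholdMillis) {
                System.out.println("performance became unacceptable at load " + load);
                break;
            }
        }
    }
}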
Release testing
• The process of testing a release of a system that will be distributed to customers.
• The primary goal is to increase the supplier’s confidence that the system meets its requirements.
• Release testing is usually black-box or functional testing:
  • based on the system specification only;
  • testers do not have knowledge of the system implementation.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 56
Test case design
• Involves designing the test cases (inputs and outputs) used to test the system.
• The goal of test case design is to create a set of tests that are effective in validation and defect testing.
• Design approaches:
  • Black-box testing: partition testing.
  • White-box testing: structural testing; criteria based on structures.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 57
Black-box testing
Dr. Pedro Mejia Alvarez
Software Testing
Slide 58
Testing guidelines
Testing guidelines are hints for the testing team to help them choose tests that will reveal defects in the system:
• Choose inputs that force the system to generate all error messages.
• Design inputs that cause buffers to overflow.
• Repeat the same input or input series several times.
• Force invalid outputs to be generated.
• Force computation results to be too large or too small.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 59
Partition testing
• Input data and output results often fall into different classes where all members of a class are related.
• Each of these classes is an equivalence partition or domain where the program behaves in an equivalent way for each class member.
• Test cases should be chosen from each partition.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 60
Equivalence partitioning
Dr. Pedro Mejia Alvarez
Software Testing
Slide 61
Equivalence partitions
Dr. Pedro Mejia Alvarez
Software Testing
Slide 62
Search routine specification
procedure Search (Key : ELEM; T : SEQ of ELEM;
                  Found : in out BOOLEAN; L : in out ELEM_INDEX);
Pre-condition
-- the sequence has at least one element
T’FIRST <= T’LAST
Post-condition
-- the element is found and is referenced by L
( Found and T (L) = Key )
or
-- the element is not in the array
( not Found and
  not (exists i, T’FIRST <= i <= T’LAST, T (i) = Key) )
Dr. Pedro Mejia Alvarez
Software Testing
Slide 63
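A Java rendering of this specification (mine, not from the slides): the pair (Found, L) is returned as a small result object with a 1-based index, and a linear search is assumed since the slide does not fix an algorithm.

public class SearchRoutine {

    // Mirrors the out-parameters of the specification: Found and L.
    static final class Result {
        final boolean found;
        final int index; // 1-based, meaningful only when found is true

        Result(boolean found, int index) {
            this.found = found;
            this.index = index;
        }
    }

    // Pre-condition: t has at least one element (t.length >= 1).
    // Post-condition: (found and t[index-1] == key)
    //              or (not found and key occurs nowhere in t).
    static Result search(int key, int[] t) {
        for (int i = 0; i < t.length; i++) {
            if (t[i] == key) {
                return new Result(true, i + 1);
            }
        }
        return new Result(false, 0);
    }

    public static void main(String[] args) {
        Result r = search(23, new int[] {17, 18, 21, 23, 29, 41, 38});
        System.out.println(r.found + ", " + r.index); // true, 4
    }
}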
Search routine - input partitions
• Inputs which conform to the pre-conditions.
• Inputs where a pre-condition does not hold.
• Inputs where the key element is a member of the array.
• Inputs where the key element is not a member of the array.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 64
Testing guidelines (sequences)
• Test software with sequences which have only a single value.
• Use sequences of different sizes in different tests.
• Derive tests so that the first, middle and last elements of the sequence are accessed.
• Test with sequences of zero length.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 65
Search routine - input partitions
Sequence            Input sequence (T)           Element                      Key (Key)   Output (Found, L)
Single value        17                           In sequence                  17          true, 1
Single value        17                           Not in sequence              0           false, ??
More than 1 value   17, 29, 21, 23               First element in sequence    17          true, 1
More than 1 value   41, 18, 9, 31, 30, 16, 45    Last element in sequence     45          true, 7
More than 1 value   17, 18, 21, 23, 29, 41, 38   Middle element in sequence   23          true, 4
More than 1 value   21, 23, 29, 33, 38           Not in sequence              25          false, ??
Dr. Pedro Mejia Alvarez
Software Testing
Slide 66
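Continuing the illustrative SearchRoutine sketch shown after the specification slide (not from the slides themselves), a few of these partition rows written as JUnit 5 tests:

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class SearchPartitionTests {

    @Test
    void singleValueKeyInSequence() {
        SearchRoutine.Result r = SearchRoutine.search(17, new int[] {17});
        assertTrue(r.found);
        assertEquals(1, r.index);
    }

    @Test
    void singleValueKeyNotInSequence() {
        assertFalse(SearchRoutine.search(0, new int[] {17}).found);
    }

    @Test
    void keyIsMiddleElement() {
        SearchRoutine.Result r = SearchRoutine.search(23, new int[] {17, 18, 21, 23, 29, 41, 38});
        assertTrue(r.found);
        assertEquals(4, r.index);
    }

    @Test
    void keyNotInLongerSequence() {
        assertFalse(SearchRoutine.search(25, new int[] {21, 23, 29, 33, 38}).found);
    }
}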
Structural testing
• Sometimes called white-box testing.
• Derivation of test cases according to program structure. Knowledge of the program is used to identify additional test cases.
• The objective is to exercise all program statements (not all path combinations).
Dr. Pedro Mejia Alvarez
Software Testing
Slide 67
Structural testing
Dr. Pedro Mejia Alvarez
Software Testing
Slide 68
Binary search - equiv. partitions
• Pre-conditions satisfied, key element in array.
• Pre-conditions satisfied, key element not in array.
• Pre-conditions unsatisfied, key element in array.
• Pre-conditions unsatisfied, key element not in array.
• Input array has a single value.
• Input array has an even number of values.
• Input array has an odd number of values.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 69
Binary search equiv. partitions
Dr. Pedro Mejia Alvarez
Software Testing
Slide 70
Binary search - test cases
Input array (T)              Key (Key)   Output (Found, L)
17                           17          true, 1
17                           0           false, ??
17, 21, 23, 29               17          true, 1
9, 16, 18, 30, 31, 41, 45    45          true, 7
17, 18, 21, 23, 29, 38, 41   23          true, 4
17, 18, 21, 23, 29, 33, 38   21          true, 3
12, 18, 21, 23, 32           23          true, 4
21, 23, 29, 33, 38           25          false, ??
Dr. Pedro Mejia Alvarez
Software Testing
Slide 71
Path testing
• The objective of path testing is to ensure that the set of test cases is such that each path through the program is executed at least once.
• The starting point for path testing is a program flow graph that shows nodes representing program decisions and arcs representing the flow of control.
• Statements with conditions are therefore nodes in the flow graph.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 72
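To make this concrete, a small Java example of my own (not the binary search used on the slides): the method below has two decision nodes, so four test cases are enough to execute every path.

public class PathTestingExample {

    // Two independent decisions give four paths through the method.
    static String classify(int x, int y) {
        String result;
        if (x > 0) {            // decision node 1
            result = "positive";
        } else {
            result = "non-positive";
        }
        if (y % 2 == 0) {       // decision node 2
            result += "/even";
        } else {
            result += "/odd";
        }
        return result;
    }

    public static void main(String[] args) {
        // One test per path: (x>0, even), (x>0, odd), (x<=0, even), (x<=0, odd).
        System.out.println(classify(1, 2));
        System.out.println(classify(1, 3));
        System.out.println(classify(-1, 2));
        System.out.println(classify(-1, 3));
    }
}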
Binary search flow graph
Dr. Pedro Mejia Alvarez
Software Testing
Slide 73
Independent paths
• 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 14
• 1, 2, 3, 4, 5, 14
• 1, 2, 3, 4, 5, 6, 7, 11, 12, 5, …
• 1, 2, 3, 4, 6, 7, 2, 11, 13, 5, …
Test cases should be derived so that all of these paths are executed.
A dynamic program analyser may be used to check that paths have been executed.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 74
Criteria Based on Structures
Structures: four ways to model software
1. Graphs (e.g. a control flow graph)
2. Logical Expressions, e.g. (not X or not Y) and A and B
3. Input Domain Characterization, e.g. A: {0, 1, >1}; B: {600, 700, 800}; C: {swe, cs, isa, infs}
4. Syntactic Structures, e.g.
   if (x > y)
     z = x - y;
   else
     z = 2 * x;
Dr. Pedro Mejia Alvarez
Software Testing
Slide 75
1. Graph Coverage – Structural
(Flow graph omitted: nodes 1–7.)
• Node (statement) coverage, edge (branch) coverage and path coverage require test paths that cover, respectively, every node, every edge and every path of the graph.
• Example test paths on this graph include 1 2 5 6 7, 1 2 5 7, 1 3 5 6 7, 1 3 5 7, 1 3 4 3 5 6 7 and 1 3 4 3 5 7 …
This graph may represent:
• statements & branches
• methods & calls
• components & signals
• states and transitions
Dr. Pedro Mejia Alvarez
Software Testing
Slide 76
1. Graph Coverage – Data Flow
(Data-flow graph omitted: the graph from the previous slide annotated with def and use sets at its nodes and edges, e.g. def = {x, y}, def = {a, m}, def = {a}, def = {m}, use = {x}, use = {y}, use = {a}, use = {m}.)
This graph contains:
• defs: nodes & edges where variables get values
• uses: nodes & edges where values are accessed
• def-use pairs, e.g. (x, 1, (1,2)), (x, 1, (1,3)), (y, 1, 4), (y, 1, 6), (a, 2, (5,6)), (a, 2, (5,7)), (a, 3, (5,6)), (a, 3, (5,7)), (m, 2, 7), (m, 4, 7), (m, 6, 7)
• Every def must “reach” every use; All-Uses coverage is satisfied by test paths such as 1, 2, 5, 6, 7; 1, 2, 5, 7; and 1, 3, 5, 6, 7.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 77
2. Logical Expressions
( (a > b) or G ) and (x < y)
Logical expressions come from:
• program decision statements
• transitions
• software specifications
Dr. Pedro Mejia Alvarez
Software Testing
Slide 78
2. Logical Expressions
( (a > b) or G ) and (x < y)
• Predicate Coverage: each predicate must be true and false
  • ( (a > b) or G ) and (x < y) = True, False
• Clause Coverage: each clause must be true and false
  • (a > b) = True, False
  • G = True, False
  • (x < y) = True, False
• Combinatorial Coverage: various combinations of clauses
• Active Clause Coverage: each clause must determine the predicate’s result
Dr. Pedro Mejia Alvarez
Software Testing
Slide 79
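A small Java sketch (mine, not from the slides) of predicate and clause coverage for the predicate above; the concrete test values are assumptions chosen to hit the required truth values.

public class LogicCoverageExample {

    // The predicate from the slide: ( (a > b) or G ) and (x < y).
    static boolean predicate(int a, int b, boolean g, int x, int y) {
        return ((a > b) || g) && (x < y);
    }

    public static void main(String[] args) {
        // Predicate coverage: one test where the predicate is true, one where it is false.
        System.out.println(predicate(5, 1, false, 0, 10)); // true
        System.out.println(predicate(1, 5, false, 0, 10)); // false

        // Clause coverage: across these two tests each clause (a > b), G and (x < y)
        // takes both truth values, regardless of the overall predicate value.
        System.out.println(predicate(5, 1, true, 0, 10));  // a>b = T, G = T, x<y = T
        System.out.println(predicate(1, 5, false, 10, 0)); // a>b = F, G = F, x<y = F
    }
}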
2. Logic – Active Clause Coverage
( (a > b) or G ) and (x < y)

     (a > b)   G    (x < y)
  1     T      F       T       with these values for G and (x < y),
  2     F      F       T       (a > b) determines the value of the predicate
  3     F      T       T
  4     F      F       T       (duplicate of row 2)
  5     T      T       T
  6     T      T       F
Dr. Pedro Mejia Alvarez
Software Testing
Slide 80
3. Input Domain Characterization
• Describe the input domain of the software:
  • Identify inputs, parameters, or other categorization
  • Partition each input into finite sets of representative values
  • Choose combinations of values
• System level example:
  • Number of students: { 0, 1, >1 }
  • Level of course: { 600, 700, 800 }
  • Major: { swe, cs, isa, infs }
• Unit level example:
  • Parameters: F (int X, int Y)
  • Possible values: X: { <0, 0, 1, 2, >2 }, Y: { 10, 20, 30 }
  • Tests: F (-5, 10), F (0, 20), F (1, 30), F (2, 10), F (5, 20)
Dr. Pedro Mejia Alvarez
Software Testing
Slide 81
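A small Java sketch (mine, not from the slides) that enumerates the unit-level representative values above and prints one call per combination; the body of f is a made-up stand-in for the unit under test.

public class InputDomainSketch {

    // Stand-in for the unit under test F (int X, int Y) from the slide.
    static int f(int x, int y) {
        return x + y;
    }

    public static void main(String[] args) {
        // Representative values for each parameter's partitions.
        int[] xValues = {-5, 0, 1, 2, 5};   // stands for { <0, 0, 1, 2, >2 }
        int[] yValues = {10, 20, 30};

        // Here every combination is taken, a stronger choice than the five tests on the slide.
        for (int x : xValues) {
            for (int y : yValues) {
                System.out.println("F(" + x + ", " + y + ") = " + f(x, y));
            }
        }
    }
}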
4. Syntactic Structures
• Based on a grammar, or other syntactic definition
• The primary example is mutation testing:
  1. Induce small changes to the program: mutants
  2. Find tests that cause the mutant programs to fail: killing mutants
  3. Failure is defined as different output from the original program
  4. Check the output of useful tests on the original program
• Example program and mutants:

  Original:          Mutants:
  if (x > y)         if (x >= y)
    z = x - y;         z = x + y;
  else                 z = x - m;
    z = 2 * x;
Dr. Pedro Mejia Alvarez
Software Testing
Slide 82
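As an illustration only (not the slides' example verbatim), the original fragment and its relational-operator mutant wrapped as Java methods, with a test input that kills the mutant because the two outputs differ:

public class MutationExample {

    // Original program fragment from the slide, wrapped in a method.
    static int original(int x, int y) {
        int z;
        if (x > y)
            z = x - y;
        else
            z = 2 * x;
        return z;
    }

    // Mutant: the relational operator is changed from > to >=.
    static int mutant(int x, int y) {
        int z;
        if (x >= y)
            z = x - y;
        else
            z = 2 * x;
        return z;
    }

    public static void main(String[] args) {
        // x == y is the only situation where the two versions can differ.
        int x = 3, y = 3;
        System.out.println("original: " + original(x, y)); // 6
        System.out.println("mutant:   " + mutant(x, y));   // 0
        System.out.println("mutant killed: " + (original(x, y) != mutant(x, y)));
    }
}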
Testing and debugging
• Defect testing and debugging are distinct processes.
• Verification and validation is concerned with establishing the existence of defects in a program.
• Debugging is concerned with locating and repairing these errors.
• Debugging involves formulating a hypothesis about program behaviour and then testing that hypothesis to find the system error.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 83
The debugging process
Dr. Pedro Mejia Alvarez
Software Testing
Slide 84
Debugging
What can debuggers do?
• Run programs
• Make the program stop at specified places or on specified conditions
• Give information about current variables’ values, the memory and the stack
• Let you examine the program execution step by step (stepping)
• Let you examine the change of program variables’ values (tracing)
To be able to debug your program, you must compile it with the -g option (which creates the symbol table):
CC -g my_prog
Dr. Pedro Mejia Alvarez
Software Testing
Slide 85
GDB – Running Programs
Running a program:
run (or r) -- creates an inferior process that runs your program.
• If there are no execution errors, the program will finish and results will be displayed.
• In case of an error, GDB will show:
  - the line the program has stopped on, and
  - a short description of what it believes has caused the error.
Certain information affects the execution of a program:
• the program’s arguments
• the program’s environment
• the program’s working directory
• the standard input and output
Dr. Pedro Mejia Alvarez
Software Testing
Slide 86
GDB – Breakpoints and watchpoints
Breakpoints and watchpoints allow you to specify the places or the conditions where you want your program to stop.
break arg – stops when execution reaches the specified line
  / arg – function name, line number, +/- offset /
watch expr – stops whenever the value of the expression changes
clear [arg] – without arguments, deletes any breakpoint at the next instruction to be executed in the current stack frame
delete [bnum] – without arguments, deletes all breakpoints
Dr. Pedro Mejia Alvarez
Software Testing
Slide 87
GDB – Examining variables
Global variables can be examined from every point in the source file.
Local variables can be examined only in their scope, or by using file::variable or function::variable.
The variable type:    ptype var
Current value:        print var
Automatic display:    display var – adds var to the automatic display list
                      undisplay dnum
Dr. Pedro Mejia Alvarez
Software Testing
Slide 88
GDB – Value history
The value history keeps the values printed by the print command. Previously printed values can be accessed by typing $ followed by their history number.
  $ – refers to the most recent value
  $$n – refers to the n-th value from the end
show values [n|+]
  without an argument – shows the last 10 values
  n – shows 10 values centered around history number n
  + – shows 10 values after the last one printed
Dr. Pedro Mejia Alvarez
Software Testing
Slide 89
GDB – Stepping through the program
step [count] – continue execution to the next source line, stepping into function calls
next [count] – continue execution to the next source line, stepping over function calls
continue – resume program execution
until – continue until the next source line in the current stack frame is reached (useful to exit from loops)
Dr. Pedro Mejia Alvarez
Software Testing
Slide 90
GDB – Altering execution
Returning from a function:
  finish – run until the selected function returns
  return [ret_value] – pops the current stack frame (forced return)
Continuing at a different address:
  jump line_num|*address
Altering the value of a variable:
  set i=256
Proceeding to a specified point:
  until [line_num|*address|function_name]
Dr. Pedro Mejia Alvarez
Software Testing
Slide 91
GDB – The stack frame
Stack frames are identified by their addresses, which are kept in the frame pointer register.
Selecting a frame:
  frame n|addr
  up n
  down n
Information about the current frame:
  frame – brief description
  info args – shows function arguments
  info locals – shows local variables
Dr. Pedro Mejia Alvarez
Software Testing
Slide 92
GDB – Convenience variables
Convenience variables are used to store values that you may want to refer to later. Any string preceded by $ is regarded as a convenience variable.
  Ex.: $table = *table_ptr
There are several automatically created convenience variables:
  $pc – program counter
  $sp – stack pointer
  $fp – frame pointer
  $ps – processor status
  $_ – contains the last examined address
  $__ – the value in the last examined address
  $_exitcode – the exit code of the debugged program
Dr. Pedro Mejia Alvarez
Software Testing
Slide 93
GDB – Examining memory
The x command (for “examine”):
• x/nfu addr – specify the number of units (n), the display format (f) and the unit size (u) of the memory you want to examine, starting from address addr. Unit size can be b, h (half), w or g (giant).
• x addr – start printing from address addr, with the other parameters defaulted
• x – all defaults
Registers
Register names are different for each machine. Use info registers to see the names used on your machine.
GDB has four “standard” register names that are available on most machines: program counter, stack pointer, frame pointer and processor status.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 94
GDB – Additional process information
info proc – summarize available information about the current process
info proc mappings – address ranges accessible in the program
info proc times – starting time, user CPU time and system CPU time for your program and its children
info signals – information about the system signals and how GDB handles them
See help info for more.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 95
DDD – View
(Screenshot omitted: the DDD window with its argument field, command tool, source window and debugger console.)
Dr. Pedro Mejia Alvarez
Software Testing
Slide 96
Command Tool
Dr. Pedro Mejia Alvarez
Software Testing
Slide 97
Fault Tolerance
Fault tolerance is achieved with redundancy.
• Fault Detection: detecting that a fault has caused an error or failure.
• Fault Location: finding where the fault(s) occurred, its causes and its consequences.
• Fault Recovery: fixing the fault and continuing the execution.
(From: Introduction to Software Testing)
Dr. Pedro Mejia Alvarez
Software Testing
Slide 98
Test automation
• Testing is an expensive process phase. Testing workbenches provide a range of tools to reduce the time required and total testing costs.
• Systems such as JUnit support the automatic execution of tests.
• Most testing workbenches are open systems because testing needs are organisation-specific.
• They are sometimes difficult to integrate with closed design and analysis workbenches.
Dr. Pedro Mejia Alvarez
Software Testing
Slide 99
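For instance (my sketch, not from the slides), a JUnit 5 test for the corrected numZero method from the earlier concrete example runs automatically under any build tool or IDE that discovers JUnit tests; NumZeroDemo refers to the illustrative class introduced after Slide 29.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class NumZeroTest {

    @Test
    void countsZeroAtTheFirstPosition() {
        // This is the test that exposed the off-by-one fault in the faulty version.
        assertEquals(1, NumZeroDemo.numZero(new int[] {0, 2, 7}));
    }

    @Test
    void nullArrayThrowsAsSpecified() {
        assertThrows(NullPointerException.class, () -> NumZeroDemo.numZero(null));
    }
}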
A testing workbench
Dr. Pedro Mejia Alvarez
Software Testing
Slide 100
What do we need to improve testing?
• Run programs
Dr. Pedro Mejia Alvarez
Software Testing
Slide 101