
Formal methods:
Model Checking and Testing
Prof. Doron A. Peled
University of Warwick, UK
and
Bar Ilan University, Israel
Some related books:
[Figure: book covers — the main reference and additional references.]
Modeling Software Systems
for Analysis
(Book: Chapter 4)
Modelling and specification for
verification and validation



How to specify what the software is
supposed to do?
Can we use the UML model or parts of
it?
How to model it in a way that allows us
to check it?
Systems of interest


Sequential systems.
Concurrent systems (multi-threaded).
Distributed systems.
Reactive systems.
Embedded systems (software + hardware).
Sequential systems.




Perform some computational task.
Have some initial condition, e.g., ∀i, 0≤i≤n: A[i] is an integer.
Have some final assertion, e.g., ∀i, 0≤i≤n-1: A[i]≤A[i+1].
(What is the problem with this spec?)
Are supposed to terminate.
Concurrent Systems
Involve several computation agents.
Termination may indicate an abnormal
event (interrupt, strike).
May exploit diverse computational power.
May involve remote components.
May interact with users (Reactive).
May involve hardware components
(Embedded).
Problems in modeling systems


Representing concurrency:
- Allow one transition at a time, or
- Allow coinciding transitions.
Granularity of transitions.



Assignments and checks?
Application of methods?
Global (all the system) or local (one
thread at a time) states.
Modeling.
The state-based model.




V={v0,v1,v2, …} - a set of variables, over some
domain.
p(v0, v1, …, vn) - a parametrized assertion, e.g.,
v0=v1+v2 /\ v3>v4.
A state is an assignment of values to the program
variables. For example:
s=<v0=1,v2=3,v3=7,…,v18=2>
For predicate (first order assertion) p:
p(s) is p under the assignment s.
Example: p is x>y /\ y>z. s=<x=4, y=3, z=5>.
Then we have 4>3 /\ 3>5, which is false.
State space


The state space of a program is the set
of all possible states for it.
For example, if V={a, b, c} and the
variables are over the naturals, then the
state space includes:
<a=0,b=0,c=0>,<a=1,b=0,c=0>,
<a=1,b=1,c=0>,<a=932,b=5609,c=6658>…
Atomic Transitions



Each atomic transition represents a
small piece of code such that no smaller
piece of code is observable.
Is a:=a+1 atomic?
In some systems, e.g., when a is a
register and the transition is executed
using an inc command.
Non atomicity





Execute the following when a=0 in two concurrent processes:
P1: a=a+1
P2: a=a+1
Result: a=2. Is this always the case?
Consider the actual translation:
P1: load R1,a ; inc R1 ; store R1,a
P2: load R2,a ; inc R2 ; store R2,a
⇒ a may also be 1.

Scenario (one interleaving, starting with a=0):
P1: load R1,a   (R1=0)
P2: load R2,a   (R2=0)
P1: inc R1      (R1=1)
P2: inc R2      (R2=1)
P1: store R1,a  (a=1)
P2: store R2,a  (a=1)
Representing transitions

Each transition has two parts:



The enabling condition: a predicate.
The transformation: a multiple assignment.
For example:
a>b → (c,d):=(d,c)
This transition can be executed in states where a>b. The result of executing it is swapping the values of c and d.
Initial condition



A predicate I.
The program can
start from states s
such that I (s)
holds.
For example:
I (s)=a>b /\ b>c.
A transition system



A (finite) set of variables V over some
domain.
A set of states S.
A (finite) set of transitions T; each transition e→t has
an enabling condition e, and
a transformation t.
An initial condition I.
Example





V={a, b, c, d, e}.
S: all assignments of natural numbers
for variables in V.
T={c>0(c,e):=(c-1,e+1),
d>0(d,e):=(d-1,e+1)}
I: c=a /\ d=b /\ e=0
What does this transition system do?
The interleaving model



An execution is a finite or infinite
sequence of states s0, s1, s2, …
The initial state satisfies the initial condition, i.e., I(s0).
Moving from one state si to si+1 is by executing a transition e→t:
e(si), i.e., si satisfies e.
si+1 is obtained by applying t to si.
Example:
T = { c>0 → (c,e):=(c-1,e+1), d>0 → (d,e):=(d-1,e+1) },  I: c=a /\ d=b /\ e=0
s0 = <a=2, b=1, c=2, d=1, e=0>
s1 = <a=2, b=1, c=1, d=1, e=1>
s2 = <a=2, b=1, c=1, d=0, e=2>
s3 = <a=2, b=1, c=0, d=0, e=3>
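A small sketch (not from the slides) of executing this transition system under the interleaving model; the dictionary encoding and the random choice of an enabled transition are illustrative assumptions.

import random

# The example system: I: c=a /\ d=b /\ e=0,
# T = { c>0 -> (c,e):=(c-1,e+1), d>0 -> (d,e):=(d-1,e+1) }.
transitions = [
    (lambda s: s["c"] > 0, lambda s: {**s, "c": s["c"] - 1, "e": s["e"] + 1}),
    (lambda s: s["d"] > 0, lambda s: {**s, "d": s["d"] - 1, "e": s["e"] + 1}),
]

def run(a, b, steps=10):
    state = {"a": a, "b": b, "c": a, "d": b, "e": 0}   # initial condition
    trace = [dict(state)]
    for _ in range(steps):
        enabled = [t for (cond, t) in transitions if cond(state)]
        if not enabled:                      # nothing enabled: a maximal finite execution
            break
        state = random.choice(enabled)(state)   # nondeterministic choice of a transition
        trace.append(dict(state))
    return trace

for s in run(2, 1):
    print(s)    # at the end e = a + b, answering the slide's question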
The transitions
L0:While True do
NC0:wait(Turn=0);
CR0:Turn=1
endwhile ||
L1:While True do
NC1:wait(Turn=1);
CR1:Turn=0
endwhile
T0:PC0=L0PC0:=NC0
T1:PC0=NC0/\Turn=0
PC0:=CR0
T2:PC0=CR0
(PC0,Turn):=(L0,1)
T3:PC1=L1PC1=NC1
T4:PC1=NC1/\Turn=1
PC1:=CR1
T5:PC1=CR1
(PC1,Turn):=(L1,0)
Initially: PC0=L0/\PC1=L1
The state graph: successor relation between states.
[State graph figure: nodes are pairs (Turn value; PC0, PC1 locations), e.g., (Turn=0; L0,L1), (Turn=0; NC0,NC1), (Turn=1; CR0,NC1), with edges for the successor relation.]
Some observations


Executions: the set of maximal paths (infinite, or ending in a node where no transition is enabled).
Nondeterministic choice: when more than a single transition is enabled at a given state.
We have a nondeterministic choice when at least one node in the state graph has more than one successor.
Always ¬(PC0=CR0/\PC1=CR1)
(Mutual exclusion)
[State graph figure as before.]
Always: if Turn=0 then at some point Turn=1
[State graph figure as before.]
Interleaving semantics:
Execute one transition at a time.
[Figure: one interleaving path through the state graph, e.g., (Turn=0; L0,L1) → (Turn=0; L0,NC1) → (Turn=1; L0,CR1) → (Turn=0; NC0,NC1) → (Turn=0; CR0,NC1).]
Need to check the property
for every possible interleaving!
Interleaving semantics
[Figure: another interleaving path through the same state graph.]
Busy waiting
L0:While True do
NC0:wait(Turn=0);
CR0:Turn=1
endwhile ||
L1:While True do
NC1:wait(Turn=1);
CR1:Turn=0
endwhile
T0:PC0=L0PC0:=NC0
T1:PC0=NC0/\Turn=0PC0:=CR0
T1’:PC0=NC0/\Turn=1PC0:=NC0
T2:PC0=CR0(PC0,Turn):=(L0,1)
T3:PC1==L1PC1=NC1
T4:PC1=NC1/\Turn=1PC1:=CR1
T4’:PC1=NC1/\Turn=0PC1:=N1
T5:PC1=CR1(PC1,Turn):=(L1,0)
Initially: PC0=L0/\PC1=L1
Always when Turn=0 then
sometimes Turn=1
[State graph figure as before, now including the busy-waiting self-loops.]
Now it does not hold!
(Red subgraph generates a counterexample execution.)
Specification Formalisms
(Book: Chapter 5)
Properties of formalisms




Formal. Unique interpretation.
Intuitive. Simple to understand (visual).
Succinct. Spec. of reasonable size.
Effective.




Check that there are no contradictions.
Check that the spec. is implementable.
Check that the implementation satisfies spec.
Expressive.
May be used to generate initial code.
Specifying the implementation or its properties?

A transition system





A (finite) set of variables V.
A set of states S.
A (finite) set of transitions T, each transition et
has
 an enabling condition e and a transformation t.
An initial condition I.
Denote by R(s, s’) the fact that s’ is a successor of s.
The interleaving model




An execution is a finite or infinite sequence of states s0, s1, s2, …
The initial state satisfies the initial condition, i.e., I(s0).
Moving from one state si to si+1 is by executing a transition e→t:
e(si), i.e., si satisfies e.
si+1 is obtained by applying t to si.
Let's assume all sequences are infinite by extending finite ones by “stuttering” the last state.
Temporal logic




Dynamic, speaks about several “worlds”
and the relation between them.
Our “worlds” are the states in an
execution.
There is a linear relation between them: each two states in our execution are ordered.
Interpretation: over an execution,
later over all executions.
LTL: Syntax
φ ::= (φ) | ¬φ | φ/\φ | φ\/φ | φUφ | []φ | <>φ | Oφ | p
[] “box”, “always”, “forever”
<> “diamond”, “eventually”, “sometimes”
O “nexttime”
U “until”
Propositions p, q, r, … Each represents some state property (x>y+1, z=t, at_CR, etc.)
Semantics over suffixes of execution
[Figure: timeline diagrams illustrating []φ (φ in every suffix), <>φ (φ in some suffix), Oφ (φ in the next suffix), and φUψ (φ until ψ).]
Combinations



[]<>p “p will happen infinitely often”
<>[]p “p will happen from some point
forever”.
([]<>p) --> ([]<>q) “If p happens
infinitely often, then q also happens
infinitely often”.
Some relations:
[](a/\b) = ([]a)/\([]b), but <>(a/\b) ≠ (<>a)/\(<>b).
<>(a\/b) = (<>a)\/(<>b), but [](a\/b) ≠ ([]a)\/([]b).
[Figure: counterexample traces that alternate between a-states and b-states.]
What about

([]<>A)/\([]<>B)=[]<>(A/\B)? No, just <--

([]<>A)\/([]<>B)=[]<>(A\/B)? Yes!!!

(<>[]A)/\(<>[]B)=<>[](A/\B)? Yes!!!

(<>[]A)\/(<>[]B)=<>[](A\/B)? No, just -->
Can discard some operators


Instead of <>p, write true U p.
Instead of []p, we can write ¬<>¬p,
or ¬(true U ¬p).
Because []p=¬¬[]p.
¬[]p means it is not true that p holds
forever, or at some point ¬p holds or
<>¬p.
Formal semantic definition









Let ρ be a sequence s0 s1 s2 …
Let ρi be the suffix of ρ: si si+1 si+2 … (ρ0 = ρ).
ρi |= p, where p is a proposition, if si |= p.
ρi |= φ/\ψ if ρi |= φ and ρi |= ψ.
ρi |= φ\/ψ if ρi |= φ or ρi |= ψ.
ρi |= ¬φ if it is not the case that ρi |= φ.
ρi |= <>φ if for some j≥i, ρj |= φ.
ρi |= []φ if for each j≥i, ρj |= φ.
ρi |= φ U ψ if for some j≥i, ρj |= ψ, and for each i≤k<j, ρk |= φ.
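A minimal sketch (mine, not part of the lecture) of these clauses, evaluated over a finite sequence of states that is assumed to stutter in its last state forever, as the earlier slide suggests; formulas are nested tuples and all names are illustrative.

# Formulas: ("p", name), ("not", f), ("and", f, g), ("or", f, g),
# ("X", f) for O, ("F", f) for <>, ("G", f) for [], ("U", f, g).
def holds(f, seq, i=0):
    i = min(i, len(seq) - 1)                 # stuttering: the last state repeats forever
    op = f[0]
    if op == "p":   return f[1] in seq[i]    # a state is a set of true propositions
    if op == "not": return not holds(f[1], seq, i)
    if op == "and": return holds(f[1], seq, i) and holds(f[2], seq, i)
    if op == "or":  return holds(f[1], seq, i) or holds(f[2], seq, i)
    if op == "X":   return holds(f[1], seq, i + 1)
    if op == "F":   return any(holds(f[1], seq, j) for j in range(i, len(seq)))
    if op == "G":   return all(holds(f[1], seq, j) for j in range(i, len(seq)))
    if op == "U":
        return any(holds(f[2], seq, j) and
                   all(holds(f[1], seq, k) for k in range(i, j))
                   for j in range(i, len(seq)))
    raise ValueError(op)

# r1 = s1 s2 s3 s3 ... from the spring example; s2 and s3 satisfy "extended".
r1 = [set(), {"extended"}, {"extended"}]
print(holds(("F", ("G", ("p", "extended"))), r1))    # True: eventually always extended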
Then we interpret:
For a state: s |= p as in propositional logic.
For an execution: ρ |= φ is interpreted over a sequence, as in the previous slide.
For a system/program: P |= φ holds if ρ |= φ for every execution ρ of P.
Spring Example
[Figure: three-state spring automaton over states s1, s2, s3; transitions labeled pull, release, malfunction; s2 and s3 are labeled extended.]
r0 = s1 s2 s1 s2 s1 s2 s1 …
r1 = s1 s2 s3 s3 s3 s3 s3 …
r2 = s1 s2 s1 s2 s3 s3 s3 …
…
LTL satisfaction by a single sequence
r2 = s1 s2 s1 s2 s3 s3 s3 …
[Figure: the spring automaton.]
r2 |= extended ??
r2 |= O extended ??
r2 |= O O extended ??
r2 |= <> extended ??
r2 |= [] extended ??
r2 |= <>[] extended ??
r2 |= ¬<>[] extended ??
r2 |= (¬extended) U malfunction ??
r2 |= [](¬extended -> O extended) ??
LTL satisfaction by a system
[Figure: the spring automaton.]
P |= extended ??
P |= O extended ??
P |= O O extended ??
P |= <> extended ??
P |= [] extended ??
P |= <>[] extended ??
P |= ¬<>[] extended ??
P |= (¬extended) U malfunction ??
P |= [](¬extended -> O extended) ??
More specifications



[] (PC0=NC0  <> PC0=CR0)
[] (PC0=NC0 U Turn=0)
Try at home:
- The processes alternate in entering
their critical sections.
- Each process enters its critical section
infinitely often.
Proof system







¬<>p <--> []¬p
[](p→q) → ([]p→[]q)
[]p → (p /\ O[]p)
O¬p <--> ¬Op
[](p→Op) → (p→[]p)
(pUq) <--> (q \/ (p /\ O(pUq)))
(pUq) → <>q
+ propositional logic axiomatization
+ the rule: from p, infer []p.
Traffic light example
Green --> Yellow --> Red --> Green
Always has exactly one light:
[](¬(gr/\ye)/\¬(ye/\re)/\¬(re/\gr)/\(gr\/ye\/re))
Correct change of color:
[]((grUye)\/(yeUre)\/(reUgr))
Another kind of traffic light
Green-->Yellow-->Red-->Yellow-->Green
First attempt:
[](((gr\/re) U ye)\/(ye U (gr\/re)))
Correct specification:
[]( (gr(gr U (ye /\ ( ye U re ))))
/\(re(re U (ye /\ ( ye U gr ))))
/\(ye(ye U (gr \/ re))))
Needed only when we
can start with yellow
Properties of sequential
programs






init-when the program starts and
satisfies the initial condition.
finish-when the program terminates and
nothing is enabled.
Partial correctness: init/\[](finish)
Termination: init/\<>finish
Total correctness: init/\<>(finish/\ )
Invariant: init/\[]
Automata over finite words






A=<S, S, , I, F>
S (finite) - the alphabet.
S (finite) - the states.
  S x S x S - the transition relation.
I  S - the starting states.
F  S - the accepting states.
a
a
s0
s1
b
b
The transition relation




(s0, a, s0), (s0, b, s1), (s1, a, s0), (s1, b, s1)
[Figure: the same automaton.]
A run over a word




A word over Σ, e.g., abaab.
A sequence of states, e.g. s0 s0 s1 s0 s0 s1.
Starts with an initial state.
Accepting if ends at accepting state.
The language of an automaton




The words that are accepted by the
automaton.
Includes aabbba, abbbba.
Does not include abab, abbb.
What is the language?
Nondeterministic automaton


Transitions: (s0,a ,s0), (s0,b ,s0),
(s0,a ,s1),(s1,a ,s1).
What is the language of this
automaton?
[Figure: s0 (initial) with a,b self-loops and an a-transition to s1; s1 (accepting) with an a self-loop.]
Equivalent deterministic automaton
[Figure: the equivalent deterministic automaton obtained by the subset construction.]
Automata over infinite words



Similar definition.
Runs on infinite words over Σ.
Accepts when an accepting state occurs
infinitely often in a run.
Automata over infinite words



Consider the word abababab…
There is a run s0s0s1s0s1s0s1 …
This run is accepting, since s0
appears infinitely many times.
Other runs



For the word bbbbb… the run is
s0 s1 s1 s1 s1… and is not accepting.
For the word aaabbbbb …, the
run is s0 s0 s0 s0 s1 s1 s1 s1 …
What is the run for ababbabbb …?
Nondeterministic automaton


What is the language of this automaton?
What is the LTL specification if
b -- PC0=CR0, a=¬b?
[Figure: the nondeterministic automaton: s0 with a,b self-loops and an a-transition to s1; s1 accepting with an a self-loop.]
•Can you find a deterministic automaton with same language?
•Can you prove there is no such deterministic automaton?
No deterministic automaton
for (a+b)*aω






In a deterministic automaton there is exactly one run for each input word.
After some sequence of a’s, i.e., aaa…a must reach
some accepting state.
Now add b, obtaining aaa…ab.
After some more a’s, i.e., aaa…abaaa…a must reach
some accepting state.
Now add b, obtaining aaa…abaaa…ab.
Continuing this way, one obtains a run that has
infinitely many b’s but reaches an accepting state
(in a finite automaton, at least one would repeat)
infinitely often.
Specification using Automata



Let each letter correspond to some propositional
property.
Example:
a -- P0 enters critical section,
b -- P0 does not enter section.
[]<>PC0=CR0
Mutual Exclusion




a -- PC0=CR0/\PC1=CR1
b -- ¬(PC0=CR0/\PC1=CR1)
c -- true
[]¬(PC0=CR0/\PC1=CR1)
[Figure: automaton with states s0 (initial) and s1; s0 has a b self-loop and an a-transition to s1; s1 has a c (true) self-loop.]
Apply now to our
program:
L0:While True do
NC0:wait(Turn=0);
CR0:Turn=1
endwhile ||
L1:While True do
NC1:wait(Turn=1);
CR1:Turn=0
endwhile
T0:PC0=L0PC0=NC0
T1:PC0=NC0/\Turn=0
PC0:=CR0
T2:PC0=CR0
(PC0,Turn):=(L0,1)
T3:PC1==L1PC1=NC1
T4:PC1=NC1/\Turn=1
PC1:=CR1
T5:PC1=CR1
(PC1,Turn):=(L1,0)
Initially: PC0=L0/\PC1=L1
The state space
[State graph figure as before.]
[]¬(PC0=CR0/\PC1=CR1)
(Mutual exclusion)
[State graph figure as before.]
[](Turn=0 --> <>Turn=1)
[State graph figure as before.]
Interleaving semantics:
Execute one transition at a time.
[Figure: one interleaving path through the state graph.]
Need to check the property
for every possible interleaving!
[](Turn=0 --> <>Turn=1)
[State graph figure as before.]
Correctness condition

We want to find a correctness condition
for a model to satisfy a specification.
Language of a model: L(Model)
Language of a specification: L(Spec).

We need: L(Model)  L(Spec).


Correctness
Sequences satisfying Spec
Program executions
All sequences
Incorrectness
Counter
examples
Sequences satisfying Spec
Program executions
All sequences
Automatic Verification
(Book: Chapter 6)
How can we check the model?



The model is a graph.
The specification should refer to the graph representation.
Apply graph theory algorithms.
What properties can we check?



Invariant: a property that needs to
hold in each state.
Deadlock detection: can we reach a
state where the program is blocked?
Dead code: does the program have parts that are never executed?
How to perform the checking?



Apply a search strategy (Depth first
search, Breadth first search).
Check states/transitions during the
search.
If property does not hold, report
counter example!
If it is so good, why learn deductive
verification methods?

Model checking works only for finite state systems. It would not work with:
unconstrained integers,
unbounded message queues,
general data structures (queues, trees, stacks),
parametric algorithms and systems.
The state space explosion

Need to represent the state space of a
program in the computer memory.


Each state can be as big as the entire
memory!
Many states:


Each integer variable has 2^32 possibilities.
Two such variables have 2^64 possibilities.
In concurrent protocols, the number of states
usually grows exponentially with the number of
processes.
If it is so constrained, is it of any use?




Many protocols are finite state.
Many programs or procedures are finite state in nature. Can use abstraction techniques.
Sometimes it is possible to decompose a
program, and prove part of it by model
checking and part by theorem proving.
Many techniques to reduce the state space
explosion.
Depth First Search
Program DFS
  for each s such that Init(s):
    dfs(s)
end DFS

Procedure dfs(s):
  for each s’ such that R(s,s’) do
    if new(s’) then dfs(s’)
end dfs
[Figure: example graph over states q1..q5, explored by DFS.]
Start from an initial state: hash table {q1}, stack [q1].
Continue with a successor: hash table {q1, q2}, stack [q1, q2].
One successor of q2: hash table {q1, q2, q4}, stack [q1, q2, q4].
Backtrack to q2 (no new successors for q4): hash table {q1, q2, q4}, stack [q1, q2].
Backtrack to q1: hash table {q1, q2, q4}, stack [q1].
Second successor of q1: hash table {q1, q2, q4, q3}, stack [q1, q3].
Backtrack again to q1: hash table {q1, q2, q4, q3}, stack [q1].
How can we check properties with DFS?



Invariants: check that all reachable states
satisfy the invariant property. If not, show
a path from an initial state to a bad state.
Deadlocks: check whether a state where no
process can continue is reached.
Dead code: as you progress with the DFS,
mark all the transitions that are executed at
least once.
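A runnable sketch (my own, not the book's code) of the DFS scheme above, specialized to invariant checking with a counterexample path; the graph encoding and function names are illustrative.

def dfs_check_invariant(initial_states, successors, invariant):
    """Explore all reachable states; return a path to a violating state, or None."""
    visited = set()                          # the "hash table" of the slides

    def dfs(state, path):
        visited.add(state)
        if not invariant(state):
            return path                      # counterexample: initial state ... bad state
        for succ in successors(state):
            if succ not in visited:          # new(s')
                bad = dfs(succ, path + [succ])
                if bad:
                    return bad
        return None

    for s in initial_states:
        if s not in visited:
            bad = dfs(s, [s])
            if bad:
                return bad
    return None

# Tiny illustration: states could be triples (Turn, PC0, PC1); successors() would
# enumerate the transitions T0..T5 of the protocol, and the invariant is mutual exclusion:
mutex = lambda st: not (st[1] == "CR0" and st[2] == "CR1")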
The state graph: successor relation between states.
[State graph figure as before.]
¬(PC0=CR0/\PC1=CR1) is
an invariant!
[State graph figure as before.]
Want to do more!




Want to check more properties.
Want to have a unique algorithm to
deal with all kinds of properties.
This is done by writing specification in
more complicated formalisms.
We will see that in the next lecture.
[](Turn=0 --> <>Turn=1)
[State graph figure as before.]
Convert graph into
Buchi automaton
New initial state
[Figure: the state graph converted into a Büchi automaton by adding a new initial state init with edges to the states (Turn=0; L0,L1) and (Turn=1; L0,L1).]
•Propositions are attached to incoming nodes.
•All nodes are accepting.
Correctness condition

We want to find a correctness condition
for a model to satisfy a specification.
Language of a model: L(Model)
Language of a specification: L(Spec).

We need: L(Model)  L(Spec).


Correctness
Sequences satisfying Spec
Program executions
All sequences
How to prove correctness?



Show that L(Model)  L(Spec).
Equivalently:
______
Show that L(Model)  L(Spec) = Ø.
Also: can obtain L(Spec) by translating
from LTL!
What do we need to know?



How to intersect two automata?
How to complement an automaton?
How to translate from LTL to an
automaton?
Intersecting M1=(S1,Σ,T1,I1,A1)
and M2=(S2,Σ,T2,I2,S2) (all states of M2 accepting)





Run the two automata in parallel.
Each state is a pair of states: S1 x S2
Initial states are pairs of initials: I1 x I2
Acceptance depends on first
component: A1 x S2
Conforms with transition relation:
(x1,y1)-a->(x2,y2) when
x1-a->x2 and y1-a->y2.
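A sketch (mine, under the stated assumption that every state of M2 is accepting) of this product construction; automata are plain dictionaries with illustrative field names.

def intersect(m1, m2):
    """Each automaton: {"init": set, "acc": set, "trans": {(state, letter): set of states}}.
    Assumes all states of m2 are accepting, so acceptance depends on m1 only (A1 x S2)."""
    prod = {"init": {(x, y) for x in m1["init"] for y in m2["init"]},
            "acc": set(), "trans": {}}
    letters = {a for (_, a) in list(m1["trans"]) + list(m2["trans"])}
    states = set(prod["init"])
    work = list(states)
    while work:
        (x, y) = work.pop()
        if x in m1["acc"]:
            prod["acc"].add((x, y))              # accept when the M1 component accepts
        for a in letters:
            for x2 in m1["trans"].get((x, a), set()):
                for y2 in m2["trans"].get((y, a), set()):
                    prod["trans"].setdefault(((x, y), a), set()).add((x2, y2))
                    if (x2, y2) not in states:   # discover new product states on the fly
                        states.add((x2, y2))
                        work.append((x2, y2))
    return prod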
Example
(all states of second
automaton accepting!)
[Figure: M1 with states s0 (accepting) and s1, and M2 with states t0 and t1 (all accepting), over the letters a, b, c.]
States: (s0,t0), (s0,t1), (s1,t0), (s1,t1).
Accepting: (s0,t0), (s0,t1). Initial: (s0,t0).
[Figure: the product automaton over the states (s0,t0), (s0,t1), (s1,t0), (s1,t1).]
More complicated when A2S2
[Figure: the same two automata, but now not all states of the second automaton are accepting, and their naive product.]
Should we have acceptance when both components are accepting, i.e., {(s0,t1)}?
No: consider (ba)ω. It should be accepted, but the run never passes through that state.
More complicated when A2S2
[Figure: the same product automaton again.]
Should we have acceptance when at least one component is accepting, i.e., {(s0,t0),(s0,t1),(s1,t1)}?
No: consider b cω. It should not be accepted, but here the run would loop through (s1,t1).
Intersection - general case
[Figure: general-case example: a left automaton with accepting state q0, a right automaton with accepting state q2, and product states (q0,q3), (q1,q2), (q1,q3) connected by a, b, c edges.]
Version 0: to catch q0. Version 1: to catch q2.
Move from version 0 to version 1 when seeing an accepting state of the left automaton (q0); move from version 1 back to version 0 when seeing an accepting state of the right automaton (q2).
Make a state accepting in one of the versions, according to a component accepting state.
[Figure: the resulting two-version product, with states such as (q0,q3,0), (q1,q2,0), (q1,q3,0), (q0,q3,1), (q1,q2,1), (q1,q3,1).]
How to check for emptiness?
[Figure: the product automaton from the earlier example.]
Emptiness...
Need to check if there exists an accepting
run (passes through an accepting state
infinitely often).
Strongly Connected
Component (SCC)
A set of states with a path between each
pair of them.
Can use Tarjan’s DFS algorithm for finding
maximal SCC’s.
Finding accepting runs
If there is an accepting run, then at least one accepting state repeats on it infinitely often.
Look at a suffix of this run where all the states appear infinitely
often.
These states form a strongly connected component on the
automaton graph, including an accepting state.
Find a component like that and form an accepting cycle including
the accepting state.
Equivalently...

A strongly connected component: a set
of nodes where each node is reachable
by a path from each other node. Find a
reachable strongly connected
component with an accepting node.
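A sketch (an assumption of mine, not the book's code) of this check: compute the strongly connected components reachable from the initial states with Tarjan's algorithm, and look for one that contains an accepting state and at least one edge.

def buchi_nonempty(edges, init, accepting):
    """edges: dict state -> list of successors. True iff some reachable SCC contains
    an accepting state and a cycle (i.e., an accepting run exists)."""
    index, low, onstack, stack, sccs = {}, {}, set(), [], []

    def tarjan(v, counter=[0]):
        index[v] = low[v] = counter[0]; counter[0] += 1
        stack.append(v); onstack.add(v)
        for w in edges.get(v, []):
            if w not in index:
                tarjan(w)
                low[v] = min(low[v], low[w])
            elif w in onstack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:                   # v is the root of an SCC
            comp = set()
            while True:
                w = stack.pop(); onstack.discard(w); comp.add(w)
                if w == v:
                    break
            sccs.append(comp)

    for s in init:
        if s not in index:
            tarjan(s)                            # only reachable states are explored

    for comp in sccs:
        has_cycle = len(comp) > 1 or any(v in edges.get(v, []) for v in comp)
        if has_cycle and comp & set(accepting):
            return True
    return False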
How to complement?



Complementation is hard!
Can ask for the negated property (the
sequences that should never occur).
Can translate from an LTL formula φ to an automaton Aφ, and complement Aφ. But: can translate ¬φ into an automaton directly!
Model Checking under Fairness
Express the fairness as a property φ.
To prove a property ψ under fairness, model check φ → ψ.
[Figure: Venn diagram; a counterexample is a program execution that is fair (satisfies φ) and bad (satisfies ¬ψ).]
Model Checking under Fairness
Specialize model checking. For weak
process fairness: search for a
reachable strongly connected
component, where for each process
P either

it contains an occurrence of a transition from P, or

it contains a state where P is
disabled.
Translating from logic to
automata
(Book: Chapter 6)
Why translating?


Want to write the specification in some
logic.
Want model-checking tools to be able
to check the specification automatically.
Generalized Büchi automata


Acceptance condition F is a set
F={f1 , f2 , … , fn } where each fi is a set
of states.
To accept, a run needs to pass infinitely
often through a state from every set fi .
Translating into simple Büchi automaton
[Figure: a generalized Büchi automaton over states q0, q1, q2 is converted into a simple Büchi automaton by keeping two versions (version 0 and version 1) of the state space and moving between the versions when states of the different acceptance sets are passed.]
Preprocessing





Convert into normal form, where negation only applies to propositional variables.
¬[]φ becomes <>¬φ.
¬<>φ becomes []¬φ.
What about ¬(φ U ψ)?
Define the operator R (release, also written V) such that
¬(φ U ψ) = (¬φ) R (¬ψ),
¬(φ R ψ) = (¬φ) U (¬ψ).
Semantics of pR q
[Figure: timelines for p R q: either q holds in every state while p never holds, or q holds up to and including the first state where p holds.]


Replace ¬true by false, and ¬false by true.
Replace ¬(φ \/ ψ) by (¬φ) /\ (¬ψ), and ¬(φ /\ ψ) by (¬φ) \/ (¬ψ).
Eliminate implications, <>, []



Replace φ -> ψ by (¬φ) \/ ψ.
Replace <>φ by (true U φ).
Replace []φ by (false R φ).
Example




Translate ( []<>P ) -> ( []<>Q ).
Eliminate the implication: ¬( []<>P ) \/ ( []<>Q ).
Eliminate [], <>:
¬( false R ( true U P ) ) \/ ( false R ( true U Q ) ).
Push negation inwards:
( true U ( false R ¬P ) ) \/ ( false R ( true U Q ) ).
The data structure
Each node has the fields: Incoming, New, Old, Name, Next.
The main idea
φ U ψ = ψ \/ ( φ /\ O( φ U ψ ) )
φ R ψ = ψ /\ ( φ \/ O( φ R ψ ) )
This separates a formula into two parts: one that holds in the current state, and another that holds in the next state.
How to translate?


Take one formula from “New” and add it
to “Old”.
According to the formula, either


Split the current node into two, or
Evolve the node into a new version.
Splitting
Copy the incoming edges to both copies; update the other fields.
[Figure: a node with fields Incoming, New, Old, Next is split into two nodes.]
Evolving
Copy the incoming edges; update the other fields.
[Figure: a node evolves into a new version of itself.]
Possible cases:
φ U ψ, split:
- Add φ to New, add φ U ψ to Next.
- Add ψ to New.
Because φ U ψ = ψ \/ ( φ /\ O( φ U ψ ) ).
φ R ψ, split:
- Add φ and ψ to New.
- Add ψ to New, add φ R ψ to Next.
Because φ R ψ = ψ /\ ( φ \/ O( φ R ψ ) ).
More cases:
φ \/ ψ, split:
- Add φ to New.
- Add ψ to New.
φ /\ ψ, evolve:
- Add φ and ψ to New.
O φ, evolve:
- Add φ to Next.
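A small sketch (my own illustration, not the algorithm's full code) of how one formula taken from New is processed according to these cases; nodes are simple dictionaries and the helper names are assumptions.

def process(node, f):
    """Apply one split/evolve step to `node` for formula f, which has already been moved
    from New to Old. Formulas: ("U", a, b), ("R", a, b), ("or", a, b), ("and", a, b), ("X", a)."""
    def copy(extra_new=(), extra_next=()):
        return {"incoming": set(node["incoming"]),
                "new": set(node["new"]) | set(extra_new),
                "old": set(node["old"]),
                "next": set(node["next"]) | set(extra_next)}
    op = f[0]
    if op == "U":                       # a U b = b \/ (a /\ O(a U b))
        return [copy({f[1]}, {f}), copy({f[2]})]
    if op == "R":                       # a R b = b /\ (a \/ O(a R b))
        return [copy({f[1], f[2]}), copy({f[2]}, {f})]
    if op == "or":
        return [copy({f[1]}), copy({f[2]})]
    if op == "and":                     # evolve: both conjuncts must hold now
        return [copy({f[1], f[2]})]
    if op == "X":                       # evolve: the argument must hold in the next state
        return [copy((), (f[1],))]
    return [copy()]                     # literals: nothing more to add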
How to start?
Create a single node with Incoming = {init}, New = { aU(bUc) }, and empty Old and Next.
[Figure: the node is split repeatedly: one copy gets a in Old and aU(bUc) in Next; the other gets bUc in New, which in turn splits into a copy with b in Old and bUc in Next, and a copy with c in Old.]
When to stop splitting?


When “New” is empty.
Then compare against a list of existing nodes
“Nodes”:


If a node with the same “Old” and “Next” fields already exists, just add the incoming edges of the new version to the old node.
Otherwise, add the node to “Nodes”, and generate a successor with “New” set to the “Next” field of the father.
[Figure: creating a successor node: a node with Old = {a, aU(bUc)} generates a successor whose New is set to its Next field, i.e., { aU(bUc) }.]
How to obtain the automaton?
There is an edge from node X to node Y, labeled with propositions P (negated or non-negated), if X is in the incoming list of Y and Y has the propositions P in its field “Old”.
The initial node is init.
[Figure: a node Y with Old = {a, b, ¬c} and an incoming edge from X.]
The resulting nodes.
a, aU(bUc)
b, bUc, aU(bUc)
b, bUc
c, bUc
c, bUc, aU(bUc)
Initial nodes
a, aU(bUc)
b, bUc, aU(bUc)
b, bUc
c, bUc
c, bUc, aU(bUc)
All nodes with incoming edge from “init”.
Include only atomic propositions.
[Figure: the resulting automaton, with node labels reduced to the atomic propositions a, b, b, c, c and initial node Init.]
Acceptance conditions


Use “generalized Buchi automata”, where
there are several acceptance sets F1, F2, …,
Fn, and each accepted infinite sequence must
include at least one state from each set
infinitely often.
Each set corresponds to a subformula of the form φUψ. This guarantees that it is never the case that φUψ holds forever while ψ never holds.
Accepting w.r.t. bU c
a, aU(bUc)
b, bUc, aU(bUc)
b, bUc
c, bUc
All nodes with c, or without bUc.
c, bUc, aU(bUc)
Acceptance w.r.t. aU (bU c)
a, aU(bUc)
b, bUc, aU(bUc)
b, bUc
c, bUc
All nodes with bUc or without aU(bUc).
c, bUc, aU(bUc)
Why testing?






Reduce design/programming errors.
Can be done during development,
before
production/marketing.
Practical, simple to do.
Check the real thing, not a model.
Scales up reasonably.
Being state of the practice for decades.
Part 1: Testing of
black box finite state
machine
Want to know: in what state we started; in what state we are; conformance; satisfaction of a temporal property.
Know: the transition relation; the size, or a bound on the size.
Finite
automata
(Mealy machines)
S - finite set of states (size n).
Σ - set of inputs (size d).
O - set of outputs, one on each transition.
s0 ∈ S - initial state.
δ : S x Σ → S - transition function.
λ : S x Σ → O - output on each edge.
Notation: δ(s,a1a2..an) = δ(…(δ(δ(s,a1),a2)…),an)
λ(s,a1a2..an) = λ(s,a1) λ(δ(s,a1),a2) … λ(δ(…δ(δ(s,a1),a2)…),an)
Finite automata (Mealy machines) — example (definitions as on the previous slide):
S = {s1, s2, s3}, Σ = {a, b}, O = {0, 1}.
δ(s1,a)=s3 (also written s1=a=>s3), δ(s1,b)=s2 (also s1=b=>s2), …
λ(s1,a)=0, λ(s1,b)=1, …
δ(s1,ab)=s1, λ(s1,ab)=01.
[Figure: the three-state Mealy machine with edges labeled input/output, e.g., s1 -a/0-> s3 and s1 -b/1-> s2.]
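A sketch (not from the lecture) of this example machine as Python dictionaries, with δ and λ extended to input strings as in the notation above; only the entries spelled out on the slide are certain, the rest are placeholders added to make the machine total.

# Stated on the slide: δ(s1,a)=s3, δ(s1,b)=s2, λ(s1,a)=0, λ(s1,b)=1,
# δ(s1,ab)=s1 and λ(s1,ab)=01 (hence δ(s3,b)=s1 and λ(s3,b)=1).
delta = {("s1", "a"): "s3", ("s1", "b"): "s2", ("s3", "b"): "s1",
         ("s2", "a"): "s3", ("s2", "b"): "s1", ("s3", "a"): "s1"}   # last three: placeholders
lam   = {("s1", "a"): "0", ("s1", "b"): "1", ("s3", "b"): "1",
         ("s2", "a"): "0", ("s2", "b"): "1", ("s3", "a"): "0"}      # last three: placeholders

def delta_star(s, word):                  # δ(s, a1a2..an)
    for a in word:
        s = delta[(s, a)]
    return s

def lambda_star(s, word):                 # λ(s, a1a2..an): the output sequence
    out = []
    for a in word:
        out.append(lam[(s, a)])
        s = delta[(s, a)]
    return "".join(out)

print(delta_star("s1", "ab"), lambda_star("s1", "ab"))   # s1 01, as on the slide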
Why deterministic machines?





Otherwise no amount of experiments would
guarantee anything.
If dependent on some parameter (e.g., temperature),
we can determinize, by taking parameter as
additional input.
We still can model concurrent system. It means just
that the transitions are deterministic.
All kinds of equivalences are unified into language
equivalence.
Also: connected machine (otherwise we may never
get to the completely separate parts).
Determinism
[Figure: a machine fragment with two different a/1 transitions leaving the same state.]
When the black box is nondeterministic, we
might never test some choices.
Preliminaries: separating sequences
[Figure: a three-state Mealy machine over states s1, s2, s3 with inputs a, b and outputs 0, 1.]
Start with one block containing all states {s1, s2, s3}.
Step A: separate into blocks of states with different output.
Two blocks, separated using the string b: {s1, s3}, {s2}.
Repeat step B: separate blocks based on moving to different blocks.
Separate the first block using b into three singleton blocks.
Separating sequences: b, bb.
Max rounds: n-1, sequences: n-1, length: n-1.
For each pair of states there is a separating sequence.
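A sketch (mine) of this partition refinement on a Mealy machine given as δ and λ dictionaries; it returns the final blocks, from which separating sequences can be read off. The toy machine below is an illustration of mine, not the slide's example.

def refine(states, inputs, delta, lam):
    """Split states first by outputs, then repeatedly by the blocks their successors fall in."""
    blocks = [set(states)]
    changed = True
    while changed:
        changed = False
        new_blocks = []
        for block in blocks:
            # Signature of a state: for every input, its output and the index of the
            # block that its successor currently belongs to.
            def sig(s):
                return tuple((lam[(s, a)],
                              next(i for i, b in enumerate(blocks) if delta[(s, a)] in b))
                             for a in inputs)
            groups = {}
            for s in block:
                groups.setdefault(sig(s), set()).add(s)
            if len(groups) > 1:
                changed = True
            new_blocks.extend(groups.values())
        blocks = new_blocks
    return blocks

# Hypothetical three-state machine for illustration:
delta = {("q0", "a"): "q1", ("q0", "b"): "q0", ("q1", "a"): "q1",
         ("q1", "b"): "q2", ("q2", "a"): "q0", ("q2", "b"): "q2"}
lam   = {("q0", "a"): "0", ("q0", "b"): "1", ("q1", "a"): "0",
         ("q1", "b"): "1", ("q2", "a"): "0", ("q2", "b"): "0"}
print(refine(["q0", "q1", "q2"], "ab", delta, lam))   # three singleton blocks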
Want to know the state of the
machine (at end): Homing sequence.
Depending on output, we would know in what
state we are.
Find a sequence µ such that δ(s,µ)≠δ(t,µ) implies λ(s,µ)≠λ(t,µ).
So, given an input µ that is executed from state s, we look at a table of outputs and, according to that table, know in which state r we ended.
For any other ending state, the output would be different.
Want to know the state of the
machine (at end): Homing sequence.
Algorithm: Put all the states in one block (initially
we do not know what is the current state).
Then repeatedly partitions blocks of states, as long
as they are not singletons, as follows:
 Take a non singleton block, append a
distinguishing sequence µ that separates at least
two states in that block.
 Update each block to the states after executing µ.
Max length: (n-1)².
(Lower bound: n(n-1)/2.)
Example (homing sequence)
[Figure: the three-state machine.]
Apply b to {s1, s2, s3}: output 1 leaves {s1, s2} as possible current states; output 0 gives {s3}.
Apply b again to {s1, s2}: the blocks become singletons {s1}, {s2}, {s3}.
On input b and output 1, we still don’t know if we were in s1 or s3, i.e., if we are currently in s2 or s1. So separate these cases with another b.
Synchronizing sequence
One sequence takes the machine to the same final state, regardless of the initial state or the outputs.
That is: find a sequence µ such that for all states s, t: δ(s,µ) = δ(t,µ).
Not every machine has a synchronizing sequence.
Whether one exists can be checked, and the sequence constructed, in polynomial time.
[Figure: the three-state example machine.]
Algorithm for synchronizing sequences
Construct a graph with ordered pairs of nodes (s,t) such that (s,t)=a=>(s’,t’) when s=a=>s’ and t=a=>t’. (Ignore self loops, e.g., on (s2,s2).)
[Figure: the pair graph over the nodes (s1,s1), (s2,s2), (s3,s3), (s1,s2), (s2,s3), (s1,s3), with edges labeled a and b.]
Algorithm continues (2)
There is an input
sequence from s≠t to
some r iff there is a
path in this graph
from (s,t) to (r,r).
There is a
synchronization
sequence iff some
node (r,r) is reachable
from every pair of
distinct nodes.
In this case it is (s2,s2).
Algorithm continues (3)
Notation: δ(S,x) = { t | ∃s∈S, δ(s,x)=t }.
1. i:=1; S1:=S.
2. Take some states s≠t in Si, and find a shortest path labeled xi from (s,t) to some (r,r).
3. i:=i+1; Si := δ(Si-1, xi-1). If |Si|>1, go to 2; otherwise go to 4.
4. Concatenate x1 x2 … xk.
Number of sequences ≤ n-1. Each of length ≤ n(n-1)/2. Overall O(n(n-1)²/2).
Example:
(s2,s3)=a=>(s2,s2), so x1:=a; δ({s1,s2,s3},a)={s1,s2}.
(s1,s2)=ba=>(s2,s2), so x2:=ba; δ({s1,s2},ba)={s2}.
So x1x2 = aba is a synchronizing sequence, bringing every state into state s2.
[Figure: the pair graph, with the chosen paths marked.]
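A sketch (my own) of steps 1–4: a BFS on the pair graph finds a shortest word that merges a chosen pair, and this is repeated until a single state remains; the dictionary encoding of the machine is an assumption.

from collections import deque

def synchronizing_sequence(states, inputs, delta):
    def run(q, x):                       # δ extended to words
        for a in x:
            q = delta[(q, a)]
        return q

    def merge_word(s, t):
        """Shortest input word x with δ(s,x) = δ(t,x), by BFS over state pairs."""
        start, prev = (s, t), {(s, t): None}
        queue = deque([start])
        while queue:
            (u, v) = queue.popleft()
            if u == v:                   # merged: reconstruct the word backwards
                word, node = [], (u, v)
                while prev[node] is not None:
                    node, a = prev[node]
                    word.append(a)
                return "".join(reversed(word))
            for a in inputs:
                nxt = (delta[(u, a)], delta[(v, a)])
                if nxt not in prev:
                    prev[nxt] = ((u, v), a)
                    queue.append(nxt)
        return None                      # this pair can never be merged

    current, word = set(states), ""
    while len(current) > 1:
        s, t = sorted(current)[:2]
        x = merge_word(s, t)
        if x is None:
            return None                  # no synchronizing sequence exists
        word += x
        current = {run(q, x) for q in current}
    return word

# On the lecture's three-state example this yields a word such as aba,
# driving every state into s2.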
State identification:




Want to know in which state the
system has started (was reset).
Can be a preset distinguishing
sequence (fixed), or a tree
(adaptive).
May not exist (PSPACE complete
to check if preset exists,
polynomial for adaptive).
Best known algorithm:
exponential length for preset,
polynomial for adaptive [LY].
Sometimes cannot identify the initial state
[Figure: a three-state machine where different experiments merge different states.]
Start with a: in case of being in s1 or s3 we’ll move to s1 and cannot distinguish.
Start with b: in case of being in s1 or s2 we’ll move to s2 and cannot distinguish.
The kind of experiment we do affects what we can distinguish. Much like the Heisenberg principle in physics.
So…
We’ll assume resets from now on!
Conformance testing





Unknown deterministic finite state system B.
Known: n states and alphabet Σ.
An abstract model C of B. C satisfies all the properties we want from B. C has m states.
Check conformance of B and C.
Another version: only a bound n on the number of states l of B is known.

Check conformance with a
given state machine
[Figure: an unknown black box machine compared (=?) with the three-state specification machine s1, s2, s3.]
Black box machine has no more states than specification machine
(errors are mistakes in outputs, mistargeted edges).
Specification machine is reduced, connected, deterministic.
Machine resets reliably to a single initial state (or use homing
sequence).
Conformance testing [Ch,V]
[Figure: two machines with the same input/output behavior; one of them is not reduced.]
Cannot distinguish if reduced or not.
Conformance testing (cont.)
[Figure: a tree of input sequences over a and b used as experiments.]
Need: bound on number of states of B.
Preparation:
Construct a spanning tree
[Figure: the specification machine and a spanning tree of it, rooted at the initial state.]
Given an initial state, we
can reach any state of
the automaton.
How the algorithm works?
According to the spanning tree, force a sequence of inputs to reach each state (using Reset in between).
1. From each state, perform the distinguishing sequences.
2. From each state, make a single transition, check the output, and use the distinguishing sequences to check that the correct target state was reached.
[Figure: the spanning tree with Reset edges and distinguishing sequences applied at each state.]
Comments
1. Checking the different distinguishing sequences (m-1 of them) means each time resetting and returning to the state under experiment.
2. A reset can be performed to a distinguished state through a homing sequence. Then we can perform a sequence that brings us to the distinguished initial state.
3. Since there are no more than m states, and, according to the experiment, no fewer than m states, there are exactly m states.
4. An isomorphism between the transition relations is found; hence, by minimality, the two automata recognize the same languages.
Combination lock automaton


Assume accepting states.
Accepts only words with a specific suffix
(cdab in the example).
[Figure: combination lock automaton s1 -c-> s2 -d-> s3 -a-> s4 -b-> s5; any other input leads back to s1.]
When only a bound on size of
black box is known…

Black box can “pretend” to behave as a
specification automaton for a long time,
then upon using the right combination,
make a mistake.
[Figure: a black box machine with states that pretend to be s1 and s3 of the specification, behaving identically until the right combination is entered.]
Conformance testing algorithm [VC]
Complexity: m²·n·d^(n-m+1).
Probabilistic complexity: polynomial.
The worst that can happen is a combination lock automaton that behaves differently only in the last state. The length of the lock is the difference between the size n of the black box and the size m of the specification.
Reach every state on the spanning tree and check every word of length n-m+1 or less. Check that after the combination we are at the state we are supposed to be in, using the distinguishing sequences.
No need to check transitions separately: already included in the above check.
[Figure: the spanning tree with Reset edges; from each state, apply words of length n-m+1 followed by distinguishing sequences.]
Model Checking





Finite state description of a system B.
LTL formula . Translate  into an automaton P.
Check whether L(B)  L(P)=.
If so, S satisfies . Otherwise, the intersection includes a
counterexample.
Repeat for different properties.


Büchi automata (ω-automata)
S - finite set of states (B has l ≤ n states).
S0 ⊆ S - initial states (P has m states).
Σ - finite alphabet (contains p letters).
Δ ⊆ S x Σ x S - transition relation.
F ⊆ S - accepting states.
Accepting run: passes a state in F infinitely often.
System automaton: F=S, deterministic, one initial state.
Property automaton: not necessarily deterministic.
Example: check a
a
a
<>a
a, a
Example: check <>a
a
a
<>a
a
a
Example: check  <>a
a, a
a
<>a
a
Use automatic translation algorithms, e.g.,
[Gerth,Peled,Vardi,Wolper 95]
System
[Figure: a three-state system automaton (s1, s2, s3) over the letters a, b, c, and a two-state property automaton P (states q1, q2).]
Every element in the product is a counterexample for the checked property.
Acceptance is determined by automaton P.
[Figure: the product automaton, with states such as (s1,q1), (s1,q2), (s2,q1), (s3,q2).]
Model Checking / Testing
Model checking:
- Given finite state system B.
- Transition relation of B known.
- Property represented by automaton P.
- Check if L(B) ∩ L(P) = ∅.
- Graph theory or BDD techniques.
- Complexity: polynomial.
Conformance testing:
- Unknown finite state system B.
- Alphabet and number of states of B (or an upper bound) known.
- Specification given as an abstract system C.
- Check if B conforms to C.
- Complexity: polynomial if the number of states is known; exponential otherwise.
Black box checking [PVY]
- Unknown finite state system B.
- Alphabet and upper bound on the number of states of B known.
- Property represented by automaton P.
- Check if L(B) ∩ L(P) = ∅.
- Graph theory techniques.
- Complexity: exponential.
Experiments
[Figure: an experiment tree on the black box: input sequences over a, b, c applied with reset between experiments; branches labeled “try b” and “try c”; some branches end with “fail”.]
Simpler problem: deadlock?


Nondeterministic algorithm: guess a path of length ≤ n from the initial state to a deadlock state. Linear time, logarithmic space.
Deterministic algorithm: systematically try paths of length ≤ n, one after the other (and use reset), until a deadlock is reached. Exponential time, linear space.
Deadlock complexity




Nondeterministic algorithm: linear time, logarithmic space.
Deterministic algorithm: exponential (p^(n-1)) time, linear space.
Lower bound: Exponential time (use
combination lock automata).
How does this conform with what we
know about complexity theory?
Modeling black box checking



Cannot model using Turing machines:
not all the information about B is
given. Only certain experiments are
allowed.
We learn the model as we make the
experiments.
Can use the model of games of
incomplete information.
Games of incomplete
information







Two players: the player and an opponent player (here, deterministic).
Finitely many configurations C, including: initial Ci, winning W+ and W−.
An equivalence relation @ on C (the player cannot distinguish between equivalent states).
Labels L on moves (try a, reset, success, fail).
The player has the same labels on moves from configurations that are equivalent.
Deterministic strategy for the player: will lead to a configuration in W+ ∪ W−. Cannot distinguish between equivalent configurations.
Nondeterministic strategy: can distinguish between equivalent configurations.
Modeling BBC as games



Each configuration contains an automaton
and its current state (and more).
Moves of the player are labeled with
try a, reset... Moves of the -player with
success, fail.
c1 @ c2 when the automata in c1 and c2
would respond in the same way to the
experiments so far.
A naive strategy for BBC





Learn first the structure of the black box.
Then apply the intersection.
Enumerate automata with n states (without
repeating isomorphic automata).
For the current automaton and a new automaton, construct a distinguishing sequence. Only one of them survives.
Complexity: O((n+1)^(p(n+1)) / n!).
On-the-fly strategy





Systematically (as in the deadlock case), find two sequences v1 and v2 of length ≤ mn.
Applying v1 to P brings us to a state t that is accepting.
Applying v2 to P brings us back to t.
Apply v1 v2^n to B. If this succeeds, there is a cycle in the intersection labeled with v2, with t as the P (accepting) component.
Complexity: O(n² p^(2mn) m).
Learning an automaton




Use Angluin’s algorithm for learning an
automaton.
The learning algorithm queries whether
some strings are in the automaton B.
It can also conjecture an automaton Mi
and asks for a counterexample.
It then generates an automaton with
more states Mi+1 and so forth.
A strategy based on learning





Start the learning algorithm.
Queries are just experiments to B.
For a conjectured automaton Mi, check if L(Mi) ∩ L(P) = ∅.
If so, we check conformance of Mi with B ([VC] algorithm).
If nonempty, the intersection contains some v1 v2^ω. We test B with v1 v2^n. If this succeeds: error; otherwise, this is a counterexample for Mi.
Complexity



l - actual size of B.
n - an upper bound on the size of B.
d - size of the alphabet.
Lower bound: reachability is similar to deadlock.
O(l³ d l + l² m n) if there is an error.
O(l³ d l + l² n d^(n-l+1) + l² m n) if there is no error.
If n is not known, check while time allows.
Probabilistic complexity: polynomial.

Some experiments




Basic system written in SML (by Alex
Groce, CMU).
Experiment with black box using Unix
I/O.
Allows model-free model checking of C
code with inter-process communication.
Compiling tested code in SML with BBC
program as one process.
Part 2: Software testing
(Book: chapter 9)
Testing is not about showing that there are
no errors in the program.
 Testing cannot show that the program
performs its intended goal correctly.
So, what is software testing?
Testing is the process of executing the program
in order to find errors.
A successful test is one that finds an error.

Some software testing
stages







Unit testing – the lowest level, testing
some procedures.
Integration testing – different pieces of code.
System testing – testing a system as a whole.
Acceptance testing – performed by the
customer.
Regression testing – performed after updates.
Stress testing – checking the code under
extreme conditions.
Mutation testing – testing the quality of the test
suite.
Some drawbacks of testing




There are never sufficiently many test
cases.
Testing does not find all the errors.
Testing is not trivial and requires
considerable time and effort.
Testing is still a largely informal task.
Black-Box (data-driven,
input-output) testing
The testing is not based on the structure
of the program (which is unknown).
In order to ensure correctness, every
possible input needs to be tested - this
is impossible!
The goal: to maximize the number of
errors found.
White-box (logic-driven) testing
Is based on the internal structure of the
program.
There are several alternative criterions for
checking “enough” paths in the
program.
Even checking all paths (highly
impractical) does not guarantee finding
all errors (e.g., missing paths!)
Some testing principles






A programmer should not test his/her own program.
One should test not only that the program does what it
is supposed to do, but that it does not do what it is not
supposed to.
The goal of testing is to find errors, not to show that
the program is errorless.
No amount of testing can guarantee an error-free program.
Parts of programs where a lot of errors have already
been found are a good place to look for more errors.
The goal is not to humiliate the programmer!
Inspections and Walkthroughs





Manual testing methods.
Done by a team of people.
Performed at a meeting
(brainstorming).
Takes 90-120 minutes.
Can find 30%-70% of errors.
Code Inspection





Team of 3-5 people.
One is the moderator. He
distributes materials and
records the errors.
The programmer
explains the program
line by line.
Questions are raised.
The program is analyzed
w.r.t. a checklist of
errors.
Checklist for inspections
Data declaration: All variables declared? Default values understood? Arrays and strings initialized? Variables with similar names? Correct initialization?
Control flow: Each loop terminates? DO/END statements match?
Input/output: OPEN statements correct? Format specification correct? End-of-file case handled?
Walkthrough




Team of 3-5 people.
Moderator, as
before.
Secretary, records
errors.
Tester: plays the role of a computer on some test suites, on paper and board.
Selection of test cases
(for white-box testing)
The main problem is to select a good coverage
criterion. Some options are:





Cover all paths of the program.
Execute every statement at least once.
Each decision has a true or false value at least
once.
Each condition takes each truth value at least once.
Check all possible combinations of conditions in
each decision.
Cover all the paths of the program
Infeasible.
Consider the flow diagram
on the left.
It corresponds to a loop.
The loop body has 5 paths.
If the loop executes 20 times there are 5^20 different paths!
May also be unbounded!
How to cover the executions?
IF (A>1) & (B=0) THEN X=X/A;
END;
IF (A=2) | (X>1) THEN X=X+1;
END;



Choose values for A,B,X.
Value of X may change, depending on A,B.
What do we want to cover? Paths? Statements?
Conditions?
Statement coverage
Execute every statement at least once.
By choosing A=2, B=0, X=3, each statement will be executed:
IF (A>1) & (B=0) THEN X=X/A; END;   (now X=1.5)
IF (A=2) | (X>1) THEN X=X+1; END;
The case where the tests fail is not checked!
Decision coverage
Each decision has a true and a false outcome at least once.
Can be achieved using
A=3, B=0, X=3
A=2, B=1, X=1
IF (A>1) & (B=0) THEN X=X/A; END;
IF (A=2) | (X>1) THEN X=X+1; END;
Problem: does not test individual conditions, e.g., when X>1 is erroneous in the second decision.
Decision coverage: A=3, B=0, X=3. Now X=1.
IF (A>1) & (B=0) THEN X=X/A; END;
IF (A=2) | (X>1) THEN X=X+1; END;
Decision coverage: A=2, B=1, X=1.
IF (A>1) & (B=0) THEN X=X/A; END;
IF (A=2) | (X>1) THEN X=X+1; END;
The case where A≤1 and the case where X>1 were not checked!
Condition coverage
Each condition has a true and a false value at least once.
For example:
A=1, B=0, X=3
A=2, B=1, X=0
lets each condition be true and false once.
Problem: covers only the path where the first test fails and the second succeeds.
IF (A>1) & (B=0) THEN X=X/A; END;
IF (A=2) | (X>1) THEN X=X+1; END;
Condition coverage
A=1,B=0,X=3

IF (A>1) & (B=0)
THEN X=X/A;
END;
IF (A=2) | (X>1)
THEN X=X+1;
END;
Condition coverage: A=2, B=1, X=0.
Did not check the first THEN part at all!
Can use condition+decision coverage.
IF (A>1) & (B=0) THEN X=X/A; END;
IF (A=2) | (X>1) THEN X=X+1; END;
Multiple Condition Coverage
Test all combinations of all conditions in each decision:
A>1,B=0;  A>1,B≠0;  A≤1,B=0;  A≤1,B≠0;
A=2,X>1;  A=2,X≤1;  A≠2,X>1;  A≠2,X≤1.
IF (A>1) & (B=0) THEN X=X/A; END;
IF (A=2) | (X>1) THEN X=X+1; END;
A smaller number of cases:
A=2, B=0, X=4
A=2, B=1, X=1
A=1, B=0, X=2
A=1, B=1, X=1
Note the X=4 in the first case: it is due to the fact that X changes before being used!
IF (A>1) & (B=0) THEN X=X/A; END;
IF (A=2) | (X>1) THEN X=X+1; END;
Further optimization: not all combinations.
For C /\ D, check (C, D), (¬C, D), (C, ¬D).
For C \/ D, check (¬C, ¬D), (¬C, D), (C, ¬D).
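A sketch (mine) of the two IF statements as a Python function together with the test vectors mentioned above, so the different coverage criteria can be compared by simply running them.

def fragment(A, B, X):
    if (A > 1) and (B == 0):
        X = X / A
    if (A == 2) or (X > 1):
        X = X + 1
    return X

# Statement coverage: one case executes every statement.
statement_cases = [(2, 0, 3)]
# Decision coverage: each decision is true and false at least once.
decision_cases  = [(3, 0, 3), (2, 1, 1)]
# Condition coverage: each individual condition is true and false at least once.
condition_cases = [(1, 0, 3), (2, 1, 0)]
# Reduced multiple-condition set from the slide:
multi_cases     = [(2, 0, 4), (2, 1, 1), (1, 0, 2), (1, 1, 1)]

for case in statement_cases + decision_cases + condition_cases + multi_cases:
    print(case, "->", fragment(*case))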
Preliminary: Relativizing assertions
(Book: Chapter 7)
φ(B): x1 = y1*x2 + y2 /\ y2 >= 0.
Relativizing φ(B) w.r.t. the assignment Y:=g(X,Y) gives φ(B)[Y\g(X,Y)]
(i.e., φ(B) expressed w.r.t. the variables at A).
φ(B)A = x1 = 0*x2 + x1 /\ x1 >= 0.
Think about two sets of variables: before = {x, y, z, …}, after = {x’, y’, z’, …}.
Rewrite φ(B) using after, and the assignment as a relation between the two sets of variables. Then eliminate after.
Here: x1’=y1’*x2’+y2’ /\ y2’>=0 /\ x1=x1’ /\ x2=x2’ /\ y1’=0 /\ y2’=x1; now eliminate x1’, x2’, y1’, y2’.
[Figure: flow-chart edge from A to B labeled with the assignment (y1,y2):=(0,x1), an instance of Y:=g(X,Y).]
Verification conditions: tests
For a test (decision) node t(X,Y) in B with a True edge to C and a False edge to D:
if the path takes the True branch, φ(B) = t(X,Y) /\ φ(C);
if the path takes the False branch, φ(B) = ¬t(X,Y) /\ φ(D).
[Figure: example decision y2>=x2; on the False edge, φ(B) = φ(D) /\ y2<x2.]
How to find values for coverage?
• Put true at the end of the path.
• Propagate the path condition backwards.
• On an assignment, relativize the expression.
• On the “yes” edge of a decision, add the decision as a conjunction.
• On the “no” edge, add the negation of the decision as a conjunction.
• Can be more specific when calculating conditions with multiple condition coverage.
[Figure: flow chart of the two IF statements: decision A>1 & B=0, assignment X=X/A, decision A=2 | X>1, assignment X=X+1.]
How to find values for coverage?
Propagating backwards along the path gives the path condition (A≠2 /\ X/A>1) /\ (A>1 & B=0).
Need to find a satisfying assignment: A=3, X=6, B=0.
Can also calculate the path condition forwards.
[Figure: the same flow chart, with the intermediate conditions A≠2 /\ X>1 and A≠2 /\ X/A>1 obtained while propagating backwards.]
How to cover a flow chart?





Cover all nodes, e.g., using search strategies:
DFS, BFS.
Cover all paths (usually impractical).
Cover each adjacent sequence of N nodes.
Probabilistic testing. Using random number
generator simulation. Based on typical use.
Chinese Postman: minimize edge traversals.
Find the minimal number of times to traverse each edge using linear programming or dataflow algorithms.
Duplicate edges and find an Euler path.
Test cases based on data-flow
analysis



Partition the program
into pieces of code with
a single entry/exit point.
For each piece find which
variables are
set/used/tested.
Various covering criteria:
 from each set to each
use/test
 From each set to
some use/test.
[Figure: flow-chart fragment with an assignment X:=3, decisions t>y and x>y, and an assignment z:=z+x.]
Test case design for black box
testing



Equivalence partition
Boundary value analysis
Cause-effect graphs
Equivalence partition


Goals:
 Find a small number of test cases.
 Cover as much possibilities as you can.
Try to group together inputs for which the program is
likely to behave the same.
Table columns: Specification condition | Valid equivalence class | Invalid equivalence class.
Example: A legal variable



Begins with A-Z
Contains [A-Z0-9]
Has 1-6 characters.
Specification condition | Valid equivalence class | Invalid equivalence class
Starting char | Starts A-Z (1) | Starts other (2)
Characters | [A-Z0-9] (3) | Has others (4)
Length | 1-6 chars (5) | 0 chars (6), >6 chars (7)
Equivalence partition (cont.)


Add a new test case until all valid equivalence classes
have been covered. A test case can cover multiple
such classes.
Add a new test case until all invalid equivalence classes have been covered. Each test case can cover only one such class.
Example (classes refer to the table above):
AB36P covers classes (1,3,5).
1XY12 covers (2).
A17#%X covers (4).
The empty string covers (6).
VERYLONG covers (7).
Boundary value analysis
In every equivalence class, select values that are close to the boundary.
If the input is within range -1.0 to +1.0, select the values -1.001, -1.0, -0.999, 0.999, 1.0, 1.001.
If it needs to read N data elements, check with N-1, N, N+1. Also check with N=0.
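A sketch (mine) combining the two ideas for the “legal variable” example: a checker for the specification and test inputs drawn from the equivalence classes plus boundary lengths; the regular expression is an assumption about how the three conditions would be coded.

import re

def legal_variable(name):
    # Begins with A-Z, contains only [A-Z0-9], has 1-6 characters.
    return re.fullmatch(r"[A-Z][A-Z0-9]{0,5}", name) is not None

tests = {
    "AB36P":    True,    # valid classes: starting char, characters, length (1,3,5)
    "1XY12":    False,   # starts with a digit (2)
    "A17#%X":   False,   # contains other characters (4)
    "":         False,   # 0 characters (6)
    "VERYLONG": False,   # more than 6 characters (7)
    "A":        True,    # boundary: minimal length 1
    "AB345Z":   True,    # boundary: maximal length 6
    "AB3456Z":  False,   # boundary: length 7
}
for name, expected in tests.items():
    assert legal_variable(name) == expected, name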
Test case generation based on
LTL specification
LTLAut
Flow
Compiler chart
Model
Checker
Transitions
Path
Path condition
calculation
First order
instantiator
Test
monitoring
Goals




Verification of software.
Compositional verification. Only a unit of code.
Parametrized verification.
Generating test cases.
A path is found, together with a truth assignment satisfying the path condition. In deterministic code, this assignment is guaranteed to drive the execution along the path.
In nondeterministic code, this is one of the possibilities. Can transform the code to force replaying the path.
Divide and Conquer




Intersect the property automaton with the flow chart, regardless of the statements and program variable expressions.
Add assertions from the property automaton
to further restrict the path condition.
Calculate path conditions for sequences found
in the intersection.
Calculate path conditions on-the-fly.
Backtrack when condition is false.
Thus, advantage to forward calculation of
path conditions (incrementally).
Spec: ¬at l2 U (at l2 /\ O(¬at l2 /\ (¬at l2 U at l2)))
[Figure: the property automaton intersected (X … =) with the flow chart containing l1, l2: x:=x+z, l3: x<t; the automaton states are labeled ¬at l2 and at l2.]
Spec: ¬at l2 U (at l2 /\ x≥y /\ O(¬at l2 /\ (¬at l2 U (at l2 /\ x≥2y))))
[Figure: the corresponding intersection, with the assertions x≥y and x≥2y attached to the first and second visits to l2.]
Example: GCD
Flow chart: l0 → l1: x:=a → l2: y:=b → l3: z:=x rem y → l4: x:=y → l5: y:=z → l6: z=0? (no: back to l3; yes: l7).
Example: GCD
Oops… with an error (l4 and l5 were switched):
l0 → l1: x:=a → l2: y:=b → l3: z:=x rem y → l4: y:=z → l5: x:=y → l6: z=0? (no: back to l3; yes: l7).
Why use Temporal specification



Temporal specification for sequential
software?
Deadlock? Liveness? – No!
Captures the tester’s intuition about the
location of an error:
“I think a problem may occur when the
program runs through the main while loop
twice, then the if condition holds, while t>17.”
Example: GCD
Spec: a>0 /\ b>0 /\ at l0 /\ <> at l7.
[Figure: property automaton with a state labeled at l0 /\ a>0 /\ b>0 followed by a state labeled at l7, intersected with the GCD flow chart.]
Example: GCD
Spec: a>0 /\ b>0 /\ at l0 /\ <> at l7.
Path 1: l0 l1 l2 l3 l4 l5 l6 l7, with path condition a>0 /\ b>0 /\ a rem b = 0.
Path 2: l0 l1 l2 l3 l4 l5 l6 l3 l4 l5 l6 l7, with path condition a>0 /\ b>0 /\ a rem b ≠ 0.
[Figure: the GCD flow chart.]
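A small sketch (not the tool from the slides) of calculating a path condition forwards with symbolic values, using the sympy library; the path and variable names follow the GCD example above.

from sympy import symbols, Eq, Gt, And, Mod

a, b = symbols("a b", integer=True)

# Forward symbolic execution of path 1: l0 l1 l2 l3 l4 l5 l6(yes) l7.
env = {}
env["x"] = a                          # l1: x := a
env["y"] = b                          # l2: y := b
env["z"] = Mod(env["x"], env["y"])    # l3: z := x rem y
env["x"] = env["y"]                   # l4: x := y
env["y"] = env["z"]                   # l5: y := z
path_condition = And(Gt(a, 0), Gt(b, 0), Eq(env["z"], 0))   # l6: z=0? taken on yes
print(path_condition)   # roughly: (a > 0) & (b > 0) & Eq(Mod(a, b), 0)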
Potential explosion
Bad point: potential explosion
Good point: may be chopped on-the-fly
Drivers and Stubs
Driver: represents the program or procedure that called our checked unit.
Stub: represents a procedure called by our checked unit.
In our approach: replace both of them with a formula representing the effect the missing code has on the program variables.
Integrate the driver and stub specification into the calculation of the path condition.
[Figure: the GCD flow chart with l3 replaced by the stub specification z’=x rem y /\ x’=x /\ y’=y.]
Conclusions



Black box testing: Know transition relation,
or bound on number of states, want to find initial
state, structure, conformance, temporal property.
Software testing:
Unit testing, code inspection, coverage, test case
generation.
Model checking and testing have a lot in
common.