
Testing Distributed Software on .NET Platform
Victor Kuliamin
ISP RAS, Moscow
[email protected]
Outline
• Introduction
• Some Theory
• UniTesK Method
• UniTesK Tools and Practice
Distributed Software: Blessing and Curse
• Distributed software is a necessity
  – it helps more people to get more services
• Distributed software is very complex
  – the human mind operates mostly sequentially
  – errors become much more subtle
• .NET makes its development much easier
  – making the future more splendid
  – making the future more dangerous
How to Mitigate Risks?
• Development technologies should be augmented with software quality assurance
• There are a lot of ways
  – correctness by construction
  – formal proofs
  – model checking
  – search for fault patterns and vulnerabilities
  – testing
• Testing is necessary anyway
Outline
• Introduction
• Some Theory
• UniTesK Method
• UniTesK Tools and Practice
Testing Fundamentals
How to test? (a minimal code sketch follows the list)
• We act upon the system under test
• We watch its reaction
• We check whether that reaction is what it should be
• We repeat this until all the reasonable situations are exhausted
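This loop can be written down directly. Below is a minimal C# sketch (not from the slides); BasicTestLoop, the delegates, and their names are illustrative placeholders for a system under test and an oracle.

using System;
using System.Collections.Generic;

static class BasicTestLoop
{
    // Runs the act / observe / check / repeat loop over a set of test situations.
    public static bool Run<TInput, TOutput>(
        IEnumerable<TInput> situations,              // the reasonable situations to cover
        Func<TInput, TOutput> systemUnderTest,       // act upon the system and observe its reaction
        Func<TInput, TOutput, bool> isExpected)      // check whether the reaction is what it should be
    {
        bool allPassed = true;
        foreach (var input in situations)            // repeat until the situations are exhausted
        {
            var reaction = systemUnderTest(input);
            if (!isExpected(input, reaction))
            {
                Console.WriteLine($"Unexpected reaction for input {input}: {reaction}");
                allPassed = false;
            }
        }
        return allPassed;
    }
}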
Testing of Distributed System
• Organize a distributed action
  – harder than for a non-distributed system, but possible
• Watch the distributed reaction
  – harder than for a non-distributed system, but possible
• Check whether it is expected
  – how to learn what is expected?
• Ensure reasonable exhaustiveness
  – what does this mean at all?
The last two questions are the main sources of complexity.
How to Describe Expectations?
• The description should be
  – sufficiently expressive
  – as clear as possible
  – scalable to rather complex systems (preferably, component-wise)
  – suitable for distributed systems (involving several sides)
• How do people describe their expectations in complex cases?
  – By means of contracts!
Contract Specifications
• Pre- and postconditions (Hoare, 1969)
  – a means for reasoning about program behavior
• Design by Contract (Meyer, 1992), see the sketch below
  – pre- and postconditions are defined for the interface operations of a component
  – constraints on data integrity are stated in invariants
  – together they form a software contract between a component and its environment
• Insufficient for distributed systems: asynchronous behavior is not considered
• Event contracts ([7], 2005)
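To make the classic contract idea concrete before moving to event contracts, here is a minimal sketch in plain C# rather than any specification extension; the Account class, its Withdraw operation, and the use of Debug.Assert are illustrative assumptions, not taken from the slides.

using System.Diagnostics;

public class Account
{
    public int Balance { get; private set; }

    // Invariant: the balance never becomes negative.
    private void CheckInvariant() => Debug.Assert(Balance >= 0, "invariant violated");

    public void Withdraw(int amount)
    {
        // Precondition: obligation of the environment (the caller).
        Debug.Assert(amount > 0 && amount <= Balance, "precondition violated");
        int pre = Balance;

        Balance -= amount;

        // Postcondition and invariant: obligations of the component.
        Debug.Assert(Balance == pre - amount, "postcondition violated");
        CheckInvariant();
    }
}

A violated assertion tells which side broke the contract: the precondition blames the caller, the postcondition and invariant blame the component.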
Event Contracts
[Diagram: pre-states, input and output events, and post-states of the system and its environment]
• The precondition of an input event is an obligation of the environment: it says in what pre-states such an event is possible.
• The postcondition of an output event is an obligation of the system: it says what post-states can follow such an event in such a pre-state.
Relation with Classic Contracts
• In a classic contract, two events, the call of an operation and the return of its result, are considered together.
• If no actions are possible in the intermediate states, we can exclude those states from consideration.
• But in distributed systems such intermediate actions are possible.
Example
public specification class Barrier
{
    int awaitedThreads = 0;
    int waitingThreads = 0;

    invariant CountersAreNonnegative()
    { return awaitedThreads >= 0 && waitingThreads >= 0; }

    public specification void Init(int n) {
        post {
            if(n < 0 || waitingThreads > 0) {
                branch NoChanges;
                return awaitedThreads == pre awaitedThreads
                    && waitingThreads == pre waitingThreads;
            } else {
                branch NewHeightSet;
                return awaitedThreads == n && waitingThreads == 0;
            }
        }
    }

    public specification void Wait() {
        post {
            if(awaitedThreads <= 1) {
                branch Immediate;
                return awaitedThreads == 0 && waitingThreads == pre waitingThreads;
            } else {
                deferred branch Waiting;
                return awaitedThreads == pre awaitedThreads - 1
                    && waitingThreads == pre waitingThreads + 1;
            }
        }
        deferred return {
            pre  { return awaitedThreads == 0 && waitingThreads > 0; }
            post { return waitingThreads == pre waitingThreads - 1; }
        }
    }
}
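For comparison, a plain C# class that the contract above could describe might look as follows. This is only a sketch under assumptions: the name BarrierImpl and the monitor-based design are not taken from the slides; only the Init/Wait operations and the two counters mirror the specification.

using System.Threading;

public class BarrierImpl
{
    private readonly object sync = new object();
    private int awaitedThreads = 0;   // threads still expected to call Wait()
    private int waitingThreads = 0;   // threads currently blocked inside Wait()

    // Sets a new barrier height; has no effect for negative n or while threads
    // are still waiting, which corresponds to the NoChanges branch above.
    public void Init(int n)
    {
        lock (sync)
        {
            if (n < 0 || waitingThreads > 0) return;
            awaitedThreads = n;
            waitingThreads = 0;
        }
    }

    // Blocks the caller until the awaited number of threads have arrived.
    public void Wait()
    {
        lock (sync)
        {
            if (awaitedThreads <= 1)
            {
                // Immediate branch: the last awaited thread opens the barrier.
                awaitedThreads = 0;
                Monitor.PulseAll(sync);
                return;
            }
            // Waiting branch: register and block; the later wake-up and exit
            // corresponds to the deferred return in the specification.
            awaitedThreads--;
            waitingThreads++;
            while (awaitedThreads > 0) Monitor.Wait(sync);
            waitingThreads--;
        }
    }
}

The Immediate and Waiting branches of the contract correspond to the two paths through Wait(), and the deferred return corresponds to a blocked thread waking up and leaving the method later.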
Testing Adequacy
• We can describe what we expect.
• How can we define 'reasonable exhaustiveness'?
• Possibilities
  – fault-based approaches: the ratio of faults detected by the tests to all reported faults; the percentage of mutants detected
  – source code coverage-based approaches
  – requirements coverage-based approaches
• UniTesK method: requirements-based
  – try to cover the structure of postconditions
Outline
• Introduction
• Some Theory
• UniTesK Method
• UniTesK Tools and Practice
Test Coverage Goals
post {
    if( f(a, b) || g(a) )
        …
    else if( h(a, c) && !g(b) )
        …
    else    // implicit condition, giving two more coverage goals:
            //    !f(a, b) && !g(a) && !h(a, c)
            // || !f(a, b) && !g(a) && g(b)
        …
}
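A hypothetical sketch (not generated UniTesK code) of how such branch-derived coverage goals can be enumerated and tracked; f, g, and h are placeholder predicates standing in for those on the slide, and the last two goals come from the disjuncts of the implicit else condition.

using System.Collections.Generic;

static class CoverageGoals
{
    // Placeholder predicates standing in for those on the slide.
    static bool f(int a, int b) => a < b;
    static bool g(int a)        => a == 0;
    static bool h(int a, int c) => a > c;

    static readonly HashSet<string> hit = new HashSet<string>();

    // Classifies a call into one of the coverage goals derived from the
    // branch structure of the postcondition and records that it was hit.
    public static string Classify(int a, int b, int c)
    {
        string goal;
        if (f(a, b) || g(a))        goal = "ThenBranch";
        else if (h(a, c) && !g(b))  goal = "ElseIfBranch";
        else if (!h(a, c))          goal = "Else_NotH";   // !f(a,b) && !g(a) && !h(a,c)
        else                        goal = "Else_GofB";   // !f(a,b) && !g(a) && g(b)
        hit.Add(goal);
        return goal;
    }

    public static int GoalsCovered => hit.Count;
}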
Construction of Abstract States
[Diagram: the operation domain, plotted over the 'parameters' and 'states' axes, is partitioned into coverage goals 1, 2, 3; abstract states are constructed from this partition (one possible abstraction is sketched below)]
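One way to read this construction: abstract states are a small, finite projection of the model state, chosen so that the coverage goals of the operations become distinguishable. A hypothetical sketch for the Barrier example follows; the particular abstraction (Open / Closed / SomeWaiting) is an illustrative assumption, not taken from the slides.

enum AbstractBarrierState { Open, Closed, SomeWaiting }

static class BarrierAbstraction
{
    // Maps the concrete model state (the two counters of the Barrier example)
    // to a small abstract state space used to steer test sequence generation.
    public static AbstractBarrierState Abstract(int awaitedThreads, int waitingThreads)
    {
        if (awaitedThreads == 0) return AbstractBarrierState.Open;
        return waitingThreads > 0 ? AbstractBarrierState.SomeWaiting
                                  : AbstractBarrierState.Closed;
    }
}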
Test Data Generation
• Computation of single call arguments
• Test data generation is based on simple generators and coverage filtering (sketched below)
[Diagram: in the current concrete and abstract state, call arguments are picked from the 'parameters' x 'states' plane so as to hit coverage goals 1, 2, 3]
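A hypothetical sketch of "simple generators and coverage filtering" for the Init operation of the Barrier example: a simple generator enumerates candidate arguments, and the filter keeps only calls that hit a coverage goal not covered yet. The class and method names are illustrative, not tool-generated code.

using System.Collections.Generic;

class InitArgumentGenerator
{
    // Simple generator: a small fixed range of candidate arguments for Init(n).
    static IEnumerable<int> SimpleGenerator()
    {
        for (int n = -1; n <= 3; n++) yield return n;
    }

    // Coverage goal of Init(n) in the current model state, taken from the
    // branch structure of its postcondition in the Barrier example.
    static string GoalOf(int n, int waitingThreads)
        => (n < 0 || waitingThreads > 0) ? "NoChanges" : "NewHeightSet";

    readonly HashSet<string> covered = new HashSet<string>();

    // Coverage filtering: keep only arguments that hit a goal not covered yet.
    public IEnumerable<int> Filtered(int waitingThreads)
    {
        foreach (int n in SimpleGenerator())
        {
            string goal = GoalOf(n, waitingThreads);
            if (covered.Add(goal)) yield return n;
        }
    }
}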
The Whole Picture
[Architecture diagram: the Behavior Model, Coverage Model, and Testing Model drive single input checking and on-the-fly test sequence generation against the System under Test]
Testing Distributed Software
[Diagram: stimuli s11, s12, s21, s31 and reactions r11, r12, r21, r22 exchanged with the target system along a time axis]
• A multisequence of stimuli is used
• Stimuli and reactions form a partially ordered set (modeled in the sketch below)
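The partial order can be modeled explicitly. Below is a hypothetical C# sketch under the simplifying assumption that only events observed on the same interface are ordered; events on different interfaces remain incomparable, which is what makes the observed behavior a partially ordered set rather than a sequence.

enum EventKind { Stimulus, Reaction }

// One observed event, e.g. a stimulus s11 or a reaction r21 on some interface.
class ObservedEvent
{
    public EventKind Kind;
    public int Interface;   // which interface (channel) the event belongs to
    public int SeqNo;       // position within that interface's own sequence
}

static class EventOrder
{
    // Events are totally ordered only within one interface; events on different
    // interfaces are left unordered, so the whole behavior is a partial order.
    public static bool HappensBefore(ObservedEvent a, ObservedEvent b)
        => a.Interface == b.Interface && a.SeqNo < b.SeqNo;
}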
Checking The Behavior
• Plain concurrency axiom: the behavior of the system is equivalent to some sequence of the actions (a brute-force check is sketched below)
[Diagram: possible interleavings of the observed actions, checked against the sequential contract; one is marked invalid (✕)]
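Under the plain concurrency axiom, the observed behavior is accepted if at least one interleaving of the recorded events, consistent with the per-channel orders, is a valid sequence of actions of the sequential model. The following brute-force C# sketch illustrates that check; the oracle delegate validSequence is a placeholder for the contract-based verdict, and no pruning is attempted.

using System;
using System.Collections.Generic;
using System.Linq;

static class PlainConcurrencyChecker
{
    // channels: the per-channel event sequences; validSequence: the sequential oracle.
    public static bool SomeInterleavingIsValid(
        IReadOnlyList<IReadOnlyList<string>> channels,
        Func<IReadOnlyList<string>, bool> validSequence)
    {
        return Search(channels, new int[channels.Count], new List<string>(), validSequence);
    }

    // Enumerates every interleaving that respects the order within each channel
    // (exponential in general; fine for a sketch) and checks each one.
    static bool Search(IReadOnlyList<IReadOnlyList<string>> channels, int[] pos,
                       List<string> prefix, Func<IReadOnlyList<string>, bool> validSequence)
    {
        if (Enumerable.Range(0, channels.Count).All(i => pos[i] == channels[i].Count))
            return validSequence(prefix);

        for (int i = 0; i < channels.Count; i++)
        {
            if (pos[i] == channels[i].Count) continue;
            prefix.Add(channels[i][pos[i]]);
            pos[i]++;
            if (Search(channels, pos, prefix, validSequence)) return true;
            pos[i]--;
            prefix.RemoveAt(prefix.Count - 1);
        }
        return false;
    }
}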
Outline
• Introduction
• Some Theory
• UniTesK Method
• UniTesK Tools and Practice
UniTesK Tools
• C / Visual Studio 6.0, gcc (2002)
• Java / NetBeans (2002)
• C++ / NetBeans + MS Visual Studio, specifications in a Java extension (2003)
• Specialized tool for compiler testing and complex data generation (2003)
• C# / Visual Studio .NET 7.1 (2003)
• Java / Eclipse (~2005)
Tool Demonstration
Case Studies
• ISP RAS – Nortel Networks: functional test suite development for the Switch Operating System kernel (1994-1997)
• IPv6 implementations (2001-2003)
  – Microsoft Research: Mobile IPv6 (in Windows CE 4.1)
  – Oktet
• Intel compiler optimization units (2004)
• AVS-IPMP Standard, IPSec (2004-…)
• Pilot projects
  – Enterprise application development framework (2003)
  – Components of TinyOS (2003)
  – Web-based banking client management system (Luxoft) (2004)
  – Components of billing system (Vympelkom) (2005)
http://www.unitesk.com
References
1. V. Kuliamin, A. Petrenko, I. Bourdonov, and A. Kossatchev. UniTesK Test Suite Architecture. Proc. of FME 2002. LNCS 2391, pp. 77-88, Springer-Verlag, 2002.
2. V. Kuliamin, A. Petrenko, N. Pakoulin, I. Bourdonov, and A. Kossatchev. Integration of Functional and Timed Testing of Real-time and Concurrent Systems. Proc. of PSI 2003. LNCS 2890, pp. 450-461, Springer-Verlag, 2003.
3. V. Kuliamin, A. Petrenko. Applying Model Based Testing in Different Contexts. Proceedings of the Seminar on Perspectives of Model Based Testing, Dagstuhl, Germany, September 2004.
4. A. Kossatchev, A. Petrenko, S. Zelenov, S. Zelenova. Using Model-Based Approach for Automated Testing of Optimizing Compilers. Proc. Intl. Workshop on Program Understanding, Gorno-Altaisk, 2003.
5. V. Kuliamin, A. Petrenko, A. Kossatchev, and I. Burdonov. The UniTesK Approach to Designing Test Suites. Programming and Computer Software, Vol. 29, No. 6, 2003, pp. 310-322. (Translation from Russian)
6. S. Zelenov, S. Zelenova, A. Kossatchev, A. Petrenko. Test Generation for Compilers and Other Formal Text Processors. Programming and Computer Software, Vol. 29, No. 2, 2003, pp. 104-111. (Translation from Russian)
7. V. Kuliamin, N. Pakoulin, A. Petrenko. Extended Design-by-Contract Approach to Specification and Conformance Testing of Distributed Software. Proc. of the 9th World Multi-Conference on Systemics, Cybernetics, and Informatics, Model Based Testing Session, July 2005, to be published.
Contacts
• RedVerst group web page: http://www.ispras.ru/groups/rv/rv.html
• UniTesK projects web site: http://www.unitesk.com
• Group leader: Alexander Petrenko, [email protected]

Thank You!