An Architecture for Testing Synchronous Multiuser Software


CAMELOT: A Testing Methodology
for Computer Supported
Cooperative Work
36th Hawaii International Conference on System Sciences
Experimental Software Engineering Track
January 7, 2003
Kona, Big Island, Hawaii
Robert F. Dugan Jr.
Department of Computer Science
Stonehill College
Easton, MA 02357 USA
[email protected]
Ephraim P. Glinert, Edwin H. Rogers
Department of Computer Science
Rensselaer Polytechnic Institute
Troy, NY 12180 USA
{glinert,rogerseh}@cs.rpi.edu
What Is CSCW?
“Computer-based systems that support groups of people
engaged in a common task and that provide an interface
to a shared environment” [Ellis91]
CSCW Characterization
• Floor Control
• Coupling
• Awareness
• Session Management
• Synchronization
• IPC
• Iterative Design
• Undo/Redo
• Real-time
• Performance
• Reliability
(Diagram: these concerns sit at the intersection of Human-Human Interaction, Human-Computer Interaction, Distributed Systems, and Network Communication.)
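Floor control, for example, governs which participant may act on the shared environment at any moment. As a concrete illustration (not taken from the paper or from RCN; the class and method names are invented), a minimal floor-control manager in Java might look like this:

```java
// Hypothetical sketch of floor control: only the floor holder may edit,
// and waiting requests are served in arrival order.
import java.util.ArrayDeque;
import java.util.Queue;

public class FloorControl {
    private String holder;                        // participant currently holding the floor
    private final Queue<String> waiting = new ArrayDeque<>();

    /** Request the floor; returns true if granted immediately. */
    public synchronized boolean request(String participant) {
        if (holder == null) {
            holder = participant;
            return true;
        }
        waiting.add(participant);
        return false;
    }

    /** Release the floor and pass it to the next waiting participant, if any. */
    public synchronized String release(String participant) {
        if (!participant.equals(holder)) {
            throw new IllegalStateException(participant + " does not hold the floor");
        }
        holder = waiting.poll();                  // may be null if nobody is waiting
        return holder;
    }

    public synchronized boolean mayEdit(String participant) {
        return participant.equals(holder);
    }
}
```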
Motivation
(Diagram: human-human interface and human-computer interface.)
Overview
Motivation
Survey of Testing
Methodology
Evaluation
Limitations/Future Work
Goals of Testing
Correct behavior
Utility
Reliability
Robustness
Performance
Research Testing
Early stages of life cycle (Requirements,
Functional Specification, Design)
Cost rises deeper into life cycle
Problems scaling to large development efforts
Problem space complex
Requirements in flux
Verification cost exceeds implementation cost
Example:
Commercial Testing
Later stages of life cycle (Implementation, Integration,
System Test)
Less expensive to create individual tests
Use standard communication APIs (GUI events, HTTP,
RMI, etc.)
Capture/Replay communication to drive application
execution
Problems:
Fragile: if application communication changes, so must test case
Late life cycle means problems uncovered more costly
Rudimentary guidelines for use
Example: Mercury Interactive WinRunner
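To make the capture/replay idea and its fragility concrete, here is a hypothetical Java sketch using java.awt.Robot in place of a commercial tool such as WinRunner; the coordinates and keystrokes are invented "recorded" events, not an actual RCN test:

```java
// Hypothetical capture/replay driver: recorded GUI events are replayed
// verbatim, which is why such tests break as soon as the application's
// layout or protocol changes.
import java.awt.AWTException;
import java.awt.Robot;
import java.awt.event.InputEvent;
import java.awt.event.KeyEvent;

public class ReplayLoginTest {
    public static void main(String[] args) throws AWTException {
        Robot robot = new Robot();
        robot.setAutoDelay(200);                     // pause between replayed events

        // Recorded script: click the user-name field at fixed screen coordinates.
        robot.mouseMove(412, 305);
        robot.mousePress(InputEvent.BUTTON1_DOWN_MASK);
        robot.mouseRelease(InputEvent.BUTTON1_DOWN_MASK);

        // Type a recorded keystroke; every key is hard-coded into the script.
        robot.keyPress(KeyEvent.VK_A);
        robot.keyRelease(KeyEvent.VK_A);

        // If the dialog moves, is renamed, or gains a field, the replay silently
        // drives the wrong widget -- the fragility noted above.
    }
}
```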
CAMELOT
CSCW Application MEthodoLOgy for Testing
(Taxonomy:)
            Single User                    Multi-User
Computer    General Computing              Distributed Computing
Human       Human-Computer Interaction     Human-Human Interaction
General Computing
Camelot Code   Cycle            Description
GC.IM.1        Implementation   Functional Test
GC.ST.12       System Test      Procedure Test
GC.ST.13       System Test      Acceptance Test

• Implementation [Meyers79]
• Integration [Scach90]
• System test [Meyers79]
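As an illustration of a GC.IM.1-style functional test, the sketch below (JUnit 4) exercises the hypothetical FloorControl class shown earlier; it is not part of CAMELOT's deliverables, and all names are illustrative:

```java
// Hypothetical functional test at the implementation level (GC.IM.1).
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class FloorControlFunctionalTest {

    @Test
    public void floorIsGrantedToFirstRequesterOnly() {
        FloorControl floor = new FloorControl();
        assertTrue(floor.request("alice"));      // first request is granted
        assertFalse(floor.request("bob"));       // second requester must wait
        assertTrue(floor.mayEdit("alice"));
        assertFalse(floor.mayEdit("bob"));
    }

    @Test
    public void floorPassesToNextWaiterOnRelease() {
        FloorControl floor = new FloorControl();
        floor.request("alice");
        floor.request("bob");
        assertEquals("bob", floor.release("alice"));
        assertTrue(floor.mayEdit("bob"));
    }
}
```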
Human Computer Interaction
Camelot Code   Description
Usability Criteria
HCI.UC.1       Time to learn system: How long does it take for a typical user to learn to use the system?
HCI.UITG.9     Error Messages
HCI.UITG.10    Color

• Intersection with General Computing [Yip91]
• Usability criteria [Schneiderman97]
• Golden rules [Schneiderman97]
• User interface technology [Schneiderman97]
Distributed Computing
Camelot Code   Description
Race Condition
DC.RC.1        Centralized Architecture
DC.RC.2        Decentralized Architecture
DC.S.8         Loosely Coupled
DC.S.9         Synchronization

• Race conditions
• Scalability
• Deadlock
• GC & HCI intersections
• Temporal consistency
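The kind of defect the DC.RC guidelines target can be probed with a concurrency stress test. The sketch below is invented for illustration (it is not RCN code): many clients join a session at once, and the probe checks that no join is lost, the class of bug seen in A.12 and A.19 of the evaluation.

```java
// Hypothetical DC.RC-style race-condition probe: concurrent session joins
// against a deliberately unsynchronized member list.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SessionJoinRaceProbe {

    /** Deliberately unsynchronized stand-in for a session's member list. */
    static class Session {
        private final List<String> members = new ArrayList<>();
        void join(String user) { members.add(user); }          // not thread-safe
        int size() { return members.size(); }
    }

    public static void main(String[] args) throws InterruptedException {
        final int clients = 100;
        Session session = new Session();
        CountDownLatch start = new CountDownLatch(1);
        ExecutorService pool = Executors.newFixedThreadPool(16);

        for (int i = 0; i < clients; i++) {
            final String user = "user-" + i;
            pool.execute(() -> {
                try {
                    start.await();               // line all clients up, then join at once
                    session.join(user);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        start.countDown();
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);

        // An unsynchronized implementation will intermittently report fewer members.
        System.out.printf("expected %d members, observed %d%n", clients, session.size());
    }
}
```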
Human-Human Interaction
Camelot Code   Description
Communication
HHI.CM.1       Network bandwidth sufficient to support user communication (see the latency sketch below).
DC/HHI.5       Distributed computing scalability tests. Derived from (DC.S ^ HHI.CP) -> DC/HHI.5
DC/HHI.6       Distributed computing temporal consistency tests. Derived from (DC.TC ^ HHI.CP) -> DC/HHI.6

• Communication
• Coordination
• Coupling
• Security
• Awareness
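As a rough illustration of HHI.CM.1, the hypothetical sketch below times a small chat payload over a loopback socket and compares it against an assumed interactivity budget; the budget and names are invented, and a real test would measure the actual network path between collaborating clients.

```java
// Hypothetical HHI.CM.1-style check: round-trip latency of one chat-sized
// message against an assumed budget for interactive conversation.
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class ChatLatencyProbe {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            // Trivial echo peer standing in for a remote collaborator.
            Thread echo = new Thread(() -> {
                try (Socket s = server.accept()) {
                    byte[] buf = new byte[256];
                    DataInputStream in = new DataInputStream(s.getInputStream());
                    in.readFully(buf);
                    s.getOutputStream().write(buf);
                } catch (Exception ignored) { }
            });
            echo.start();

            try (Socket client = new Socket("localhost", server.getLocalPort())) {
                byte[] message = new byte[256];              // one chat line
                long start = System.nanoTime();
                new DataOutputStream(client.getOutputStream()).write(message);
                new DataInputStream(client.getInputStream()).readFully(new byte[256]);
                double millis = (System.nanoTime() - start) / 1_000_000.0;

                final double budgetMillis = 200.0;           // assumed interactivity budget
                System.out.printf("round trip %.2f ms (budget %.0f ms)%n", millis, budgetMillis);
            }
        }
    }
}
```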
Evaluation: RCN
(Architecture diagram: an ISServer, an RCNPublicServer, and multiple rcnClient instances.)
Evaluation: RCN
Rensselaer Collaborative Network
Characteristics of CSCW Software
Face-to-face, Synchronous, Meeting Support
Group Management, Chat, Shared Windowing
Floor Control, Asynchronous Multiuser Editing
Mature Application
Unit, System, User Acceptance, Daily Use Tested
Development considered it bug free
Development offered to deliberately introduce bugs!
Amenable to Rebecca-J
Java
Source code available
Evaluation
(Bugs categorized by taxonomy quadrant: Single User (General Computing, Human-Computer Interaction) and Multi-User (Distributed Computing, Human-Human Interaction).)
Bug   Description                                                              CAMELOT Code
A.1   Error message displayed when starting up RCNPublicServer                 GC.ST.9, HCI.GR.3
A.2   Configuration of PATH shell variable necessary for NativeLibrary.dll     GC.ST.9, HCI.GR.3
A.3   ISServer does not always flush terminated RCNPublicServer                GC.ST.9, DC.TC.1
A.4   Documentation errors                                                     GC.ST.11
A.7   No version number displayed in RCNPublicServer, rcnClient, ISServer      GC.ST.9, GC/DC.4
A.14  Sticky mouse buttons                                                     GC.IM.1, DC.RC.2
A.15  Multiple client control of public machine                                GC.IM.1
A.16  Incorrectly translated keys                                              GC.IM.1
A.17  Sticky SHIFT, ALT, and CTRL keys                                         GC.IM.1, DC.RC.2
A.24  Can't play Indiana Jones from rcnClient                                  GC.IM.1
A.5   Inconsistent use of Quit, Exit, Leave, Cancel                            HCI.GR.1
A.6   "Pick a IS" is grammatically incorrect                                   HCI.GR.1
A.8   Preference Dialog displays invalid colors                                HCI.GR.7
A.9   Preference Dialog allows too many colors                                 HCI.UITG.10
A.13  Ghost cursor hidden by new applications                                  HCI.UITG.7
A.20  Inconsistent use of OK, Okay                                             HCI.GR.1
A.11  No lock mechanism for simultaneous edits of team information             DC.RC.4
A.12  Race condition joining a session                                         DC.RC.2, DC.RC.3
A.18  Race condition in rcnClient's user interface                             HCI/DC.1
A.19  Race conditions joining sessions, users, teams, publics                  GC/HCI/DC.1
A.21  Flickering ghost cursor                                                  DC.S.2, HCI.UITG.7
A.23  Memory leaks in public and client when ghosting                          DC.S.2, GC.ST.7
A.10  Preference Dialog allows same color for two users in same session        HHI.A, HCI.UITG.10
A.22  Confusing display of session clients                                     HCI.UITG.1, HHI.A
Evaluation
“i don't think any tester would have ever discovered that.
simply for discovering that, i consider rebecca a success.”
- J.J. Johns, lead developer for RCN
Limitations/Future Work
Single system evaluated
No formal testing methodology used by RCN
team
Subset of CSCW technology
Large number of guidelines
Lack of ready-to-run test cases
Testing domains for specific technologies
Example: Chat Domain
Not a complete evaluation
CAMELOT: Discussion
• Comparison to existing methodologies
  • SSM [Checkland89]
  • PETRA [Ross et al. 95]
  • SESL [Ramage99]
  • ECW Methodology [Drury et al. 99]
• Part of a complete evaluation
• Correct ordering of the evaluation is important.
(Diagram axis: Technical to Social.)
Conclusion
We defined CAMELOT, a methodology for testing CSCW applications.
• Our methodology improves on prior art by providing a detailed focus on CSCW technology.
• We have created a CSCW software taxonomy with single-user components (general computing and human-computer interaction) and multi-user components (distributed computing and human-human interaction).
• For each component we have identified explicit validation techniques that can be used in both manual and automated testing.
• Further, our techniques exploit intersections between the components to improve bug detection.