
An alternative approach to software testing to enable SimShip for the localisation market, using Shadow™
Dr. K Arthur, D Hannan, M Ward
Brandt Technologies
Abstract

Our approach to automated software testing is novel:
• Test several language instances of an application simultaneously.
• Drive tests by direct engineer interaction or through a record/playback script.
• Examine the effect of separating the functions of a test engineer into a product specialist and a QA specialist.
Shadow

• Developed by Brandt Technologies.
• Different from other remote control applications.
• Allows control of many machines simultaneously.
• Control modes: Mimic mode and Exact match mode.
Shadow setup
SimShip

• SimShip is the process of shipping the localised product to customers at the same time as the original language product.
• Delta – the time difference between shipping the original language product and the localised products.
Why SimShip?

• To increase revenue.
• To avoid loss of revenue.
• To maintain/increase market share.
SimShip issues

• Unrecoverable costs:
– Localised software not taking off in markets.
– Localised software being substandard.
– Localised software having been created inefficiently.
– A changing build, meaning additional content or a different user interface.
• We can address some of these issues with Shadow.
What is quality?
• Crosby defines quality as “conformance to requirements”.
• “Fitness for purpose”.
• ISO 9126 defines characteristics of software quality that can be used in evaluating it:
– Functionality, Usability, Maintainability, Portability.
• For our purposes, software quality is defined as software that conforms to customer-driven requirements and design specifications.
Software testing
• Software “testing involves operation of a system or application under controlled conditions and evaluating the results.”
• Testing is performed:
– to find defects.
– to ensure that the code matches the specification.
– to estimate the reliability of the code.
• To improve or maintain consistency in quality.
Software testing
• Testing costs – schedule and budget.
• Not performing testing also costs.
• In 2002, a US government agency study estimated that software bugs cost $59.5 billion annually.

Software testing

• Testing should be an integral part of the software development process.
• Fundamental to “eXtreme Programming”: “Never write a line of code without a failing test.” (See the sketch below.)
• The quality of the test process and effort will have an impact on the outcome.
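
To make that rule concrete, here is a minimal test-first sketch in plain Python (unittest). The function format_price and the expected strings are invented for illustration; under the XP rule the test is written first, fails, and only then is the code filled in to make it pass.

import unittest

def format_price(amount, locale):
    """Minimal illustrative implementation, added only after the test below had failed."""
    if locale == "de-DE":
        # Swap thousands/decimal separators for the German convention.
        text = f"{amount:,.2f}".replace(",", "X").replace(".", ",").replace("X", ".")
        return f"{text} €"
    return f"€{amount:,.2f}"

class TestFormatPrice(unittest.TestCase):
    # Written before format_price existed; it failed until the code above was written.
    def test_german_locale_uses_comma_decimal_separator(self):
        self.assertEqual(format_price(1234.5, "de-DE"), "1.234,50 €")

if __name__ == "__main__":
    unittest.main()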
Software testing

• There are two types of software testing:
– Manual testing.
– Automated testing.
• Within these testing types there are many subdivisions.
• Either process runs a set of tests designed to assess the quality of the application.
Manual testing

• Running manual tests requires a team of engineers to write and execute a test script.
• Advantages:
– Infrequent cases might be cheaper to perform.
– Ad hoc testing can be very valuable.
– Engineers can perform variations on test cases.
– Tests product usability.
Manual testing

• Disadvantages:
– Time consuming.
– Tedious.
– Some issues are difficult to reproduce manually every time.
Automated testing

• Requires a test script to be written.
• Requires a team of specialised engineers to code the test script.
• Advantages:
– If tests have to be repeated, it is cheaper to reuse them (see the sketch below).
– Useful for a large testing matrix.
– Consistency in producing results.
– High productivity – 24 x 7.
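
To illustrate the reuse and test-matrix points above, here is a minimal locale-parameterised check sketched with pytest. The resource strings are invented sample data standing in for the localised builds under test.

import pytest

# Invented sample data standing in for per-locale resources; in a real project
# these strings would be read from the localised builds being tested.
RESOURCES = {
    "en-US": {"file.open": "Open"},
    "de-DE": {"file.open": "Öffnen"},
    "fr-FR": {"file.open": "Ouvrir"},
}

# One test definition is reused across the whole language matrix.
@pytest.mark.parametrize("locale", sorted(RESOURCES))
def test_open_menu_item_is_translated(locale):
    label = RESOURCES[locale]["file.open"]
    assert label, f"missing translation for file.open in {locale}"
    if locale != "en-US":
        assert label != RESOURCES["en-US"]["file.open"], (
            f"{locale} still shows the English string"
        )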
Automated testing

• Disadvantages:
– Can be expensive to code from scratch.
– Can require specialised skills to code.
– It takes longer to write, test, document and automate tests.
– Test automation is a software development activity – with all of the implications.
– Test cases are fixed.
– Automation cannot perform ad hoc testing.
– Difficulty in keeping in sync with a changing build.
Which to use?
• Neither mode of testing is a “silver bullet”.
• Successful software development and localisation should use both manual and automated testing methodologies.
• The balance between the two modes can be decided using budgetary and schedule constraints.

Shadow

• Shadow is a software-testing tool for performing automated and manual tests on original or localised software applications.
• Allows the user to simultaneously test localised applications running on different language operating systems, or original language products running in different configurations.
• Quick to set up and use.
Shadow

• In the following slide we see the Shadow setup.
• One PC is running 3 VMware machines.
• Each VMware machine is shown displaying the Start menu.
Shadow remote control
Shadow interface

• Shadow is shown in the following slide in control of 4 VMware machines.
• Each VMware machine is running the Catalyst application with a TTK open.
Shadow interface
Shadow architecture

• Shadow consists of 3 pieces of software (sketched below):
– Dispatcher server.
– Client Viewer.
– Client Target.
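
Shadow itself is proprietary, but the general shape of such a split can be sketched in a few lines of Python: a dispatcher fans each event from one viewer out to every registered target, which is the basic idea behind driving many machines at once. The class and method names below are invented for illustration and are not Shadow's API.

# Illustrative sketch only: invented names, not Shadow's actual architecture or API.
class Target:
    """Stands in for a Client Target running on one test machine."""
    def __init__(self, name):
        self.name = name

    def apply(self, event):
        # A real target would inject the event into its local GUI.
        print(f"[{self.name}] replaying {event}")

class Dispatcher:
    """Stands in for the Dispatcher server relaying events to all targets."""
    def __init__(self):
        self.targets = []

    def register(self, target):
        self.targets.append(target)

    def broadcast(self, event):
        for target in self.targets:
            target.apply(event)

class Viewer:
    """Stands in for the Client Viewer the engineer drives."""
    def __init__(self, dispatcher):
        self.dispatcher = dispatcher

    def send(self, event):
        self.dispatcher.broadcast(event)

if __name__ == "__main__":
    dispatcher = Dispatcher()
    for locale in ("de-DE", "fr-FR", "ja-JP"):
        dispatcher.register(Target(f"WinXP-{locale}"))
    viewer = Viewer(dispatcher)
    viewer.send({"type": "click", "x": 120, "y": 45})   # one action, three machines
    viewer.send({"type": "keys", "text": "Hello"})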
Shadow architecture
Demonstration

• This demonstration shows Shadow connecting to three Windows XP clients.
• Shadow can connect to a PC running Windows XP Professional in exactly the same way as it connects to VMware clients running Windows XP Pro.
Connection Demo
Screenshot demo
Shadow differentiators

• Makes automated testing easier to use, with less programming.
• Separates the Test Engineer role into “Product Specialist” and “Quality Assurance Specialist”.
• Makes software testing more like the actions of a human user.
• Accelerates the manual testing process through the unique Shadow user interface.
• Records screenshot data by default (see the sketch below).
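
As an illustration of default screenshot recording for later linguistic review, here is a minimal capture routine in Python using the Pillow library. The folder layout and file naming are assumptions made for the sketch, not Shadow's own format.

from pathlib import Path
from PIL import ImageGrab   # pip install pillow

# Illustrative only: capture one screenshot per test step and file it per locale,
# so linguists can review the images without running the product themselves.
def record_screenshot(locale: str, step: str, out_dir: str = "lqa_screenshots") -> Path:
    folder = Path(out_dir) / locale
    folder.mkdir(parents=True, exist_ok=True)
    path = folder / f"{step}.png"
    ImageGrab.grab().save(path)    # grab the full screen of this machine
    return path

if __name__ == "__main__":
    saved = record_screenshot("de-DE", "010_file_open_dialog")
    print(f"saved {saved}")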
Case study: Client A

• Client A produces ERP software.
• Task list:
– Write test scripts.
– Update test scripts.
– Set up the hardware and software.
– Execute the test scripts on the machines, using both Shadow and WinRunner.
– LQA – performed by linguists using the screenshots.
– Localisation functional QA using Shadow and WinRunner.
Case study: Client A - results
40 screenshots

Task                 Shadow (days)   WinRunner (days)   Comment
Write LQA script     3–4             3–4                Tool independent
Update LQA script    1–2             1–2                Tool independent
Write TSL script     0               1–2                WinRunner only
Execution script     2               1                  Shadow and WinRunner
LQA                  1–2             1–2                Tool independent
Functional QA        1–2             1–2                Tool independent
Total                8–12            8–13
Case study: Client A - results
400 screenshots

Task                 Shadow (days)   WinRunner (days)   Comment
Write LQA script     20–25           20–25              Tool independent
Update LQA script    10–15           10–15              Tool independent
Write TSL script     0               25–30              WinRunner only
Execution script     16              8                  Shadow and WinRunner
LQA                  5–6             5–6                Tool independent
Functional QA        9–10            9–10               Tool independent
Total                60–72           77–94
Case study: Client A conclusions

• Shadow and WinRunner take approximately the same time to set up for a small number of screenshots.
• For a larger number of screenshots, Shadow is faster.
• The client setup had 3 machines; this could be improved.
• WinRunner requires build preparation.
• Wait for feature.
Case study: Brandt Translations

• Project – localisation of multimedia tours in several languages.
• Recent projects include:
– 6 tours x 5 languages.
– 3 tours x 7 languages.
• This type of project occurs often.
• Need to be efficient, as the schedule is tight.
Case study: Brandt

• Brandt uses Shadow for the purposes of testing and as an automation tool to perform tasks that need to be repeated frequently.
• Tasks in the localisation of Captivate tours:
– Audio integration.
– Text integration.
– Font assignment.
• Recorded scripts perform these tasks.
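
A recorded playback script for a repetitive task of this kind might look roughly like the following Python sketch using the pyautogui library. The coordinates, shortcut keys and file paths are invented for illustration; this is not Shadow's recorded-script format.

import time
from pathlib import Path
import pyautogui   # pip install pyautogui; drives mouse and keyboard like a recorded script

# Illustrative sketch only: the same integration steps are replayed on every slide.
SLIDE_COUNT = 30
AUDIO_DIR = Path(r"C:\project\audio\de-DE")   # hypothetical per-locale audio folder

def import_audio_for_slide(slide_number: int) -> None:
    """Replay the clicks and keystrokes that attach one WAV file to one slide."""
    pyautogui.hotkey("ctrl", "g")                  # hypothetical "go to slide" shortcut
    pyautogui.write(str(slide_number), interval=0.05)
    pyautogui.press("enter")
    pyautogui.click(x=400, y=120)                  # hypothetical "Import audio" button
    audio_file = AUDIO_DIR / f"slide_{slide_number:02d}.wav"
    pyautogui.write(str(audio_file), interval=0.02)
    pyautogui.press("enter")
    time.sleep(1.0)                                # crude wait for the import to finish

if __name__ == "__main__":
    for slide in range(1, SLIDE_COUNT + 1):
        import_audio_for_slide(slide)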
Case study: Brandt
• Text integration – localised text on every slide in the tour.
• Audio integration – importing a single WAV file per slide; a voice reads the text.
• Font assignment – the localised text font has to be consistent for all slides.
Case study: Brandt - results

Task                Automation per 30-slide tour (minutes)   Manual time per 30-slide tour (minutes)
Audio integration   10                                       15
Text integration    15                                       20
Font assignment     10                                       15

• Efficiency is in the parallelism (worked example below).
• Automated repetitive tasks – reduced issues due to human errors.
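
Worked example from the figures above: per 30-slide tour, the automated run takes roughly 10 + 15 + 10 = 35 minutes against 15 + 20 + 15 = 50 minutes manually. Because the scripted run can drive all language instances in parallel, a five-language project stays close to 35 minutes of wall-clock time, where serial manual work would be in the region of 5 × 50 = 250 minutes.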
Brandt Demo
Case study: Brandt conclusions

• Shadow was used as an automation tool for this project.
• Characteristics of this project that made Shadow efficient: repeated tasks, easily done in parallel.
• Shadow was essential to the effectiveness of the engineering team.
Conclusions

• We looked at the advantages and disadvantages of the different modes of testing.
• A mix of manual and automated testing is essential.
• Shadow allows separation of QA from specialist product knowledge and hardware setup.
• For localisation, Shadow can take screenshots of the application for linguistic review.
• Shadow can be used by the engineer with specialist product knowledge to walk through the different language versions of an application simultaneously.
Shadow: Future developments

• Addition/integration of an OCR module.
• Enhanced AI modules.
Acknowledgements

• Gary Winterlich provided the Camtasia demonstrations.
• A bibliography is included in a forthcoming paper accompanying this presentation at LRC XII.
• Thank you for your time.
• Q&A