Test Automation - How far should it go?


Test Automation – How far should it go?
Ian Gilchrist, IPL
Contents
1. Context and Introduction (20 minutes)
2. C++ example (‘Reverse String’) with script generation, coverage analysis and wrapping (15 minutes)
3. C example (‘Reactor’) with table-driven testing, coverage optimisation and robustness testing (10 minutes)
Context
• C and C++ software
• Unit and Integration tests
• ‘High integrity’ systems
• Up to and including ‘safety-critical’
The Back Story
Personal history

• I entered the software industry in 1983 as a programmer working mainly in Assembler code
• Testing then consisted of ‘suck it and see’
• By 1985 we were using test facilities based on an emulator, but these required much manual set-up, and interpretation of results from a hex dump of memory
• Not pleasant to do!
• Not repeatable!
Software Development Lifecycle

[V-model diagram: the left-hand leg descends from System Requirements through Architectural Design, Detailed Design and Unit Design to Code; the right-hand leg ascends from Unit Test through Integration Test and System Test to Acceptance Test, with each test level verifying its corresponding design level. At the Code/Unit Test level, source code analysis and dynamic testing provide proof against code coverage requirements, answering two questions: does the code do what it should, and is the code tested sufficiently?]
Why Unit Test?

Regulated industries (avionics, medical, etc.)
• Mandated or highly recommended
• Structural code coverage
• Variation by SIL (Safety Integrity Level)

Business-critical
• Making a profit!
• Protecting reputation
• Avoidance of litigation
Unit/Integration Test - Technical Challenges

Test harness creation
  Finding time to create reusable test harnesses to test all functionality
Control of interfaces
  Making it simple to test object interactions with the rest of the system
Flexibility of black-box/white-box and OO techniques
  Having a full tool-kit to test efficiently
Selection/generation of test data for different purposes
  Functional tests – does the code do what it should?
  Robustness tests – does the code survive large data sets?
  Baseline tests – we know the code ‘works’ but want a baseline against which to test changes
Code coverage
  Confidence that tests cover the important code
  Evidence generated is sufficient to meet standards
Test Harness Creation
• Test driver harness for units
• Structured auto-repeatable test cases
• Checking of outputs
• Results production

Automation can do this:
• Parse the project code
• Generate a structured test script
  • a test case per function/method
  • parameters, call order, global data, returns
  • test case independence
• Incorporate checks according to variable type
• Positive and negative checking of global data
• Results production
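As an illustration of the kind of structured, repeatable driver such a tool generates, here is a minimal hand-written sketch; the reverse_string unit and the CHECK_STR macro are assumptions for illustration, not output from any particular tool.

#include <cstdio>
#include <cstring>
#include <utility>

// Unit under test (assumed for illustration).
void reverse_string(char *s) {
    for (std::size_t i = 0, j = std::strlen(s); i + 1 < j; ++i, --j)
        std::swap(s[i], s[j - 1]);
}

static int failures = 0;

// Generated-style check: report expected vs. actual on mismatch.
#define CHECK_STR(actual, expected)                                   \
    do {                                                              \
        if (std::strcmp((actual), (expected)) != 0) {                 \
            std::printf("FAIL %s:%d: got \"%s\", expected \"%s\"\n",  \
                        __FILE__, __LINE__, (actual), (expected));    \
            ++failures;                                               \
        }                                                             \
    } while (0)

// One independent, repeatable test case per scenario.
static void test_case_1() {        // nominal input
    char buf[] = "abc";
    reverse_string(buf);
    CHECK_STR(buf, "cba");
}

static void test_case_2() {        // boundary: empty string
    char buf[] = "";
    reverse_string(buf);
    CHECK_STR(buf, "");
}

int main() {
    test_case_1();
    test_case_2();
    if (failures)
        std::printf("FAILED: %d check(s)\n", failures);
    else
        std::printf("ALL CHECKS PASSED\n");
    return failures != 0;
}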
Control of Interfaces
• Isolation of units from the system
• Order of calls
• Different behaviour on calls
• Checking/modifying parameters
Two approaches: Simulate or Intercept

Automation can do this:
• Knowledge of calls made
• Automated generation of:
  • Stubs to simulate calls
  • Wrappers to intercept calls
• Programmable instances for stubs and wrappers
• Checks on parameters based on types
Stubs
A function/method in the test script with programmable instances.
Replaces a call to external software, firmware or hardware.

A stub can:
• Check parameters
• Check call sequences
• Choose return and parameter value(s)

[Diagram: the Source Code calls a Stub for the External Object; the stub is a dummy function replacing the interface to the External Object.]
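A minimal hand-written sketch of a programmable stub, assuming an external read_sensor() function that the unit under test calls twice; a real tool would generate the stub and its checks automatically.

#include <cassert>
#include <cstddef>
#include <vector>

// External interface the unit depends on (assumed).
int read_sensor(void);

// Unit under test (assumed): averages two sensor readings.
int average_two_readings(void) {
    return (read_sensor() + read_sensor()) / 2;
}

// Stub with programmable instances: one return value per expected call.
static std::vector<int> stub_returns;
static std::size_t stub_calls = 0;

int read_sensor(void) {                 // dummy replacing the external object
    assert(stub_calls < stub_returns.size() && "unexpected extra call");
    return stub_returns[stub_calls++];
}

int main() {
    stub_returns = {10, 20};            // programme instances 1 and 2
    int avg = average_two_readings();
    assert(stub_calls == 2);            // check call sequence
    assert(avg == 15);                  // check result
    return 0;
}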
Wrapping
A function/method in the test script with programmable instances, using Before-After or Before-Replace wrapper pairs.
Intercepts a call to external software, firmware or hardware.

[Diagram: the call from the Source Code passes through a BEFORE wrapper to the External Object and returns through an AFTER wrapper. The BEFORE wrapper can check out-parameters and call sequences; the AFTER wrapper can check in-parameters and the return, and modify out-parameters; together the wrapper pair can modify the in-parameters and return seen by the source code.]
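A minimal sketch of the Before/After idea. A tool intercepts the call automatically at build time; here the interception is simulated by hand around an assumed external function scale_value().

#include <cassert>

// Real external object (assumed): doubles a value in place, returns status.
int scale_value(int *value) {
    *value *= 2;
    return 0;
}

static int calls_seen = 0;

// Wrapper pair around the external call: the BEFORE part runs first,
// the real external object still executes, then the AFTER part runs.
int scale_value_wrapped(int *value) {
    // BEFORE wrapper: check call sequence and out-going parameter,
    // optionally modify the parameter passed on.
    ++calls_seen;
    assert(*value >= 0);
    *value += 1;                    // modified before the real call

    int ret = scale_value(value);   // real external object

    // AFTER wrapper: check the return and the out-parameter coming back.
    assert(ret == 0);
    assert(*value % 2 == 0);
    return ret;
}

int main() {
    int v = 4;
    int status = scale_value_wrapped(&v);  // a tool redirects the original
                                           // call site here automatically
    assert(status == 0 && v == 10 && calls_seen == 1);
    return 0;
}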
White (Clear) Box Testing
• Call private methods/static functions
• Set/check private/static data
• Control of internal calls

[Diagram: the test driver data sets and checks both public and private data of the unit under test.]

Automation can do this:
• Automated accessibility instrumentation
• Call private methods and static functions
• Set/check data which is private, in unnamed namespaces, or declared static
• Wrapping calls internal to a compilation unit
• Wrapping OS library calls
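A minimal sketch of white-box access. Tools achieve this by accessibility instrumentation of the source; the manual equivalent shown here uses a friend declaration, and the Counter class is purely illustrative.

#include <cassert>

class Counter {
public:
    void tick() { if (++count >= limit) overflowed = true; }
private:
    friend struct CounterTestAccess;  // grants white-box access to the test
    int  count = 0;
    int  limit = 3;
    bool overflowed = false;
};

// Test-side accessor: set and check private data directly.
struct CounterTestAccess {
    static void set_count(Counter &c, int v)     { c.count = v; }
    static int  get_count(const Counter &c)      { return c.count; }
    static bool get_overflowed(const Counter &c) { return c.overflowed; }
};

int main() {
    Counter c;
    CounterTestAccess::set_count(c, 2);  // drive internal state directly
    c.tick();
    assert(CounterTestAccess::get_count(c) == 3);
    assert(CounterTestAccess::get_overflowed(c));  // check private flag
    return 0;
}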
Black Box Testing
• Large data input sets
• Checking large output sets
• Robustness tests

[Diagram: the test driver data supplies and checks only the public interface of the unit under test.]

Automation can do this:
• Table-driven test generation
  • Multiple values per parameter
  • Ranges of values
  • Functions calculating values
• Combinatorial effect calculator
• Checks on call sequences and returns
• Robustness rule sets for data types
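A minimal sketch of table-driven testing: one row per test vector, iterated by a single driver loop, with a robustness row at the extreme of the data type. The clamp_to_range() unit is an assumption for illustration.

#include <cassert>
#include <climits>
#include <cstdio>

// Unit under test (assumed): clamps a value into [lo, hi].
int clamp_to_range(int v, int lo, int hi) {
    return v < lo ? lo : (v > hi ? hi : v);
}

struct TestRow {
    int v, lo, hi;   // inputs
    int expected;    // expected output
};

// The table a tool might expand from multiple values and ranges per
// parameter, including a robustness row at the data-type extreme.
static const TestRow table[] = {
    {       5, 0, 10,  5},  // nominal, in range
    {      -1, 0, 10,  0},  // below lower bound
    {      99, 0, 10, 10},  // above upper bound
    {       0, 0,  0,  0},  // degenerate range
    { INT_MIN, 0, 10,  0},  // robustness: extreme input
};

int main() {
    for (const TestRow &row : table) {
        int got = clamp_to_range(row.v, row.lo, row.hi);
        assert(got == row.expected);
    }
    std::puts("table-driven tests passed");
    return 0;
}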
Object Oriented Code
• Test case re-use aligned with code
• Support for templates
• Support for inheritance

Automation can do this:
• Parallel hierarchy of code and test case re-use
• C++ template instantiation and testing
• C++ inheritance and factory method testing
• Object oriented code coverage
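A minimal sketch of re-using one test case across C++ template instantiations, mirroring the parallel-hierarchy idea; the Ring template is illustrative and not one of the presentation's examples.

#include <cassert>

template <typename T, int N>
class Ring {                    // tiny fixed-size ring buffer under test
public:
    bool push(const T &v) {
        if (count == N) return false;
        data[(head + count++) % N] = v;
        return true;
    }
    bool pop(T &v) {
        if (count == 0) return false;
        v = data[head];
        head = (head + 1) % N;
        --count;
        return true;
    }
private:
    T data[N] {};
    int head = 0, count = 0;
};

// One generated-style test case, re-used across instantiations.
template <typename T>
static void test_ring_roundtrip(const T &a, const T &b) {
    Ring<T, 2> r;
    assert(r.push(a) && r.push(b));
    assert(!r.push(a));               // full: third push must fail
    T out{};
    assert(r.pop(out) && out == a);   // FIFO order preserved
}

int main() {
    test_ring_roundtrip<int>(1, 2);        // instantiation for int
    test_ring_roundtrip<double>(1.5, 2.5); // same test case re-used
    return 0;
}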
Integrated Code Coverage
• Set coverage targets in tests
• Diagnostics over stages/test runs
• Sensible coverage metrics
• Coverage redundancy

Automation can do this:
• Coverage rule sets
• Powerful drill-down views, filters and reports
• Coverage metrics:
  • Entry-point coverage
  • Statement coverage
  • Decision coverage
  • Call-return coverage
  • Condition coverage (including MC/DC)
• Test case coverage optimisation
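A minimal sketch of why the metrics differ, using an assumed two-condition decision; the comments list the test vectors each metric requires.

#include <cassert>

// Assumed unit: a decision with two conditions.
bool interlock_open(bool key_ok, bool door_closed) {
    return key_ok && door_closed;
}

int main() {
    // Statement coverage: a single call executes every statement.
    // Decision coverage: the decision must evaluate both true and false:
    assert( interlock_open(true,  true));   // decision true
    assert(!interlock_open(false, true));   // decision false: key_ok's
                                            // independent effect shown
    // MC/DC additionally requires each condition to be shown to
    // independently affect the outcome, so one more vector is needed:
    assert(!interlock_open(true,  false));  // door_closed flips the outcome
    return 0;
}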
Large Legacy Code Base
• Automatic generation of unit tests
• Reducing reliance on system regression tests
• Identifying testability issues

Automation can do this:
• Automatic generation of a passing C unit test baseline
• Automatic regression suite of makefiles
• Knowing testability limitations in code:
  • Dynamically unreachable code
  • Crash scenarios
  • Data uninitialised or function static
  • Implicit function declarations
  • Compiler type truncation
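A minimal sketch of one of the listed testability limitations, function-static data, which makes behaviour depend on call history so that generated test cases are not independent; next_id() is illustrative.

#include <cassert>

// Assumed unit with a testability issue: function-static data.
int next_id(void) {
    static int id = 0;  // hidden state persists across calls
    return ++id;
}

int main() {
    assert(next_id() == 1);  // passes only when run first
    assert(next_id() == 2);  // this "test case" depends on the previous one
    // An auto-generated baseline must record this order-dependence or
    // flag the static data as a testability limitation.
    return 0;
}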
End of Introduction
C++ Example – ‘Reverse String’
C Example – ‘Reactor’
C Example – ‘Airborne’
End of Presentation
If you have any questions, please email:
[email protected]
THANK YOU FOR ATTENDING!