
System Implementation Testing (Testing Implementasi Sistem)
Meeting 2
Testing Strategies and Techniques
By: Rifiana Arief, SKom, MMSI
Outline
• What Testing Is
• Testing in the Development Process
• Types of Testing and Definitions
• Verification & Validation
• Purpose and Goal of Testing
• Who Tests Software
• Testing Techniques
• Testing Steps
• Testing Strategy
What’s Wrong?
[Flowchart: initialize A = 0; increment A; decision “A = 2 ?” with F looping back to the increment and T leading to Print A]
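A hedged reconstruction of the flowchart as runnable code (the slide is garbled, so the increment and the exit test are assumptions); read this way, the defect to look for is an increment of zero, which would make the loop never terminate:

```python
# Hypothetical reconstruction of the slide's flowchart, not verbatim course code.
A = 0
while True:
    A = A + 1      # the garbled slide suggests "A = A + 0"; with a zero
                   # increment the test below never succeeds (infinite loop)
    if A == 2:     # decision "A = 2 ?": T leaves the loop, F repeats it
        break
print(A)           # reached (printing 2) only if A actually grows
```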
What Testing Is
1) Common definition:
Testing is executing a program with the purpose of finding defects.
2) Wider definition:
“Testing is a technical investigation of a product, done to expose quality-related information.”
Testing in the Development Process
• Testing activities take place in all parts of software development
• From requirements elicitation to final shipment
• Testing is part of the development process
• Testing is part of the company’s business process
Testing in the Development Process
• Testing during implementation: test to verify that the software behaves as intended by the designer
• Testing after implementation: test for conformance with requirements, reliability, and other non-functional requirements
Most Common Software Problems
• Incorrect calculations
• Incorrect and ineffective data edits
• Incorrect matching and merging of data
• Data searches that yield incorrect results
• Incorrect processing of data relationships
• Incorrect coding / implementation of business rules
• Inadequate software performance
• Confusing or misleading data
• Poor software usability for end users & obsolete software
• Inconsistent processing
• Unreliable results or performance
• Inadequate support of business needs
• Incorrect or inadequate interfaces with other systems
• Inadequate performance and security controls
• Incorrect file handling
Types of Testing and Definitions
• Validation and Verification
– Validation
• Correctness or suitability
• Vertical experts confirm master results
– Verification
• Confirms the software operates as it is required to
• Double-checks that results match those previously validated, and if not, re-validates them
Phases and Core Workflows in the Rational Unified Process (RUP)
[Diagram: the RUP phases (Inception, Elaboration, Construction, Transition) crossed with the core workflows (Requirements, Analysis, Design, Development, Testing, Maintenance)]
Testing can take place as part of each phase of development, and as part of each core workflow involved in the development organization.
Verification & Validation
• Software V & V is defined as a systems engineering methodology to ensure that quality is built into the software during development
• Software V & V is complementary to and supportive of quality assurance, project management, systems engineering, and development
Verification & Validation versus Debugging
• Verification & Validation
– a process that establishes the existence of defects in a system
• Debugging
– a process that locates and corrects those defects
Verification versus Validation
• Software Verification Process
– a process for determining whether the software products of an activity fulfill the requirements or conditions imposed on them in the previous activities
• Software Validation Process
– a process for determining whether the requirements and the final, as-built system or software product fulfill its specific intended use
Verification versus Validation
• Verification:
– “Are we building the system in the right way?”
– The system should conform to the specification
– It does what you specified it should do
• Validation:
– “Are we building the right system?”
– The system should do what the users really require
Verification versus Validation
• Sometimes one of these words is used to mean both verification and validation:
– “Verification” meaning verification and validation, or
– “Validation” meaning verification and validation
The V & V Objectives
• There are two principal objectives:
– To discover and rectify defects in a system
– To assess whether or not the system is usable in an operational situation
The V & V Objectives
• Software V & V determines that the software performs its intended functions correctly
• Ensures that the software performs no unintended functions
• Measures and assesses the quality and reliability of the software
The V & V Objectives
• As a software engineering discipline, software V & V also assesses, analyzes, and tests how the software
– interfaces with system elements
– influences the performance of, or reacts to stimuli from, system elements
The V & V Process
• V & V is a whole life-cycle process
• V & V should be applied at each stage in the software process
Static and Dynamic V&V
[Diagram: static verification (“Are we building the system in the right way?”) checks correspondence between a program and its specification, and applies to the requirements specification, high-level design, formal specification, detailed design, and code/program; dynamic validation (“Are we building the right system?”) applies execution-based testing to the prototype and the program]
Static and Dynamic V&V
• Static verification is concerned with analysis of the static system representation to discover problems
– Analysis of all documents produced that represent the system
– Can be applied during all stages of the software process
V&V
• Static: inspect artifacts to discover problems (static verification)
• Dynamic = “testing”: execute the system, observing product behaviour (dynamic validation)
• The two complement each other
V&V
• Static: review, inspection, walkthrough
• Dynamic = “testing”: unit test, integration test, system test, acceptance test
Static Verification
• Review (desk checking)
– Code reading done by a single person
– Informal
– Less effective than a walkthrough or inspection
• Walkthrough
– The programmer(s) “walk through” / “execute” their code while invited participants ask questions and make comments
– Relatively informal
• Inspection
– Usually a checklist of common errors is used to compare the code against
Purpose and goal of testing are situation dependent
1. Find defects
2. Maximize bug count
3. Block premature product releases
4. Help managers make ship / no-ship decisions
5. Assess quality
6. Minimize technical support costs
Purpose and goal of testing are situation dependent
7. Conform to regulations
8. Minimize safety-related lawsuit risk
9. Assess conformance to specification
10. Find safe scenarios for use of the product (find ways to get it to work, in spite of the bugs)
11. Verify correctness of the product
12. Assure quality
Purpose and goal of testing are situation dependent
13. Testing cannot show the absence of errors, only their presence
14. We test a program to find the existence of errors
15. If we find no errors, then we have been unsuccessful
16. If an error is found, debugging should occur
Unsuitable objectives for testing
• Showing that a system does what it is supposed to do
• Showing that a system is without errors
Testing Levels
• Unit testing
• Integration testing
• System testing
• Acceptance testing
Unit Testing
• The most ‘micro’ scale of testing
• Tests done on particular functions or code modules
• Requires knowledge of the internal program design and code
• Done by programmers (not by testers)
Unit Testing
• Objectives: to test the function of a program or unit of code such as a program or module; to test internal logic; to verify internal design; to test path & condition coverage; to test exception conditions & error handling
• When: after modules are coded
• Input: internal application design; Master Test Plan; Unit Test Plan
• Output: Unit Test Report
• Who: developer
• Methods: white-box testing techniques; test coverage techniques
• Tools: debug, restructure, code analyzers; path/statement coverage tools
• Education: testing methodology; effective use of tools
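As a concrete illustration (the slides name no code, so the divide function and its tests are invented), a minimal unit test using Python’s unittest, covering the normal path and the exception-handling path called for in the objectives above:

```python
import unittest

def divide(a, b):
    """Hypothetical unit under test: returns a / b, rejecting b == 0."""
    if b == 0:
        raise ValueError("division by zero")
    return a / b

class DivideTest(unittest.TestCase):
    def test_normal_path(self):
        # exercises the main branch of the unit's internal logic
        self.assertEqual(divide(10, 2), 5)

    def test_exception_condition(self):
        # exercises the exception-condition & error-handling branch
        with self.assertRaises(ValueError):
            divide(1, 0)

if __name__ == "__main__":
    unittest.main()
```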
Incremental Integration Testing
• Continuous testing of an application as and when new functionality is added
• The application’s functionality aspects are required to be independent enough to work separately before completion of development
• Done by programmers or testers
Integration Testing
• Testing of combined parts of an application to determine their functional correctness
• ‘Parts’ can be:
– code modules
– individual applications
– client/server applications on a network
Types of Integration Testing
• Big Bang testing
• Top-Down Integration testing
• Bottom-Up Integration testing
Integration Testing
• Objectives: to technically verify proper interfacing between modules, and within sub-systems
• When: after modules are unit tested
• Input: internal & external application design; Master Test Plan; Integration Test Plan
• Output: Integration Test Report
• Who: developers
• Methods: white- and black-box techniques; problem / configuration management
• Tools: debug, restructure, code analyzers
• Education: testing methodology; effective use of tools
System Testing
• Objectives: to verify that the system components perform control functions; to perform inter-system tests; to demonstrate that the system performs both functionally and operationally as specified; to perform appropriate types of tests relating to transaction flow, installation, reliability, regression, etc.
• When: after integration testing
• Input: detailed requirements & external application design; Master Test Plan; System Test Plan
• Output: System Test Report
• Who: development team and users
• Methods: problem / configuration management
• Tools: recommended set of tools
• Education: testing methodology; effective use of tools
Systems Integration Testing
• Objectives: to test the co-existence of products and applications that are required to perform together in the production-like operational environment (hardware, software, network); to ensure that the system functions together with all the components of its environment as a total system; to ensure that system releases can be deployed in the current environment
• When: after system testing; often performed outside of the project life-cycle
• Input: Test Strategy; Master Test Plan; Systems Integration Test Plan
• Output: Systems Integration Test Report
• Who: system testers
• Methods: white- and black-box techniques; problem / configuration management
• Tools: recommended set of tools
• Education: testing methodology; effective use of tools
Acceptance Testing
• Objectives: to verify that the system meets the user requirements
• When: after system testing
• Input: business needs & detailed requirements; Master Test Plan; User Acceptance Test Plan
• Output: User Acceptance Test Report
• Who: users / end users
• Methods: black-box techniques; problem / configuration management
• Tools: compare, keystroke capture & playback, regression testing
• Education: testing methodology; effective use of tools; product knowledge; business release strategy
Testing Technique
• Two views on software testing:
– White-box testing
– Black-box testing
Testing Technique
• White-box testing tests what the program does.
– Test sets are developed by using knowledge of the algorithms, data structures, and control statements.
Testing Technique
• Black-box testing tests what the program is supposed to do.
– Test sets are developed and evaluated solely from the specification; there is no knowledge of the algorithms, data structures, or control statements.
White-box testing
• Also known as:
– Structure-based (structural) testing
– Code-based testing
– Glass-box testing
– Clear-box testing
– Logic-driven testing
White-box testing
• White-box (or structural) testing:
– Uses knowledge of the program to derive test cases that provide more complete coverage
– Problem: what criteria to use?
White-box testing
... our goal is to ensure that all statements, decisions, conditions, and paths have been executed at least once ...
White-box testing
• The system is looked upon as an open box
• The test cases are based on the internal structure of the system (the code)
• Theoretically desirable but an impossible and insufficient goal: exercising all paths of the code (a small coverage sketch follows)
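A small sketch of how white-box test sets are derived (the function and its inputs are invented for illustration): the three inputs below are chosen from the control structure itself so that every statement, decision outcome, and path is executed at least once:

```python
def classify(x):
    # Two decisions => three paths through the function.
    if x < 0:
        return "negative"
    if x % 2 == 0:
        return "even"
    return "odd"

# White-box test set, derived from the code rather than the specification:
assert classify(-1) == "negative"   # first decision True
assert classify(2) == "even"        # first decision False, second True
assert classify(3) == "odd"         # first decision False, second False
```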
Black-box testing
• Also known as:
– Functional testing, because it tests all the functions
– Behavioral testing, because the program is tested against the expected behavior (described by requirements and/or design)
Black-box testing
[Diagram: requirements, input, and events enter a black box; output comes out]
• The software is viewed as a black box which transforms input to output based on the specifications of what the software is supposed to do.
Black-box testing
• Checks the conformity of the tested software against its established behaviour, and
• Detects errors generated by faults
– A software fault is a software part which does not conform to its definition as provided in the development documents
Black-box testing
• Functional tests examine the observable behavior of software as evidenced by its outputs, without reference to internal functions.
• If the program consistently provides the desired features with acceptable performance, then specific source code features are irrelevant.
Black-box testing
• The software should be considered only from the standpoint of its:
– input data
– output data
• Knowledge of its internal structure should not be used
• It is very often impossible to test all the input data
• It is hence necessary to select a subset of the possible inputs (see the sketch below)
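One common way to select that subset is equivalence partitioning with boundary values: divide the input domain into classes the specification treats alike, then test one representative per class plus the class boundaries. A sketch against an invented specification (“accept ages 18 through 65”):

```python
def accepts_age(age):
    """Hypothetical specification: accept ages 18 through 65 inclusive."""
    return 18 <= age <= 65

# Black-box test set: one value per equivalence class, plus boundaries.
cases = {
    17: False,  # below-range class (lower boundary - 1)
    18: True,   # lower boundary
    40: True,   # representative of the in-range class
    65: True,   # upper boundary
    66: False,  # above-range class (upper boundary + 1)
}
for age, expected in cases.items():
    assert accepts_age(age) == expected, f"failed for age={age}"
```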
Testing Steps
[Diagram: each unit of code receives a unit test; the integration test, driven by the design specifications, produces integrated modules; the function test, driven by the system functional requirements, produces a functioning system; the performance test, driven by the other software requirements, produces verified, validated software; the acceptance test, driven by the customer requirements specification, produces an accepted system; the installation test, in the user environment, ends with the SYSTEM IN USE!]
Testing Steps
Acceptance Test
[Diagram: the customer tests the software at the developer’s site]
• The type of acceptance testing performed by the customer at the developer’s site is usually called alpha testing
Testing Steps
Acceptance Test
[Diagram: the customer tests the software at the customer’s site]
• Beta testing is a type of acceptance testing involving a software product to be marketed for use by many users
• Selected users receive the system first and report problems back to the developer
• Users enjoy it: they usually receive large discounts and feel important
• Developers like it: it exposes their product to real use and often reveals unanticipated errors
Testing Strategy
[Diagram: big bang integration is non-incremental; top-down, bottom-up, and their sandwich/compromise combination are incremental]
Testing Strategy
• Big bang integration (all components together)
• Bottom-up integration (from lower levels; no test stubs necessary)
• Top-down integration (from higher levels; no test drivers needed)
• Sandwich testing (combination of bottom-up and top-down; no test stubs or drivers needed; see the stub/driver sketch below)
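To make the stub/driver trade-off concrete, a minimal sketch (module and function names are invented): top-down integration tests a real high-level module by replacing an unfinished lower module with a stub, while bottom-up integration exercises a real low-level module from a throwaway driver:

```python
# Top-down: the high-level module is real, the lower one is a stub.
def tax_rate_stub(country):
    # Stub: stands in for the unfinished tax module with a canned value.
    return 0.2

def gross_price(net, rate_lookup, country="ID"):
    # Real high-level module under test; its collaborator is injected.
    return net * (1 + rate_lookup(country))

assert gross_price(100, tax_rate_stub) == 120   # integrated via the stub

# Bottom-up: the low-level module is real, called by a test driver.
def tax_rate(country):
    # Real low-level module, finished and tested first.
    return {"ID": 0.11}.get(country, 0.2)

def driver():
    # Driver: throwaway caller that feeds inputs to the real module.
    assert tax_rate("ID") == 0.11
    assert tax_rate("XX") == 0.2

driver()
```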
2. What Should You Test?
• Quality and Quality Risks
Software Quality
• As explained in the first meeting, the goal of software testing is to obtain software whose quality conforms to the design that has been made (quality of conformance).
• In other words, software testing is a way to determine the quality of a software product.
Defining Quality
• “features [that] are decisive as to product performance and as to ‘product satisfaction’ ...”
• “freedom from deficiencies ... [that] result in complaints, claims, returns, rework and other damage”
Defining Quality (cont.)
• The users and customers become the arbiters of quality when they experience product dissatisfaction, and then make complaints, return merchandise, or call technical support.
• Testing looks for situations in which a product fails to meet customers’ or users’ reasonable expectations in specific areas.
Software Quality Standards
• The international standard used to evaluate software quality is ISO 9126, which defines the characteristics of quality software.
Software Quality Characteristics
The standard is divided into four parts which address, respectively, the following subjects: quality model; external metrics; internal metrics; and quality-in-use metrics. The quality model established in the first part of the standard, ISO 9126-1, classifies software quality in a structured set of characteristics and sub-characteristics as follows:
• Functionality - a set of attributes that bear on the existence of a set of functions and their specified properties; the functions are those that satisfy stated or implied needs: Suitability, Accuracy, Interoperability, Compliance, Security
• Reliability - a set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a stated period of time: Maturity, Recoverability, Fault Tolerance
• Usability - a set of attributes that bear on the effort needed for use, and on the individual assessment of such use, by a stated or implied set of users: Learnability, Understandability, Operability
• Efficiency - a set of attributes that bear on the relationship between the level of performance of the software and the amount of resources used, under stated conditions: Time Behaviour, Resource Behaviour
• Maintainability - a set of attributes that bear on the effort needed to make specified modifications: Stability, Analyzability, Changeability, Testability
• Portability - a set of attributes that bear on the ability of software to be transferred from one environment to another: Installability, Replaceability, Adaptability
Who Tests Software?
• User
• Developer
• Independent tester
Who Tests Software?
• User
– Tests while using the software
• Though testing is not the purpose of the use
– An indirect test
Who Tests Software?
• Software Developer
– Understands the system
– Tests gently
– Driven by delivery
• Independent Tester
– Doesn’t understand the system
– Will try to break it
– Driven by quality