MCA – Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product? => Validation
 Are we building the product right? => Verification

Building the product right is a question of efficiency:
making the best use of resources in achieving goals.
Building the right product is a question of effectiveness:
choosing effective goals and achieving them.

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provides management with insight into the state of the
project and the software products, allowing for timely changes
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 The system meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process, ensures that the
software meets the expectations of the customer.

“Are we building the right product ?”
You can't test in quality. If it's not there before you begin
testing, it won't be there when you're finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 Inspections do not require the system to be executed.
 These techniques include program inspections, automated
source code analysis and formal verification.
 They cannot check the non-functional characteristics of the
software, such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic technique of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing: to find inconsistencies between a program
and its specification.
 Statistical testing: to test the program's performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
[Figure: static verification applies to the requirements
specification, high-level design, formal specification, detailed
design and program; dynamic validation applies to the prototype
and the program.]

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yet-undiscovered error.

Software testing principles
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability: the better it works, the more efficiently it can
be tested
 Observability: what you see is what you test
 Controllability: the better the software can be controlled, the
more testing can be automated and optimized
 Decomposability: by controlling the scope of testing, problems
can be isolated and retested more quickly and intelligently
 Simplicity: the less there is to test, the more quickly we
can test
 Stability: the fewer the changes, the fewer the
disruptions to testing
 Understandability: the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects

[Figure: the debugging process: test results are used to locate the
error; with reference to the specification and the test cases, an
error repair is designed, the error is repaired, and the program is
re-tested.]

The defect testing process
[Figure: the defect testing process: design test cases, prepare test
data, run the program with the test data, then compare results to
the test cases, producing test cases, test data, test results and
test reports in turn.]

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification
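
The distinction between test data and test cases can be sketched in code. This is a minimal illustration, assuming a hypothetical program `square` and helper `run_defect_tests` (neither is from the slides):

```python
def square(x):
    """Hypothetical program under test."""
    return x * x

# Test data: inputs devised to test the system (inputs only).
test_data = [0, 1, -3, 10]

# Test cases: inputs paired with the outputs predicted by the specification.
test_cases = [(0, 0), (1, 1), (-3, 9), (10, 100)]

def run_defect_tests(program, cases):
    """Run the program on each case; any mismatch is a defect report."""
    failures = []
    for inp, expected in cases:
        actual = program(inp)
        if actual != expected:
            failures.append((inp, expected, actual))
    return failures

print(run_defect_tests(square, test_cases))  # [] means no defects found
```

An empty failure list is the "compare results to test cases" step of the defect testing process succeeding.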

Project Planning
 Quality Plan: describes the quality procedures and
standards that will be used in a project.
 Validation Plan: describes the approach, resources and
schedule used for system validation.
 Configuration Management Plan: describes the configuration
management procedures and structures to be used.
 Maintenance Plan: predicts the maintenance requirements of
the system, maintenance costs and effort required.
 Staff Development Plan: describes how the skills and experience
of the project team members will be developed.

Verification and Validation
Plan
[Figure: the V-model: the requirements specification drives the
acceptance test plan, the system specification drives the system
integration test plan, and the detailed design drives the sub-system
integration test plan; module and unit code is tested first, followed
by sub-system integration testing, system integration testing,
acceptance testing and service.]

Test plans are the link between development and testing.

Testing Process

[Figure: testing stages in sequence: unit testing, module testing,
sub-system testing, system testing, acceptance testing. Unit and
module testing make up component testing; sub-system and system
testing make up integration testing; acceptance testing is user
testing.]

Testing Process
 Unit testing - Individual components are tested
independently, without other system components.
 Module testing - Related collections of dependent
components (classes, ADTs, procedures and functions) are tested,
without other system modules.
 Sub-system testing - Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validates
functional and non-functional requirements and tests
emergent system properties.
 Acceptance testing - Testing with customer data to check that
the system is acceptable. Also called alpha testing.

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for the
intended user in his or her normal work
environment.
Alpha test: a version of the complete software is
tested by the customer under the supervision of the
developer at the developer's site.
Beta test: a version of the complete software is
tested by the customer at his or her own site without
the developer being present.

Black-box testing
 Also known as behavioral or functional testing.
 The system is a “Blackbox” whose behavior can be
determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform
and demonstrating correct operation based solely on its
specification without regard for its internal logic.
 Focuses on the functional requirements of the software,
i.e., the information domain, not the implementation,
and disregards control structure.
 The program test cases are based on the system
specification
 It is performed during the later stages of testing, such as
acceptance testing or beta testing.

Black-box testing

[Figure: the system as a black box: a set of input test data I is fed
to the system, which produces the output test results. The subset of
inputs Ie that cause anomalous behaviour produce the outputs Oe that
reveal the presence of defects.]

Black-box testing
Tests are designed to answer the following questions:

 How is functional validity tested?
 How are system behavior and performance tested?
 What classes of input behavior will make good test cases?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of data classes isolated?
 What data rates and data volumes can the system
tolerate?
 What effect will specific combinations of data have on
system operations?

Advantages of Black box testing
 Validates whether or not a given system conforms to
its software specification.
 Introduces a series of inputs to a system and compares
the outputs to a pre-defined test specification.
 Tests integration between individual system
components.
 Tests are architecture-independent: they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Requires no knowledge of the underlying system;
one need not be a software engineer to design
black-box tests.

Disadvantages of Black box
testing
 Offers no guarantee that every line of code has been
tested.
 Being architecture-independent, it cannot determine
the efficiency of the code.
 Will not find errors, such as memory leaks, that
are not explicitly and immediately exposed by the
application.

Black-box testing techniques

 Graph-based testing methods
 Equivalence Partitioning
 Boundary Value Analysis (BVA)
 Comparison Testing
 Orthogonal Array Testing

Equivalence Partitioning
 Black-box technique that divides the input domain into
classes of data from which test cases can be derived
 An ideal test case uncovers a class of errors (e.g., incorrect
processing of all incorrect data) that might otherwise require
many arbitrary test cases to be executed before the general
error is observed
 Equivalence class guidelines:
 If an input condition specifies a range, one valid and two
invalid equivalence classes are defined
 If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined
 If an input condition specifies a member of a set, one valid
and one invalid equivalence class is defined
 If an input condition is Boolean, one valid and one invalid
equivalence class is defined

Equivalence Partitioning

[Figure: the system maps both valid and invalid input partitions to
outputs; test cases are chosen from each partition.]

Equivalence Partitioning

[Figure: example partitions. For the number of input values, the
classes are "less than 4", "between 4 and 10" and "more than 10",
with test values such as 3, 7 and 11 near the boundaries 4 and 10.
For the input values themselves, the classes are "less than 10000",
"between 10000 and 99999" and "more than 99999", with test values
such as 9999, 50000 and 100000 near the boundaries 10000 and 99999.]

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the
input domain rather than its center
 BVA guidelines:
 If an input condition specifies a range bounded by values a and
b, test cases should include a and b, and values just above and
just below a and b
 If an input condition specifies a number of values, test
cases should exercise the minimum and maximum
numbers, as well as values just above and just below the
minimum and maximum values
 Apply guidelines 1 and 2 to output conditions; test cases
should be designed to produce the minimum and maximum
output reports
 If internal program data structures have boundaries (e.g.
size limitations), be certain to test the boundaries

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety-critical systems (such
as aircraft avionics or automobile braking systems) in
which independently developed implementations of
redundant systems are tested for conformance to
specifications
 Often equivalence class partitioning is used to
develop a common set of test cases for each
implementation.

Orthogonal Array Testing
 Black-box technique that enables the design of a
reasonably small set of test cases that provide
maximum test coverage
 Focus is on categories of faulty logic likely to be
present in the software component (without
examining the code)
 Priorities for assessing tests using an orthogonal
array
 Detect and isolate all single mode faults
 Detect all double mode faults
 Detect multimode faults
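
The coverage idea can be sketched with the standard L4 orthogonal array: four runs cover every pair of values of three two-level factors, versus eight runs for the full factorial. The factors here are abstract placeholders:

```python
from itertools import combinations

L4 = [  # rows = test runs, columns = factors A, B, C (levels 0/1)
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

# Pairwise-coverage property: every pair of columns takes on all four
# value combinations (0,0), (0,1), (1,0), (1,1).
for c1, c2 in combinations(range(3), 2):
    seen = {(row[c1], row[c2]) for row in L4}
    assert seen == {(0, 0), (0, 1), (1, 0), (1, 1)}

print("4 runs give full pairwise coverage of 3 two-level factors")
```

Because every single level appears in exactly two runs, a single-mode fault can be isolated by seeing which runs fail; a double-mode fault is detected because every level pair appears in some run.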

White-box or Glass Box
testing
Knowing the internal workings of a product,
tests are performed to check the workings of all
independent logic paths.
It derives test cases that:
 Guarantee that all independent paths within a module
have been exercised at least once.
 Exercise all logical decisions on their true and false
sides.
 Execute all loops at their boundaries and within their
operational bounds, and
 Exercise internal data structures to ensure their
validity.

Techniques used: basis path and control
structure testing.

White-box or Glass Box
testing
[Figure: the tests are derived from the component code; test data
drives the tests, which produce the test outputs.]

Integration Testing
Tests complete systems or subsystems
composed of integrated components
Integration testing should be black-box testing
with tests derived from the specification
Main difficulty is localising errors
Incremental integration testing reduces this
problem.
Incremental integration strategies include
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to
integration testing
Top-down testing
 Start with the high-level system and integrate from the
top down, replacing individual components by stubs
where appropriate

Bottom-up testing
 Integrate individual components in levels until the
complete system is created

In practice, most integration involves a combination
of these strategies
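
The stub and driver roles can be sketched in code; all of the names below are illustrative, not from the slides:

```python
def tax_rate_stub(region):
    """Stub standing in for an unfinished lower-level tax-lookup
    component: it returns a fixed canned answer."""
    return 0.25

def compute_total(price, region, tax_lookup):
    """Higher-level component exercised first in top-down testing."""
    return price * (1 + tax_lookup(region))

# Top-down: test the high-level logic against the stub.
assert compute_total(100.0, "np", tax_rate_stub) == 125.0

def real_discount(price, percent):
    """Finished low-level component exercised first in bottom-up testing."""
    return price - price * percent / 100

def discount_driver():
    """Throwaway test driver calling the low-level component directly."""
    assert real_discount(200.0, 25) == 150.0
    return "driver passed"

print(discount_driver())
```

Stubs let top-down testing start before lower levels exist; drivers let bottom-up testing start before the callers exist, which is why most projects mix the two.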

Top-down testing

[Figure: top-down testing sequence: the level-1 component is tested
first with level-2 stubs; the level-2 components are then integrated
and tested with level-3 stubs, and so on down the hierarchy.]

Bottom-up testing

[Figure: bottom-up testing sequence: the level-N components are
tested first using test drivers; they are then combined and the
level N–1 components are tested with their own drivers, and so on
up the hierarchy.]

System Testing
 Recovery testing
 Checks the system’s ability to recover from failures.

 Security testing
 Verifies that system protection mechanisms prevent improper
penetration or data alteration

 Stress testing
 Program is checked to see how well it deals with abnormal
resource demands – quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.

Object-oriented Testing

The components to be tested are object classes
that are instantiated as objects
Objects are larger-grain components than individual
functions, so approaches to white-box testing have to be
extended
No obvious ‘top’ to the system for top-down
integration and testing

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List

Item No.  Test Item     Sub-item No.  Test Sub-Item            Level
SR-02     Staff Review  SR-02-01      Program Officer Review   A
                        SR-02-02      Early Decline Report     A

Test levels:
A - Basic Function, compulsory
B - Enhanced Function, compulsory
C - Enhanced Function, optional

Testing Details
SR-02 Staff Review

Item No: SR-02-01                    Test Date:
Item: Staff Review
Sub-item: PO Review (Report: Early Decline)
Precondition:
Test Procedure:
Test Standard:
Test Description:
Test Result and Conclusion:  Passed /  Failed
Sign of the Tester:                  Sign of the Manager:

References
 From Software Engineering: A Practitioner's
Approach by Roger S. Pressman
– Chapter 17: Software testing techniques
• Software testing fundamentals
• Test case design
• White-box testing: basis path and control structure testing
• Black-box testing
– Chapter 18: Software Testing Strategies
• A strategic approach to software testing
• Unit, integration, validation and system testing
 From Software Engineering by Ian Sommerville
– Part 5: Verification and Validation
• Chapter 19: Verification and validation
• Chapter 20: Software testing

Slide 2

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
Static
verification

Requirements
specification

Prototype

High-level
design

Formal
specification

Detailed
design

Program

Dynamic
validation

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yetundiscovered error.

Software testing priciples
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability-the better it works the more efficiently it can
be tested
 Observability-what you see is what you test
 Controllability-the better software can be controlled the
more testing can be automated and optimized
 Decomposability-by controlling the scope of testing, the
more quickly problems can be isolated and retested
intelligently
 Simplicity-the less there is to test, the more quickly we
can test
 Stability-the fewer the changes, the fewer the
disruptions to testing
 Understandability-the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
Test
results

Locate
error

Test
cases

Specification

Design
error repair

Repair
error

Re-test
program

The defect testing process
Test
cases

Design test
cases

Test
data

Prepare test
data

Test
results

Run program
with test data

Test
reports

Compare results
to test cases

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification

Project Planning
Plan

Description

Quality Plan

Describes the quality procedure and
standards that will be used in a project.

Validation Plan

Describes the approach, resources and
schedule used for system validation.

Configuration
Management Plan

Describes the configuration management
procedures and structures to be used.

Maintenance Plan

Predicts the maintenance requirements of
the system, maintenance costs and effort
required.

Staff development Plan

Describes how the skills and experience of
the project team members will be developed.

Verification and Validation
Plan
Requirements
specification

System
specification

System
integration
test plan

Acceptance
test plan

Service

System
desig n

Acceptance
test

Detailed
design

Sub-system
integration
test plan

System
integration test

Module and
unit code
and tess

Sub-system
integration test

Test Plan as a link between development and testing

Testing Process

Unit
testing
Module
testing
Sub-system
testing
System
testing
Acceptance
testing

Component
testing

Integration testing

User
testing

Testing Process
 Unit testing - Individual components are tested
independently, without other system components
 Module testing - Related collections of dependent
components( class, ADT, procedures & functions) are tested,
without other system module.
 Sub-system testing-Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validating
functional and non-functional requirements & Testing of
emergent system properties.
 Acceptance testing-Testing with customer data to check that
it is acceptable. Also called Alpha Testing

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for
intended user in his or her normal work
environment.
Alpha test-version of the complete software is
tested by customer under the supervision of the
developer at the developer’s site.
Beta test-version of the complete software is
tested by customer at his or her own site without
the developer being present

Black-box testing
 Also known as behavioral or functional testing.
 The system is a “Blackbox” whose behavior can be
determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform
and demonstrating correct operation based solely on its
specification without regard for its internal logic.
 Focus on the functional requirements of the software
i.e., information domain not the implementation part of
the software and disregards control structure.
 The program test cases are based on the system
specification
 It is performed during later stages of testing like in the
acceptance testing or beta testing.

Black-box testing

Input test data

I

Inputs causing
anomalous
behaviour

e

System

Output test results

Oe

Outputs which reveal
the presence of
defects

Black-box testing
Test are designed to answer the following questions:

 How is functional validity tested?
 How is system behavior and performance tested?
 What classes of input behavior will make good test case?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of data class isolated?
 What data rates and data volume can the system
tolerate?
 What effect will specific combinations of data have on
system operations?

Advantages of Black box testing
 Validates whether or not a given system conforms
to
its software specification
 Introduce a series of inputs to a system and
compare
the outputs to a pre-defined test specification.
 Test integration between individual system
components.
 Tests are architecture independent — they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Require no knowledge of the underlying system,
one
need not be a software engineer to design black
box
tests.

Disadvantages of Black box
testing
 Offer no guarantee that every line of code has been
tested.
 Being architecture independent, it cannot determine
the efficiency of the code.
 Will not find any errors, such as memory leaks, that
are not explicitly and instantly exposed by the
application.

Black-box testing techniques







Graph-based testing methods
Equivalence Partitioning,
Boundary Value Analysis (BVA)
Comparison Testing
Orthogonal Array Testing.

Equivalence Partitioning
 Black-box technique that divides the input domain into
classes of data from which test cases can be derived
 An ideal test case uncovers a class of errors( incorrect
processing of all incorrect data) that might require many
arbitrary test cases to be executed before a general error is
observed
 Equivalence class guidelines:
 If input condition specifies a range, one valid and two invalid
equivalence classes are defined
 If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined
 If an input condition specifies a member of a set, one valid
and one invalid equivalence class is defined
 If an input condition is Boolean, one valid and one invalid
equivalence class is defined

Equivalence Partitioning

Invalid inputs

System

Outputs

Valid in puts

Equivalence Partitioning

3
4

Less than 4

7

11
10

Between 4 and 10

More than 10

Number of input values
9999
10000

Less than 10000
Input values

50000

100000
99999

Between 10000 and 99999

More than 99999

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the
input domain rather than its center
 BVA guidelines:
 If input condition specifies a range bounded by values a and
b, test cases should include a and b, values just above and
just below a and b
 If an input condition specifies and number of values, test
cases should be exercise the minimum and maximum
numbers, as well as values just above and just below the
minimum and maximum values
 Apply guidelines 1 and 2 to output conditions, test cases
should be designed to produce the minimum and maxim
output reports
 If internal program data structures have boundaries (e.g.
size limitations), be certain to test the boundaries

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety critical systems ( such
as aircraft avionics, automobile braking system) in
which independently developed implementations of
redundant systems are tested for conformance to
specifications
 Often equivalence class partitioning is used to
develop a common set of test cases for each
implementation.

Orthogonal Array Testing
 Black-box technique that enables the design of a
reasonably small set of test cases that provide
maximum test coverage
 Focus is on categories of faulty logic likely to be
present in the software component (without
examining the code)
 Priorities for assessing tests using an orthogonal
array
 Detect and isolate all single mode faults
 Detect all double mode faults
 Mutimode faults

White-box or Glass Box
testing
Knowing the internal workings of a product,
tests are performed to check the workings of all
independent logic paths.
It derive test cases that:
 Guarantee that all independent paths within a module
have been exercised at least once.
 Exercise all logical decisions on their true and false
sides.
 Execute all loops at their boundaries and within their
operational bounds, and
 Exercise internal data structures to ensure their
validity.

Techniques being used: basic path and control
structure testing.

White-box or Glass Box
testing
Test data

Tests

Derives
Component
code

Test
outputs

Integration Testing
Tests complete systems or subsystems
composed of integrated components
Integration testing should be black-box testing
with tests derived from the specification
Main difficulty is localising errors
Incremental integration testing reduces this
problem.
Incremental integration strategies include
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to
integration testing
Top-down testing
 Start with high-level system and integrate from the
top-down replacing individual components by stubs
where appropriate

Bottom-up testing
 Integrate individual components in levels until the
complete system is created

In practice, most integration involves a combination
of these strategies

Top-down testing

Level 1

Testing
sequence

Level 2
Level 2
stubs

Level 3
stubs

Level 1

Level 2

Level 2

. ..

Level 2

Bottom-up testing

Test
drivers
Level N

Test
drivers

Level N

Level N–1

Level N

Level N–1

Level N

Level N

Level N–1

Testing
sequence

System Testing
 Recovery testing
 Checks the system’s ability to recover from failures.

 Security testing
 Verifies that system protection mechanism prevent improper
penetration or data alteration

 Stress testing
 Program is checked to see how well it deals with abnormal
resource demands – quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.

Object-oriented Testing

The components to be tested are object classes
that are instantiated as objects.
Objects are larger grain than individual functions,
so approaches to white-box testing have to be
extended.
There is no obvious ‘top’ to the system for top-down
integration and testing.
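Because the unit under test is a class rather than a function, test cases must exercise an object's state through sequences of operations, not single calls. A minimal sketch, with an `Account` class invented purely for illustration:

```python
class Account:
    """Illustrative class under test."""
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.balance += amount

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

# Object-oriented test: a sequence of operations against one object's state.
acct = Account()
acct.deposit(100)
acct.withdraw(30)
assert acct.balance == 70

# Error paths are part of the class's observable behaviour.
try:
    acct.withdraw(1000)
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError")
```

Note that the same operations in a different order can reach different states, which is why class testing concentrates on operation sequences.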

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List

Item No.  Test Item     Sub-item No.  Test Sub-Item            Level
SR-02     Staff Review  SR-02-01      Program Officer Review   A
                        SR-02-02      Early Decline Report     A

Test-Level
A - Basic Function, compulsory
B - Enhanced Function, compulsory
C - Enhanced Function, optional

Testing Details
SR-02 Staff Review

Item No: SR-02-01                    Test Date:
Item: Staff Review
Sub-item: PO Review
Report: Early Decline
Precondition:
Test Procedure:
Test Standard:
Test description:
Test Result and Conclusion:  Passed   Failed
Sign of the Tester:                  Sign of the Manager:

References
 From Software Engineering: A Practitioner’s
Approach by Roger S. Pressman
– Chapter 17: Software testing techniques
• Software testing fundamentals
• Test-case design
• White-box testing: basis path, control structure testing
• Black-box testing
– Chapter 18: Software Testing Strategies
• A strategic approach to software testing
• Unit, integration, validation, system testing
 From Software Engineering by Ian Sommerville
– Part 5: Verification and Validation
• Chapter 19: Verification and validation
• Chapter 20: Software testing

Slide 4

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
Static
verification

Requirements
specification

Prototype

High-level
design

Formal
specification

Detailed
design

Program

Dynamic
validation

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yetundiscovered error.

Software testing priciples
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability-the better it works the more efficiently it can
be tested
 Observability-what you see is what you test
 Controllability-the better software can be controlled the
more testing can be automated and optimized
 Decomposability-by controlling the scope of testing, the
more quickly problems can be isolated and retested
intelligently
 Simplicity-the less there is to test, the more quickly we
can test
 Stability-the fewer the changes, the fewer the
disruptions to testing
 Understandability-the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
Test
results

Locate
error

Test
cases

Specification

Design
error repair

Repair
error

Re-test
program

The defect testing process
Test
cases

Design test
cases

Test
data

Prepare test
data

Test
results

Run program
with test data

Test
reports

Compare results
to test cases

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification

Project Planning
Plan

Description

Quality Plan

Describes the quality procedure and
standards that will be used in a project.

Validation Plan

Describes the approach, resources and
schedule used for system validation.

Configuration
Management Plan

Describes the configuration management
procedures and structures to be used.

Maintenance Plan

Predicts the maintenance requirements of
the system, maintenance costs and effort
required.

Staff development Plan

Describes how the skills and experience of
the project team members will be developed.

Verification and Validation
Plan
Requirements
specification

System
specification

System
integration
test plan

Acceptance
test plan

Service

System
desig n

Acceptance
test

Detailed
design

Sub-system
integration
test plan

System
integration test

Module and
unit code
and tess

Sub-system
integration test

Test Plan as a link between development and testing

Testing Process

Unit
testing
Module
testing
Sub-system
testing
System
testing
Acceptance
testing

Component
testing

Integration testing

User
testing

Testing Process
 Unit testing - Individual components are tested
independently, without other system components
 Module testing - Related collections of dependent
components( class, ADT, procedures & functions) are tested,
without other system module.
 Sub-system testing-Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validating
functional and non-functional requirements & Testing of
emergent system properties.
 Acceptance testing-Testing with customer data to check that
it is acceptable. Also called Alpha Testing

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for
intended user in his or her normal work
environment.
Alpha test-version of the complete software is
tested by customer under the supervision of the
developer at the developer’s site.
Beta test-version of the complete software is
tested by customer at his or her own site without
the developer being present

Black-box testing
 Also known as behavioral or functional testing.
 The system is a “Blackbox” whose behavior can be
determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform
and demonstrating correct operation based solely on its
specification without regard for its internal logic.
 Focus on the functional requirements of the software
i.e., information domain not the implementation part of
the software and disregards control structure.
 The program test cases are based on the system
specification
 It is performed during later stages of testing like in the
acceptance testing or beta testing.

Black-box testing

Input test data

I

Inputs causing
anomalous
behaviour

e

System

Output test results

Oe

Outputs which reveal
the presence of
defects

Black-box testing
Test are designed to answer the following questions:

 How is functional validity tested?
 How is system behavior and performance tested?
 What classes of input behavior will make good test case?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of data class isolated?
 What data rates and data volume can the system
tolerate?
 What effect will specific combinations of data have on
system operations?

Advantages of Black box testing
 Validates whether or not a given system conforms
to
its software specification
 Introduce a series of inputs to a system and
compare
the outputs to a pre-defined test specification.
 Test integration between individual system
components.
 Tests are architecture independent — they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Require no knowledge of the underlying system,
one
need not be a software engineer to design black
box
tests.

Disadvantages of Black box
testing
 Offer no guarantee that every line of code has been
tested.
 Being architecture independent, it cannot determine
the efficiency of the code.
 Will not find any errors, such as memory leaks, that
are not explicitly and instantly exposed by the
application.

Black-box testing techniques







Graph-based testing methods
Equivalence Partitioning,
Boundary Value Analysis (BVA)
Comparison Testing
Orthogonal Array Testing.

Equivalence Partitioning
 Black-box technique that divides the input domain into
classes of data from which test cases can be derived
 An ideal test case uncovers a class of errors( incorrect
processing of all incorrect data) that might require many
arbitrary test cases to be executed before a general error is
observed
 Equivalence class guidelines:
 If input condition specifies a range, one valid and two invalid
equivalence classes are defined
 If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined
 If an input condition specifies a member of a set, one valid
and one invalid equivalence class is defined
 If an input condition is Boolean, one valid and one invalid
equivalence class is defined

Equivalence Partitioning

Invalid inputs

System

Outputs

Valid in puts

Equivalence Partitioning

3
4

Less than 4

7

11
10

Between 4 and 10

More than 10

Number of input values
9999
10000

Less than 10000
Input values

50000

100000
99999

Between 10000 and 99999

More than 99999

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the
input domain rather than its center
 BVA guidelines:
 If input condition specifies a range bounded by values a and
b, test cases should include a and b, values just above and
just below a and b
 If an input condition specifies and number of values, test
cases should be exercise the minimum and maximum
numbers, as well as values just above and just below the
minimum and maximum values
 Apply guidelines 1 and 2 to output conditions, test cases
should be designed to produce the minimum and maxim
output reports
 If internal program data structures have boundaries (e.g.
size limitations), be certain to test the boundaries

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety critical systems ( such
as aircraft avionics, automobile braking system) in
which independently developed implementations of
redundant systems are tested for conformance to
specifications
 Often equivalence class partitioning is used to
develop a common set of test cases for each
implementation.

Orthogonal Array Testing
 Black-box technique that enables the design of a
reasonably small set of test cases that provide
maximum test coverage
 Focus is on categories of faulty logic likely to be
present in the software component (without
examining the code)
 Priorities for assessing tests using an orthogonal
array
 Detect and isolate all single mode faults
 Detect all double mode faults
 Mutimode faults

White-box or Glass Box
testing
Knowing the internal workings of a product,
tests are performed to check the workings of all
independent logic paths.
It derive test cases that:
 Guarantee that all independent paths within a module
have been exercised at least once.
 Exercise all logical decisions on their true and false
sides.
 Execute all loops at their boundaries and within their
operational bounds, and
 Exercise internal data structures to ensure their
validity.

Techniques being used: basic path and control
structure testing.

White-box or Glass Box
testing
Test data

Tests

Derives
Component
code

Test
outputs

Integration Testing
Tests complete systems or subsystems
composed of integrated components
Integration testing should be black-box testing
with tests derived from the specification
Main difficulty is localising errors
Incremental integration testing reduces this
problem.
Incremental integration strategies include
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to
integration testing
Top-down testing
 Start with the high-level system and integrate
downwards, replacing individual components with
stubs where appropriate

Bottom-up testing
 Integrate individual components in levels until the
complete system is created

In practice, most integration involves a combination
of these strategies
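A hedged sketch of the stub idea in top-down integration (class names are hypothetical): the top-level component is tested before the lower level exists by injecting a stub that returns canned answers. Bottom-up testing inverts this, using test drivers to exercise the low-level components first.

```python
# Top-down integration sketch: test the high-level ReportGenerator
# before the real database layer is integrated, using a stub.

class DatabaseStub:
    """Stub standing in for the not-yet-integrated database component."""
    def fetch_sales(self):
        return [100, 250, 50]      # canned answer, no real I/O

class ReportGenerator:
    """High-level component under test."""
    def __init__(self, db):
        self.db = db               # lower-level dependency, injected

    def total_sales(self):
        return sum(self.db.fetch_sales())

# Exercise the top-level logic against the stub.
report = ReportGenerator(DatabaseStub())
assert report.total_sales() == 400
```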

Top-down testing

[Figure: top-down testing sequence — Level 1 is tested first using Level 2 stubs, then the Level 2 components using Level 3 stubs, and so on down the hierarchy.]

Bottom-up testing

[Figure: bottom-up testing sequence — Level N components are tested first using test drivers, then integrated and tested with the Level N–1 components, and so on up to the top level.]

System Testing
 Recovery testing
 Checks the system’s ability to recover from failures.

 Security testing
 Verifies that system protection mechanisms prevent improper
penetration or data alteration

 Stress testing
 Program is checked to see how well it deals with abnormal
resource demands – quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.
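Stress testing can be sketched as driving a component with an abnormal resource demand and checking that it still behaves correctly (the CSV line parser here is illustrative, not from the slides):

```python
# Stress-testing sketch: exercise a component with abnormal volume.

def parse_csv_line(line):
    """Split a comma-separated line into its fields."""
    return line.split(",")

# Normal demand: a handful of fields.
assert parse_csv_line("a,b,c") == ["a", "b", "c"]

# Abnormal demand: a single line with 100,000 fields.
huge = ",".join(str(i) for i in range(100_000))
fields = parse_csv_line(huge)
assert len(fields) == 100_000 and fields[-1] == "99999"
```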

Object-oriented Testing

The components to be tested are object classes
that are instantiated as objects.
Classes are of larger grain than individual
functions, so approaches to white-box testing
have to be extended.
There is no obvious ‘top’ to the system for
top-down integration and testing.
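A hedged sketch of class-level testing (the Stack class is illustrative): the unit under test is a class, instantiated as an object and exercised through its public operations, including its error behaviour:

```python
# Class-level testing sketch: the unit under test is a class,
# instantiated and exercised through its operations.
import unittest

class Stack:
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()
    def size(self):
        return len(self._items)

class StackTest(unittest.TestCase):
    def test_push_then_pop_is_lifo(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)   # last pushed comes out first
        self.assertEqual(s.size(), 1)

    def test_pop_on_empty_raises(self):
        with self.assertRaises(IndexError):
            Stack().pop()

# Run the tests without exiting the interpreter.
unittest.main(argv=["stack_test"], exit=False)
```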

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List

Item No.  Test Item     Sub-item No.  Test Sub-item           Level
SR-02     Staff Review  SR-02-01      Program Officer Review  A
                        SR-02-02      Early Decline Report    A

Test-Level:
A - Basic Function, compulsory
B - Enhanced Function, compulsory
C - Enhanced Function, optional

Testing Details
SR-02 Staff Review

Item No:          SR-02-01
Test Date:
Item:             Staff Review
Sub-item:         PO Review (Report: Early Decline)
Precondition:
Test Procedure:
Test Standard:
Test description:
Test Result and Conclusion:   Passed    Failed
Sign of the Tester:
Sign of the Manager:

References
 From Software Engineering: A Practitioner’s
Approach by Roger S. Pressman
– Chapter 17: Software testing techniques
• Software testing fundamentals
• Test case design
• White-box testing: basis path, control structure testing
• Black-box testing
– Chapter 18: Software Testing Strategies
• A strategic approach to software testing
• Unit, Integration, Validation, System testing
 From Software Engineering by Ian Sommerville
– Part 5: Verification and Validation
• Chapter 19: Verification and validation
• Chapter 20: Software testing


Slide 5

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
Static
verification

Requirements
specification

Prototype

High-level
design

Formal
specification

Detailed
design

Program

Dynamic
validation

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yetundiscovered error.

Software testing priciples
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability-the better it works the more efficiently it can
be tested
 Observability-what you see is what you test
 Controllability-the better software can be controlled the
more testing can be automated and optimized
 Decomposability-by controlling the scope of testing, the
more quickly problems can be isolated and retested
intelligently
 Simplicity-the less there is to test, the more quickly we
can test
 Stability-the fewer the changes, the fewer the
disruptions to testing
 Understandability-the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
Test
results

Locate
error

Test
cases

Specification

Design
error repair

Repair
error

Re-test
program

The defect testing process
Test
cases

Design test
cases

Test
data

Prepare test
data

Test
results

Run program
with test data

Test
reports

Compare results
to test cases

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification

Project Planning
Plan

Description

Quality Plan

Describes the quality procedure and
standards that will be used in a project.

Validation Plan

Describes the approach, resources and
schedule used for system validation.

Configuration
Management Plan

Describes the configuration management
procedures and structures to be used.

Maintenance Plan

Predicts the maintenance requirements of
the system, maintenance costs and effort
required.

Staff development Plan

Describes how the skills and experience of
the project team members will be developed.

Verification and Validation
Plan
Requirements
specification

System
specification

System
integration
test plan

Acceptance
test plan

Service

System
desig n

Acceptance
test

Detailed
design

Sub-system
integration
test plan

System
integration test

Module and
unit code
and tess

Sub-system
integration test

Test Plan as a link between development and testing

Testing Process

Unit
testing
Module
testing
Sub-system
testing
System
testing
Acceptance
testing

Component
testing

Integration testing

User
testing

Testing Process
 Unit testing - Individual components are tested
independently, without other system components
 Module testing - Related collections of dependent
components( class, ADT, procedures & functions) are tested,
without other system module.
 Sub-system testing-Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validating
functional and non-functional requirements & Testing of
emergent system properties.
 Acceptance testing-Testing with customer data to check that
it is acceptable. Also called Alpha Testing

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for
intended user in his or her normal work
environment.
Alpha test-version of the complete software is
tested by customer under the supervision of the
developer at the developer’s site.
Beta test-version of the complete software is
tested by customer at his or her own site without
the developer being present

Black-box testing
 Also known as behavioral or functional testing.
 The system is a “Blackbox” whose behavior can be
determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform
and demonstrating correct operation based solely on its
specification without regard for its internal logic.
 Focus on the functional requirements of the software
i.e., information domain not the implementation part of
the software and disregards control structure.
 The program test cases are based on the system
specification
 It is performed during later stages of testing like in the
acceptance testing or beta testing.

Black-box testing

Input test data

I

Inputs causing
anomalous
behaviour

e

System

Output test results

Oe

Outputs which reveal
the presence of
defects

Black-box testing
Test are designed to answer the following questions:

 How is functional validity tested?
 How is system behavior and performance tested?
 What classes of input behavior will make good test case?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of data class isolated?
 What data rates and data volume can the system
tolerate?
 What effect will specific combinations of data have on
system operations?

Advantages of Black box testing
 Validates whether or not a given system conforms
to
its software specification
 Introduce a series of inputs to a system and
compare
the outputs to a pre-defined test specification.
 Test integration between individual system
components.
 Tests are architecture independent — they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Require no knowledge of the underlying system,
one
need not be a software engineer to design black
box
tests.

Disadvantages of Black box
testing
 Offer no guarantee that every line of code has been
tested.
 Being architecture independent, it cannot determine
the efficiency of the code.
 Will not find any errors, such as memory leaks, that
are not explicitly and instantly exposed by the
application.

Black-box testing techniques







Graph-based testing methods
Equivalence Partitioning,
Boundary Value Analysis (BVA)
Comparison Testing
Orthogonal Array Testing.

Equivalence Partitioning
 Black-box technique that divides the input domain into
classes of data from which test cases can be derived
 An ideal test case uncovers a class of errors( incorrect
processing of all incorrect data) that might require many
arbitrary test cases to be executed before a general error is
observed
 Equivalence class guidelines:
 If input condition specifies a range, one valid and two invalid
equivalence classes are defined
 If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined
 If an input condition specifies a member of a set, one valid
and one invalid equivalence class is defined
 If an input condition is Boolean, one valid and one invalid
equivalence class is defined

Equivalence Partitioning

Invalid inputs

System

Outputs

Valid in puts

Equivalence Partitioning

3
4

Less than 4

7

11
10

Between 4 and 10

More than 10

Number of input values
9999
10000

Less than 10000
Input values

50000

100000
99999

Between 10000 and 99999

More than 99999

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the
input domain rather than its center
 BVA guidelines:
 If input condition specifies a range bounded by values a and
b, test cases should include a and b, values just above and
just below a and b
 If an input condition specifies and number of values, test
cases should be exercise the minimum and maximum
numbers, as well as values just above and just below the
minimum and maximum values
 Apply guidelines 1 and 2 to output conditions, test cases
should be designed to produce the minimum and maxim
output reports
 If internal program data structures have boundaries (e.g.
size limitations), be certain to test the boundaries

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety critical systems ( such
as aircraft avionics, automobile braking system) in
which independently developed implementations of
redundant systems are tested for conformance to
specifications
 Often equivalence class partitioning is used to
develop a common set of test cases for each
implementation.

Orthogonal Array Testing
 Black-box technique that enables the design of a
reasonably small set of test cases that provide
maximum test coverage
 Focus is on categories of faulty logic likely to be
present in the software component (without
examining the code)
 Priorities for assessing tests using an orthogonal
array
 Detect and isolate all single mode faults
 Detect all double mode faults
 Mutimode faults

White-box or Glass Box
testing
Knowing the internal workings of a product,
tests are performed to check the workings of all
independent logic paths.
It derive test cases that:
 Guarantee that all independent paths within a module
have been exercised at least once.
 Exercise all logical decisions on their true and false
sides.
 Execute all loops at their boundaries and within their
operational bounds, and
 Exercise internal data structures to ensure their
validity.

Techniques being used: basic path and control
structure testing.

White-box or Glass Box
testing
Test data

Tests

Derives
Component
code

Test
outputs

Integration Testing
Tests complete systems or subsystems
composed of integrated components
Integration testing should be black-box testing
with tests derived from the specification
Main difficulty is localising errors
Incremental integration testing reduces this
problem.
Incremental integration strategies include
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to
integration testing
Top-down testing
 Start with high-level system and integrate from the
top-down replacing individual components by stubs
where appropriate

Bottom-up testing
 Integrate individual components in levels until the
complete system is created

In practice, most integration involves a combination
of these strategies

Top-down testing

Level 1

Testing
sequence

Level 2
Level 2
stubs

Level 3
stubs

Level 1

Level 2

Level 2

. ..

Level 2

Bottom-up testing

Test
drivers
Level N

Test
drivers

Level N

Level N–1

Level N

Level N–1

Level N

Level N

Level N–1

Testing
sequence

System Testing
 Recovery testing
 Checks the system’s ability to recover from failures.

 Security testing
 Verifies that system protection mechanism prevent improper
penetration or data alteration

 Stress testing
 Program is checked to see how well it deals with abnormal
resource demands – quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.

Object-oriented Testing

The components to be tested are object classes
that are instantiated as objects
Larger gain than individual functions so
approaches to white-box testing have to be
extended
No obvious ‘top’ to the system for top-down
integration and testing

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List
Item
No.
SR-02

Test Item

Sub –
Test-Sub Item
item No.
Staff Review SR-02-01 Program Officer
Review

SR-02-02 Early Decline Report

Test-Level
A- Basic Function, compulsory
B- Enhanced Function, compulsory
C- Enhanced Function, optional

Level
A

A

Testing Details
SR-02 Staff Review
Item No

SR-02-01

Test Date

Item

Staff Review

Sub-item

PO Review
Report: Early Decline

Precondition
Test Procedure

Test Standard
Test description

 Passed
 Failed

Test Result and
Conclusion
Sin of the Tester

Sign of the
Manager

References
 From software engineering, A practitioner’s
approach by Roger S. Pressman
– Chapter 17: Software testing techniques





Software Testing Fundamentals
Test case design
White-box testing- Basic path, Control Structure Testing
Black-box testing

– Chapter 18: Software Testing Strategies
• A strategic approach to software testing
• Unit, Integration, Validation, System testing

 From Software Engineering, Ian Sommerville
– Part5: Verification and Validation
• Chapter 19: Verification and validation
• Chapter 20: Software testing


Slide 6

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
Static
verification

Requirements
specification

Prototype

High-level
design

Formal
specification

Detailed
design

Program

Dynamic
validation

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yetundiscovered error.

Software testing priciples
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability-the better it works the more efficiently it can
be tested
 Observability-what you see is what you test
 Controllability-the better software can be controlled the
more testing can be automated and optimized
 Decomposability-by controlling the scope of testing, the
more quickly problems can be isolated and retested
intelligently
 Simplicity-the less there is to test, the more quickly we
can test
 Stability-the fewer the changes, the fewer the
disruptions to testing
 Understandability-the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
Test
results

Locate
error

Test
cases

Specification

Design
error repair

Repair
error

Re-test
program

The defect testing process
Test
cases

Design test
cases

Test
data

Prepare test
data

Test
results

Run program
with test data

Test
reports

Compare results
to test cases

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification

Project Planning
Plan

Description

Quality Plan

Describes the quality procedure and
standards that will be used in a project.

Validation Plan

Describes the approach, resources and
schedule used for system validation.

Configuration
Management Plan

Describes the configuration management
procedures and structures to be used.

Maintenance Plan

Predicts the maintenance requirements of
the system, maintenance costs and effort
required.

Staff development Plan

Describes how the skills and experience of
the project team members will be developed.

Verification and Validation
Plan
Requirements
specification

System
specification

System
integration
test plan

Acceptance
test plan

Service

System
desig n

Acceptance
test

Detailed
design

Sub-system
integration
test plan

System
integration test

Module and
unit code
and tess

Sub-system
integration test

Test Plan as a link between development and testing

Testing Process

Unit
testing
Module
testing
Sub-system
testing
System
testing
Acceptance
testing

Component
testing

Integration testing

User
testing

Testing Process
 Unit testing - Individual components are tested
independently, without other system components
 Module testing - Related collections of dependent
components( class, ADT, procedures & functions) are tested,
without other system module.
 Sub-system testing-Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validating
functional and non-functional requirements & Testing of
emergent system properties.
 Acceptance testing-Testing with customer data to check that
it is acceptable. Also called Alpha Testing

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for
intended user in his or her normal work
environment.
Alpha test-version of the complete software is
tested by customer under the supervision of the
developer at the developer’s site.
Beta test-version of the complete software is
tested by customer at his or her own site without
the developer being present

Black-box testing
 Also known as behavioral or functional testing.
 The system is a “Blackbox” whose behavior can be
determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform
and demonstrating correct operation based solely on its
specification without regard for its internal logic.
 Focus on the functional requirements of the software
i.e., information domain not the implementation part of
the software and disregards control structure.
 The program test cases are based on the system
specification
 It is performed during later stages of testing like in the
acceptance testing or beta testing.

Black-box testing

Input test data

I

Inputs causing
anomalous
behaviour

e

System

Output test results

Oe

Outputs which reveal
the presence of
defects

Black-box testing
Test are designed to answer the following questions:

 How is functional validity tested?
 How is system behavior and performance tested?
 What classes of input behavior will make good test case?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of data class isolated?
 What data rates and data volume can the system
tolerate?
 What effect will specific combinations of data have on
system operations?

Advantages of Black box testing
 Validates whether or not a given system conforms
to
its software specification
 Introduce a series of inputs to a system and
compare
the outputs to a pre-defined test specification.
 Test integration between individual system
components.
 Tests are architecture independent — they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Require no knowledge of the underlying system,
one
need not be a software engineer to design black
box
tests.

Disadvantages of Black box
testing
 Offer no guarantee that every line of code has been
tested.
 Being architecture independent, it cannot determine
the efficiency of the code.
 Will not find any errors, such as memory leaks, that
are not explicitly and instantly exposed by the
application.

Black-box testing techniques







Graph-based testing methods
Equivalence Partitioning,
Boundary Value Analysis (BVA)
Comparison Testing
Orthogonal Array Testing.

Equivalence Partitioning
 Black-box technique that divides the input domain into
classes of data from which test cases can be derived
 An ideal test case uncovers a class of errors( incorrect
processing of all incorrect data) that might require many
arbitrary test cases to be executed before a general error is
observed
 Equivalence class guidelines:
 If input condition specifies a range, one valid and two invalid
equivalence classes are defined
 If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined
 If an input condition specifies a member of a set, one valid
and one invalid equivalence class is defined
 If an input condition is Boolean, one valid and one invalid
equivalence class is defined

Equivalence Partitioning

Invalid inputs

System

Outputs

Valid in puts

Equivalence Partitioning

3
4

Less than 4

7

11
10

Between 4 and 10

More than 10

Number of input values
9999
10000

Less than 10000
Input values

50000

100000
99999

Between 10000 and 99999

More than 99999

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the
input domain rather than its center
 BVA guidelines:
 If input condition specifies a range bounded by values a and
b, test cases should include a and b, values just above and
just below a and b
 If an input condition specifies and number of values, test
cases should be exercise the minimum and maximum
numbers, as well as values just above and just below the
minimum and maximum values
 Apply guidelines 1 and 2 to output conditions, test cases
should be designed to produce the minimum and maxim
output reports
 If internal program data structures have boundaries (e.g.
size limitations), be certain to test the boundaries

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety critical systems ( such
as aircraft avionics, automobile braking system) in
which independently developed implementations of
redundant systems are tested for conformance to
specifications
 Often equivalence class partitioning is used to
develop a common set of test cases for each
implementation.

Orthogonal Array Testing
 Black-box technique that enables the design of a
reasonably small set of test cases that provide
maximum test coverage
 Focus is on categories of faulty logic likely to be
present in the software component (without
examining the code)
 Priorities for assessing tests using an orthogonal
array
 Detect and isolate all single mode faults
 Detect all double mode faults
 Mutimode faults

White-box or Glass Box
testing
Knowing the internal workings of a product,
tests are performed to check the workings of all
independent logic paths.
It derives test cases that:
 Guarantee that all independent paths within a module
have been exercised at least once.
 Exercise all logical decisions on their true and false
sides.
 Execute all loops at their boundaries and within their
operational bounds, and
 Exercise internal data structures to ensure their
validity.

Techniques used: basis path testing and control
structure testing.
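As a small illustration of these criteria, the hypothetical `classify_triangle` function below is paired with test cases chosen so that every decision is exercised on both its true and its false side.

```python
# White-box sketch: tests chosen from the code's structure so that
# each decision is taken on both its true and its false side.

def classify_triangle(a, b, c):
    if a + b <= c or b + c <= a or a + c <= b:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Each case drives a different path through the decisions.
cases = [
    ((1, 2, 5), "not a triangle"),  # first decision true
    ((3, 3, 3), "equilateral"),     # second decision true
    ((3, 3, 5), "isosceles"),       # third decision true
    ((3, 4, 5), "scalene"),         # all decisions false
]
for args, expected in cases:
    assert classify_triangle(*args) == expected
print("all decision outcomes exercised")
```

Note that these cases could only be chosen by reading the code: a black-box tester would not know which decisions exist.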

White-box or Glass Box
testing

[Figure: test data is derived from the component code; running the
tests on the component yields test outputs.]

Integration Testing
Tests complete systems or subsystems
composed of integrated components
Integration testing should be black-box testing
with tests derived from the specification
Main difficulty is localising errors
Incremental integration testing reduces this
problem.
Incremental integration strategies include
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to
integration testing
Top-down testing
 Start with the high-level system and integrate from the
top down, replacing individual components by stubs
where appropriate

Bottom-up testing
 Integrate individual components in levels until the
complete system is created

In practice, most integration involves a combination
of these strategies
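The stub and driver roles in these strategies can be sketched as follows; this is a hypothetical example, and `tax_rate_stub`, `net_salary`, and the fixed 20% rate are invented for illustration.

```python
# Top-down integration: a not-yet-integrated lower-level component is
# replaced by a stub. Bottom-up integration: a driver calls the
# component under test directly.

def tax_rate_stub(income):
    """Stub standing in for the real tax-rate component: returns a
    canned value so the higher-level logic can be tested first."""
    return 0.20

def net_salary(gross, rate_fn):
    """High-level component under top-down test; the rate component is
    injected, so the stub can take its place until the real one is
    integrated."""
    return gross * (1 - rate_fn(gross))

# Driver code: exercises the high-level component with the stub in place.
print(net_salary(1000, tax_rate_stub))  # 800.0
```

When the real tax-rate component is ready, it replaces `tax_rate_stub` and the same driver re-runs the tests, which is also the basis of regression testing.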

Top-down testing

[Figure: the testing sequence starts at Level 1; Level 2 components
are first replaced by stubs, then integrated and tested in turn,
with Level 3 stubs standing in for the components below them.]

Bottom-up testing

[Figure: the testing sequence proceeds upward; Level N components
are exercised by test drivers, then integrated into Level N–1
components, which are driven in turn.]

System Testing
 Recovery testing
 Checks the system’s ability to recover from failures.

 Security testing
 Verifies that system protection mechanisms prevent improper
penetration or data alteration

 Stress testing
 Program is checked to see how well it deals with abnormal
resource demands – quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.

Object-oriented Testing

The components to be tested are object classes
that are instantiated as objects
Object classes are larger-grain components than individual
functions, so approaches to white-box testing have to be
extended
No obvious ‘top’ to the system for top-down
integration and testing

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List

Item No.  Test Item     Sub-item No.  Test Sub-item           Level
SR-02     Staff Review  SR-02-01      Program Officer Review  A
                        SR-02-02      Early Decline Report    A

Test-Level
A - Basic Function, compulsory
B - Enhanced Function, compulsory
C - Enhanced Function, optional

Testing Details
SR-02 Staff Review

Item No: SR-02-01                Test Date:
Item: Staff Review
Sub-item: PO Review
Report: Early Decline
Precondition:
Test Procedure:
Test Standard / Test description:
Test Result and Conclusion:   Passed    Failed
Sign of the Tester:              Sign of the Manager:

References
 From Software Engineering: A Practitioner's
Approach by Roger S. Pressman
– Chapter 17: Software Testing Techniques
• Software testing fundamentals
• Test-case design
• White-box testing: basis path, control structure testing
• Black-box testing

– Chapter 18: Software Testing Strategies
• A strategic approach to software testing
• Unit, Integration, Validation, System testing

 From Software Engineering by Ian Sommerville
– Part 5: Verification and Validation
• Chapter 19: Verification and validation
• Chapter 20: Software testing


Slide 7

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
Static
verification

Requirements
specification

Prototype

High-level
design

Formal
specification

Detailed
design

Program

Dynamic
validation

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yetundiscovered error.

Software testing priciples
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability-the better it works the more efficiently it can
be tested
 Observability-what you see is what you test
 Controllability-the better software can be controlled the
more testing can be automated and optimized
 Decomposability-by controlling the scope of testing, the
more quickly problems can be isolated and retested
intelligently
 Simplicity-the less there is to test, the more quickly we
can test
 Stability-the fewer the changes, the fewer the
disruptions to testing
 Understandability-the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
Test
results

Locate
error

Test
cases

Specification

Design
error repair

Repair
error

Re-test
program

The defect testing process
Test
cases

Design test
cases

Test
data

Prepare test
data

Test
results

Run program
with test data

Test
reports

Compare results
to test cases

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification

Project Planning
Plan

Description

Quality Plan

Describes the quality procedure and
standards that will be used in a project.

Validation Plan

Describes the approach, resources and
schedule used for system validation.

Configuration
Management Plan

Describes the configuration management
procedures and structures to be used.

Maintenance Plan

Predicts the maintenance requirements of
the system, maintenance costs and effort
required.

Staff development Plan

Describes how the skills and experience of
the project team members will be developed.

Verification and Validation
Plan
Requirements
specification

System
specification

System
integration
test plan

Acceptance
test plan

Service

System
desig n

Acceptance
test

Detailed
design

Sub-system
integration
test plan

System
integration test

Module and
unit code
and tess

Sub-system
integration test

Test Plan as a link between development and testing

Testing Process

Unit
testing
Module
testing
Sub-system
testing
System
testing
Acceptance
testing

Component
testing

Integration testing

User
testing

Testing Process
 Unit testing - Individual components are tested
independently, without other system components
 Module testing - Related collections of dependent
components( class, ADT, procedures & functions) are tested,
without other system module.
 Sub-system testing-Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validating
functional and non-functional requirements & Testing of
emergent system properties.
 Acceptance testing-Testing with customer data to check that
it is acceptable. Also called Alpha Testing

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for
intended user in his or her normal work
environment.
Alpha test-version of the complete software is
tested by customer under the supervision of the
developer at the developer’s site.
Beta test-version of the complete software is
tested by customer at his or her own site without
the developer being present

Black-box testing
 Also known as behavioral or functional testing.
 The system is a “Blackbox” whose behavior can be
determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform
and demonstrating correct operation based solely on its
specification without regard for its internal logic.
 Focus on the functional requirements of the software
i.e., information domain not the implementation part of
the software and disregards control structure.
 The program test cases are based on the system
specification
 It is performed during later stages of testing like in the
acceptance testing or beta testing.

Black-box testing

Input test data

I

Inputs causing
anomalous
behaviour

e

System

Output test results

Oe

Outputs which reveal
the presence of
defects

Black-box testing
Test are designed to answer the following questions:

 How is functional validity tested?
 How is system behavior and performance tested?
 What classes of input behavior will make good test case?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of data class isolated?
 What data rates and data volume can the system
tolerate?
 What effect will specific combinations of data have on
system operations?

Advantages of Black box testing
 Validates whether or not a given system conforms
to
its software specification
 Introduce a series of inputs to a system and
compare
the outputs to a pre-defined test specification.
 Test integration between individual system
components.
 Tests are architecture independent — they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Require no knowledge of the underlying system,
one
need not be a software engineer to design black
box
tests.

Disadvantages of Black box
testing
 Offer no guarantee that every line of code has been
tested.
 Being architecture independent, it cannot determine
the efficiency of the code.
 Will not find any errors, such as memory leaks, that
are not explicitly and instantly exposed by the
application.

Black-box testing techniques







Graph-based testing methods
Equivalence Partitioning,
Boundary Value Analysis (BVA)
Comparison Testing
Orthogonal Array Testing.

Equivalence Partitioning
 Black-box technique that divides the input domain into
classes of data from which test cases can be derived
 An ideal test case uncovers a class of errors( incorrect
processing of all incorrect data) that might require many
arbitrary test cases to be executed before a general error is
observed
 Equivalence class guidelines:
 If input condition specifies a range, one valid and two invalid
equivalence classes are defined
 If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined
 If an input condition specifies a member of a set, one valid
and one invalid equivalence class is defined
 If an input condition is Boolean, one valid and one invalid
equivalence class is defined

Equivalence Partitioning

Invalid inputs

System

Outputs

Valid in puts

Equivalence Partitioning

3
4

Less than 4

7

11
10

Between 4 and 10

More than 10

Number of input values
9999
10000

Less than 10000
Input values

50000

100000
99999

Between 10000 and 99999

More than 99999

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the
input domain rather than its center
 BVA guidelines:
 If input condition specifies a range bounded by values a and
b, test cases should include a and b, values just above and
just below a and b
 If an input condition specifies and number of values, test
cases should be exercise the minimum and maximum
numbers, as well as values just above and just below the
minimum and maximum values
 Apply guidelines 1 and 2 to output conditions, test cases
should be designed to produce the minimum and maxim
output reports
 If internal program data structures have boundaries (e.g.
size limitations), be certain to test the boundaries

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety critical systems ( such
as aircraft avionics, automobile braking system) in
which independently developed implementations of
redundant systems are tested for conformance to
specifications
 Often equivalence class partitioning is used to
develop a common set of test cases for each
implementation.

Orthogonal Array Testing
 Black-box technique that enables the design of a
reasonably small set of test cases that provide
maximum test coverage
 Focus is on categories of faulty logic likely to be
present in the software component (without
examining the code)
 Priorities for assessing tests using an orthogonal
array
 Detect and isolate all single mode faults
 Detect all double mode faults
 Mutimode faults

White-box or Glass Box
testing
Knowing the internal workings of a product,
tests are performed to check the workings of all
independent logic paths.
It derive test cases that:
 Guarantee that all independent paths within a module
have been exercised at least once.
 Exercise all logical decisions on their true and false
sides.
 Execute all loops at their boundaries and within their
operational bounds, and
 Exercise internal data structures to ensure their
validity.

Techniques being used: basic path and control
structure testing.

White-box or Glass Box
testing
Test data

Tests

Derives
Component
code

Test
outputs

Integration Testing
Tests complete systems or subsystems
composed of integrated components
Integration testing should be black-box testing
with tests derived from the specification
Main difficulty is localising errors
Incremental integration testing reduces this
problem.
Incremental integration strategies include
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to
integration testing
Top-down testing
 Start with high-level system and integrate from the
top-down replacing individual components by stubs
where appropriate

Bottom-up testing
 Integrate individual components in levels until the
complete system is created

In practice, most integration involves a combination
of these strategies

Top-down testing

Level 1

Testing
sequence

Level 2
Level 2
stubs

Level 3
stubs

Level 1

Level 2

Level 2

. ..

Level 2

Bottom-up testing

Test
drivers
Level N

Test
drivers

Level N

Level N–1

Level N

Level N–1

Level N

Level N

Level N–1

Testing
sequence

System Testing
 Recovery testing
 Checks the system’s ability to recover from failures.

 Security testing
 Verifies that system protection mechanism prevent improper
penetration or data alteration

 Stress testing
 Program is checked to see how well it deals with abnormal
resource demands – quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.

Object-oriented Testing

The components to be tested are object classes
that are instantiated as objects
Larger gain than individual functions so
approaches to white-box testing have to be
extended
No obvious ‘top’ to the system for top-down
integration and testing

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List
Item
No.
SR-02

Test Item

Sub –
Test-Sub Item
item No.
Staff Review SR-02-01 Program Officer
Review

SR-02-02 Early Decline Report

Test-Level
A- Basic Function, compulsory
B- Enhanced Function, compulsory
C- Enhanced Function, optional

Level
A

A

Testing Details
SR-02 Staff Review
Item No

SR-02-01

Test Date

Item

Staff Review

Sub-item

PO Review
Report: Early Decline

Precondition
Test Procedure

Test Standard
Test description

 Passed
 Failed

Test Result and
Conclusion
Sin of the Tester

Sign of the
Manager

References
 From software engineering, A practitioner’s
approach by Roger S. Pressman
– Chapter 17: Software testing techniques





Software Testing Fundamentals
Test case design
White-box testing- Basic path, Control Structure Testing
Black-box testing

– Chapter 18: Software Testing Strategies
• A strategic approach to software testing
• Unit, Integration, Validation, System testing

 From Software Engineering, Ian Sommerville
– Part5: Verification and Validation
• Chapter 19: Verification and validation
• Chapter 20: Software testing


Slide 8

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
Static
verification

Requirements
specification

Prototype

High-level
design

Formal
specification

Detailed
design

Program

Dynamic
validation

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yetundiscovered error.

Software testing priciples
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability-the better it works the more efficiently it can
be tested
 Observability-what you see is what you test
 Controllability-the better software can be controlled the
more testing can be automated and optimized
 Decomposability-by controlling the scope of testing, the
more quickly problems can be isolated and retested
intelligently
 Simplicity-the less there is to test, the more quickly we
can test
 Stability-the fewer the changes, the fewer the
disruptions to testing
 Understandability-the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
Test
results

Locate
error

Test
cases

Specification

Design
error repair

Repair
error

Re-test
program

The defect testing process
Test
cases

Design test
cases

Test
data

Prepare test
data

Test
results

Run program
with test data

Test
reports

Compare results
to test cases

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification

Project Planning
Plan

Description

Quality Plan

Describes the quality procedure and
standards that will be used in a project.

Validation Plan

Describes the approach, resources and
schedule used for system validation.

Configuration
Management Plan

Describes the configuration management
procedures and structures to be used.

Maintenance Plan

Predicts the maintenance requirements of
the system, maintenance costs and effort
required.

Staff development Plan

Describes how the skills and experience of
the project team members will be developed.

Verification and Validation
Plan
Requirements
specification

System
specification

System
integration
test plan

Acceptance
test plan

Service

System
desig n

Acceptance
test

Detailed
design

Sub-system
integration
test plan

System
integration test

Module and
unit code
and tess

Sub-system
integration test

Test Plan as a link between development and testing

Testing Process

Unit
testing
Module
testing
Sub-system
testing
System
testing
Acceptance
testing

Component
testing

Integration testing

User
testing

Testing Process
 Unit testing - Individual components are tested
independently, without other system components
 Module testing - Related collections of dependent
components( class, ADT, procedures & functions) are tested,
without other system module.
 Sub-system testing-Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validating
functional and non-functional requirements & Testing of
emergent system properties.
 Acceptance testing-Testing with customer data to check that
it is acceptable. Also called Alpha Testing

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for
intended user in his or her normal work
environment.
Alpha test-version of the complete software is
tested by customer under the supervision of the
developer at the developer’s site.
Beta test-version of the complete software is
tested by customer at his or her own site without
the developer being present

Black-box testing
 Also known as behavioral or functional testing.
 The system is a “Blackbox” whose behavior can be
determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform
and demonstrating correct operation based solely on its
specification without regard for its internal logic.
 Focus on the functional requirements of the software
i.e., information domain not the implementation part of
the software and disregards control structure.
 The program test cases are based on the system
specification
 It is performed during later stages of testing like in the
acceptance testing or beta testing.

Black-box testing

Input test data

I

Inputs causing
anomalous
behaviour

e

System

Output test results

Oe

Outputs which reveal
the presence of
defects

Black-box testing
Test are designed to answer the following questions:

 How is functional validity tested?
 How is system behavior and performance tested?
 What classes of input behavior will make good test case?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of data class isolated?
 What data rates and data volume can the system
tolerate?
 What effect will specific combinations of data have on
system operations?

Advantages of Black box testing
 Validates whether or not a given system conforms
to
its software specification
 Introduce a series of inputs to a system and
compare
the outputs to a pre-defined test specification.
 Test integration between individual system
components.
 Tests are architecture independent — they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Require no knowledge of the underlying system,
one
need not be a software engineer to design black
box
tests.

Disadvantages of Black box
testing
 Offer no guarantee that every line of code has been
tested.
 Being architecture independent, it cannot determine
the efficiency of the code.
 Will not find any errors, such as memory leaks, that
are not explicitly and instantly exposed by the
application.

Black-box testing techniques







Graph-based testing methods
Equivalence Partitioning,
Boundary Value Analysis (BVA)
Comparison Testing
Orthogonal Array Testing.

Equivalence Partitioning
 Black-box technique that divides the input domain into
classes of data from which test cases can be derived
 An ideal test case uncovers a class of errors( incorrect
processing of all incorrect data) that might require many
arbitrary test cases to be executed before a general error is
observed
 Equivalence class guidelines:
 If input condition specifies a range, one valid and two invalid
equivalence classes are defined
 If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined
 If an input condition specifies a member of a set, one valid
and one invalid equivalence class is defined
 If an input condition is Boolean, one valid and one invalid
equivalence class is defined

Equivalence Partitioning

Invalid inputs

System

Outputs

Valid in puts

Equivalence Partitioning

3
4

Less than 4

7

11
10

Between 4 and 10

More than 10

Number of input values
9999
10000

Less than 10000
Input values

50000

100000
99999

Between 10000 and 99999

More than 99999

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the
input domain rather than its center
 BVA guidelines:
 If an input condition specifies a range bounded by values a and
b, test cases should include a and b, and values just above and
just below a and b
 If an input condition specifies a number of values, test
cases should exercise the minimum and maximum
numbers, as well as values just above and just below the
minimum and maximum values
 Apply guidelines 1 and 2 to output conditions; test cases
should be designed to produce the minimum and maximum
output reports
 If internal program data structures have boundaries (e.g.
size limitations), be certain to test those boundaries
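Guideline 1 above can be sketched as a small helper that, given a range [a, b], emits the boundary test inputs: both bounds plus the values just above and just below each (a sketch for integer inputs with step 1; the helper name is illustrative).

```python
# BVA sketch: for a range bounded by a and b, test a, b, and the
# values just above and just below each bound (guideline 1).

def boundary_values(a, b, step=1):
    """Return BVA test inputs for an integer range bounded by a and b."""
    return [a - step, a, a + step, b - step, b, b + step]

print(boundary_values(4, 10))  # [3, 4, 5, 9, 10, 11]
```

Note how this complements equivalence partitioning: partitioning picks one value per class, while BVA deliberately clusters test values where classes meet, since off-by-one errors concentrate at the boundaries.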

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety-critical systems (such
as aircraft avionics or automobile braking systems) in
which independently developed implementations of
redundant systems are tested for conformance to
specifications
 Often equivalence class partitioning is used to
develop a common set of test cases for each
implementation.
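The idea can be sketched in a few lines: two independently developed implementations of the same specification (here integer square root, chosen purely for illustration) are run on a common set of test cases, and any disagreement flags a defect in at least one of them.

```python
# Back-to-back testing sketch: two independent implementations of the
# same specification are run on common test cases and compared.
import math

def isqrt_impl_a(n):
    """Implementation A: library-based."""
    return math.isqrt(n)

def isqrt_impl_b(n):
    """Implementation B: simple linear search, developed independently."""
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r

# A common test set, e.g. derived by equivalence class partitioning.
common_cases = [0, 1, 2, 15, 16, 17, 10000]
for n in common_cases:
    assert isqrt_impl_a(n) == isqrt_impl_b(n), f"implementations disagree on {n}"
```

A disagreement does not say which version is wrong, only that the outputs diverge; the specification must then arbitrate.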

Orthogonal Array Testing
 Black-box technique that enables the design of a
reasonably small set of test cases that provide
maximum test coverage
 Focus is on categories of faulty logic likely to be
present in the software component (without
examining the code)
 Priorities for assessing tests using an orthogonal
array
 Detect and isolate all single-mode faults
 Detect all double-mode faults
 Detect multimode faults
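As a concrete sketch, the standard L4 orthogonal array covers three two-level factors in only four test runs (versus eight for exhaustive testing), while still exercising every pairwise combination of factor levels, which is what lets it detect single- and double-mode faults:

```python
# Orthogonal array sketch: the L4 array tests three two-level factors
# in 4 runs instead of the 2**3 = 8 exhaustive combinations.
from itertools import combinations

L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

# Every pair of factors sees all four level combinations exactly once.
for i, j in combinations(range(3), 2):
    pairs = {(row[i], row[j]) for row in L4}
    assert pairs == {(0, 0), (0, 1), (1, 0), (1, 1)}
```

The factors here are abstract (0/1 levels); in practice each column would map to a configuration option or input category of the component under test.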

White-box or Glass Box
testing
Tests are designed with knowledge of the internal
workings of the product, checking the workings of all
independent logic paths.
White-box testing derives test cases that:
 Guarantee that all independent paths within a module
have been exercised at least once.
 Exercise all logical decisions on their true and false
sides.
 Execute all loops at their boundaries and within their
operational bounds, and
 Exercise internal data structures to ensure their
validity.

Techniques used: basis path and control
structure testing.
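A minimal basis-path sketch: the illustrative function below has two decisions, so its cyclomatic complexity is 3 and three independent paths must each be exercised at least once (the function and thresholds are invented for the example).

```python
# White-box sketch: two decisions give cyclomatic complexity 3,
# so three independent paths must each be exercised once.

def grade(score):
    if score >= 70:          # path 1: first decision true
        result = "A"
    elif score >= 40:        # path 2: first false, second true
        result = "pass"
    else:                    # path 3: both decisions false
        result = "fail"
    return result

# One test case per independent path in the basis set.
path_tests = {85: "A", 55: "pass", 20: "fail"}
for score, expected in path_tests.items():
    assert grade(score) == expected
```

No black-box technique could guarantee this coverage: a specification-derived test set might happen to exercise only two of the three paths.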

White-box or Glass Box
testing

(Diagram: test data is derived from the component code; running the tests
produces test outputs that are checked against expected results)

Integration Testing
Tests complete systems or subsystems
composed of integrated components
Integration testing should be black-box testing
with tests derived from the specification
Main difficulty is localising errors
Incremental integration testing reduces this
problem.
Incremental integration strategies include
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to
integration testing
Top-down testing
 Start with the high-level system and integrate from the
top down, replacing individual components with stubs
where appropriate

Bottom-up testing
 Integrate individual components in levels until the
complete system is created

In practice, most integration involves a combination
of these strategies
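The stub role in top-down integration can be sketched as follows: while a lower-level component is not yet integrated, a stub exposing the same interface returns canned data so the higher-level component can be tested first (all class and item names here are illustrative, borrowing the SR-02-02 label from the acceptance-test example later in these slides).

```python
# Top-down integration sketch: a stub stands in for a not-yet-integrated
# lower-level component so the high-level service can be tested first.

class ReportStoreStub:
    """Stub: returns canned data instead of querying a real store."""
    def fetch(self, report_id):
        return {"id": report_id, "status": "early decline"}

class ReviewService:
    """High-level component under test."""
    def __init__(self, store):
        self.store = store          # real component or stub, same interface

    def summarise(self, report_id):
        report = self.store.fetch(report_id)
        return f"{report['id']}: {report['status']}"

service = ReviewService(ReportStoreStub())
print(service.summarise("SR-02-02"))  # SR-02-02: early decline
```

In bottom-up testing the roles reverse: the real low-level component exists, and a test driver plays the part of the missing caller.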

Top-down testing

(Diagram: the testing sequence starts at Level 1, then integrates the
Level 2 components, with Level 2 and Level 3 stubs standing in for the
layers not yet integrated)

Bottom-up testing

(Diagram: test drivers exercise the Level N components first; the testing
sequence then moves up, with drivers exercising the integrated Level N–1
components)

System Testing
 Recovery testing
 Checks the system’s ability to recover from failures.

 Security testing
 Verifies that system protection mechanisms prevent improper
penetration or data alteration

 Stress testing
 Program is checked to see how well it deals with abnormal
resource demands – quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.

Object-oriented Testing

The components to be tested are object classes
that are instantiated as objects
Object classes are larger-grain than individual
functions, so approaches to white-box testing
have to be extended
No obvious ‘top’ to the system for top-down
integration and testing

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List

Item No.  Test Item     Sub-item No.  Test Sub-item           Level
SR-02     Staff Review  SR-02-01      Program Officer Review  A
                        SR-02-02      Early Decline Report    A

Test Levels:
A - Basic Function, compulsory
B - Enhanced Function, compulsory
C - Enhanced Function, optional

Testing Details

SR-02 Staff Review

Item No:       SR-02-01                 Test Date:
Item:          Staff Review
Sub-item:      PO Review
               Report: Early Decline
Precondition:
Test Procedure:
Test Standard: Test description
Test Result and Conclusion:   [ ] Passed   [ ] Failed
Sign of the Tester:           Sign of the Manager:

References
 From Software Engineering: A Practitioner's Approach
by Roger S. Pressman
– Chapter 17: Software Testing Techniques
• Software testing fundamentals
• Test case design
• White-box testing: basis path, control structure testing
• Black-box testing
– Chapter 18: Software Testing Strategies
• A strategic approach to software testing
• Unit, integration, validation, system testing
 From Software Engineering by Ian Sommerville
– Part 5: Verification and Validation
• Chapter 19: Verification and Validation
• Chapter 20: Software Testing

Slide 10

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
Static
verification

Requirements
specification

Prototype

High-level
design

Formal
specification

Detailed
design

Program

Dynamic
validation

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yetundiscovered error.

Software testing priciples
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability-the better it works the more efficiently it can
be tested
 Observability-what you see is what you test
 Controllability-the better software can be controlled the
more testing can be automated and optimized
 Decomposability-by controlling the scope of testing, the
more quickly problems can be isolated and retested
intelligently
 Simplicity-the less there is to test, the more quickly we
can test
 Stability-the fewer the changes, the fewer the
disruptions to testing
 Understandability-the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
Test
results

Locate
error

Test
cases

Specification

Design
error repair

Repair
error

Re-test
program

The defect testing process
Test
cases

Design test
cases

Test
data

Prepare test
data

Test
results

Run program
with test data

Test
reports

Compare results
to test cases

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification

Project Planning
Plan

Description

Quality Plan

Describes the quality procedure and
standards that will be used in a project.

Validation Plan

Describes the approach, resources and
schedule used for system validation.

Configuration
Management Plan

Describes the configuration management
procedures and structures to be used.

Maintenance Plan

Predicts the maintenance requirements of
the system, maintenance costs and effort
required.

Staff development Plan

Describes how the skills and experience of
the project team members will be developed.

Verification and Validation
Plan
Requirements
specification

System
specification

System
integration
test plan

Acceptance
test plan

Service

System
desig n

Acceptance
test

Detailed
design

Sub-system
integration
test plan

System
integration test

Module and
unit code
and tess

Sub-system
integration test

Test Plan as a link between development and testing

Testing Process

Unit
testing
Module
testing
Sub-system
testing
System
testing
Acceptance
testing

Component
testing

Integration testing

User
testing

Testing Process
 Unit testing - Individual components are tested
independently, without other system components
 Module testing - Related collections of dependent
components( class, ADT, procedures & functions) are tested,
without other system module.
 Sub-system testing-Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validating
functional and non-functional requirements & Testing of
emergent system properties.
 Acceptance testing-Testing with customer data to check that
it is acceptable. Also called Alpha Testing

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for
intended user in his or her normal work
environment.
Alpha test-version of the complete software is
tested by customer under the supervision of the
developer at the developer’s site.
Beta test-version of the complete software is
tested by customer at his or her own site without
the developer being present

Black-box testing
 Also known as behavioral or functional testing.
 The system is a “Blackbox” whose behavior can be
determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform
and demonstrating correct operation based solely on its
specification without regard for its internal logic.
 Focus on the functional requirements of the software
i.e., information domain not the implementation part of
the software and disregards control structure.
 The program test cases are based on the system
specification
 It is performed during later stages of testing like in the
acceptance testing or beta testing.

Black-box testing

Input test data

I

Inputs causing
anomalous
behaviour

e

System

Output test results

Oe

Outputs which reveal
the presence of
defects

Black-box testing
Test are designed to answer the following questions:

 How is functional validity tested?
 How is system behavior and performance tested?
 What classes of input behavior will make good test case?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of data class isolated?
 What data rates and data volume can the system
tolerate?
 What effect will specific combinations of data have on
system operations?

Advantages of Black box testing
 Validates whether or not a given system conforms
to
its software specification
 Introduce a series of inputs to a system and
compare
the outputs to a pre-defined test specification.
 Test integration between individual system
components.
 Tests are architecture independent — they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Require no knowledge of the underlying system,
one
need not be a software engineer to design black
box
tests.

Disadvantages of Black box
testing
 Offer no guarantee that every line of code has been
tested.
 Being architecture independent, it cannot determine
the efficiency of the code.
 Will not find any errors, such as memory leaks, that
are not explicitly and instantly exposed by the
application.

Black-box testing techniques







Graph-based testing methods
Equivalence Partitioning,
Boundary Value Analysis (BVA)
Comparison Testing
Orthogonal Array Testing.

Equivalence Partitioning
 Black-box technique that divides the input domain into classes of data from which test cases can be derived.
 An ideal test case uncovers a class of errors (e.g. incorrect processing of all incorrect data) that might otherwise require many arbitrary test cases to be executed before the general error is observed.
 Equivalence class guidelines:
 If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
 If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
 If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
 If an input condition is Boolean, one valid and one invalid equivalence class are defined.

Equivalence Partitioning

[Diagram: valid and invalid input partitions feed into the system, which produces outputs.]

Equivalence Partitioning (example)

Number of input values: three partitions (less than 4, between 4 and 10, more than 10), with test values 3, 4, 7, 10, and 11.
Input values: three partitions (less than 10000, between 10000 and 99999, more than 99999), with test values 9999, 10000, 50000, 99999, and 100000.
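The first example above (a valid range of 4 to 10, with invalid classes below and above it) can be sketched as code that classifies inputs and keeps one representative value per equivalence class; the function and class names are illustrative.

```python
# Equivalence partitioning sketch for the slide's example:
# valid partition 4..10, invalid partitions "< 4" and "> 10".

def partition_of(n, low=4, high=10):
    """Classify an input value into its equivalence class."""
    if n < low:
        return "invalid-low"
    if n > high:
        return "invalid-high"
    return "valid"

# One representative test value per class stands in for the whole class
# (any value inside the class would do).
representatives = {"invalid-low": 3, "valid": 7, "invalid-high": 11}

for cls, value in representatives.items():
    assert partition_of(value) == cls   # representative lands in its class
print(sorted(representatives.values()))  # → [3, 7, 11]
```

Three test cases now cover what would otherwise take many arbitrary inputs, which is the point of the technique.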

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the input domain rather than its center.
 BVA guidelines:
 If an input condition specifies a range bounded by values a and b, test cases should include a and b, and values just above and just below a and b.
 If an input condition specifies a number of values, test cases should exercise the minimum and maximum numbers, as well as values just above and just below the minimum and maximum values.
 Apply guidelines 1 and 2 to output conditions: test cases should be designed to produce the minimum and maximum output reports.
 If internal program data structures have boundaries (e.g. size limitations), be certain to test the boundaries.
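Guideline 1 can be sketched as a small generator: for a range bounded by a and b it emits a, b, and the values just above and just below each boundary. Integer inputs with step 1 are an assumption here.

```python
# Boundary value analysis sketch (guideline 1 above): for an input
# range bounded by a and b, test a, b, and the values just above and
# just below each boundary. Assumes integer inputs with step 1.

def bva_values(a, b, step=1):
    candidates = [a - step, a, a + step, b - step, b, b + step]
    return sorted(set(candidates))      # dedupe in case the range is tiny

print(bva_values(4, 10))   # → [3, 4, 5, 9, 10, 11]
```

Note how the values match the equivalence-partitioning example above: BVA deliberately picks its representatives at the edges of each partition rather than at its center.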

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety-critical systems (such as aircraft avionics or automobile braking systems) in which independently developed implementations of redundant systems are tested for conformance to specifications.
 Often equivalence class partitioning is used to develop a common set of test cases for each implementation.
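A minimal back-to-back sketch: two independently written implementations of the same specification run on a common test set, and any disagreement flags a potential defect in one of them. Both median implementations here are hypothetical stand-ins for independently developed redundant components.

```python
# Comparison (back-to-back) testing sketch: run two independent
# implementations of one specification on the same test set and
# report every input on which they disagree.

def median_impl_a(xs):                 # "team A" implementation
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def median_impl_b(xs):                 # "team B" implementation
    s = sorted(xs, reverse=True)
    n = len(s)
    mid = (n - 1) // 2
    return s[mid] if n % 2 else (s[mid] + s[mid + 1]) / 2

# Common test set, one case per equivalence class (odd/even length, etc.)
common_tests = [[1], [1, 2], [3, 1, 2], [5, 5, 5, 5], [9, 1, 8, 2, 7]]

disagreements = [t for t in common_tests
                 if median_impl_a(t) != median_impl_b(t)]
print(disagreements)   # → []
```

An empty list does not prove either implementation correct; it only shows they conform to each other on these cases, which is all comparison testing claims.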

Orthogonal Array Testing
 Black-box technique that enables the design of a reasonably small set of test cases that provide maximum test coverage.
 Focus is on categories of faulty logic likely to be present in the software component (without examining the code).
 Priorities for assessing tests using an orthogonal array:
 Detect and isolate all single-mode faults.
 Detect all double-mode faults.
 Detect all multimode faults.
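The standard illustration of the idea is an L4 orthogonal array: three two-level factors would need 8 tests exhaustively, but 4 carefully chosen rows still show every pair of factors in all four value combinations, which is what catches double-mode faults. The array below is the textbook L4(2^3) layout.

```python
# Orthogonal array testing sketch: an L4(2^3) array exercises three
# two-level factors in four test cases, yet every pair of columns
# still contains all four level combinations (double-mode coverage).

from itertools import combinations, product

L4 = [  # rows = test cases, columns = factors, entries = factor levels
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

def pairwise_covered(array):
    """True if each pair of columns contains all 4 level combinations."""
    n_factors = len(array[0])
    for i, j in combinations(range(n_factors), 2):
        seen = {(row[i], row[j]) for row in array}
        if seen != set(product((0, 1), repeat=2)):
            return False
    return True

print(len(L4), pairwise_covered(L4))   # → 4 True
```

Four tests instead of eight, with no pairwise combination lost: that is the "reasonably small set with maximum coverage" the slide describes.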

White-box or Glass-box testing
Knowing the internal workings of a product, tests are performed to check the workings of all independent logic paths.
It derives test cases that:
 Guarantee that all independent paths within a module have been exercised at least once.
 Exercise all logical decisions on their true and false sides.
 Execute all loops at their boundaries and within their operational bounds, and
 Exercise internal data structures to ensure their validity.

Techniques used: basis path and control structure testing.
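The second goal (every decision taken on both its true and false side) can be sketched on a small hypothetical function: three inputs are enough to drive each branch both ways.

```python
# White-box testing sketch: test cases are chosen from the code's
# structure so each decision is exercised on both its true and false
# sides (branch coverage on a small hypothetical function).

def classify(x):
    if x < 0:          # decision 1
        return "negative"
    if x % 2 == 0:     # decision 2
        return "even"
    return "odd"

cases = {
    -1: "negative",    # decision 1 true
    2: "even",         # decision 1 false, decision 2 true
    3: "odd",          # decision 1 false, decision 2 false
}
for x, expected in cases.items():
    assert classify(x) == expected
print("all branches exercised")
```

Unlike the black-box cases earlier, these inputs were picked by reading the code, which is exactly the distinction the two techniques turn on.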

White-box or Glass-box testing

[Diagram: tests are derived from the component code; test data drives the tests, which produce test outputs.]

Integration Testing
Tests complete systems or subsystems composed of integrated components.
Integration testing should be black-box testing, with tests derived from the specification.
The main difficulty is localising errors; incremental integration testing reduces this problem.
Incremental integration strategies include:
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to integration testing
Top-down testing
 Start with the high-level system and integrate from the top down, replacing individual components with stubs where appropriate.

Bottom-up testing
 Integrate individual components in levels until the complete system is created.

In practice, most integration involves a combination of these strategies.

Top-down testing

[Diagram: the testing sequence starts at Level 1, tested against Level 2 stubs; the Level 2 components are then integrated and tested against Level 3 stubs, and so on down the hierarchy.]

Bottom-up testing

[Diagram: the testing sequence starts at Level N, where test drivers exercise the lowest-level components; the Level N–1 components are then integrated and tested with their own drivers, and so on up the hierarchy.]

System Testing
 Recovery testing
 Checks the system's ability to recover from failures.

 Security testing
 Verifies that system protection mechanisms prevent improper penetration or data alteration.

 Stress testing
 The program is checked to see how well it deals with abnormal resource demands – quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software, especially real-time software.

Object-oriented Testing

The components to be tested are object classes that are instantiated as objects.
Object classes are larger grain than individual functions, so approaches to white-box testing have to be extended.
There is no obvious 'top' to the system for top-down integration and testing.

Acceptance Test Format
 Test Item List
 Identification of test items
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of test items

Test-item List

Item No. | Test Item    | Sub-item No. | Test Sub-item          | Level
SR-02    | Staff Review | SR-02-01     | Program Officer Review | A
         |              | SR-02-02     | Early Decline Report   | A

Test levels: A – basic function, compulsory; B – enhanced function, compulsory; C – enhanced function, optional.

Testing Details

SR-02 Staff Review
Item No: SR-02-01                     Test Date:
Item: Staff Review
Sub-item: PO Review (Report: Early Decline)
Precondition:
Test Procedure:
Test Standard:
Test description:
Test Result and Conclusion:  Passed   Failed
Sign of the Tester:                   Sign of the Manager:

References
 From Software Engineering: A Practitioner's Approach by Roger S. Pressman
– Chapter 17: Software testing techniques
• Software testing fundamentals
• Test case design
• White-box testing: basis path, control structure testing
• Black-box testing
– Chapter 18: Software testing strategies
• A strategic approach to software testing
• Unit, integration, validation, and system testing
 From Software Engineering by Ian Sommerville
– Part 5: Verification and Validation
• Chapter 19: Verification and validation
• Chapter 20: Software testing

Slide 11

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
Static
verification

Requirements
specification

Prototype

High-level
design

Formal
specification

Detailed
design

Program

Dynamic
validation

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yetundiscovered error.

Software testing priciples
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability-the better it works the more efficiently it can
be tested
 Observability-what you see is what you test
 Controllability-the better software can be controlled the
more testing can be automated and optimized
 Decomposability-by controlling the scope of testing, the
more quickly problems can be isolated and retested
intelligently
 Simplicity-the less there is to test, the more quickly we
can test
 Stability-the fewer the changes, the fewer the
disruptions to testing
 Understandability-the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
Test
results

Locate
error

Test
cases

Specification

Design
error repair

Repair
error

Re-test
program

The defect testing process
Test
cases

Design test
cases

Test
data

Prepare test
data

Test
results

Run program
with test data

Test
reports

Compare results
to test cases

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification

Project Planning
Plan

Description

Quality Plan

Describes the quality procedure and
standards that will be used in a project.

Validation Plan

Describes the approach, resources and
schedule used for system validation.

Configuration
Management Plan

Describes the configuration management
procedures and structures to be used.

Maintenance Plan

Predicts the maintenance requirements of
the system, maintenance costs and effort
required.

Staff development Plan

Describes how the skills and experience of
the project team members will be developed.

Verification and Validation
Plan
Requirements
specification

System
specification

System
integration
test plan

Acceptance
test plan

Service

System
desig n

Acceptance
test

Detailed
design

Sub-system
integration
test plan

System
integration test

Module and
unit code
and tess

Sub-system
integration test

Test Plan as a link between development and testing

Testing Process

Unit
testing
Module
testing
Sub-system
testing
System
testing
Acceptance
testing

Component
testing

Integration testing

User
testing

Testing Process
 Unit testing - Individual components are tested
independently, without other system components
 Module testing - Related collections of dependent
components( class, ADT, procedures & functions) are tested,
without other system module.
 Sub-system testing-Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validating
functional and non-functional requirements & Testing of
emergent system properties.
 Acceptance testing-Testing with customer data to check that
it is acceptable. Also called Alpha Testing

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for
intended user in his or her normal work
environment.
Alpha test-version of the complete software is
tested by customer under the supervision of the
developer at the developer’s site.
Beta test-version of the complete software is
tested by customer at his or her own site without
the developer being present

Black-box testing
 Also known as behavioral or functional testing.
 The system is a “Blackbox” whose behavior can be
determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform
and demonstrating correct operation based solely on its
specification without regard for its internal logic.
 Focus on the functional requirements of the software
i.e., information domain not the implementation part of
the software and disregards control structure.
 The program test cases are based on the system
specification
 It is performed during later stages of testing like in the
acceptance testing or beta testing.

Black-box testing

Input test data

I

Inputs causing
anomalous
behaviour

e

System

Output test results

Oe

Outputs which reveal
the presence of
defects

Black-box testing
Test are designed to answer the following questions:

 How is functional validity tested?
 How is system behavior and performance tested?
 What classes of input behavior will make good test case?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of data class isolated?
 What data rates and data volume can the system
tolerate?
 What effect will specific combinations of data have on
system operations?

Advantages of Black box testing
 Validates whether or not a given system conforms
to
its software specification
 Introduce a series of inputs to a system and
compare
the outputs to a pre-defined test specification.
 Test integration between individual system
components.
 Tests are architecture independent — they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Require no knowledge of the underlying system,
one
need not be a software engineer to design black
box
tests.

Disadvantages of Black box
testing
 Offer no guarantee that every line of code has been
tested.
 Being architecture independent, it cannot determine
the efficiency of the code.
 Will not find any errors, such as memory leaks, that
are not explicitly and instantly exposed by the
application.

Black-box testing techniques







Graph-based testing methods
Equivalence Partitioning,
Boundary Value Analysis (BVA)
Comparison Testing
Orthogonal Array Testing.

Equivalence Partitioning
 Black-box technique that divides the input domain into
classes of data from which test cases can be derived
 An ideal test case uncovers a class of errors( incorrect
processing of all incorrect data) that might require many
arbitrary test cases to be executed before a general error is
observed
 Equivalence class guidelines:
 If input condition specifies a range, one valid and two invalid
equivalence classes are defined
 If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined
 If an input condition specifies a member of a set, one valid
and one invalid equivalence class is defined
 If an input condition is Boolean, one valid and one invalid
equivalence class is defined

Equivalence Partitioning

Invalid inputs

System

Outputs

Valid in puts

Equivalence Partitioning

3
4

Less than 4

7

11
10

Between 4 and 10

More than 10

Number of input values
9999
10000

Less than 10000
Input values

50000

100000
99999

Between 10000 and 99999

More than 99999

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the
input domain rather than its center
 BVA guidelines:
 If input condition specifies a range bounded by values a and
b, test cases should include a and b, values just above and
just below a and b
 If an input condition specifies and number of values, test
cases should be exercise the minimum and maximum
numbers, as well as values just above and just below the
minimum and maximum values
 Apply guidelines 1 and 2 to output conditions, test cases
should be designed to produce the minimum and maxim
output reports
 If internal program data structures have boundaries (e.g.
size limitations), be certain to test the boundaries

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety critical systems ( such
as aircraft avionics, automobile braking system) in
which independently developed implementations of
redundant systems are tested for conformance to
specifications
 Often equivalence class partitioning is used to
develop a common set of test cases for each
implementation.

Orthogonal Array Testing
 Black-box technique that enables the design of a
reasonably small set of test cases that provide
maximum test coverage
 Focus is on categories of faulty logic likely to be
present in the software component (without
examining the code)
 Priorities for assessing tests using an orthogonal
array
 Detect and isolate all single mode faults
 Detect all double mode faults
 Mutimode faults

White-box or Glass Box
testing
Knowing the internal workings of a product,
tests are performed to check the workings of all
independent logic paths.
It derive test cases that:
 Guarantee that all independent paths within a module
have been exercised at least once.
 Exercise all logical decisions on their true and false
sides.
 Execute all loops at their boundaries and within their
operational bounds, and
 Exercise internal data structures to ensure their
validity.

Techniques being used: basic path and control
structure testing.

White-box or Glass Box
testing
Test data

Tests

Derives
Component
code

Test
outputs

Integration Testing
Tests complete systems or subsystems
composed of integrated components
Integration testing should be black-box testing
with tests derived from the specification
Main difficulty is localising errors
Incremental integration testing reduces this
problem.
Incremental integration strategies include
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to
integration testing
Top-down testing
 Start with high-level system and integrate from the
top-down replacing individual components by stubs
where appropriate

Bottom-up testing
 Integrate individual components in levels until the
complete system is created

In practice, most integration involves a combination
of these strategies

Top-down testing

Level 1

Testing
sequence

Level 2
Level 2
stubs

Level 3
stubs

Level 1

Level 2

Level 2

. ..

Level 2

Bottom-up testing

Test
drivers
Level N

Test
drivers

Level N

Level N–1

Level N

Level N–1

Level N

Level N

Level N–1

Testing
sequence

System Testing
 Recovery testing
 Checks the system’s ability to recover from failures.

 Security testing
 Verifies that system protection mechanism prevent improper
penetration or data alteration

 Stress testing
 Program is checked to see how well it deals with abnormal
resource demands – quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.

Object-oriented Testing

The components to be tested are object classes
that are instantiated as objects
Larger gain than individual functions so
approaches to white-box testing have to be
extended
No obvious ‘top’ to the system for top-down
integration and testing

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List
Item
No.
SR-02

Test Item

Sub –
Test-Sub Item
item No.
Staff Review SR-02-01 Program Officer
Review

SR-02-02 Early Decline Report

Test-Level
A- Basic Function, compulsory
B- Enhanced Function, compulsory
C- Enhanced Function, optional

Level
A

A

Testing Details
SR-02 Staff Review
Item No

SR-02-01

Test Date

Item

Staff Review

Sub-item

PO Review
Report: Early Decline

Precondition
Test Procedure

Test Standard
Test description

 Passed
 Failed

Test Result and
Conclusion
Sin of the Tester

Sign of the
Manager

References
 From software engineering, A practitioner’s
approach by Roger S. Pressman
– Chapter 17: Software testing techniques





Software Testing Fundamentals
Test case design
White-box testing- Basic path, Control Structure Testing
Black-box testing

– Chapter 18: Software Testing Strategies
• A strategic approach to software testing
• Unit, Integration, Validation, System testing

 From Software Engineering, Ian Sommerville
– Part5: Verification and Validation
• Chapter 19: Verification and validation
• Chapter 20: Software testing


Slide 12

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
Static
verification

Requirements
specification

Prototype

High-level
design

Formal
specification

Detailed
design

Program

Dynamic
validation

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yetundiscovered error.

Software testing priciples
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability-the better it works the more efficiently it can
be tested
 Observability-what you see is what you test
 Controllability-the better software can be controlled the
more testing can be automated and optimized
 Decomposability-by controlling the scope of testing, the
more quickly problems can be isolated and retested
intelligently
 Simplicity-the less there is to test, the more quickly we
can test
 Stability-the fewer the changes, the fewer the
disruptions to testing
 Understandability-the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
Test
results

Locate
error

Test
cases

Specification

Design
error repair

Repair
error

Re-test
program

The defect testing process
Test
cases

Design test
cases

Test
data

Prepare test
data

Test
results

Run program
with test data

Test
reports

Compare results
to test cases

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification

Project Planning
Plan

Description

Quality Plan

Describes the quality procedure and
standards that will be used in a project.

Validation Plan

Describes the approach, resources and
schedule used for system validation.

Configuration
Management Plan

Describes the configuration management
procedures and structures to be used.

Maintenance Plan

Predicts the maintenance requirements of
the system, maintenance costs and effort
required.

Staff development Plan

Describes how the skills and experience of
the project team members will be developed.

Verification and Validation
Plan
Requirements
specification

System
specification

System
integration
test plan

Acceptance
test plan

Service

System
desig n

Acceptance
test

Detailed
design

Sub-system
integration
test plan

System
integration test

Module and
unit code
and tess

Sub-system
integration test

Test Plan as a link between development and testing

Testing Process

Unit
testing
Module
testing
Sub-system
testing
System
testing
Acceptance
testing

Component
testing

Integration testing

User
testing

Testing Process
 Unit testing - Individual components are tested
independently, without other system components
 Module testing - Related collections of dependent
components( class, ADT, procedures & functions) are tested,
without other system module.
 Sub-system testing-Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validating
functional and non-functional requirements & Testing of
emergent system properties.
 Acceptance testing-Testing with customer data to check that
it is acceptable. Also called Alpha Testing

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for
intended user in his or her normal work
environment.
Alpha test-version of the complete software is
tested by customer under the supervision of the
developer at the developer’s site.
Beta test-version of the complete software is
tested by customer at his or her own site without
the developer being present

Black-box testing
 Also known as behavioral or functional testing.
 The system is a “Blackbox” whose behavior can be
determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform
and demonstrating correct operation based solely on its
specification without regard for its internal logic.
 Focus on the functional requirements of the software
i.e., information domain not the implementation part of
the software and disregards control structure.
 The program test cases are based on the system
specification
 It is performed during later stages of testing like in the
acceptance testing or beta testing.

Black-box testing

Input test data

I

Inputs causing
anomalous
behaviour

e

System

Output test results

Oe

Outputs which reveal
the presence of
defects

Black-box testing
Test are designed to answer the following questions:

 How is functional validity tested?
 How is system behavior and performance tested?
 What classes of input behavior will make good test case?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of data class isolated?
 What data rates and data volume can the system
tolerate?
 What effect will specific combinations of data have on
system operations?

Advantages of Black box testing
 Validates whether or not a given system conforms
to
its software specification
 Introduce a series of inputs to a system and
compare
the outputs to a pre-defined test specification.
 Test integration between individual system
components.
 Tests are architecture independent — they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Require no knowledge of the underlying system,
one
need not be a software engineer to design black
box
tests.

Disadvantages of Black-box testing
 Offers no guarantee that every line of code has been tested.
 Being architecture-independent, it cannot determine the
efficiency of the code.
 Will not find errors, such as memory leaks, that are not
explicitly and immediately exposed by the application.

Black-box testing techniques
 Graph-based testing methods
 Equivalence Partitioning
 Boundary Value Analysis (BVA)
 Comparison Testing
 Orthogonal Array Testing

Equivalence Partitioning
 Black-box technique that divides the input domain into classes
of data from which test cases can be derived.
 An ideal test case uncovers a class of errors (e.g., incorrect
processing of all incorrect data) that might otherwise require
many arbitrary test cases to be executed before the general
error is observed.
 Equivalence class guidelines:
 If an input condition specifies a range, one valid and two
invalid equivalence classes are defined.
 If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined.
 If an input condition specifies a member of a set, one valid
and one invalid equivalence class are defined.
 If an input condition is Boolean, one valid and one invalid
equivalence class are defined.

Equivalence Partitioning

[Diagram: valid inputs and invalid inputs are fed into the system,
which produces the corresponding outputs.]

Equivalence Partitioning

[Diagram: two partitioned number lines.
Number of input values: three classes, less than 4, between 4 and 10,
and more than 10, with test values 3, 4, 7, 10 and 11 chosen at and
around the class boundaries.
Input values: three classes, less than 10000, between 10000 and 99999,
and more than 99999, with test values 9999, 10000, 50000, 99999 and
100000.]
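The slide's example can be sketched in code: each input condition is partitioned into classes, and one representative test value is drawn from each class (the classifier functions below are a hypothetical illustration, not part of the slides):

```python
# Equivalence classes for a field accepting 4 to 10 input values,
# each value a 5-digit number (10000-99999), as in the slide example.

def count_class(n):
    """Partition on the number of input values supplied."""
    if n < 4:
        return "too few"
    if n <= 10:
        return "valid count"
    return "too many"

def value_class(v):
    """Partition on an individual input value."""
    if v < 10000:
        return "too small"
    if v <= 99999:
        return "valid value"
    return "too large"

# One representative per class suffices to cover that class.
assert [count_class(n) for n in (3, 7, 11)] == \
    ["too few", "valid count", "too many"]
assert [value_class(v) for v in (9999, 50000, 100000)] == \
    ["too small", "valid value", "too large"]
```

Any other value from the same class (say, 2 instead of 3, or 60000 instead of 50000) would, by assumption, exercise the same processing path, which is why one representative is enough.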

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the
input domain rather than its center.
 BVA guidelines:
 If an input condition specifies a range bounded by values a and
b, test cases should include a and b as well as values just
above and just below a and b.
 If an input condition specifies a number of values, test cases
should exercise the minimum and maximum numbers, as well as
values just above and just below the minimum and maximum.
 Apply guidelines 1 and 2 to output conditions; test cases
should be designed to produce the minimum and maximum output
reports.
 If internal program data structures have prescribed boundaries
(e.g. size limitations), be certain to test those boundaries.
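The first guideline can be sketched as a small generator (a hypothetical helper, shown here for illustration) that produces the boundary test inputs for an integer range [a, b]:

```python
def bva_candidates(a, b):
    """Boundary-value test inputs for an integer range [a, b]:
    the bounds themselves plus values just below and just above each."""
    return sorted({a - 1, a, a + 1, b - 1, b, b + 1})

# For the slide's range 4..10 this yields values at and around both
# boundaries.
print(bva_candidates(4, 10))  # -> [3, 4, 5, 9, 10, 11]
```

Note that the values 3 and 11 fall outside the valid range on purpose: BVA deliberately probes just beyond each boundary, where off-by-one errors tend to hide.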

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety-critical systems (such as aircraft
avionics or automobile braking systems) in which independently
developed implementations of redundant systems are tested for
conformance to the same specification.
 Often equivalence class partitioning is used to develop a
common set of test cases for each implementation.
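A minimal sketch of back-to-back testing (both implementations here are hypothetical stand-ins for independently developed versions of the same specification):

```python
# Back-to-back (comparison) testing: two independently developed
# implementations of the same sorting specification are run on a
# common set of test cases and their outputs compared.

def sort_impl_a(xs):
    """Implementation A: delegates to the built-in sort."""
    return sorted(xs)

def sort_impl_b(xs):
    """Implementation B: an independent insertion-sort algorithm."""
    out = list(xs)
    for i in range(1, len(out)):
        j = i
        while j > 0 and out[j - 1] > out[j]:
            out[j - 1], out[j] = out[j], out[j - 1]
            j -= 1
    return out

common_cases = [[], [1], [3, 1, 2], [5, 5, 4], list(range(10, 0, -1))]
for case in common_cases:
    assert sort_impl_a(case) == sort_impl_b(case), f"disagree on {case}"
print("implementations agree on all common test cases")
```

A disagreement between the two implementations flags a defect in at least one of them without either needing to be treated as the oracle.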

Orthogonal Array Testing
 Black-box technique that enables the design of a reasonably
small set of test cases that provides maximum test coverage.
 Focus is on categories of faulty logic likely to be present in
the software component (without examining the code).
 Priorities for assessing tests using an orthogonal array:
 Detect and isolate all single-mode faults.
 Detect all double-mode faults.
 Detect multimode faults.
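To illustrate why the test set can be small, here is a standard L4 orthogonal array for three two-level factors (a textbook construction, shown as a sketch):

```python
from itertools import combinations

# An L4(2^3) orthogonal array: 4 runs cover every pairwise combination
# of three two-level factors, instead of all 2**3 = 8 combinations.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

# Verify the pairwise-coverage property: for every pair of factor
# columns, all four level combinations (0,0), (0,1), (1,0), (1,1)
# occur somewhere in the array.
for c1, c2 in combinations(range(3), 2):
    pairs = {(row[c1], row[c2]) for row in L4}
    assert pairs == {(0, 0), (0, 1), (1, 0), (1, 1)}
print("L4 covers all two-factor combinations in 4 runs")
```

This pairwise coverage is what lets the technique detect all single-mode and double-mode faults with far fewer runs than exhaustive combination testing.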

White-box or Glass-box testing
Knowing the internal workings of a product, tests are
performed to check the workings of all independent logic
paths.
It derives test cases that:
 Guarantee that all independent paths within a module have
been exercised at least once.
 Exercise all logical decisions on their true and false
sides.
 Execute all loops at their boundaries and within their
operational bounds, and
 Exercise internal data structures to ensure their
validity.

Techniques used: basis path testing and control structure
testing.
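The decision and loop goals above can be sketched against a small hypothetical function (the function and its tests are illustrative, not from the slides):

```python
# Hypothetical function with one decision and one loop, used to show
# the white-box coverage goals listed above.
def clamp_sum(xs, limit):
    total = 0
    for x in xs:              # loop: exercise 0, 1, and many iterations
        total += x
    if total > limit:         # decision: exercise both true and false
        return limit
    return total

# Decision on its true and false sides:
assert clamp_sum([5, 6], 10) == 10   # true branch (sum clamped)
assert clamp_sum([2, 3], 10) == 5    # false branch (sum returned)
# Loop at its boundaries and within its operational bounds:
assert clamp_sum([], 10) == 0        # zero iterations
assert clamp_sum([4], 10) == 4       # one iteration
assert clamp_sum([1] * 100, 10) == 10  # many iterations
```

Designing these cases requires reading the code, which is exactly the distinction between white-box and black-box testing.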

White-box or Glass-box testing

[Diagram: test data derives tests, which are run against the
component code to produce test outputs.]

Integration Testing
Tests complete systems or subsystems
composed of integrated components
Integration testing should be black-box testing
with tests derived from the specification
Main difficulty is localising errors
Incremental integration testing reduces this
problem.
Incremental integration strategies include
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to integration testing
Top-down testing
 Start with the high-level system and integrate from the
top down, replacing individual components by stubs where
appropriate.

Bottom-up testing
 Integrate individual components in levels until the
complete system is created.

In practice, most integration involves a combination of these
strategies.
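A small sketch of the top-down idea (component names here are hypothetical): the high-level component is tested first, with a not-yet-integrated lower-level component replaced by a stub.

```python
def tax_stub(amount):
    """Stub standing in for the real tax module: returns a canned answer."""
    return 0.0

def invoice_total(amount, tax_fn):
    """High-level component under test; the tax computation is injected
    so that a stub can stand in for the real module."""
    return amount + tax_fn(amount)

# With the stub in place, the high-level logic can be exercised before
# the real tax module exists.
assert invoice_total(100.0, tax_stub) == 100.0

# Later in integration, the real component replaces the stub:
def real_tax(amount):
    return amount * 0.25

assert invoice_total(100.0, real_tax) == 125.0
```

Bottom-up testing is the mirror image: the low-level component (here, the tax function) would be tested first, driven by a test driver instead of being called by the real high-level code.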

Top-down testing

[Diagram: the testing sequence proceeds downward. Level 1 is tested
first with Level 2 stubs in place; the Level 2 components are then
integrated and tested with Level 3 stubs, and so on.]

Bottom-up testing

[Diagram: the testing sequence proceeds upward. The Level N
components are tested first using test drivers; the Level N–1
components are then integrated and tested with their own test
drivers, and so on.]

System Testing
 Recovery testing
 Checks the system’s ability to recover from failures.

 Security testing
 Verifies that system protection mechanisms prevent improper
penetration or data alteration.

 Stress testing
 The program is checked to see how well it deals with abnormal
resource demands in quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.

Object-oriented Testing

The components to be tested are object classes that are
instantiated as objects.
Classes are of larger grain than individual functions, so
approaches to white-box testing have to be extended.
There is no obvious ‘top’ to the system for top-down
integration and testing.

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List

Item No.  Test Item     Sub-item No.  Test Sub-item            Level
SR-02     Staff Review  SR-02-01      Program Officer Review   A
                        SR-02-02      Early Decline Report     A

Test-Level
A - Basic Function, compulsory
B - Enhanced Function, compulsory
C - Enhanced Function, optional

Testing Details
SR-02 Staff Review

Item No: SR-02-01                    Test Date:
Item: Staff Review
Sub-item: PO Review (Report: Early Decline)
Precondition:
Test Procedure:
Test Standard:
Test description:
Test Result and Conclusion:           Passed   Failed
Sign of the Tester:                  Sign of the Manager:

References
 From Software Engineering: A Practitioner’s Approach by
Roger S. Pressman
– Chapter 17: Software Testing Techniques
• Software testing fundamentals
• Test case design
• White-box testing: basis path and control structure testing
• Black-box testing
– Chapter 18: Software Testing Strategies
• A strategic approach to software testing
• Unit, integration, validation, and system testing
 From Software Engineering by Ian Sommerville
– Part 5: Verification and Validation
• Chapter 19: Verification and validation
• Chapter 20: Software testing

Slide 13

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
Static
verification

Requirements
specification

Prototype

High-level
design

Formal
specification

Detailed
design

Program

Dynamic
validation

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yetundiscovered error.

Software testing priciples
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability-the better it works the more efficiently it can
be tested
 Observability-what you see is what you test
 Controllability-the better software can be controlled the
more testing can be automated and optimized
 Decomposability-by controlling the scope of testing, the
more quickly problems can be isolated and retested
intelligently
 Simplicity-the less there is to test, the more quickly we
can test
 Stability-the fewer the changes, the fewer the
disruptions to testing
 Understandability-the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
Test
results

Locate
error

Test
cases

Specification

Design
error repair

Repair
error

Re-test
program

The defect testing process
Test
cases

Design test
cases

Test
data

Prepare test
data

Test
results

Run program
with test data

Test
reports

Compare results
to test cases

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification

Project Planning
Plan

Description

Quality Plan

Describes the quality procedure and
standards that will be used in a project.

Validation Plan

Describes the approach, resources and
schedule used for system validation.

Configuration
Management Plan

Describes the configuration management
procedures and structures to be used.

Maintenance Plan

Predicts the maintenance requirements of
the system, maintenance costs and effort
required.

Staff development Plan

Describes how the skills and experience of
the project team members will be developed.

Verification and Validation
Plan
Requirements
specification

System
specification

System
integration
test plan

Acceptance
test plan

Service

System
desig n

Acceptance
test

Detailed
design

Sub-system
integration
test plan

System
integration test

Module and
unit code
and tess

Sub-system
integration test

Test Plan as a link between development and testing

Testing Process

Unit
testing
Module
testing
Sub-system
testing
System
testing
Acceptance
testing

Component
testing

Integration testing

User
testing

Testing Process
 Unit testing - Individual components are tested
independently, without other system components
 Module testing - Related collections of dependent
components( class, ADT, procedures & functions) are tested,
without other system module.
 Sub-system testing-Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validating
functional and non-functional requirements & Testing of
emergent system properties.
 Acceptance testing-Testing with customer data to check that
it is acceptable. Also called Alpha Testing

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for
intended user in his or her normal work
environment.
Alpha test-version of the complete software is
tested by customer under the supervision of the
developer at the developer’s site.
Beta test-version of the complete software is
tested by customer at his or her own site without
the developer being present

Black-box testing
 Also known as behavioral or functional testing.
 The system is a “Blackbox” whose behavior can be
determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform
and demonstrating correct operation based solely on its
specification without regard for its internal logic.
 Focus on the functional requirements of the software
i.e., information domain not the implementation part of
the software and disregards control structure.
 The program test cases are based on the system
specification
 It is performed during later stages of testing like in the
acceptance testing or beta testing.

Black-box testing

Input test data

I

Inputs causing
anomalous
behaviour

e

System

Output test results

Oe

Outputs which reveal
the presence of
defects

Black-box testing
Test are designed to answer the following questions:

 How is functional validity tested?
 How is system behavior and performance tested?
 What classes of input behavior will make good test case?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of data class isolated?
 What data rates and data volume can the system
tolerate?
 What effect will specific combinations of data have on
system operations?

Advantages of Black box testing
 Validates whether or not a given system conforms
to
its software specification
 Introduce a series of inputs to a system and
compare
the outputs to a pre-defined test specification.
 Test integration between individual system
components.
 Tests are architecture independent — they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Require no knowledge of the underlying system,
one
need not be a software engineer to design black
box
tests.

Disadvantages of Black box
testing
 Offer no guarantee that every line of code has been
tested.
 Being architecture independent, it cannot determine
the efficiency of the code.
 Will not find any errors, such as memory leaks, that
are not explicitly and instantly exposed by the
application.

Black-box testing techniques







Graph-based testing methods
Equivalence Partitioning,
Boundary Value Analysis (BVA)
Comparison Testing
Orthogonal Array Testing.

Equivalence Partitioning
 Black-box technique that divides the input domain into
classes of data from which test cases can be derived
 An ideal test case uncovers a class of errors( incorrect
processing of all incorrect data) that might require many
arbitrary test cases to be executed before a general error is
observed
 Equivalence class guidelines:
 If input condition specifies a range, one valid and two invalid
equivalence classes are defined
 If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined
 If an input condition specifies a member of a set, one valid
and one invalid equivalence class is defined
 If an input condition is Boolean, one valid and one invalid
equivalence class is defined

Equivalence Partitioning

Invalid inputs

System

Outputs

Valid in puts

Equivalence Partitioning

3
4

Less than 4

7

11
10

Between 4 and 10

More than 10

Number of input values
9999
10000

Less than 10000
Input values

50000

100000
99999

Between 10000 and 99999

More than 99999

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the
input domain rather than its center
 BVA guidelines:
 If input condition specifies a range bounded by values a and
b, test cases should include a and b, values just above and
just below a and b
 If an input condition specifies and number of values, test
cases should be exercise the minimum and maximum
numbers, as well as values just above and just below the
minimum and maximum values
 Apply guidelines 1 and 2 to output conditions, test cases
should be designed to produce the minimum and maxim
output reports
 If internal program data structures have boundaries (e.g.
size limitations), be certain to test the boundaries

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety critical systems ( such
as aircraft avionics, automobile braking system) in
which independently developed implementations of
redundant systems are tested for conformance to
specifications
 Often equivalence class partitioning is used to
develop a common set of test cases for each
implementation.

Orthogonal Array Testing
 Black-box technique that enables the design of a
reasonably small set of test cases that provide
maximum test coverage
 Focus is on categories of faulty logic likely to be
present in the software component (without
examining the code)
 Priorities for assessing tests using an orthogonal
array
 Detect and isolate all single mode faults
 Detect all double mode faults
 Mutimode faults

White-box or Glass Box
testing
Knowing the internal workings of a product,
tests are performed to check the workings of all
independent logic paths.
It derive test cases that:
 Guarantee that all independent paths within a module
have been exercised at least once.
 Exercise all logical decisions on their true and false
sides.
 Execute all loops at their boundaries and within their
operational bounds, and
 Exercise internal data structures to ensure their
validity.

Techniques being used: basic path and control
structure testing.

White-box or Glass Box
testing
Test data

Tests

Derives
Component
code

Test
outputs

Integration Testing
Tests complete systems or subsystems
composed of integrated components
Integration testing should be black-box testing
with tests derived from the specification
Main difficulty is localising errors
Incremental integration testing reduces this
problem.
Incremental integration strategies include
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to
integration testing
Top-down testing
 Start with high-level system and integrate from the
top-down replacing individual components by stubs
where appropriate

Bottom-up testing
 Integrate individual components in levels until the
complete system is created

In practice, most integration involves a combination
of these strategies

Top-down testing

Level 1

Testing
sequence

Level 2
Level 2
stubs

Level 3
stubs

Level 1

Level 2

Level 2

. ..

Level 2

Bottom-up testing

Test
drivers
Level N

Test
drivers

Level N

Level N–1

Level N

Level N–1

Level N

Level N

Level N–1

Testing
sequence

System Testing
 Recovery testing
 Checks the system’s ability to recover from failures.

 Security testing
 Verifies that system protection mechanism prevent improper
penetration or data alteration

 Stress testing
 Program is checked to see how well it deals with abnormal
resource demands – quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.

Object-oriented Testing

The components to be tested are object classes
that are instantiated as objects
Larger gain than individual functions so
approaches to white-box testing have to be
extended
No obvious ‘top’ to the system for top-down
integration and testing

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List
Item
No.
SR-02

Test Item

Sub –
Test-Sub Item
item No.
Staff Review SR-02-01 Program Officer
Review

SR-02-02 Early Decline Report

Test-Level
A- Basic Function, compulsory
B- Enhanced Function, compulsory
C- Enhanced Function, optional

Level
A

A

Testing Details
SR-02 Staff Review
Item No

SR-02-01

Test Date

Item

Staff Review

Sub-item

PO Review
Report: Early Decline

Precondition
Test Procedure

Test Standard
Test description

 Passed
 Failed

Test Result and
Conclusion
Sin of the Tester

Sign of the
Manager

References
 From software engineering, A practitioner’s
approach by Roger S. Pressman
– Chapter 17: Software testing techniques





Software Testing Fundamentals
Test case design
White-box testing- Basic path, Control Structure Testing
Black-box testing

– Chapter 18: Software Testing Strategies
• A strategic approach to software testing
• Unit, Integration, Validation, System testing

 From Software Engineering, Ian Sommerville
– Part5: Verification and Validation
• Chapter 19: Verification and validation
• Chapter 20: Software testing


Slide 14

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
Static
verification

Requirements
specification

Prototype

High-level
design

Formal
specification

Detailed
design

Program

Dynamic
validation

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yetundiscovered error.

Software testing priciples
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability-the better it works the more efficiently it can
be tested
 Observability-what you see is what you test
 Controllability-the better software can be controlled the
more testing can be automated and optimized
 Decomposability-by controlling the scope of testing, the
more quickly problems can be isolated and retested
intelligently
 Simplicity-the less there is to test, the more quickly we
can test
 Stability-the fewer the changes, the fewer the
disruptions to testing
 Understandability-the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
Test
results

Locate
error

Test
cases

Specification

Design
error repair

Repair
error

Re-test
program

The defect testing process
Test
cases

Design test
cases

Test
data

Prepare test
data

Test
results

Run program
with test data

Test
reports

Compare results
to test cases

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification

Project Planning
Plan

Description

Quality Plan

Describes the quality procedure and
standards that will be used in a project.

Validation Plan

Describes the approach, resources and
schedule used for system validation.

Configuration
Management Plan

Describes the configuration management
procedures and structures to be used.

Maintenance Plan

Predicts the maintenance requirements of
the system, maintenance costs and effort
required.

Staff development Plan

Describes how the skills and experience of
the project team members will be developed.

Verification and Validation
Plan
Requirements
specification

System
specification

System
integration
test plan

Acceptance
test plan

Service

System
desig n

Acceptance
test

Detailed
design

Sub-system
integration
test plan

System
integration test

Module and
unit code
and tess

Sub-system
integration test

Test Plan as a link between development and testing

Testing Process

Unit
testing
Module
testing
Sub-system
testing
System
testing
Acceptance
testing

Component
testing

Integration testing

User
testing

Testing Process
 Unit testing - Individual components are tested
independently, without other system components
 Module testing - Related collections of dependent
components( class, ADT, procedures & functions) are tested,
without other system module.
 Sub-system testing-Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validating
functional and non-functional requirements & Testing of
emergent system properties.
 Acceptance testing-Testing with customer data to check that
it is acceptable. Also called Alpha Testing

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for
intended user in his or her normal work
environment.
Alpha test-version of the complete software is
tested by customer under the supervision of the
developer at the developer’s site.
Beta test-version of the complete software is
tested by customer at his or her own site without
the developer being present

Black-box testing
 Also known as behavioral or functional testing.
 The system is a “Blackbox” whose behavior can be
determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform
and demonstrating correct operation based solely on its
specification without regard for its internal logic.
 Focus on the functional requirements of the software
i.e., information domain not the implementation part of
the software and disregards control structure.
 The program test cases are based on the system
specification
 It is performed during later stages of testing like in the
acceptance testing or beta testing.

Black-box testing

Input test data

I

Inputs causing
anomalous
behaviour

e

System

Output test results

Oe

Outputs which reveal
the presence of
defects

Black-box testing
Tests are designed to answer the following questions:

 How is functional validity tested?
 How are system behavior and performance tested?
 What classes of input will make good test cases?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of data classes isolated?
 What data rates and data volumes can the system
tolerate?
 What effect will specific combinations of data have on
system operations?

Advantages of Black-box testing
 Validates whether or not a given system conforms to
its software specification.
 Introduces a series of inputs to a system and compares
the outputs to a pre-defined test specification.
 Tests integration between individual system
components.
 Tests are architecture independent: they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Requires no knowledge of the underlying system;
one need not be a software engineer to design
black-box tests.

Disadvantages of Black-box
testing
 Offers no guarantee that every line of code has been
tested.
 Being architecture independent, it cannot determine
the efficiency of the code.
 Will not find errors, such as memory leaks, that
are not explicitly and immediately exposed by the
application.

Black-box testing techniques
 Graph-based testing methods
 Equivalence partitioning
 Boundary value analysis (BVA)
 Comparison testing
 Orthogonal array testing

Equivalence Partitioning
 Black-box technique that divides the input domain into
classes of data from which test cases can be derived
 An ideal test case uncovers a class of errors (e.g., incorrect
processing of all incorrect data) that might otherwise require
many arbitrary test cases to be executed before the general
error is observed
 Equivalence class guidelines:
 If an input condition specifies a range, one valid and two invalid
equivalence classes are defined
 If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined
 If an input condition specifies a member of a set, one valid
and one invalid equivalence class is defined
 If an input condition is Boolean, one valid and one invalid
equivalence class is defined
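A minimal sketch of the range guideline, assuming a hypothetical input field whose valid range is 4 to 10; the `classify_count` checker is invented for illustration.

```python
# Hypothetical input condition: a count field whose valid range is 4..10.
# The range guideline gives one valid and two invalid equivalence classes.
def classify_count(count):
    if count < 4:
        return "invalid: below range"
    if count > 10:
        return "invalid: above range"
    return "valid"

# One representative value per equivalence class stands in for every
# other value in that class.
representatives = {3: "invalid: below range",
                   7: "valid",
                   11: "invalid: above range"}
results = {v: classify_count(v) for v in representatives}
print(results == representatives)  # True: each class behaves as specified
```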

Equivalence Partitioning

[Figure: the input domain is divided into partitions of valid inputs and
invalid inputs; representatives of each partition are fed to the system
and the outputs are observed.]

Equivalence Partitioning

[Figure: two number-line examples of partitions.
Number of input values: less than 4 (e.g. 3), between 4 and 10
(e.g. 7), and more than 10 (e.g. 11), with boundary values 4 and 10.
Input values: less than 10000 (e.g. 9999), between 10000 and 99999
(e.g. 50000), and more than 99999 (e.g. 100000), with boundary values
10000 and 99999.]

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the
input domain rather than its center
 BVA guidelines:
 If input condition specifies a range bounded by values a and
b, test cases should include a and b, values just above and
just below a and b
 If an input condition specifies a number of values, test
cases should exercise the minimum and maximum
numbers, as well as values just above and just below the
minimum and maximum values
 Applying guidelines 1 and 2 to output conditions, test cases
should be designed to produce the minimum and maximum
output reports
 If internal program data structures have boundaries (e.g.
size limitations), be certain to test the boundaries
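The first guideline can be mechanised as a small sketch, assuming integer inputs; the helper name `bva_values` is invented for illustration.

```python
# For a range bounded by a and b, boundary value analysis selects a, b,
# and the values just above and just below each bound (integers assumed).
def bva_values(a, b):
    return sorted({a - 1, a, a + 1, b - 1, b, b + 1})

print(bva_values(4, 10))  # [3, 4, 5, 9, 10, 11]
```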

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety-critical systems (such
as aircraft avionics or automobile braking systems) in
which independently developed implementations of
redundant systems are tested for conformance to
specifications
 Often equivalence class partitioning is used to
develop a common set of test cases for each
implementation.
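A toy sketch of back-to-back testing, with two stand-in sort routines playing the role of the independently developed redundant implementations; the names are invented for illustration.

```python
# Back-to-back testing: run two independently developed implementations
# of the same specification on a common set of test cases and flag any
# input where their outputs disagree.
def impl_a(values):
    return sorted(values)  # first implementation: library sort

def impl_b(values):
    out = list(values)     # second implementation: insertion sort
    for i in range(1, len(out)):
        j = i
        while j > 0 and out[j - 1] > out[j]:
            out[j - 1], out[j] = out[j], out[j - 1]
            j -= 1
    return out

def back_to_back(cases):
    # Any disagreement points to a defect in at least one implementation.
    return [c for c in cases if impl_a(c) != impl_b(c)]

print(back_to_back([[3, 1, 2], [], [5, 5, 1]]))  # [] means the versions agree
```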

Orthogonal Array Testing
 Black-box technique that enables the design of a
reasonably small set of test cases that provide
maximum test coverage
 Focus is on categories of faulty logic likely to be
present in the software component (without
examining the code)
 Priorities for assessing tests using an orthogonal
array:
 Detect and isolate all single-mode faults
 Detect all double-mode faults
 Detect multimode faults

White-box or Glass-box
testing
Knowing the internal workings of a product,
tests are performed to check the workings of all
independent logic paths.
It derives test cases that:
 Guarantee that all independent paths within a module
have been exercised at least once.
 Exercise all logical decisions on their true and false
sides.
 Execute all loops at their boundaries and within their
operational bounds, and
 Exercise internal data structures to ensure their
validity.

Techniques used: basis path testing and control
structure testing.
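As a minimal sketch of exercising every logical decision on its true and false sides, consider a component with two decisions; the `grade` function is invented for illustration, not taken from the slides.

```python
# Component under test: two decisions, so four logic paths through the code.
def grade(score, extra_credit):
    result = score
    if extra_credit:          # decision 1: exercised true and false
        result += 5
    if result >= 50:          # decision 2: exercised true and false
        return "pass"
    return "fail"

# White-box test set chosen so every decision takes both outcomes
# (branch coverage) across the four cases.
cases = [
    (50, False, "pass"),   # d1 false, d2 true
    (45, True,  "pass"),   # d1 true,  d2 true
    (40, False, "fail"),   # d1 false, d2 false
    (30, True,  "fail"),   # d1 true,  d2 false
]
failures = [(s, e) for s, e, expected in cases if grade(s, e) != expected]
print(failures)  # [] means all branch-covering cases passed
```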

White-box or Glass-box
testing

[Figure: knowledge of the component code derives the tests; running the
tests with test data against the component code produces the test
outputs.]

Integration Testing
Tests complete systems or subsystems
composed of integrated components
Integration testing should be black-box testing
with tests derived from the specification
Main difficulty is localising errors
Incremental integration testing reduces this
problem.
Incremental integration strategies include
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to
integration testing
Top-down testing
 Start with high-level system and integrate from the
top-down replacing individual components by stubs
where appropriate

Bottom-up testing
 Integrate individual components in levels until the
complete system is created

In practice, most integration involves a combination
of these strategies
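A small sketch of the stub idea used in top-down testing; the names and the exchange-rate scenario are invented for illustration.

```python
# Top-down integration: the high-level component is tested first, with a
# stub standing in for a lower-level component not yet integrated.
def fetch_exchange_rate_stub(currency):
    # Stub: returns a canned answer instead of calling the real component.
    return {"USD": 1.0, "EUR": 0.9}[currency]

def price_in_currency(amount, currency, rate_source):
    # High-level component under test; the rate source is injected so the
    # stub (now) or the real lower-level component (later) can be used.
    return round(amount * rate_source(currency), 2)

print(price_in_currency(100, "EUR", fetch_exchange_rate_stub))  # 90.0
```

When the real rate-fetching component is integrated, it replaces the stub and the same high-level tests are re-run, which is the incremental part of the strategy.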

Top-down testing

[Figure: the testing sequence starts at Level 1 and works downwards;
Level 2 and Level 3 components that are not yet integrated are replaced
by stubs.]

Bottom-up testing

[Figure: the testing sequence starts at Level N and works upwards to
Level N-1; test drivers stand in for the higher-level components and
call the components under test.]

System Testing
 Recovery testing
 Checks the system’s ability to recover from failures.

 Security testing
 Verifies that system protection mechanisms prevent improper
penetration or data alteration

 Stress testing
 Program is checked to see how well it deals with abnormal
resource demands – quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.

Object-oriented Testing

The components to be tested are object classes
that are instantiated as objects
Object classes are of larger grain than individual
functions, so approaches to white-box testing have
to be extended
No obvious ‘top’ to the system for top-down
integration and testing
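As a hedged sketch, object-oriented testing treats the class, with its state and operations together, as the unit under test; the `Counter` class here is an invented example.

```python
# In OO testing the unit is the class: its operations share state, so
# tests exercise sequences of method calls on an instantiated object.
class Counter:
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1

    def reset(self):
        self.value = 0

def test_counter_class():
    c = Counter()                    # each test works on a fresh instance
    c.increment()
    c.increment()
    assert c.value == 2              # state after a sequence of operations
    c.reset()
    assert c.value == 0              # reset interacts correctly with state
    return "class-level tests passed"

print(test_counter_class())
```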

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List

Item No.  Test Item     Sub-item No.  Test-Sub-Item            Level
SR-02     Staff Review  SR-02-01      Program Officer Review   A
                        SR-02-02      Early Decline Report     A

Test-Level
A - Basic function, compulsory
B - Enhanced function, compulsory
C - Enhanced function, optional

Testing Details
SR-02 Staff Review

Item No:      SR-02-01            Test Date:
Item:         Staff Review
Sub-item:     PO Review (Report: Early Decline)
Precondition:
Test Procedure:
Test Standard:
Test Description:
Test Result and Conclusion:   [ ] Passed   [ ] Failed
Sign of the Tester:           Sign of the Manager:

References
 From Software Engineering: A Practitioner's
Approach by Roger S. Pressman
– Chapter 17: Software testing techniques
• Software testing fundamentals
• Test-case design
• White-box testing: basis path and control structure testing
• Black-box testing
– Chapter 18: Software Testing Strategies
• A strategic approach to software testing
• Unit, integration, validation, and system testing
 From Software Engineering by Ian Sommerville
– Part 5: Verification and Validation
• Chapter 19: Verification and validation
• Chapter 20: Software testing

Slide 15

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
Static
verification

Requirements
specification

Prototype

High-level
design

Formal
specification

Detailed
design

Program

Dynamic
validation

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yetundiscovered error.

Software testing priciples
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability-the better it works the more efficiently it can
be tested
 Observability-what you see is what you test
 Controllability-the better software can be controlled the
more testing can be automated and optimized
 Decomposability-by controlling the scope of testing, the
more quickly problems can be isolated and retested
intelligently
 Simplicity-the less there is to test, the more quickly we
can test
 Stability-the fewer the changes, the fewer the
disruptions to testing
 Understandability-the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
Test
results

Locate
error

Test
cases

Specification

Design
error repair

Repair
error

Re-test
program

The defect testing process
Test
cases

Design test
cases

Test
data

Prepare test
data

Test
results

Run program
with test data

Test
reports

Compare results
to test cases

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification

Project Planning
Plan

Description

Quality Plan

Describes the quality procedure and
standards that will be used in a project.

Validation Plan

Describes the approach, resources and
schedule used for system validation.

Configuration
Management Plan

Describes the configuration management
procedures and structures to be used.

Maintenance Plan

Predicts the maintenance requirements of
the system, maintenance costs and effort
required.

Staff development Plan

Describes how the skills and experience of
the project team members will be developed.

Verification and Validation
Plan
Requirements
specification

System
specification

System
integration
test plan

Acceptance
test plan

Service

System
desig n

Acceptance
test

Detailed
design

Sub-system
integration
test plan

System
integration test

Module and
unit code
and tess

Sub-system
integration test

Test Plan as a link between development and testing

Testing Process

Unit
testing
Module
testing
Sub-system
testing
System
testing
Acceptance
testing

Component
testing

Integration testing

User
testing

Testing Process
 Unit testing - Individual components are tested
independently, without other system components
 Module testing - Related collections of dependent
components( class, ADT, procedures & functions) are tested,
without other system module.
 Sub-system testing-Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validating
functional and non-functional requirements & Testing of
emergent system properties.
 Acceptance testing-Testing with customer data to check that
it is acceptable. Also called Alpha Testing

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for
intended user in his or her normal work
environment.
Alpha test-version of the complete software is
tested by customer under the supervision of the
developer at the developer’s site.
Beta test-version of the complete software is
tested by customer at his or her own site without
the developer being present

Black-box testing
 Also known as behavioral or functional testing.
 The system is a “Blackbox” whose behavior can be
determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform
and demonstrating correct operation based solely on its
specification without regard for its internal logic.
 Focus on the functional requirements of the software
i.e., information domain not the implementation part of
the software and disregards control structure.
 The program test cases are based on the system
specification
 It is performed during later stages of testing like in the
acceptance testing or beta testing.

Black-box testing

Input test data

I

Inputs causing
anomalous
behaviour

e

System

Output test results

Oe

Outputs which reveal
the presence of
defects

Black-box testing
Test are designed to answer the following questions:

 How is functional validity tested?
 How is system behavior and performance tested?
 What classes of input behavior will make good test case?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of data class isolated?
 What data rates and data volume can the system
tolerate?
 What effect will specific combinations of data have on
system operations?

Advantages of Black box testing
 Validates whether or not a given system conforms
to
its software specification
 Introduce a series of inputs to a system and
compare
the outputs to a pre-defined test specification.
 Test integration between individual system
components.
 Tests are architecture independent — they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Require no knowledge of the underlying system,
one
need not be a software engineer to design black
box
tests.

Disadvantages of Black box
testing
 Offer no guarantee that every line of code has been
tested.
 Being architecture independent, it cannot determine
the efficiency of the code.
 Will not find any errors, such as memory leaks, that
are not explicitly and instantly exposed by the
application.

Black-box testing techniques







Graph-based testing methods
Equivalence Partitioning,
Boundary Value Analysis (BVA)
Comparison Testing
Orthogonal Array Testing.

Equivalence Partitioning
 Black-box technique that divides the input domain into
classes of data from which test cases can be derived
 An ideal test case uncovers a class of errors( incorrect
processing of all incorrect data) that might require many
arbitrary test cases to be executed before a general error is
observed
 Equivalence class guidelines:
 If input condition specifies a range, one valid and two invalid
equivalence classes are defined
 If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined
 If an input condition specifies a member of a set, one valid
and one invalid equivalence class is defined
 If an input condition is Boolean, one valid and one invalid
equivalence class is defined

Equivalence Partitioning

Invalid inputs

System

Outputs

Valid in puts

Equivalence Partitioning

3
4

Less than 4

7

11
10

Between 4 and 10

More than 10

Number of input values
9999
10000

Less than 10000
Input values

50000

100000
99999

Between 10000 and 99999

More than 99999

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the
input domain rather than its center
 BVA guidelines:
 If input condition specifies a range bounded by values a and
b, test cases should include a and b, values just above and
just below a and b
 If an input condition specifies and number of values, test
cases should be exercise the minimum and maximum
numbers, as well as values just above and just below the
minimum and maximum values
 Apply guidelines 1 and 2 to output conditions, test cases
should be designed to produce the minimum and maxim
output reports
 If internal program data structures have boundaries (e.g.
size limitations), be certain to test the boundaries

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety critical systems ( such
as aircraft avionics, automobile braking system) in
which independently developed implementations of
redundant systems are tested for conformance to
specifications
 Often equivalence class partitioning is used to
develop a common set of test cases for each
implementation.

Orthogonal Array Testing
 Black-box technique that enables the design of a
reasonably small set of test cases that provide
maximum test coverage
 Focus is on categories of faulty logic likely to be
present in the software component (without
examining the code)
 Priorities for assessing tests using an orthogonal
array
 Detect and isolate all single mode faults
 Detect all double mode faults
 Mutimode faults

White-box or Glass Box
testing
Knowing the internal workings of a product,
tests are performed to check the workings of all
independent logic paths.
It derive test cases that:
 Guarantee that all independent paths within a module
have been exercised at least once.
 Exercise all logical decisions on their true and false
sides.
 Execute all loops at their boundaries and within their
operational bounds, and
 Exercise internal data structures to ensure their
validity.

Techniques being used: basic path and control
structure testing.

White-box or Glass Box
testing
Test data

Tests

Derives
Component
code

Test
outputs

Integration Testing
Tests complete systems or subsystems
composed of integrated components
Integration testing should be black-box testing
with tests derived from the specification
Main difficulty is localising errors
Incremental integration testing reduces this
problem.
Incremental integration strategies include
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to
integration testing
Top-down testing
 Start with high-level system and integrate from the
top-down replacing individual components by stubs
where appropriate

Bottom-up testing
 Integrate individual components in levels until the
complete system is created

In practice, most integration involves a combination
of these strategies

Top-down testing

Level 1

Testing
sequence

Level 2
Level 2
stubs

Level 3
stubs

Level 1

Level 2

Level 2

. ..

Level 2

Bottom-up testing

Test
drivers
Level N

Test
drivers

Level N

Level N–1

Level N

Level N–1

Level N

Level N

Level N–1

Testing
sequence

System Testing
 Recovery testing
 Checks the system’s ability to recover from failures.

 Security testing
 Verifies that system protection mechanism prevent improper
penetration or data alteration

 Stress testing
 Program is checked to see how well it deals with abnormal
resource demands – quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.

Object-oriented Testing

The components to be tested are object classes
that are instantiated as objects
Larger gain than individual functions so
approaches to white-box testing have to be
extended
No obvious ‘top’ to the system for top-down
integration and testing

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List
Item
No.
SR-02

Test Item

Sub –
Test-Sub Item
item No.
Staff Review SR-02-01 Program Officer
Review

SR-02-02 Early Decline Report

Test-Level
A- Basic Function, compulsory
B- Enhanced Function, compulsory
C- Enhanced Function, optional

Level
A

A

Testing Details
SR-02 Staff Review
Item No

SR-02-01

Test Date

Item

Staff Review

Sub-item

PO Review
Report: Early Decline

Precondition
Test Procedure

Test Standard
Test description

 Passed
 Failed

Test Result and
Conclusion
Sin of the Tester

Sign of the
Manager

References
 From software engineering, A practitioner’s
approach by Roger S. Pressman
– Chapter 17: Software testing techniques





Software Testing Fundamentals
Test case design
White-box testing- Basic path, Control Structure Testing
Black-box testing

– Chapter 18: Software Testing Strategies
• A strategic approach to software testing
• Unit, Integration, Validation, System testing

 From Software Engineering, Ian Sommerville
– Part5: Verification and Validation
• Chapter 19: Verification and validation
• Chapter 20: Software testing


Slide 16

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
Static
verification

Requirements
specification

Prototype

High-level
design

Formal
specification

Detailed
design

Program

Dynamic
validation

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yetundiscovered error.

Software testing priciples
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability-the better it works the more efficiently it can
be tested
 Observability-what you see is what you test
 Controllability-the better software can be controlled the
more testing can be automated and optimized
 Decomposability-by controlling the scope of testing, the
more quickly problems can be isolated and retested
intelligently
 Simplicity-the less there is to test, the more quickly we
can test
 Stability-the fewer the changes, the fewer the
disruptions to testing
 Understandability-the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
Test
results

Locate
error

Test
cases

Specification

Design
error repair

Repair
error

Re-test
program

The defect testing process
Test
cases

Design test
cases

Test
data

Prepare test
data

Test
results

Run program
with test data

Test
reports

Compare results
to test cases

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification

Project Planning
Plan

Description

Quality Plan

Describes the quality procedure and
standards that will be used in a project.

Validation Plan

Describes the approach, resources and
schedule used for system validation.

Configuration
Management Plan

Describes the configuration management
procedures and structures to be used.

Maintenance Plan

Predicts the maintenance requirements of
the system, maintenance costs and effort
required.

Staff development Plan

Describes how the skills and experience of
the project team members will be developed.

Verification and Validation
Plan
Requirements
specification

System
specification

System
integration
test plan

Acceptance
test plan

Service

System
desig n

Acceptance
test

Detailed
design

Sub-system
integration
test plan

System
integration test

Module and
unit code
and tess

Sub-system
integration test

Test Plan as a link between development and testing

Testing Process

Unit
testing
Module
testing
Sub-system
testing
System
testing
Acceptance
testing

Component
testing

Integration testing

User
testing

Testing Process
 Unit testing - Individual components are tested
independently, without other system components
 Module testing - Related collections of dependent
components( class, ADT, procedures & functions) are tested,
without other system module.
 Sub-system testing-Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validating
functional and non-functional requirements & Testing of
emergent system properties.
 Acceptance testing-Testing with customer data to check that
it is acceptable. Also called Alpha Testing

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for
intended user in his or her normal work
environment.
Alpha test-version of the complete software is
tested by customer under the supervision of the
developer at the developer’s site.
Beta test-version of the complete software is
tested by customer at his or her own site without
the developer being present

Black-box testing
 Also known as behavioral or functional testing.
 The system is a "black box" whose behavior can be determined only by
studying its inputs and the related outputs.
 Knowing the specified function a product is to perform, tests demonstrate
correct operation based solely on its specification, without regard for its
internal logic.
 Focuses on the functional requirements of the software, i.e. the
information domain, not the implementation, and disregards control
structure.
 The test cases are based on the system specification.
 It is performed during the later stages of testing, such as acceptance
testing or beta testing.
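
Black-box test cases can be written straight from a specification, before or without reading the code. A sketch against a hypothetical leap-year function whose specification is the Gregorian calendar rule:

```python
def is_leap_year(year):
    """Implementation under test; the black-box tester never looks inside."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Test cases derived from the specification alone: each expected output is
# written down from the Gregorian rule, not from reading the code.
spec_cases = [
    (2024, True),   # divisible by 4
    (2023, False),  # not divisible by 4
    (1900, False),  # divisible by 100 but not by 400
    (2000, True),   # divisible by 400
]

failures = [(year, want) for year, want in spec_cases
            if is_leap_year(year) != want]
```

The same case list would work unchanged against any other implementation of the rule, which is exactly the architecture independence claimed below.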

Black-box testing

Figure: the system as a black box. The input test data I contains a subset Ie of inputs causing anomalous behaviour; the output test results O contain a corresponding subset Oe of outputs which reveal the presence of defects.

Black-box testing
Tests are designed to answer the following questions:

 How is functional validity tested?
 How are system behavior and performance tested?
 What classes of input will make good test cases?
 Is the system particularly sensitive to certain input values?
 How are the boundaries of a data class isolated?
 What data rates and data volumes can the system tolerate?
 What effect will specific combinations of data have on system operation?

Advantages of Black box testing
 Validates whether or not a given system conforms to its software
specification.
 Introduces a series of inputs to a system and compares the outputs to a
pre-defined test specification.
 Tests integration between individual system components.
 Tests are architecture independent: they do not concern themselves with
how a given output is produced, only with whether that output is the
desired and expected output.
 Requires no knowledge of the underlying system; one need not be a
software engineer to design black box tests.

Disadvantages of Black box testing
 Offers no guarantee that every line of code has been tested.
 Being architecture independent, it cannot determine the efficiency of the
code.
 Will not find errors, such as memory leaks, that are not explicitly and
immediately exposed by the application.
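
The first disadvantage can be made concrete: a specification-derived suite can pass while some lines never execute. A sketch using Python's tracing hook, with a hypothetical shipping-cost function whose special-case branch the spec-based tests never reach:

```python
import sys

def shipping_cost(weight_kg):
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg > 1000:
        return 5000.0  # special-case branch the spec-based tests never reach
    return 4.0 * weight_kg

executed = set()

def tracer(frame, event, arg):
    # Record each line executed inside shipping_cost, as an offset from its def.
    if event == "line" and frame.f_code.co_name == "shipping_cost":
        executed.add(frame.f_lineno - frame.f_code.co_firstlineno)
    return tracer

sys.settrace(tracer)
assert shipping_cost(10) == 40.0     # spec-derived tests, all passing
assert shipping_cost(250) == 1000.0
sys.settrace(None)

# Line offsets 1, 3 and 5 ran; offset 4 (the special case) never executed.
missed = {4} - executed
```

White-box techniques exist precisely to close this gap.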

Black-box testing techniques
 Graph-based testing methods
 Equivalence Partitioning
 Boundary Value Analysis (BVA)
 Comparison Testing
 Orthogonal Array Testing

Equivalence Partitioning
 Black-box technique that divides the input domain into classes of data
from which test cases can be derived.
 An ideal test case uncovers a whole class of errors (e.g. incorrect
processing of all incorrect data) that might otherwise require many
arbitrary test cases before a general error is observed.
 Equivalence class guidelines:
 If an input condition specifies a range, one valid and two invalid
equivalence classes are defined.
 If an input condition requires a specific value, one valid and two
invalid equivalence classes are defined.
 If an input condition specifies a member of a set, one valid and one
invalid equivalence class are defined.
 If an input condition is Boolean, one valid and one invalid equivalence
class are defined.
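
The range guideline can be sketched directly: for a hypothetical field that must hold an integer count from 4 to 10, one representative value per equivalence class is enough.

```python
def accepts_count(n):
    """Hypothetical validator: the input condition is a range, 4 to 10."""
    return isinstance(n, int) and 4 <= n <= 10

# A range condition yields one valid and two invalid equivalence classes;
# one representative value per class stands in for the whole class.
partitions = {
    "below range (invalid)": 3,
    "within range (valid)": 7,
    "above range (invalid)": 11,
}

verdicts = {name: accepts_count(value) for name, value in partitions.items()}
```

Three test cases replace arbitrary sampling of the whole integer domain, which is the economy the technique aims for.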

Equivalence Partitioning

Figure: valid inputs and invalid inputs form separate partitions; representatives of each partition are fed to the system and the outputs are observed.

Equivalence Partitioning

Figure: partitions for the number of input values: less than 4 (e.g. 3), between 4 and 10 (e.g. 4, 7, 10), and more than 10 (e.g. 11). Partitions for the input values themselves: less than 10000 (e.g. 9999), between 10000 and 99999 (e.g. 10000, 50000, 99999), and more than 99999 (e.g. 100000).

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the input domain
rather than its center.
 BVA guidelines:
 If an input condition specifies a range bounded by values a and b, test
cases should include a and b, and values just above and just below a
and b.
 If an input condition specifies a number of values, test cases should
exercise the minimum and maximum numbers, as well as values just above
and just below the minimum and maximum.
 Apply guidelines 1 and 2 to output conditions: test cases should be
designed to produce the minimum and maximum output reports.
 If internal program data structures have prescribed boundaries (e.g.
size limitations), be certain to test those boundaries.
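
Guideline 1 can be turned into a small generator: for a range bounded by a and b, test a, b, and the values just above and just below each. A sketch, reusing the 4..10 range from the equivalence-partitioning example:

```python
def boundary_values(a, b):
    """Candidate test inputs for a range bounded by values a and b."""
    return sorted({a - 1, a, a + 1, b - 1, b, b + 1})

# For the 4..10 range, BVA clusters the test inputs at the edges, where
# off-by-one defects concentrate.
cases = boundary_values(4, 10)
expected = {n: 4 <= n <= 10 for n in cases}
```

Note the contrast with equivalence partitioning: BVA picks values at the edges of each class rather than one representative from its middle.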

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety-critical systems (such as aircraft avionics
or automobile braking systems) in which independently developed
implementations of redundant systems are tested for conformance to the
same specification.
 Often equivalence class partitioning is used to develop a common set of
test cases for each implementation.
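
Back-to-back testing can be sketched with two independently written implementations of the same specification and a shared set of test cases; the integer square-root specification here is an illustrative stand-in for a redundant safety-critical function:

```python
import math

def isqrt_via_library(n):
    """Version A: delegates to the standard library."""
    return math.isqrt(n)

def isqrt_via_search(n):
    """Version B: independently written, using binary search."""
    lo, hi = 0, n
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid * mid <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

# A common set of test cases is run through both implementations;
# any disagreement points at a defect in at least one of them.
common_cases = [0, 1, 2, 15, 16, 17, 10**6]
disagreements = [n for n in common_cases
                 if isqrt_via_library(n) != isqrt_via_search(n)]
```

No expected outputs need be hand-computed: the two versions act as oracles for each other, which is the point of the technique.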

Orthogonal Array Testing
 Black-box technique that enables the design of a reasonably small set of
test cases that provide maximum test coverage.
 Focus is on categories of faulty logic likely to be present in the
software component (without examining the code).
 Priorities for assessing tests using an orthogonal array:
 Detect and isolate all single-mode faults
 Detect all double-mode faults
 Detect multimode faults
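
The single- and double-mode priorities can be illustrated with the standard L4(2^3) orthogonal array: four test cases cover every pairwise combination of three two-level factors, where the full factorial would need eight. The factor names are illustrative assumptions:

```python
from itertools import combinations, product

# L4(2^3): four runs covering three two-level factors, e.g. browser
# version, cache on/off, compression on/off (hypothetical names).
l4_runs = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

def pairwise_covered(runs, levels=(0, 1)):
    """True if every pair of columns exhibits every combination of levels."""
    for i, j in combinations(range(len(runs[0])), 2):
        seen = {(run[i], run[j]) for run in runs}
        if seen != set(product(levels, repeat=2)):
            return False
    return True

# Four runs give the same double-mode (pairwise) coverage as all 2**3 = 8.
covered = pairwise_covered(l4_runs)
```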

White-box or Glass Box testing
Knowing the internal workings of a product, tests are performed to check
the workings of all independent logic paths.
It derives test cases that:
 Guarantee that all independent paths within a module have been exercised
at least once.
 Exercise all logical decisions on their true and false sides.
 Execute all loops at their boundaries and within their operational
bounds, and
 Exercise internal data structures to ensure their validity.

Techniques used: basis path testing and control structure testing.
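
A sketch of white-box test design for a small hypothetical function: two decisions give three independent paths, and the three tests below exercise each decision on both its true and its false side:

```python
def classify(score):
    """Hypothetical grading function: two decisions, three independent paths."""
    if score > 100:      # decision 1
        return "invalid"
    if score >= 40:      # decision 2
        return "pass"
    return "fail"

# One test per independent path; across the set, each decision is taken
# on both its true and its false side.
path_tests = {
    "decision 1 true": (classify(120), "invalid"),
    "decision 1 false, decision 2 true": (classify(75), "pass"),
    "both decisions false": (classify(10), "fail"),
}

all_paths_ok = all(got == want for got, want in path_tests.values())
```

This is the basis-path idea in miniature: the number of required paths grows with the number of decisions, not with the number of possible inputs.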

White-box or Glass Box testing

Figure: tests are derived from the component code itself; test data drives the tests against the code to produce the test outputs.

Integration Testing
Tests complete systems or sub-systems composed of integrated components.
Integration testing should be black-box testing, with tests derived from
the specification.
The main difficulty is localising errors; incremental integration testing
reduces this problem.
Incremental integration strategies include:
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to integration testing
Top-down testing
 Start with the high-level system and integrate from the top down,
replacing individual components by stubs where appropriate.

Bottom-up testing
 Integrate individual components in levels until the complete system is
created.

In practice, most integration involves a combination of these strategies.
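
The two strategies can be sketched in terms of stubs and drivers: top-down testing replaces a not-yet-integrated lower-level component with a stub, while bottom-up testing exercises a finished lower-level component through a test driver. All names here are hypothetical:

```python
# Top-down: the high-level component is real; the component below it is a stub.
def tax_rate_stub(country):
    """Stub standing in for the unfinished tax-lookup component."""
    return 0.20  # canned answer, just enough to exercise the caller

def gross_price(net, country, rate_lookup=tax_rate_stub):
    """High-level component under test; its collaborator is injectable."""
    return round(net * (1 + rate_lookup(country)), 2)

# Bottom-up: the low-level component is real, exercised by a test driver.
def tax_rate(country):
    """Finished low-level component."""
    return {"NP": 0.13, "UK": 0.20}.get(country, 0.0)

def tax_rate_driver():
    """Test driver: feeds inputs to the component and checks the outputs."""
    return tax_rate("NP") == 0.13 and tax_rate("XX") == 0.0

top_down_result = gross_price(100.0, "UK")  # exercises the caller via the stub
bottom_up_result = tax_rate_driver()
```

As integration proceeds, stubs are replaced by real components (top-down) and drivers by real callers (bottom-up).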

Top-down testing

Figure: the testing sequence starts at Level 1, with the Level 2 components replaced by stubs; the Level 2 components are then integrated, with Level 3 stubs below them, and so on down the hierarchy.

Bottom-up testing

Figure: the testing sequence starts with the Level N components, exercised by test drivers; they are then integrated into the Level N–1 components, which get their own test drivers, and so on up the hierarchy.

System Testing
 Recovery testing
 Checks the system's ability to recover from failures.

 Security testing
 Verifies that system protection mechanisms prevent improper penetration
or data alteration.

 Stress testing
 The program is checked to see how well it deals with abnormal resource
demands – quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software, especially
real-time software.
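
A performance test in this sense can be sketched by timing an operation under a stated load and checking the elapsed time against a budget; the workload size and budget below are illustrative assumptions:

```python
import time

def build_index(records):
    """Operation under test: index a batch of record ids for fast lookup."""
    return set(records)

def performance_test(n_records=200_000, budget_seconds=5.0):
    """Time the operation under a fixed load and compare it to the budget."""
    records = list(range(n_records))
    start = time.perf_counter()
    index = build_index(records)
    hits = sum(1 for r in (0, n_records - 1, n_records) if r in index)
    elapsed = time.perf_counter() - start
    return elapsed <= budget_seconds, hits

within_budget, hits = performance_test()
```

Stress testing follows the same shape but deliberately raises the load past its specified limits to observe how (and how gracefully) the system degrades.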

Object-oriented Testing

The components to be tested are object classes that are instantiated as
objects.
Object classes are larger grain than individual functions, so approaches
to white-box testing have to be extended.
There is no obvious 'top' to the system for top-down integration and
testing.

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List

Item No.  Test Item     Sub-item No.  Test-Sub Item           Level
SR-02     Staff Review  SR-02-01      Program Officer Review  A
                        SR-02-02      Early Decline Report    A

Test-Level:
A - Basic Function, compulsory
B - Enhanced Function, compulsory
C - Enhanced Function, optional

Testing Details
SR-02 Staff Review

Item No: SR-02-01            Test Date:
Item: Staff Review           Sub-item: PO Review (Report: Early Decline)
Precondition:
Test Procedure:
Test Standard:
Test description:
Test Result and Conclusion:   Passed    Failed
Sign of the Tester:          Sign of the Manager:

References
 From Software Engineering: A Practitioner's Approach by Roger S.
Pressman
– Chapter 17: Software Testing Techniques
• Software testing fundamentals
• Test case design
• White-box testing: basis path and control structure testing
• Black-box testing
– Chapter 18: Software Testing Strategies
• A strategic approach to software testing
• Unit, integration, validation, and system testing
 From Software Engineering by Ian Sommerville
– Part 5: Verification and Validation
• Chapter 19: Verification and validation
• Chapter 20: Software testing

Slide 18

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
Static
verification

Requirements
specification

Prototype

High-level
design

Formal
specification

Detailed
design

Program

Dynamic
validation

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yetundiscovered error.

Software testing priciples
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability-the better it works the more efficiently it can
be tested
 Observability-what you see is what you test
 Controllability-the better software can be controlled the
more testing can be automated and optimized
 Decomposability-by controlling the scope of testing, the
more quickly problems can be isolated and retested
intelligently
 Simplicity-the less there is to test, the more quickly we
can test
 Stability-the fewer the changes, the fewer the
disruptions to testing
 Understandability-the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
Test
results

Locate
error

Test
cases

Specification

Design
error repair

Repair
error

Re-test
program

The defect testing process
Test
cases

Design test
cases

Test
data

Prepare test
data

Test
results

Run program
with test data

Test
reports

Compare results
to test cases

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification

Project Planning
Plan

Description

Quality Plan

Describes the quality procedure and
standards that will be used in a project.

Validation Plan

Describes the approach, resources and
schedule used for system validation.

Configuration
Management Plan

Describes the configuration management
procedures and structures to be used.

Maintenance Plan

Predicts the maintenance requirements of
the system, maintenance costs and effort
required.

Staff development Plan

Describes how the skills and experience of
the project team members will be developed.

Verification and Validation
Plan
Requirements
specification

System
specification

System
integration
test plan

Acceptance
test plan

Service

System
desig n

Acceptance
test

Detailed
design

Sub-system
integration
test plan

System
integration test

Module and
unit code
and tess

Sub-system
integration test

Test Plan as a link between development and testing

Testing Process

Unit
testing
Module
testing
Sub-system
testing
System
testing
Acceptance
testing

Component
testing

Integration testing

User
testing

Testing Process
 Unit testing - Individual components are tested independently, without other system components.
 Module testing - Related collections of dependent components (classes, ADTs, procedures and functions) are tested, without other system modules.
 Sub-system testing - Modules are integrated into sub-systems and tested. The focus here should be on interface testing to detect module interface errors or mismatches.
 System testing - Testing of the system as a whole: validating functional and non-functional requirements and testing emergent system properties.
 Acceptance testing - Testing with customer data to check that the system is acceptable. Also called alpha testing.
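As a sketch of the lowest level, a unit test exercises one component in isolation; the `Stack` class below is a hypothetical example, not taken from the slides.

```python
# Hypothetical component under test: a minimal stack.
class Stack:
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()
    def is_empty(self):
        return not self._items

# Unit testing: the component is exercised independently,
# without any other system components present.
def test_push_then_pop_returns_last_item():
    s = Stack()
    s.push(1)
    s.push(2)
    assert s.pop() == 2

def test_new_stack_is_empty():
    assert Stack().is_empty()

test_push_then_pop_returns_last_item()
test_new_stack_is_empty()
print("unit tests passed")
```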

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for the intended user in his or her normal work environment.
Alpha test - a version of the complete software is tested by the customer under the supervision of the developer, at the developer's site.
Beta test - a version of the complete software is tested by the customer at his or her own site, without the developer being present.

Black-box testing
 Also known as behavioral or functional testing.
 The system is a "black box" whose behavior can be determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform, correct operation is demonstrated based solely on its specification, without regard for its internal logic.
 Focuses on the functional requirements of the software, i.e. the information domain, not the implementation of the software, and disregards control structure.
 The program test cases are based on the system specification.
 It is performed during the later stages of testing, such as acceptance testing or beta testing.

Black-box testing

[Diagram: input test data set I (including inputs Ie that cause anomalous behaviour) → System → output test results (including outputs Oe that reveal the presence of defects)]

Black-box testing
Tests are designed to answer the following questions:

 How is functional validity tested?
 How are system behavior and performance tested?
 What classes of input will make good test cases?
 Is the system particularly sensitive to certain input values?
 How are the boundaries of data classes isolated?
 What data rates and data volumes can the system tolerate?
 What effect will specific combinations of data have on system operation?

Advantages of Black box testing
 Validates whether or not a given system conforms to its software specification.
 Introduces a series of inputs to a system and compares the outputs to a pre-defined test specification.
 Tests integration between individual system components.
 Tests are architecture independent: they do not concern themselves with how a given output is produced, only with whether that output is the desired and expected output.
 Requires no knowledge of the underlying system; one need not be a software engineer to design black box tests.

Disadvantages of Black box testing
 Offers no guarantee that every line of code has been tested.
 Being architecture independent, it cannot determine the efficiency of the code.
 Will not find errors, such as memory leaks, that are not explicitly and instantly exposed by the application.

Black-box testing techniques
 Graph-based testing methods
 Equivalence Partitioning
 Boundary Value Analysis (BVA)
 Comparison Testing
 Orthogonal Array Testing

Equivalence Partitioning
 Black-box technique that divides the input domain into classes of data from which test cases can be derived.
 An ideal test case uncovers a class of errors (e.g. incorrect processing of all incorrect data) that might otherwise require many arbitrary test cases to be executed before the general error is observed.
 Equivalence class guidelines:
 If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
 If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
 If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
 If an input condition is Boolean, one valid and one invalid equivalence class are defined.

Equivalence Partitioning

[Diagram: valid inputs and invalid inputs feed into the System, which produces Outputs]

Equivalence Partitioning

[Example: for the number of input values, the partitions are "less than 4", "between 4 and 10" and "more than 10", with test values 3, 4, 7, 10 and 11. For the input values themselves, the partitions are "less than 10000", "between 10000 and 99999" and "more than 99999", with test values 9999, 10000, 50000, 99999 and 100000.]
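The first partition in the example can be sketched in code; the validity-check function below is hypothetical, and the test values are the boundaries of each class plus one mid-class representative.

```python
# Spec (from the example): the program accepts between 4 and 10 input values.
def accepts_count(n):
    """Hypothetical validity check for the number of input values."""
    return 4 <= n <= 10

# Equivalence classes for the count: <4 (invalid), 4..10 (valid), >10 (invalid).
# Chosen test values: class boundaries plus one representative from inside.
for n, expected in [(3, False), (4, True), (7, True), (10, True), (11, False)]:
    assert accepts_count(n) == expected
print("partition tests passed")
```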

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the input domain rather than on its center.
 BVA guidelines:
 If an input condition specifies a range bounded by values a and b, test cases should include a and b, and values just above and just below a and b.
 If an input condition specifies a number of values, test cases should exercise the minimum and maximum numbers, as well as values just above and just below the minimum and maximum.
 Apply guidelines 1 and 2 to output conditions: test cases should be designed to produce the minimum and maximum output reports.
 If internal program data structures have prescribed boundaries (e.g. size limitations), be certain to test those boundaries.
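The first guideline can be sketched as a small helper that, given a range bounded by a and b, yields the boundary-focused test values (the helper name is hypothetical):

```python
def boundary_values(a, b):
    """For an input range bounded by a and b, return a and b plus the
    values just below and just above each boundary (BVA guideline 1)."""
    return [a - 1, a, a + 1, b - 1, b, b + 1]

# For the 4..10 range from the earlier example, these are the values
# a tester would try first.
print(boundary_values(4, 10))  # -> [3, 4, 5, 9, 10, 11]
```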

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety-critical systems (such as aircraft avionics or automobile braking systems) in which independently developed implementations of redundant systems are tested for conformance to specifications.
 Often equivalence class partitioning is used to develop a common set of test cases for each implementation.
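A sketch of the back-to-back idea: two independently developed implementations of one specification are run on a common test set and their outputs compared; both implementations below are hypothetical examples.

```python
# Two independent implementations of one specification:
# "return the largest element of a non-empty list".
def max_impl_a(xs):
    best = xs[0]
    for x in xs[1:]:
        if x > best:
            best = x
    return best

def max_impl_b(xs):
    return sorted(xs)[-1]

# Common test set (here written by hand; equivalence class partitioning
# would normally be used to derive it).
common_tests = [[1], [3, 1, 2], [-5, -9, -1], [7, 7, 7]]

# Back-to-back comparison: any disagreement flags a defect in one version.
for t in common_tests:
    assert max_impl_a(t) == max_impl_b(t), f"implementations disagree on {t}"
print("implementations agree on all common tests")
```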

Orthogonal Array Testing
 Black-box technique that enables the design of a reasonably small set of test cases that provide maximum test coverage.
 Focus is on categories of faulty logic likely to be present in the software component (without examining the code).
 Priorities for assessing tests using an orthogonal array:
 Detect and isolate all single-mode faults.
 Detect all double-mode faults.
 Detect multimode faults.
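As an illustrative sketch, the smallest orthogonal array, L4(2^3), covers every pair of values of three two-level factors in only four runs instead of the eight exhaustive combinations:

```python
from itertools import combinations

# L4(2^3) orthogonal array: 4 runs over 3 two-level factors.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

# Every pair of factors takes on all four value combinations, which is what
# lets the array detect single- and double-mode faults among these factors
# with only 4 of the 8 possible runs.
for i, j in combinations(range(3), 2):
    pairs = {(row[i], row[j]) for row in L4}
    assert pairs == {(0, 0), (0, 1), (1, 0), (1, 1)}
print("L4 covers all factor pairs")
```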

White-box or Glass Box testing
Knowing the internal workings of a product, tests are performed to check the workings of all independent logic paths.
It derives test cases that:
 Guarantee that all independent paths within a module have been exercised at least once.
 Exercise all logical decisions on their true and false sides.
 Execute all loops at their boundaries and within their operational bounds, and
 Exercise internal data structures to ensure their validity.

Techniques used: basis path and control structure testing.
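A sketch of the "exercise all logical decisions on their true and false sides" goal: the hypothetical function below has one decision and two independent paths, so a minimal white-box test set forces the decision both ways.

```python
def classify(n):
    """Hypothetical component: one decision, two independent paths."""
    if n % 2 == 0:          # the decision under test
        return "even"       # path taken when the condition is true
    return "odd"            # path taken when the condition is false

# White-box test set: each test forces the decision down one side, so
# together they exercise both independent paths at least once.
assert classify(4) == "even"   # condition true
assert classify(7) == "odd"    # condition false
print("both branches exercised")
```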

White-box or Glass Box testing

[Diagram: test data derives tests, which are run against the component code to produce test outputs]

Integration Testing
Tests complete systems or subsystems
composed of integrated components
Integration testing should be black-box testing
with tests derived from the specification
Main difficulty is localising errors
Incremental integration testing reduces this
problem.
Incremental integration strategies include
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to
integration testing
Top-down testing
 Start with high-level system and integrate from the
top-down replacing individual components by stubs
where appropriate

Bottom-up testing
 Integrate individual components in levels until the
complete system is created

In practice, most integration involves a combination
of these strategies
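The stub idea in top-down testing can be sketched as follows (all names and values hypothetical): the high-level component is tested first, with a not-yet-integrated lower-level component replaced by a stub that returns canned results.

```python
# Hypothetical lower-level component, not yet integrated: replaced by a
# stub returning a canned answer so the level above can be tested now.
def tax_rate_stub(region):
    return 0.25  # canned result standing in for the real lookup

# High-level component under test; its collaborator is passed in, so a
# stub (during top-down testing) or the real component (later) can be used.
def price_with_tax(net, region, rate_lookup):
    return net * (1 + rate_lookup(region))

# Top-down test: exercise the high-level logic against the stub.
assert price_with_tax(100.0, "np", tax_rate_stub) == 125.0
print("high-level component verified against stub")
```

In bottom-up testing the roles reverse: the low-level component is real, and a small test driver plays the part of the missing caller.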

Top-down testing

[Diagram: the testing sequence proceeds from Level 1 downward; Level 1 is first tested with Level 2 stubs, then the Level 2 components are integrated and tested with Level 3 stubs, and so on down the hierarchy.]

Bottom-up testing

[Diagram: the testing sequence proceeds from Level N upward; Level N components are tested using test drivers, then combined into Level N-1 components which are tested with their own drivers, and so on up the hierarchy.]

System Testing
 Recovery testing
 Checks the system’s ability to recover from failures.

 Security testing
 Verifies that system protection mechanisms prevent improper
penetration or data alteration.

 Stress testing
 Program is checked to see how well it deals with abnormal
resource demands – quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.
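A minimal sketch of a performance check (the operation, data volume and threshold are all hypothetical): the run-time of an operation under a large workload is measured and compared against a stated requirement.

```python
import time

def lookup(table, key):
    """Hypothetical operation whose run-time is under test."""
    return table.get(key)

# Build a large volume of data, in the spirit of stress-style input.
table = {i: i * i for i in range(100_000)}

start = time.perf_counter()
for i in range(10_000):
    lookup(table, i)
elapsed = time.perf_counter() - start

# Performance requirement (hypothetical): 10,000 lookups in under 1 second.
assert elapsed < 1.0, f"performance requirement violated: {elapsed:.3f}s"
print("performance requirement met")
```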

Object-oriented Testing

The components to be tested are object classes that are instantiated as objects.
Object classes are larger grain than individual functions, so approaches to white-box testing have to be extended.
There is no obvious 'top' to the system for top-down integration and testing.

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List

Item No.  Test Item     Sub-item No.  Test Sub-item            Level
SR-02     Staff Review  SR-02-01      Program Officer Review   A
                        SR-02-02      Early Decline Report     A

Test-Level:
A - Basic Function, compulsory
B - Enhanced Function, compulsory
C - Enhanced Function, optional

Testing Details
SR-02 Staff Review

Item No: SR-02-01
Test Date:
Item: Staff Review
Sub-item: PO Review
Report: Early Decline
Precondition:
Test Procedure:
Test Standard:
Test description:
Test Result and Conclusion:  Passed /  Failed
Sign of the Tester:
Sign of the Manager:

References
 From Software Engineering: A Practitioner's Approach by Roger S. Pressman
– Chapter 17: Software Testing Techniques
• Software testing fundamentals
• Test case design
• White-box testing: basis path and control structure testing
• Black-box testing
– Chapter 18: Software Testing Strategies
• A strategic approach to software testing
• Unit, integration, validation and system testing

 From Software Engineering by Ian Sommerville
– Part 5: Verification and Validation
• Chapter 19: Verification and validation
• Chapter 20: Software testing


Slide 19

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
Static
verification

Requirements
specification

Prototype

High-level
design

Formal
specification

Detailed
design

Program

Dynamic
validation

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yetundiscovered error.

Software testing priciples
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability-the better it works the more efficiently it can
be tested
 Observability-what you see is what you test
 Controllability-the better software can be controlled the
more testing can be automated and optimized
 Decomposability-by controlling the scope of testing, the
more quickly problems can be isolated and retested
intelligently
 Simplicity-the less there is to test, the more quickly we
can test
 Stability-the fewer the changes, the fewer the
disruptions to testing
 Understandability-the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
Test
results

Locate
error

Test
cases

Specification

Design
error repair

Repair
error

Re-test
program

The defect testing process
Test
cases

Design test
cases

Test
data

Prepare test
data

Test
results

Run program
with test data

Test
reports

Compare results
to test cases

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification

Project Planning
Plan

Description

Quality Plan

Describes the quality procedure and
standards that will be used in a project.

Validation Plan

Describes the approach, resources and
schedule used for system validation.

Configuration
Management Plan

Describes the configuration management
procedures and structures to be used.

Maintenance Plan

Predicts the maintenance requirements of
the system, maintenance costs and effort
required.

Staff development Plan

Describes how the skills and experience of
the project team members will be developed.

Verification and Validation
Plan
Requirements
specification

System
specification

System
integration
test plan

Acceptance
test plan

Service

System
desig n

Acceptance
test

Detailed
design

Sub-system
integration
test plan

System
integration test

Module and
unit code
and tess

Sub-system
integration test

Test Plan as a link between development and testing

Testing Process

Unit
testing
Module
testing
Sub-system
testing
System
testing
Acceptance
testing

Component
testing

Integration testing

User
testing

Testing Process
 Unit testing - Individual components are tested
independently, without other system components
 Module testing - Related collections of dependent
components( class, ADT, procedures & functions) are tested,
without other system module.
 Sub-system testing-Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validating
functional and non-functional requirements & Testing of
emergent system properties.
 Acceptance testing-Testing with customer data to check that
it is acceptable. Also called Alpha Testing

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for
intended user in his or her normal work
environment.
Alpha test-version of the complete software is
tested by customer under the supervision of the
developer at the developer’s site.
Beta test-version of the complete software is
tested by customer at his or her own site without
the developer being present

Black-box testing
 Also known as behavioral or functional testing.
 The system is a “Blackbox” whose behavior can be
determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform
and demonstrating correct operation based solely on its
specification without regard for its internal logic.
 Focus on the functional requirements of the software
i.e., information domain not the implementation part of
the software and disregards control structure.
 The program test cases are based on the system
specification
 It is performed during later stages of testing like in the
acceptance testing or beta testing.

Black-box testing

Input test data

I

Inputs causing
anomalous
behaviour

e

System

Output test results

Oe

Outputs which reveal
the presence of
defects

Black-box testing
Test are designed to answer the following questions:

 How is functional validity tested?
 How is system behavior and performance tested?
 What classes of input behavior will make good test case?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of data class isolated?
 What data rates and data volume can the system
tolerate?
 What effect will specific combinations of data have on
system operations?

Advantages of Black box testing
 Validates whether or not a given system conforms
to
its software specification
 Introduce a series of inputs to a system and
compare
the outputs to a pre-defined test specification.
 Test integration between individual system
components.
 Tests are architecture independent — they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Require no knowledge of the underlying system,
one
need not be a software engineer to design black
box
tests.

Disadvantages of Black box
testing
 Offer no guarantee that every line of code has been
tested.
 Being architecture independent, it cannot determine
the efficiency of the code.
 Will not find any errors, such as memory leaks, that
are not explicitly and instantly exposed by the
application.

Black-box testing techniques







Graph-based testing methods
Equivalence Partitioning,
Boundary Value Analysis (BVA)
Comparison Testing
Orthogonal Array Testing.

Equivalence Partitioning
 Black-box technique that divides the input domain into
classes of data from which test cases can be derived
 An ideal test case uncovers a class of errors( incorrect
processing of all incorrect data) that might require many
arbitrary test cases to be executed before a general error is
observed
 Equivalence class guidelines:
 If input condition specifies a range, one valid and two invalid
equivalence classes are defined
 If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined
 If an input condition specifies a member of a set, one valid
and one invalid equivalence class is defined
 If an input condition is Boolean, one valid and one invalid
equivalence class is defined

Equivalence Partitioning

Invalid inputs

System

Outputs

Valid in puts

Equivalence Partitioning

3
4

Less than 4

7

11
10

Between 4 and 10

More than 10

Number of input values
9999
10000

Less than 10000
Input values

50000

100000
99999

Between 10000 and 99999

More than 99999

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the
input domain rather than its center
 BVA guidelines:
 If input condition specifies a range bounded by values a and
b, test cases should include a and b, values just above and
just below a and b
 If an input condition specifies and number of values, test
cases should be exercise the minimum and maximum
numbers, as well as values just above and just below the
minimum and maximum values
 Apply guidelines 1 and 2 to output conditions, test cases
should be designed to produce the minimum and maxim
output reports
 If internal program data structures have boundaries (e.g.
size limitations), be certain to test the boundaries

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety critical systems ( such
as aircraft avionics, automobile braking system) in
which independently developed implementations of
redundant systems are tested for conformance to
specifications
 Often equivalence class partitioning is used to
develop a common set of test cases for each
implementation.

Orthogonal Array Testing
 Black-box technique that enables the design of a
reasonably small set of test cases that provide
maximum test coverage
 Focus is on categories of faulty logic likely to be
present in the software component (without
examining the code)
 Priorities for assessing tests using an orthogonal
array
 Detect and isolate all single mode faults
 Detect all double mode faults
 Mutimode faults

White-box or Glass Box
testing
Knowing the internal workings of a product,
tests are performed to check the workings of all
independent logic paths.
It derive test cases that:
 Guarantee that all independent paths within a module
have been exercised at least once.
 Exercise all logical decisions on their true and false
sides.
 Execute all loops at their boundaries and within their
operational bounds, and
 Exercise internal data structures to ensure their
validity.

Techniques being used: basic path and control
structure testing.

White-box or Glass Box
testing
Test data

Tests

Derives
Component
code

Test
outputs

Integration Testing
Tests complete systems or subsystems
composed of integrated components
Integration testing should be black-box testing
with tests derived from the specification
Main difficulty is localising errors
Incremental integration testing reduces this
problem.
Incremental integration strategies include
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to
integration testing
Top-down testing
 Start with high-level system and integrate from the
top-down replacing individual components by stubs
where appropriate

Bottom-up testing
 Integrate individual components in levels until the
complete system is created

In practice, most integration involves a combination
of these strategies

Top-down testing

Level 1

Testing
sequence

Level 2
Level 2
stubs

Level 3
stubs

Level 1

Level 2

Level 2

. ..

Level 2

Bottom-up testing

Test
drivers
Level N

Test
drivers

Level N

Level N–1

Level N

Level N–1

Level N

Level N

Level N–1

Testing
sequence

System Testing
 Recovery testing
 Checks the system’s ability to recover from failures.

 Security testing
 Verifies that system protection mechanism prevent improper
penetration or data alteration

 Stress testing
 Program is checked to see how well it deals with abnormal
resource demands – quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.

Object-oriented Testing

The components to be tested are object classes
that are instantiated as objects
Larger gain than individual functions so
approaches to white-box testing have to be
extended
No obvious ‘top’ to the system for top-down
integration and testing

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List
Item
No.
SR-02

Test Item

Sub –
Test-Sub Item
item No.
Staff Review SR-02-01 Program Officer
Review

SR-02-02 Early Decline Report

Test-Level
A- Basic Function, compulsory
B- Enhanced Function, compulsory
C- Enhanced Function, optional

Level
A

A

Testing Details
SR-02 Staff Review
Item No

SR-02-01

Test Date

Item

Staff Review

Sub-item

PO Review
Report: Early Decline

Precondition
Test Procedure

Test Standard
Test description

 Passed
 Failed

Test Result and
Conclusion
Sin of the Tester

Sign of the
Manager

References
 From software engineering, A practitioner’s
approach by Roger S. Pressman
– Chapter 17: Software testing techniques





Software Testing Fundamentals
Test case design
White-box testing- Basic path, Control Structure Testing
Black-box testing

– Chapter 18: Software Testing Strategies
• A strategic approach to software testing
• Unit, Integration, Validation, System testing

 From Software Engineering, Ian Sommerville
– Part5: Verification and Validation
• Chapter 19: Verification and validation
• Chapter 20: Software testing


Slide 20

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
Static
verification

Requirements
specification

Prototype

High-level
design

Formal
specification

Detailed
design

Program

Dynamic
validation

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yetundiscovered error.

Software testing priciples
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability-the better it works the more efficiently it can
be tested
 Observability-what you see is what you test
 Controllability-the better software can be controlled the
more testing can be automated and optimized
 Decomposability-by controlling the scope of testing, the
more quickly problems can be isolated and retested
intelligently
 Simplicity-the less there is to test, the more quickly we
can test
 Stability-the fewer the changes, the fewer the
disruptions to testing
 Understandability-the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
Test
results

Locate
error

Test
cases

Specification

Design
error repair

Repair
error

Re-test
program

The defect testing process
Test
cases

Design test
cases

Test
data

Prepare test
data

Test
results

Run program
with test data

Test
reports

Compare results
to test cases

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification

Project Planning
Plan

Description

Quality Plan

Describes the quality procedure and
standards that will be used in a project.

Validation Plan

Describes the approach, resources and
schedule used for system validation.

Configuration
Management Plan

Describes the configuration management
procedures and structures to be used.

Maintenance Plan

Predicts the maintenance requirements of
the system, maintenance costs and effort
required.

Staff development Plan

Describes how the skills and experience of
the project team members will be developed.

Verification and Validation Plan

[V-model: the requirements specification drives the acceptance test plan, the system specification drives the system integration test plan, and the system design drives the sub-system integration test plan. After detailed design and module/unit coding and testing, the sub-system integration test, system integration test, and acceptance test are executed in turn before the system enters service.]

Test Plan as a link between development and testing

Testing Process

[Testing stages: unit testing → module testing → sub-system testing → system testing → acceptance testing. Unit and module testing together form component testing; sub-system and system testing form integration testing; acceptance testing is user testing.]

Testing Process
 Unit testing - individual components are tested
independently, without other system components.
 Module testing - related collections of dependent
components (classes, ADTs, procedures and functions) are tested,
without other system modules.
 Sub-system testing - modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - testing of the system as a whole: validating
functional and non-functional requirements and testing
emergent system properties.
 Acceptance testing - testing with customer data to check that
the system is acceptable. Also called alpha testing.
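A minimal sketch of the lowest level, unit testing, using Python's `unittest`; the `discount` component and its rules are hypothetical:

```python
import unittest

# Component under test: a hypothetical discount calculator, tested in
# isolation from any other system component.
def discount(price, rate):
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

class DiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(discount(100.0, 0.2), 80.0)

    def test_no_discount(self):
        self.assertEqual(discount(50.0, 0.0), 50.0)

    def test_invalid_rate_rejected(self):
        with self.assertRaises(ValueError):
            discount(100.0, 1.5)

# Run the unit tests without exiting the interpreter.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

Each test exercises the component on its own; no other system component is involved.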

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for the
intended user in his or her normal work
environment.
Alpha test: a version of the complete software is
tested by the customer under the supervision of the
developer, at the developer's site.
Beta test: a version of the complete software is
tested by the customer at his or her own site, without
the developer being present.

Black-box testing
 Also known as behavioral or functional testing.
 The system is a "black box" whose behaviour can be
determined only by studying its inputs and the related outputs.
 Knowing the specified function a product is to perform,
tests demonstrate correct operation based solely on the
specification, without regard for its internal logic.
 Focuses on the functional requirements of the software,
i.e. the information domain, not the implementation,
and disregards control structure.
 The test cases are based on the system
specification.
 It is performed during the later stages of testing, such as
acceptance testing or beta testing.

Black-box testing

[Model: input test data I, including the subset Ie of inputs causing anomalous behaviour, is fed to the system; the output test results include the subset Oe of outputs which reveal the presence of defects]

Black-box testing
Tests are designed to answer the following questions:

 How is functional validity tested?
 How are system behaviour and performance tested?
 What classes of input will make good test cases?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of data classes isolated?
 What data rates and data volumes can the system
tolerate?
 What effect will specific combinations of data have on
system operation?

Advantages of Black-box testing
 Validates whether or not a given system conforms to
its software specification.
 Introduces a series of inputs to a system and compares
the outputs to a pre-defined test specification.
 Tests integration between individual system
components.
 Tests are architecture-independent: they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Requires no knowledge of the underlying system;
one need not be a software engineer to design
black-box tests.

Disadvantages of Black-box testing
 Offers no guarantee that every line of code has been
tested.
 Being architecture-independent, it cannot determine
the efficiency of the code.
 Will not find errors, such as memory leaks, that
are not explicitly and immediately exposed by the
application.

Black-box testing techniques
 Graph-based testing methods
 Equivalence Partitioning
 Boundary Value Analysis (BVA)
 Comparison Testing
 Orthogonal Array Testing

Equivalence Partitioning
 Black-box technique that divides the input domain into
classes of data from which test cases can be derived.
 An ideal test case uncovers a whole class of errors (e.g.
incorrect processing of all incorrect data) that might otherwise
require many arbitrary test cases before the general error is
observed.
 Equivalence class guidelines:
 If an input condition specifies a range, one valid and two invalid
equivalence classes are defined
 If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined
 If an input condition specifies a member of a set, one valid
and one invalid equivalence class are defined
 If an input condition is Boolean, one valid and one invalid
equivalence class are defined
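The range guideline can be sketched as follows, using the 4-to-10 range from the worked example on the following slide; `accepts_count` is a hypothetical validator:

```python
# Hypothetical validator: a field must contain between 4 and 10 values.
def accepts_count(n):
    return 4 <= n <= 10

# A range condition yields one valid and two invalid equivalence
# classes; one representative value per class covers its partition.
partitions = [
    ("invalid: fewer than 4", 3, False),
    ("valid: between 4 and 10", 7, True),
    ("invalid: more than 10", 11, False),
]

for name, value, expected in partitions:
    assert accepts_count(value) == expected, name
```

Three test cases, one per class, stand in for the arbitrarily many values each partition contains.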

Equivalence Partitioning

[Model: the input domain is partitioned into valid inputs and invalid inputs; the system maps each partition to its outputs]

Equivalence Partitioning

[Example: for a field accepting between 4 and 10 input values, the partitions are "less than 4", "between 4 and 10", and "more than 10", with test values 3, 4, 7, 10 and 11. For an input value that must lie between 10000 and 99999, the partitions are "less than 10000", "between 10000 and 99999", and "more than 99999", with test values 9999, 10000, 50000, 99999 and 100000.]

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the
input domain rather than its center.
 BVA guidelines:
 If an input condition specifies a range bounded by values a and
b, test cases should include a and b, and values just above and
just below a and b
 If an input condition specifies a number of values, test
cases should exercise the minimum and maximum
numbers, as well as values just above and just below the
minimum and maximum
 Applying guidelines 1 and 2 to output conditions, test cases
should be designed to produce the minimum and maximum
output reports
 If internal program data structures have boundaries (e.g.
size limitations), be certain to test those boundaries
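The first guideline can be sketched as a small helper; the 10000-to-99999 range and the `is_valid` validator reuse the earlier equivalence-partitioning example:

```python
# For a range bounded by a and b, test a and b themselves plus the
# values just below and just above each boundary (guideline 1).
def boundary_values(a, b, step=1):
    return [a - step, a, a + step, b - step, b, b + step]

# Hypothetical validator for the 10000..99999 range.
def is_valid(n):
    return 10000 <= n <= 99999

results = {n: is_valid(n) for n in boundary_values(10000, 99999)}
assert results[9999] is False and results[10000] is True
assert results[99999] is True and results[100000] is False
```

Off-by-one errors cluster at exactly these six points, which is why BVA concentrates test effort there rather than in the middle of the range.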

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety-critical systems (such
as aircraft avionics or automobile braking systems) in
which independently developed implementations of
redundant systems are tested for conformance to the
same specification.
 Often equivalence class partitioning is used to
develop a common set of test cases for each
implementation.
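A minimal back-to-back sketch: two independently written implementations of the same specification (absolute value) are run on one common set of test cases and their outputs compared; both implementations are invented for illustration:

```python
import random

# Implementation A: explicit sign test.
def abs_a(x):
    return x if x >= 0 else -x

# Implementation B: independently written, using max().
def abs_b(x):
    return max(x, -x)

# A common set of test cases is applied to both implementations and
# any disagreement flags a defect in at least one of them.
random.seed(1)  # reproducible common test set
common_cases = [random.randint(-1000, 1000) for _ in range(100)]
for x in common_cases:
    assert abs_a(x) == abs_b(x), f"implementations disagree on {x}"
```

A disagreement does not say which implementation is wrong, only that the redundant versions cannot both conform to the specification.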

Orthogonal Array Testing
 Black-box technique that enables the design of a
reasonably small set of test cases that provide
maximum test coverage.
 Focus is on categories of faulty logic likely to be
present in the software component (without
examining the code).
 Priorities for assessing tests using an orthogonal
array:
 Detect and isolate all single-mode faults
 Detect all double-mode faults
 Detect multi-mode faults

White-box or Glass-box testing
Knowing the internal workings of a product,
tests are performed to check the workings of all
independent logic paths.
It derives test cases that:
 Guarantee that all independent paths within a module
have been exercised at least once.
 Exercise all logical decisions on their true and false
sides.
 Execute all loops at their boundaries and within their
operational bounds, and
 Exercise internal data structures to ensure their
validity.

Techniques used: basis path and control
structure testing.
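A minimal white-box sketch: knowing the internal logic of the (hypothetical) `grade` component, inputs are chosen so that every decision is exercised on both its true and false side:

```python
# Component under test, with two decisions in its control structure.
def grade(score):
    if score < 0 or score > 100:   # decision 1: range check
        raise ValueError("score out of range")
    if score >= 40:                # decision 2: pass mark
        return "pass"
    return "fail"

# Decision 1 taken true, once via each operand of the condition:
for bad in (-1, 101):
    try:
        grade(bad)
        assert False, "expected ValueError"
    except ValueError:
        pass

# Decision 1 false, with decision 2 on both its true and false side:
assert grade(40) == "pass"
assert grade(39) == "fail"
```

Unlike the black-box examples, these inputs were selected by reading the code, not the specification.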

White-box or Glass-box testing

[Model: the component code derives the tests; running those tests against the code with test data produces the test outputs]

Integration Testing
Tests complete systems or subsystems
composed of integrated components
Integration testing should be black-box testing
with tests derived from the specification
Main difficulty is localising errors
Incremental integration testing reduces this
problem.
Incremental integration strategies include
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to integration testing
Top-down testing
 Start with the high-level system and integrate from the
top down, replacing individual components by stubs
where appropriate

Bottom-up testing
 Integrate individual components in levels until the
complete system is created

In practice, most integration involves a combination
of these strategies
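A minimal sketch of the top-down approach, with the not-yet-integrated lower level replaced by a stub; all names here are illustrative, not from the slides:

```python
# Stub: stands in for the not-yet-integrated database layer by
# returning canned data instead of querying anything real.
def fetch_orders_stub():
    return [("widget", 2), ("gadget", 1)]

# High-level component under test; the lower level it depends on is
# injected, so the stub can be swapped for the real layer later.
def order_report(fetch_orders=fetch_orders_stub):
    lines = [f"{name} x{qty}" for name, qty in fetch_orders()]
    return "\n".join(lines)

# The top level can be exercised before the lower level exists:
assert order_report() == "widget x2\ngadget x1"
```

Bottom-up testing reverses the roles: the lower-level components are real, and a test driver stands in for the missing caller.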

Top-down testing

[Diagram: testing begins at Level 1 with Level 2 stubs; as integration proceeds down the testing sequence, real Level 2 components replace the stubs and Level 3 stubs stand in below them]

Bottom-up testing

[Diagram: Level N components are tested first under test drivers; the testing sequence moves upward as Level N–1 components are integrated, with the drivers replaced at each level]

System Testing
 Recovery testing
 Checks the system's ability to recover from failures.

 Security testing
 Verifies that system protection mechanisms prevent improper
penetration or data alteration.

 Stress testing
 The program is checked to see how well it deals with abnormal
resource demands – quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.

Object-oriented Testing

The components to be tested are object classes
that are instantiated as objects.
Classes are larger-grained than individual functions,
so approaches to white-box testing have to be
extended.
There is no obvious 'top' to the system for top-down
integration and testing.

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List

Item No. | Test Item    | Sub-item No. | Test Sub-item          | Level
SR-02    | Staff Review | SR-02-01     | Program Officer Review | A
         |              | SR-02-02     | Early Decline Report   | A

Test Levels:
A - Basic function, compulsory
B - Enhanced function, compulsory
C - Enhanced function, optional

Testing Details

SR-02 Staff Review
Item No: SR-02-01                      Test Date:
Item: Staff Review
Sub-item: PO Review (Report: Early Decline)
Precondition:
Test Procedure:
Test Standard / Test Description:
Test Result and Conclusion:  Passed   Failed
Sign of the Tester:
Sign of the Manager:

References
 From Software Engineering: A Practitioner's Approach by Roger S. Pressman
– Chapter 17: Software Testing Techniques
• Software testing fundamentals
• Test case design
• White-box testing: basis path, control structure testing
• Black-box testing
– Chapter 18: Software Testing Strategies
• A strategic approach to software testing
• Unit, integration, validation, and system testing
 From Software Engineering by Ian Sommerville
– Part 5: Verification and Validation
• Chapter 19: Verification and Validation
• Chapter 20: Software Testing

Slide 21

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
Static
verification

Requirements
specification

Prototype

High-level
design

Formal
specification

Detailed
design

Program

Dynamic
validation

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yetundiscovered error.

Software testing priciples
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability-the better it works the more efficiently it can
be tested
 Observability-what you see is what you test
 Controllability-the better software can be controlled the
more testing can be automated and optimized
 Decomposability-by controlling the scope of testing, the
more quickly problems can be isolated and retested
intelligently
 Simplicity-the less there is to test, the more quickly we
can test
 Stability-the fewer the changes, the fewer the
disruptions to testing
 Understandability-the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
Test
results

Locate
error

Test
cases

Specification

Design
error repair

Repair
error

Re-test
program

The defect testing process
Test
cases

Design test
cases

Test
data

Prepare test
data

Test
results

Run program
with test data

Test
reports

Compare results
to test cases

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification

Project Planning
Plan

Description

Quality Plan

Describes the quality procedure and
standards that will be used in a project.

Validation Plan

Describes the approach, resources and
schedule used for system validation.

Configuration
Management Plan

Describes the configuration management
procedures and structures to be used.

Maintenance Plan

Predicts the maintenance requirements of
the system, maintenance costs and effort
required.

Staff development Plan

Describes how the skills and experience of
the project team members will be developed.

Verification and Validation
Plan
Requirements
specification

System
specification

System
integration
test plan

Acceptance
test plan

Service

System
desig n

Acceptance
test

Detailed
design

Sub-system
integration
test plan

System
integration test

Module and
unit code
and tess

Sub-system
integration test

Test Plan as a link between development and testing

Testing Process

Unit
testing
Module
testing
Sub-system
testing
System
testing
Acceptance
testing

Component
testing

Integration testing

User
testing

Testing Process
 Unit testing - Individual components are tested
independently, without other system components
 Module testing - Related collections of dependent
components( class, ADT, procedures & functions) are tested,
without other system module.
 Sub-system testing-Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validating
functional and non-functional requirements & Testing of
emergent system properties.
 Acceptance testing-Testing with customer data to check that
it is acceptable. Also called Alpha Testing

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for
intended user in his or her normal work
environment.
Alpha test-version of the complete software is
tested by customer under the supervision of the
developer at the developer’s site.
Beta test-version of the complete software is
tested by customer at his or her own site without
the developer being present

Black-box testing
 Also known as behavioral or functional testing.
 The system is a “Blackbox” whose behavior can be
determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform
and demonstrating correct operation based solely on its
specification without regard for its internal logic.
 Focus on the functional requirements of the software
i.e., information domain not the implementation part of
the software and disregards control structure.
 The program test cases are based on the system
specification
 It is performed during later stages of testing like in the
acceptance testing or beta testing.

Black-box testing

Input test data

I

Inputs causing
anomalous
behaviour

e

System

Output test results

Oe

Outputs which reveal
the presence of
defects

Black-box testing
Test are designed to answer the following questions:

 How is functional validity tested?
 How is system behavior and performance tested?
 What classes of input behavior will make good test case?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of data class isolated?
 What data rates and data volume can the system
tolerate?
 What effect will specific combinations of data have on
system operations?

Advantages of Black box testing
 Validates whether or not a given system conforms
to
its software specification
 Introduce a series of inputs to a system and
compare
the outputs to a pre-defined test specification.
 Test integration between individual system
components.
 Tests are architecture independent — they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Require no knowledge of the underlying system,
one
need not be a software engineer to design black
box
tests.

Disadvantages of Black box
testing
 Offer no guarantee that every line of code has been
tested.
 Being architecture independent, it cannot determine
the efficiency of the code.
 Will not find any errors, such as memory leaks, that
are not explicitly and instantly exposed by the
application.

Black-box testing techniques







Graph-based testing methods
Equivalence Partitioning,
Boundary Value Analysis (BVA)
Comparison Testing
Orthogonal Array Testing.

Equivalence Partitioning
 Black-box technique that divides the input domain into
classes of data from which test cases can be derived
 An ideal test case uncovers a class of errors( incorrect
processing of all incorrect data) that might require many
arbitrary test cases to be executed before a general error is
observed
 Equivalence class guidelines:
 If input condition specifies a range, one valid and two invalid
equivalence classes are defined
 If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined
 If an input condition specifies a member of a set, one valid
and one invalid equivalence class is defined
 If an input condition is Boolean, one valid and one invalid
equivalence class is defined

Equivalence Partitioning

Invalid inputs

System

Outputs

Valid in puts

Equivalence Partitioning

3
4

Less than 4

7

11
10

Between 4 and 10

More than 10

Number of input values
9999
10000

Less than 10000
Input values

50000

100000
99999

Between 10000 and 99999

More than 99999

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the
input domain rather than its center
 BVA guidelines:
 If input condition specifies a range bounded by values a and
b, test cases should include a and b, values just above and
just below a and b
 If an input condition specifies and number of values, test
cases should be exercise the minimum and maximum
numbers, as well as values just above and just below the
minimum and maximum values
 Apply guidelines 1 and 2 to output conditions, test cases
should be designed to produce the minimum and maxim
output reports
 If internal program data structures have boundaries (e.g.
size limitations), be certain to test the boundaries

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety critical systems ( such
as aircraft avionics, automobile braking system) in
which independently developed implementations of
redundant systems are tested for conformance to
specifications
 Often equivalence class partitioning is used to
develop a common set of test cases for each
implementation.

Orthogonal Array Testing
 Black-box technique that enables the design of a
reasonably small set of test cases that provide
maximum test coverage
 Focus is on categories of faulty logic likely to be
present in the software component (without
examining the code)
 Priorities for assessing tests using an orthogonal
array
 Detect and isolate all single mode faults
 Detect all double mode faults
 Mutimode faults

White-box or Glass Box
testing
Knowing the internal workings of a product,
tests are performed to check the workings of all
independent logic paths.
It derive test cases that:
 Guarantee that all independent paths within a module
have been exercised at least once.
 Exercise all logical decisions on their true and false
sides.
 Execute all loops at their boundaries and within their
operational bounds, and
 Exercise internal data structures to ensure their
validity.

Techniques being used: basic path and control
structure testing.

White-box or Glass Box
testing
Test data

Tests

Derives
Component
code

Test
outputs

Integration Testing
Tests complete systems or subsystems
composed of integrated components
Integration testing should be black-box testing
with tests derived from the specification
Main difficulty is localising errors
Incremental integration testing reduces this
problem.
Incremental integration strategies include
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to
integration testing
Top-down testing
 Start with high-level system and integrate from the
top-down replacing individual components by stubs
where appropriate

Bottom-up testing
 Integrate individual components in levels until the
complete system is created

In practice, most integration involves a combination
of these strategies

Top-down testing

Level 1

Testing
sequence

Level 2
Level 2
stubs

Level 3
stubs

Level 1

Level 2

Level 2

. ..

Level 2

Bottom-up testing

Test
drivers
Level N

Test
drivers

Level N

Level N–1

Level N

Level N–1

Level N

Level N

Level N–1

Testing
sequence

System Testing
 Recovery testing
 Checks the system’s ability to recover from failures.

 Security testing
 Verifies that system protection mechanism prevent improper
penetration or data alteration

 Stress testing
 Program is checked to see how well it deals with abnormal
resource demands – quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.

Object-oriented Testing

The components to be tested are object classes
that are instantiated as objects
Larger gain than individual functions so
approaches to white-box testing have to be
extended
No obvious ‘top’ to the system for top-down
integration and testing

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List
Item
No.
SR-02

Test Item

Sub –
Test-Sub Item
item No.
Staff Review SR-02-01 Program Officer
Review

SR-02-02 Early Decline Report

Test-Level
A- Basic Function, compulsory
B- Enhanced Function, compulsory
C- Enhanced Function, optional

Level
A

A

Testing Details
SR-02 Staff Review
Item No

SR-02-01

Test Date

Item

Staff Review

Sub-item

PO Review
Report: Early Decline

Precondition
Test Procedure

Test Standard
Test description

 Passed
 Failed

Test Result and
Conclusion
Sin of the Tester

Sign of the
Manager

References
 From software engineering, A practitioner’s
approach by Roger S. Pressman
– Chapter 17: Software testing techniques





Software Testing Fundamentals
Test case design
White-box testing- Basic path, Control Structure Testing
Black-box testing

– Chapter 18: Software Testing Strategies
• A strategic approach to software testing
• Unit, Integration, Validation, System testing

 From Software Engineering, Ian Sommerville
– Part5: Verification and Validation
• Chapter 19: Verification and validation
• Chapter 20: Software testing


Slide 22

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
Static
verification

Requirements
specification

Prototype

High-level
design

Formal
specification

Detailed
design

Program

Dynamic
validation

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yetundiscovered error.

Software testing priciples
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability-the better it works the more efficiently it can
be tested
 Observability-what you see is what you test
 Controllability-the better software can be controlled the
more testing can be automated and optimized
 Decomposability-by controlling the scope of testing, the
more quickly problems can be isolated and retested
intelligently
 Simplicity-the less there is to test, the more quickly we
can test
 Stability-the fewer the changes, the fewer the
disruptions to testing
 Understandability-the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
Figure (the debugging process): the test results, test cases and specification are used to locate the error; an error repair is designed, the error is repaired, and the program is re-tested.

The defect testing process
Figure: design test cases → prepare test data → run program with test data → compare results to test cases; the artefacts produced are the test cases, test data, test results and test reports.

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification
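As a minimal sketch (the function and cases are hypothetical), a test case can be written as test data paired with the output predicted by the specification:

```python
# A test case pairs test data with the output predicted by the
# specification. absolute_value and the cases are hypothetical examples.

def absolute_value(x):
    """Function under test: by its specification, returns |x|."""
    return x if x >= 0 else -x

# Each test case: (test data, predicted output)
test_cases = [(5, 5), (-3, 3), (0, 0)]

for data, predicted in test_cases:
    actual = absolute_value(data)
    status = "pass" if actual == predicted else "FAIL"
    print(f"input={data} predicted={predicted} actual={actual} -> {status}")
```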

Project Planning
Quality Plan: Describes the quality procedures and standards that will be used in a project.

Validation Plan: Describes the approach, resources and schedule used for system validation.

Configuration Management Plan: Describes the configuration management procedures and structures to be used.

Maintenance Plan: Predicts the maintenance requirements of the system, maintenance costs and effort required.

Staff Development Plan: Describes how the skills and experience of the project team members will be developed.

Verification and Validation
Plan
Figure (V-model): the requirements specification drives the acceptance test plan, the system specification drives the system integration test plan, and the system design drives the sub-system integration test plan; detailed design leads to module and unit code and tests, followed by sub-system integration testing, system integration testing, acceptance testing, and finally service.
Test Plan as a link between development and testing

Testing Process

Figure: unit testing → module testing → sub-system testing → system testing → acceptance testing. Unit and module testing constitute component testing; sub-system and system testing constitute integration testing; acceptance testing is user testing.

Testing Process
 Unit testing - Individual components are tested independently, without other system components.
 Module testing - Related collections of dependent components (classes, ADTs, procedures and functions) are tested, without other system modules.
 Sub-system testing - Modules are integrated into sub-systems and tested. The focus here should be on interface testing to detect module interface errors or mismatches.
 System testing - Testing of the system as a whole: validating functional and non-functional requirements and testing emergent system properties.
 Acceptance testing - Testing with customer data to check that the system is acceptable. Also called alpha testing.
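The unit-testing idea above can be sketched with Python's unittest module; the component under test and its cases are invented purely for illustration:

```python
# A hypothetical unit test: one component (apply_discount) is tested in
# isolation, without any other system components present.
import unittest

def apply_discount(price, percent):
    """Component under test (hypothetical): reduce price by percent."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class TestApplyDiscount(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200, 25), 150)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(80, 0), 80)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100, 150)
```

The suite can be run with `python -m unittest <file>`.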

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for the intended user in his or her normal work environment.
Alpha test - a version of the complete software is tested by the customer under the supervision of the developer, at the developer's site.
Beta test - a version of the complete software is tested by the customer at his or her own site, without the developer being present.

Black-box testing
 Also known as behavioral or functional testing.
 The system is a “black box” whose behavior can be determined only by studying its inputs and the related outputs.
 Knowing the specified function a product is to perform, correct operation is demonstrated based solely on the specification, without regard for the product's internal logic.
 Focuses on the functional requirements of the software, i.e., the information domain, not the implementation of the software, and disregards control structure.
 The test cases are based on the system specification.
 It is performed during later stages of testing, such as acceptance testing or beta testing.
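A minimal sketch of the idea, with a hypothetical function and specification: the test exercises the system only through its specified inputs and outputs, never inspecting the implementation.

```python
# Black-box sketch: the function is exercised purely through its specified
# input/output behaviour. leap_year and its spec are hypothetical examples.

def leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Specification-derived cases: {input: expected output}
spec_cases = {2000: True, 1900: False, 2024: True, 2023: False}

failures = [y for y, expected in spec_cases.items() if leap_year(y) != expected]
print("all cases pass" if not failures else f"failing inputs: {failures}")
```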

Black-box testing

Figure: input test data I (with the subset Ie of inputs causing anomalous behaviour) is fed to the system, producing output test results O (with the subset Oe of outputs which reveal the presence of defects).

Black-box testing
Tests are designed to answer the following questions:

 How is functional validity tested?
 How are system behavior and performance tested?
 What classes of input will make good test cases?
 Is the system particularly sensitive to certain input values?
 How are the boundaries of a data class isolated?
 What data rates and data volumes can the system tolerate?
 What effect will specific combinations of data have on system operation?

Advantages of Black box testing
 Validates whether or not a given system conforms to its software specification.
 Introduces a series of inputs to a system and compares the outputs to a pre-defined test specification.
 Tests integration between individual system components.
 Tests are architecture-independent: they do not concern themselves with how a given output is produced, only with whether that output is the desired and expected output.
 Requires no knowledge of the underlying system; one need not be a software engineer to design black-box tests.

Disadvantages of Black box testing
 Offers no guarantee that every line of code has been tested.
 Being architecture-independent, it cannot determine the efficiency of the code.
 Will not find errors, such as memory leaks, that are not explicitly and immediately exposed by the application.

Black-box testing techniques
 Graph-based testing methods
 Equivalence Partitioning
 Boundary Value Analysis (BVA)
 Comparison Testing
 Orthogonal Array Testing

Equivalence Partitioning
 Black-box technique that divides the input domain into classes of data from which test cases can be derived.
 An ideal test case uncovers a class of errors (e.g. incorrect processing of all incorrect data) that might otherwise require many arbitrary test cases to be executed before the general error is observed.
 Equivalence class guidelines:
 If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
 If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
 If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
 If an input condition is Boolean, one valid and one invalid equivalence class are defined.

Equivalence Partitioning

Figure: invalid inputs and valid inputs partition the input domain; the system maps each partition to outputs.

Equivalence Partitioning

Example: for an input of between 4 and 10 values, each between 10000 and 99999, the partitions for the number of input values are less than 4, between 4 and 10, and more than 10 (representative values 3, 4, 7, 10, 11); the partitions for each input value are less than 10000, between 10000 and 99999, and more than 99999 (representative values 9999, 10000, 50000, 99999, 100000).
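The example partitions (4 to 10 input values, each between 10000 and 99999) can be sketched as follows; the checker function is a hypothetical stand-in for the specification:

```python
# Equivalence-partitioning sketch: pick one representative per class
# rather than testing every possible input. valid_input is hypothetical.

def valid_input(values):
    """Checker implementing the (hypothetical) specification."""
    return (4 <= len(values) <= 10 and
            all(10000 <= v <= 99999 for v in values))

# One representative per equivalence class.
representatives = {
    "valid":      ([50000] * 7, True),
    "too few":    ([50000] * 3, False),
    "too many":   ([50000] * 11, False),
    "value low":  ([9999] * 7, False),
    "value high": ([100000] * 7, False),
}

for name, (data, expected) in representatives.items():
    assert valid_input(data) == expected, name
print("one representative per partition checked")
```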

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the input domain rather than its center.
 BVA guidelines:
 If an input condition specifies a range bounded by values a and b, test cases should include a and b, and values just above and just below a and b.
 If an input condition specifies a number of values, test cases should exercise the minimum and maximum numbers, as well as values just above and just below the minimum and maximum values.
 Apply guidelines 1 and 2 to output conditions: test cases should be designed to produce the minimum and maximum output reports.
 If internal program data structures have prescribed boundaries (e.g. size limitations), be certain to test those boundaries.
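Guideline 1 above can be sketched for an integer range (the range and helper are hypothetical examples):

```python
# BVA sketch for a range bounded by a and b: test a, b, and the values
# just above and just below each. An integer input domain is assumed.

def boundary_values(a, b):
    """Candidate test inputs around the boundaries of [a, b]."""
    return sorted({a - 1, a, a + 1, b - 1, b, b + 1})

print(boundary_values(10000, 99999))
# [9999, 10000, 10001, 99998, 99999, 100000]

# Values just outside the range should be rejected, those inside accepted.
def in_range(v):
    return 10000 <= v <= 99999

for v in boundary_values(10000, 99999):
    print(v, "accepted" if in_range(v) else "rejected")
```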

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety-critical systems (such as aircraft avionics or automobile braking systems) in which independently developed implementations of redundant systems are tested for conformance to the specification.
 Often equivalence class partitioning is used to develop a common set of test cases for each implementation.
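A back-to-back sketch: two independently written implementations of one specification are run on a common set of cases and their outputs compared. Both implementations here are hypothetical stand-ins.

```python
# Back-to-back testing sketch: compare two implementations of sorting.

def sort_a(xs):
    """Implementation A: the built-in sort."""
    return sorted(xs)

def sort_b(xs):
    """Implementation B: an independent insertion sort."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

common_cases = [[3, 1, 2], [], [5, 5, 1], [9]]
mismatches = [c for c in common_cases if sort_a(c) != sort_b(c)]
print("implementations agree" if not mismatches else f"disagree on {mismatches}")
```

A disagreement does not say which implementation is wrong, only that at least one deviates from the common specification.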

Orthogonal Array Testing
 Black-box technique that enables the design of a reasonably small set of test cases that provide maximum test coverage.
 Focus is on categories of faulty logic likely to be present in the software component (without examining the code).
 Priorities for assessing tests using an orthogonal array:
 Detect and isolate all single-mode faults.
 Detect all double-mode faults.
 Detect multimode faults.
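A small sketch of why orthogonal arrays keep the test set small: the standard L4(2^3) array covers every pairwise combination of three two-level factors in four runs instead of all eight exhaustive combinations. The array below is the textbook L4 layout.

```python
# L4(2^3) orthogonal array: 4 runs for 3 two-level factors.
from itertools import combinations, product

L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

# Verify the defining property: for each pair of columns, every level
# combination (0,0), (0,1), (1,0), (1,1) appears exactly once.
for c1, c2 in combinations(range(3), 2):
    seen = [(row[c1], row[c2]) for row in L4]
    assert sorted(seen) == sorted(product((0, 1), repeat=2))
print("L4 covers all pairwise combinations in 4 of 8 possible runs")
```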

White-box or Glass Box
testing
Knowing the internal workings of a product, tests are performed to check the workings of all independent logic paths.
It derives test cases that:
 Guarantee that all independent paths within a module have been exercised at least once.
 Exercise all logical decisions on their true and false sides.
 Execute all loops at their boundaries and within their operational bounds, and
 Exercise internal data structures to ensure their validity.

Techniques used: basis path and control structure testing.
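A minimal sketch of path coverage on a hypothetical function: the cases are chosen from the code's structure so that every independent path, and both sides of each decision, is exercised.

```python
# White-box sketch: test cases derived from the internal structure of a
# hypothetical function so that all three paths through it are exercised.

def classify(n):
    if n < 0:          # decision 1
        return "negative"
    if n == 0:         # decision 2
        return "zero"
    return "positive"

# Three cases cover all paths (and both outcomes of each decision).
path_cases = {-5: "negative", 0: "zero", 7: "positive"}
for n, expected in path_cases.items():
    assert classify(n) == expected
print("all independent paths exercised")
```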

White-box or Glass Box
testing
Figure: tests are derived from the component code; running the tests against test data produces the test outputs.

Integration Testing
Tests complete systems or subsystems
composed of integrated components
Integration testing should be black-box testing
with tests derived from the specification
Main difficulty is localising errors
Incremental integration testing reduces this
problem.
Incremental integration strategies include
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to
integration testing
Top-down testing
 Start with high-level system and integrate from the
top-down replacing individual components by stubs
where appropriate

Bottom-up testing
 Integrate individual components in levels until the
complete system is created

In practice, most integration involves a combination
of these strategies
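The stub idea in top-down testing can be sketched as follows (all names are hypothetical): the top-level component is exercised before the real lower-level component exists, with a stub returning a canned answer in its place.

```python
# Top-down sketch: the top-level component is tested first, with the
# lower-level component replaced by a stub.

def tax_rate_stub(country):
    """Stub standing in for the real, not-yet-integrated tax lookup."""
    return 0.10  # canned value

def total_price(net, country, tax_rate=tax_rate_stub):
    """Top-level component under test."""
    return round(net * (1 + tax_rate(country)), 2)

# The top level can be exercised before the real lookup exists.
print(total_price(100.0, "NP"))  # 110.0, using the stub's canned 10% rate
```

In bottom-up testing the roles reverse: the low-level component is real and a test driver supplies the calls.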

Top-down testing

Figure: the testing sequence proceeds top-down; level 1 is tested first with level 2 stubs, then the level 2 components are integrated and tested with level 3 stubs, and so on down the hierarchy.

Bottom-up testing

Figure: the testing sequence proceeds bottom-up; level N components are tested first using test drivers, then integrated into level N-1 components, which are tested with their own drivers, and so on up to the complete system.

System Testing
 Recovery testing
 Checks the system’s ability to recover from failures.

 Security testing
 Verifies that system protection mechanisms prevent improper penetration or data alteration.

 Stress testing
 Program is checked to see how well it deals with abnormal
resource demands – quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.

Object-oriented Testing

The components to be tested are object classes that are instantiated as objects.
Classes are larger-grain than individual functions, so approaches to white-box testing have to be extended.
There is no obvious ‘top’ to the system for top-down integration and testing.

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List
Item No. SR-02 - Test Item: Staff Review
 Sub-item No. SR-02-01 - Test Sub-item: Program Officer Review (Level A)
 Sub-item No. SR-02-02 - Test Sub-item: Early Decline Report (Level A)

Test levels:
A - basic function, compulsory
B - enhanced function, compulsory
C - enhanced function, optional

Testing Details
SR-02 Staff Review
Item No: SR-02-01
Test Date:
Item: Staff Review
Sub-item: PO Review (Report: Early Decline)
Precondition:
Test Procedure:
Test Standard:
Test description:
Test Result and Conclusion:  Passed /  Failed
Sign of the Tester:
Sign of the Manager:

References
 From Software Engineering: A Practitioner's Approach by Roger S. Pressman
– Chapter 17: Software Testing Techniques
• Software testing fundamentals
• Test case design
• White-box testing: basis path, control structure testing
• Black-box testing
– Chapter 18: Software Testing Strategies
• A strategic approach to software testing
• Unit, integration, validation, system testing
 From Software Engineering by Ian Sommerville
– Part 5: Verification and Validation
• Chapter 19: Verification and validation
• Chapter 20: Software testing

Slide 23

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
Static
verification

Requirements
specification

Prototype

High-level
design

Formal
specification

Detailed
design

Program

Dynamic
validation

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yetundiscovered error.

Software testing priciples
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability-the better it works the more efficiently it can
be tested
 Observability-what you see is what you test
 Controllability-the better software can be controlled the
more testing can be automated and optimized
 Decomposability-by controlling the scope of testing, the
more quickly problems can be isolated and retested
intelligently
 Simplicity-the less there is to test, the more quickly we
can test
 Stability-the fewer the changes, the fewer the
disruptions to testing
 Understandability-the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
Test
results

Locate
error

Test
cases

Specification

Design
error repair

Repair
error

Re-test
program

The defect testing process
Test
cases

Design test
cases

Test
data

Prepare test
data

Test
results

Run program
with test data

Test
reports

Compare results
to test cases

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification

Project Planning
Plan

Description

Quality Plan

Describes the quality procedure and
standards that will be used in a project.

Validation Plan

Describes the approach, resources and
schedule used for system validation.

Configuration
Management Plan

Describes the configuration management
procedures and structures to be used.

Maintenance Plan

Predicts the maintenance requirements of
the system, maintenance costs and effort
required.

Staff development Plan

Describes how the skills and experience of
the project team members will be developed.

Verification and Validation
Plan
Requirements
specification

System
specification

System
integration
test plan

Acceptance
test plan

Service

System
desig n

Acceptance
test

Detailed
design

Sub-system
integration
test plan

System
integration test

Module and
unit code
and tess

Sub-system
integration test

Test Plan as a link between development and testing

Testing Process

Unit
testing
Module
testing
Sub-system
testing
System
testing
Acceptance
testing

Component
testing

Integration testing

User
testing

Testing Process
 Unit testing - Individual components are tested
independently, without other system components
 Module testing - Related collections of dependent
components( class, ADT, procedures & functions) are tested,
without other system module.
 Sub-system testing-Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validating
functional and non-functional requirements & Testing of
emergent system properties.
 Acceptance testing-Testing with customer data to check that
it is acceptable. Also called Alpha Testing

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for
intended user in his or her normal work
environment.
Alpha test-version of the complete software is
tested by customer under the supervision of the
developer at the developer’s site.
Beta test-version of the complete software is
tested by customer at his or her own site without
the developer being present

Black-box testing
 Also known as behavioral or functional testing.
 The system is a “Blackbox” whose behavior can be
determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform
and demonstrating correct operation based solely on its
specification without regard for its internal logic.
 Focus on the functional requirements of the software
i.e., information domain not the implementation part of
the software and disregards control structure.
 The program test cases are based on the system
specification
 It is performed during later stages of testing like in the
acceptance testing or beta testing.

Black-box testing

Input test data

I

Inputs causing
anomalous
behaviour

e

System

Output test results

Oe

Outputs which reveal
the presence of
defects

Black-box testing
Test are designed to answer the following questions:

 How is functional validity tested?
 How is system behavior and performance tested?
 What classes of input behavior will make good test case?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of data class isolated?
 What data rates and data volume can the system
tolerate?
 What effect will specific combinations of data have on
system operations?

Advantages of Black box testing
 Validates whether or not a given system conforms
to
its software specification
 Introduce a series of inputs to a system and
compare
the outputs to a pre-defined test specification.
 Test integration between individual system
components.
 Tests are architecture independent — they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Require no knowledge of the underlying system,
one
need not be a software engineer to design black
box
tests.

Disadvantages of Black box
testing
 Offer no guarantee that every line of code has been
tested.
 Being architecture independent, it cannot determine
the efficiency of the code.
 Will not find any errors, such as memory leaks, that
are not explicitly and instantly exposed by the
application.

Black-box testing techniques







Graph-based testing methods
Equivalence Partitioning,
Boundary Value Analysis (BVA)
Comparison Testing
Orthogonal Array Testing.

Equivalence Partitioning
 Black-box technique that divides the input domain into
classes of data from which test cases can be derived
 An ideal test case uncovers a class of errors( incorrect
processing of all incorrect data) that might require many
arbitrary test cases to be executed before a general error is
observed
 Equivalence class guidelines:
 If input condition specifies a range, one valid and two invalid
equivalence classes are defined
 If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined
 If an input condition specifies a member of a set, one valid
and one invalid equivalence class is defined
 If an input condition is Boolean, one valid and one invalid
equivalence class is defined

Equivalence Partitioning

Invalid inputs

System

Outputs

Valid in puts

Equivalence Partitioning

3
4

Less than 4

7

11
10

Between 4 and 10

More than 10

Number of input values
9999
10000

Less than 10000
Input values

50000

100000
99999

Between 10000 and 99999

More than 99999

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the
input domain rather than its center
 BVA guidelines:
 If input condition specifies a range bounded by values a and
b, test cases should include a and b, values just above and
just below a and b
 If an input condition specifies and number of values, test
cases should be exercise the minimum and maximum
numbers, as well as values just above and just below the
minimum and maximum values
 Apply guidelines 1 and 2 to output conditions, test cases
should be designed to produce the minimum and maxim
output reports
 If internal program data structures have boundaries (e.g.
size limitations), be certain to test the boundaries

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety critical systems ( such
as aircraft avionics, automobile braking system) in
which independently developed implementations of
redundant systems are tested for conformance to
specifications
 Often equivalence class partitioning is used to
develop a common set of test cases for each
implementation.

Orthogonal Array Testing
 Black-box technique that enables the design of a
reasonably small set of test cases that provide
maximum test coverage
 Focus is on categories of faulty logic likely to be
present in the software component (without
examining the code)
 Priorities for assessing tests using an orthogonal
array
 Detect and isolate all single mode faults
 Detect all double mode faults
 Mutimode faults

White-box or Glass Box
testing
Knowing the internal workings of a product,
tests are performed to check the workings of all
independent logic paths.
It derive test cases that:
 Guarantee that all independent paths within a module
have been exercised at least once.
 Exercise all logical decisions on their true and false
sides.
 Execute all loops at their boundaries and within their
operational bounds, and
 Exercise internal data structures to ensure their
validity.

Techniques being used: basic path and control
structure testing.

White-box or Glass Box
testing
Test data

Tests

Derives
Component
code

Test
outputs

Integration Testing
Tests complete systems or subsystems
composed of integrated components
Integration testing should be black-box testing
with tests derived from the specification
Main difficulty is localising errors
Incremental integration testing reduces this
problem.
Incremental integration strategies include
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to
integration testing
Top-down testing
 Start with high-level system and integrate from the
top-down replacing individual components by stubs
where appropriate

Bottom-up testing
 Integrate individual components in levels until the
complete system is created

In practice, most integration involves a combination
of these strategies

Top-down testing

Level 1

Testing
sequence

Level 2
Level 2
stubs

Level 3
stubs

Level 1

Level 2

Level 2

. ..

Level 2

Bottom-up testing

Test
drivers
Level N

Test
drivers

Level N

Level N–1

Level N

Level N–1

Level N

Level N

Level N–1

Testing
sequence

System Testing
 Recovery testing
 Checks the system’s ability to recover from failures.

 Security testing
 Verifies that system protection mechanism prevent improper
penetration or data alteration

 Stress testing
 Program is checked to see how well it deals with abnormal
resource demands – quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.

Object-oriented Testing

The components to be tested are object classes
that are instantiated as objects
Larger gain than individual functions so
approaches to white-box testing have to be
extended
No obvious ‘top’ to the system for top-down
integration and testing

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List
Item
No.
SR-02

Test Item

Sub –
Test-Sub Item
item No.
Staff Review SR-02-01 Program Officer
Review

SR-02-02 Early Decline Report

Test-Level
A- Basic Function, compulsory
B- Enhanced Function, compulsory
C- Enhanced Function, optional

Level
A

A

Testing Details
SR-02 Staff Review
Item No

SR-02-01

Test Date

Item

Staff Review

Sub-item

PO Review
Report: Early Decline

Precondition
Test Procedure

Test Standard
Test description

 Passed
 Failed

Test Result and
Conclusion
Sin of the Tester

Sign of the
Manager

References
 From software engineering, A practitioner’s
approach by Roger S. Pressman
– Chapter 17: Software testing techniques





Software Testing Fundamentals
Test case design
White-box testing- Basic path, Control Structure Testing
Black-box testing

– Chapter 18: Software Testing Strategies
• A strategic approach to software testing
• Unit, Integration, Validation, System testing

 From Software Engineering, Ian Sommerville
– Part5: Verification and Validation
• Chapter 19: Verification and validation
• Chapter 20: Software testing


Slide 24

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
Static
verification

Requirements
specification

Prototype

High-level
design

Formal
specification

Detailed
design

Program

Dynamic
validation

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yetundiscovered error.

Software testing priciples
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability-the better it works the more efficiently it can
be tested
 Observability-what you see is what you test
 Controllability-the better software can be controlled the
more testing can be automated and optimized
 Decomposability-by controlling the scope of testing, the
more quickly problems can be isolated and retested
intelligently
 Simplicity-the less there is to test, the more quickly we
can test
 Stability-the fewer the changes, the fewer the
disruptions to testing
 Understandability-the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects.

[Figure: the debugging process: test results are used to locate the
error, a repair is designed with reference to the specification and
the test cases, the error is repaired, and the program is re-tested.]

The defect testing process

[Figure: design test cases -> prepare test data -> run program with
test data -> compare results to test cases; the stages produce test
cases, test data, test results, and test reports respectively.]

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification
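
The distinction above can be sketched in a few lines of Python. The
`is_leap_year` function is a hypothetical unit under test, not taken from
the slides: test data are bare inputs, while test cases pair each input
with the output predicted by the specification.

```python
def is_leap_year(year):
    """Unit under test: the standard Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

test_data = [1900, 2000, 2023, 2024]   # test data: inputs only

test_cases = [                         # test cases: inputs plus predicted outputs
    (1900, False),   # divisible by 100 but not by 400
    (2000, True),    # divisible by 400
    (2023, False),   # not divisible by 4
    (2024, True),    # divisible by 4, not by 100
]

for year, expected in test_cases:
    actual = is_leap_year(year)
    status = "pass" if actual == expected else "FAIL"
    print(f"{year}: expected={expected} actual={actual} -> {status}")
```

Only the test cases let us judge pass or fail; the test data alone tell
us nothing about correctness.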

Project Planning
 Quality Plan: describes the quality procedures and
standards that will be used in a project.
 Validation Plan: describes the approach, resources and
schedule used for system validation.
 Configuration Management Plan: describes the configuration
management procedures and structures to be used.
 Maintenance Plan: predicts the maintenance requirements of
the system, maintenance costs and effort required.
 Staff Development Plan: describes how the skills and
experience of the project team members will be developed.

Verification and Validation
Plan

[Figure: the V-model of test plans. The requirements specification
drives the acceptance test plan, the system specification drives the
system integration test plan, and the system design drives the
sub-system integration test plan; module and unit code and tests then
pass through sub-system integration testing, system integration
testing, and acceptance testing into service.]

Test plans as a link between development and testing

Testing Process

[Figure: the testing process runs from unit testing and module testing
(component testing), through sub-system testing and system testing
(integration testing), to acceptance testing (user testing).]

Testing Process
 Unit testing - Individual components are tested
independently, without other system components
 Module testing - Related collections of dependent
components (classes, ADTs, procedures and functions) are tested,
without other system modules.
 Sub-system testing-Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validating
functional and non-functional requirements & Testing of
emergent system properties.
 Acceptance testing-Testing with customer data to check that
it is acceptable. Also called Alpha Testing

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification
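
Component testing as described above can be illustrated with Python's
standard `unittest` module. The `Stack` class is a hypothetical
component invented for this sketch: it is exercised in isolation from
any other part of the system.

```python
import unittest

class Stack:
    """Component under test: a minimal LIFO stack."""
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

class StackTest(unittest.TestCase):
    def test_push_then_pop_returns_last_item(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)

    def test_pop_on_empty_stack_raises(self):
        with self.assertRaises(IndexError):
            Stack().pop()

# Run the component tests directly, without other system components.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(StackTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("component tests passed:", result.wasSuccessful())
```

In integration testing the same stack would instead be tested together
with the components that use it, against the system specification.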

The testing process
Acceptance Testing
Making sure the software works correctly for the
intended user in his or her normal work
environment.
Alpha test - a version of the complete software is
tested by the customer under the supervision of the
developer, at the developer's site.
Beta test - a version of the complete software is
tested by the customer at his or her own site, without
the developer being present.

Black-box testing
 Also known as behavioral or functional testing.
 The system is a "black box" whose behavior can be
determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform
and demonstrating correct operation based solely on its
specification without regard for its internal logic.
 Focus on the functional requirements of the software
i.e., information domain not the implementation part of
the software and disregards control structure.
 The program test cases are based on the system
specification
 It is typically performed during later stages of testing,
such as acceptance testing or beta testing.
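
A hedged illustration of the idea: black-box tests are derived purely
from a stated specification, never from the code. Suppose a (made-up)
spec says `classify_triangle(a, b, c)` returns "equilateral",
"isosceles", "scalene", or "invalid". The tests below use only that
spec and ignore the function's internal logic entirely.

```python
def classify_triangle(a, b, c):
    """System under test; its internals are irrelevant to the tests."""
    if a <= 0 or b <= 0 or c <= 0 or a + b <= c or a + c <= b or b + c <= a:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Spec-derived test cases: (inputs, expected output)
spec_tests = [
    ((3, 3, 3), "equilateral"),
    ((3, 3, 5), "isosceles"),
    ((3, 4, 5), "scalene"),
    ((1, 2, 3), "invalid"),   # degenerate: fails the triangle inequality
    ((0, 4, 5), "invalid"),   # non-positive side length
]

for args, expected in spec_tests:
    assert classify_triangle(*args) == expected, (args, expected)
print("all black-box tests passed")
```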

Black-box testing

[Figure: input test data is fed to the system; Ie denotes the inputs
causing anomalous behaviour, and Oe denotes the output test results
which reveal the presence of defects.]

Black-box testing
Tests are designed to answer the following questions:

 How is functional validity tested?
 How is system behavior and performance tested?
 What classes of input will make good test cases?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of a data class isolated?
 What data rates and data volume can the system
tolerate?
 What effect will specific combinations of data have on
system operations?

Advantages of Black box testing
 Validates whether or not a given system conforms to
its software specification.
 Introduces a series of inputs to a system and compares
the outputs to a pre-defined test specification.
 Tests integration between individual system
components.
 Tests are architecture independent: they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Requires no knowledge of the underlying system; one
need not be a software engineer to design black-box
tests.

Disadvantages of Black box
testing
 Offers no guarantee that every line of code has been
tested.
 Being architecture independent, it cannot determine
the efficiency of the code.
 Will not find errors, such as memory leaks, that
are not explicitly and immediately exposed by the
application.

Black-box testing techniques
 Graph-based testing methods
 Equivalence partitioning
 Boundary value analysis (BVA)
 Comparison testing
 Orthogonal array testing
Equivalence Partitioning
 Black-box technique that divides the input domain into
classes of data from which test cases can be derived
 An ideal test case uncovers a class of errors (e.g., incorrect
processing of all incorrect data) that might otherwise require many
arbitrary test cases to be executed before the general error is
observed.
 Equivalence class guidelines:
 If input condition specifies a range, one valid and two invalid
equivalence classes are defined
 If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined
 If an input condition specifies a member of a set, one valid
and one invalid equivalence class is defined
 If an input condition is Boolean, one valid and one invalid
equivalence class is defined
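
The first guideline above can be sketched in Python. Assume a
requirement that the system accepts an integer count of items between
4 and 10 inclusive (the validator `accept_count` is a hypothetical
stand-in): one valid and two invalid equivalence classes result, each
represented by a single test value.

```python
def accept_count(n):
    """Hypothetical validator for the assumed 4..10 requirement."""
    return 4 <= n <= 10

# One representative test value per equivalence class.
partitions = {
    "valid: 4 <= n <= 10": 7,    # inside the range
    "invalid: n < 4":      3,    # below the range
    "invalid: n > 10":     11,   # above the range
}

for name, value in partitions.items():
    print(f"{name!r} -> accept_count({value}) = {accept_count(value)}")
```

Any value inside a class is assumed to behave like every other value in
that class, so three tests stand in for the whole input domain.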

Equivalence Partitioning

[Figure: the input domain is split into partitions of valid inputs and
invalid inputs, each of which is exercised against the system and its
outputs observed.]

Equivalence Partitioning

[Figure: for a "number of input values" field the partitions are less
than 4 (e.g. 3), between 4 and 10 (e.g. 7, with boundary values 4 and
10), and more than 10 (e.g. 11); for an "input values" field the
partitions are less than 10000 (e.g. 9999), between 10000 and 99999
(e.g. 50000, with boundary values 10000 and 99999), and more than
99999 (e.g. 100000).]

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the
input domain rather than its center
 BVA guidelines:
 If input condition specifies a range bounded by values a and
b, test cases should include a and b, values just above and
just below a and b
 If an input condition specifies a number of values, test
cases should exercise the minimum and maximum
numbers, as well as values just above and just below the
minimum and maximum values
 Apply guidelines 1 and 2 to output conditions: test cases
should be designed to produce the minimum and maximum
output reports
 If internal program data structures have boundaries (e.g.
size limitations), be certain to test the boundaries
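
The first BVA guideline can be sketched for an assumed inclusive
integer range bounded by a = 4 and b = 10: test a and b themselves plus
the values immediately below and above each boundary. The helper below
is illustrative, not a standard API.

```python
def bva_values(a, b):
    """Boundary-value test inputs for an inclusive integer range [a, b]."""
    # a and b themselves, plus the values just outside and just inside
    # each boundary; a set removes duplicates for narrow ranges.
    return sorted({a - 1, a, a + 1, b - 1, b, b + 1})

print(bva_values(4, 10))   # -> [3, 4, 5, 9, 10, 11]
```

These six values concentrate testing effort where range-handling bugs
(off-by-one comparisons, wrong inequality operators) most often hide.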

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety-critical systems (such
as aircraft avionics or automobile braking systems) in
which independently developed implementations of
redundant systems are tested for conformance to
specifications.
 Often equivalence class partitioning is used to
develop a common set of test cases for each
implementation.
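
A minimal sketch of back-to-back testing: two independently developed
implementations of the same specification are run on a common set of
test cases and their outputs compared. Both square-root variants here
are illustrative stand-ins for the redundant implementations.

```python
import math

def sqrt_newton(x, eps=1e-12):
    """Independent implementation A: Newton's method (x > 0)."""
    guess = x if x > 1 else 1.0
    while abs(guess * guess - x) > eps:
        guess = (guess + x / guess) / 2
    return guess

def sqrt_library(x):
    """Independent implementation B: the standard library."""
    return math.sqrt(x)

# Common test cases, e.g. derived by equivalence class partitioning.
common_tests = [0.25, 1.0, 2.0, 144.0]
for x in common_tests:
    a, b = sqrt_newton(x), sqrt_library(x)
    assert abs(a - b) < 1e-6, f"implementations disagree on {x}: {a} vs {b}"
print("implementations agree on all common test cases")
```

A disagreement between the two outputs signals a defect in at least one
implementation, without either serving as the oracle on its own.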

Orthogonal Array Testing
 Black-box technique that enables the design of a
reasonably small set of test cases that provide
maximum test coverage
 Focus is on categories of faulty logic likely to be
present in the software component (without
examining the code)
 Priorities for assessing tests using an orthogonal
array
 Detect and isolate all single-mode faults
 Detect all double-mode faults
 Detect multimode faults
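
The idea can be sketched with the classic L4 orthogonal array for three
two-level factors: 4 rows cover every pairwise combination of factor
levels, instead of all 2**3 = 8 exhaustive combinations. The factor
names are hypothetical configuration options.

```python
from itertools import combinations

# L4 orthogonal array: 4 runs over 3 two-level factors.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]
factors = ["os", "browser", "locale"]

# Verify the defining property: every pair of columns exhibits all four
# level combinations (0,0), (0,1), (1,0), (1,1) exactly once.
for i, j in combinations(range(3), 2):
    pairs = {(row[i], row[j]) for row in L4}
    assert pairs == {(0, 0), (0, 1), (1, 0), (1, 1)}

print(f"{len(L4)} tests cover all pairwise combinations of {factors}")
```

Because every pair of levels appears, any double-mode fault (a failure
triggered by two interacting factor levels) is hit by at least one row.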

White-box or Glass Box
testing
Knowing the internal workings of a product,
tests are performed to check the workings of all
independent logic paths.
It derives test cases that:
 Guarantee that all independent paths within a module
have been exercised at least once.
 Exercise all logical decisions on their true and false
sides.
 Execute all loops at their boundaries and within their
operational bounds, and
 Exercise internal data structures to ensure their
validity.

Techniques being used: basis path testing and control
structure testing.
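
A hedged example of the second goal above: white-box tests are chosen
by reading the code so that every decision takes both its true and
false branch. The function under test is a made-up example, not from
the slides.

```python
def absolute_max(a, b):
    """Return the larger absolute value of the two arguments."""
    if abs(a) >= abs(b):    # the single decision in this module
        result = abs(a)
    else:
        result = abs(b)
    return result

# Branch coverage: one test drives the decision true, the other false,
# so both independent paths through the module are exercised.
assert absolute_max(-5, 3) == 5    # abs(a) >= abs(b): True branch
assert absolute_max(2, -7) == 7    # abs(a) >= abs(b): False branch
print("both branches of the decision exercised")
```

A black-box tester might never think to vary which argument dominates;
reading the code makes the two required tests obvious.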

White-box or Glass Box
testing

[Figure: tests are derived from the component code and run on test
data to produce the test outputs.]

Integration Testing
Tests complete systems or subsystems
composed of integrated components
Integration testing should be black-box testing
with tests derived from the specification
Main difficulty is localising errors
Incremental integration testing reduces this
problem.
Incremental integration strategies include
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to
integration testing
Top-down testing
 Start with high-level system and integrate from the
top-down replacing individual components by stubs
where appropriate

Bottom-up testing
 Integrate individual components in levels until the
complete system is created

In practice, most integration involves a combination
of these strategies
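
The two strategies can be sketched in a few lines. In top-down testing
a not-yet-written lower-level component is replaced by a stub; in
bottom-up testing a driver exercises a finished low-level component.
All names here are hypothetical.

```python
# --- Top-down: test the high-level module against a stub ---
def fetch_sales_stub(region):
    """Stub standing in for the real (unfinished) data-access layer."""
    return [100, 200, 300]          # canned response, ignores region

def sales_report(region, fetch=fetch_sales_stub):
    """High-level component under test."""
    return sum(fetch(region))

assert sales_report("west") == 600  # high level tested before low level exists

# --- Bottom-up: a test driver exercises the finished low-level unit ---
def parse_amount(text):
    """Low-level component, integrated and tested first."""
    return int(text.strip().replace(",", ""))

def driver():
    """Test driver: feeds inputs to the unit and checks its outputs."""
    assert parse_amount(" 1,200 ") == 1200
    assert parse_amount("0") == 0
    return "driver checks passed"

print(driver())
```

Stubs and drivers are throwaway scaffolding: each is discarded once the
real neighbouring component is integrated.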

Top-down testing

[Figure: the testing sequence begins with the Level 1 component, with
Level 2 stubs standing in for lower levels; Level 2 components are then
integrated, supported by Level 3 stubs, and so on down the hierarchy.]

Bottom-up testing

[Figure: the testing sequence begins with the Level N components
exercised by test drivers; these are integrated into Level N-1
components, which are in turn exercised by their own test drivers, and
so on up the hierarchy.]

System Testing
 Recovery testing
 Checks the system’s ability to recover from failures.

 Security testing
 Verifies that system protection mechanisms prevent improper
penetration or data alteration.

 Stress testing
 Program is checked to see how well it deals with abnormal
resource demands – quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.

Object-oriented Testing

The components to be tested are object classes
that are instantiated as objects.
Object classes are larger-grain units than individual
functions, so approaches to white-box testing have to
be extended.
There is no obvious 'top' to the system for top-down
integration and testing.

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List

Item No.  Test Item     Sub-item No.  Test Sub-item            Level
SR-02     Staff Review  SR-02-01      Program Officer Review   A
                        SR-02-02      Early Decline Report     A

Test levels:
A - Basic function, compulsory
B - Enhanced function, compulsory
C - Enhanced function, optional

Testing Details

SR-02 Staff Review
Item No:      SR-02-01                 Test Date:
Item:         Staff Review
Sub-item:     PO Review
              Report: Early Decline
Precondition:
Test Procedure:
Test Standard:
Test Description:
Test Result and Conclusion:  [ ] Passed  [ ] Failed
Sign of the Tester:          Sign of the Manager:

References
 From Software Engineering: A Practitioner's Approach
by Roger S. Pressman
– Chapter 17: Software Testing Techniques
• Software testing fundamentals
• Test case design
• White-box testing: basis path and control structure testing
• Black-box testing
– Chapter 18: Software Testing Strategies
• A strategic approach to software testing
• Unit, integration, validation, and system testing
 From Software Engineering by Ian Sommerville
– Part 5: Verification and Validation
• Chapter 19: Verification and Validation
• Chapter 20: Software Testing


Slide 25

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
Static
verification

Requirements
specification

Prototype

High-level
design

Formal
specification

Detailed
design

Program

Dynamic
validation

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yetundiscovered error.

Software testing priciples
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability-the better it works the more efficiently it can
be tested
 Observability-what you see is what you test
 Controllability-the better software can be controlled the
more testing can be automated and optimized
 Decomposability-by controlling the scope of testing, the
more quickly problems can be isolated and retested
intelligently
 Simplicity-the less there is to test, the more quickly we
can test
 Stability-the fewer the changes, the fewer the
disruptions to testing
 Understandability-the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
Test
results

Locate
error

Test
cases

Specification

Design
error repair

Repair
error

Re-test
program

The defect testing process
Test
cases

Design test
cases

Test
data

Prepare test
data

Test
results

Run program
with test data

Test
reports

Compare results
to test cases

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification

Project Planning
Plan

Description

Quality Plan

Describes the quality procedure and
standards that will be used in a project.

Validation Plan

Describes the approach, resources and
schedule used for system validation.

Configuration
Management Plan

Describes the configuration management
procedures and structures to be used.

Maintenance Plan

Predicts the maintenance requirements of
the system, maintenance costs and effort
required.

Staff development Plan

Describes how the skills and experience of
the project team members will be developed.

Verification and Validation
Plan
Requirements
specification

System
specification

System
integration
test plan

Acceptance
test plan

Service

System
desig n

Acceptance
test

Detailed
design

Sub-system
integration
test plan

System
integration test

Module and
unit code
and tess

Sub-system
integration test

Test Plan as a link between development and testing

Testing Process

Unit
testing
Module
testing
Sub-system
testing
System
testing
Acceptance
testing

Component
testing

Integration testing

User
testing

Testing Process
 Unit testing - Individual components are tested
independently, without other system components
 Module testing - Related collections of dependent
components( class, ADT, procedures & functions) are tested,
without other system module.
 Sub-system testing-Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validating
functional and non-functional requirements & Testing of
emergent system properties.
 Acceptance testing-Testing with customer data to check that
it is acceptable. Also called Alpha Testing

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for
intended user in his or her normal work
environment.
Alpha test-version of the complete software is
tested by customer under the supervision of the
developer at the developer’s site.
Beta test-version of the complete software is
tested by customer at his or her own site without
the developer being present

Black-box testing
 Also known as behavioral or functional testing.
 The system is a “Blackbox” whose behavior can be
determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform
and demonstrating correct operation based solely on its
specification without regard for its internal logic.
 Focus on the functional requirements of the software
i.e., information domain not the implementation part of
the software and disregards control structure.
 The program test cases are based on the system
specification
 It is performed during later stages of testing like in the
acceptance testing or beta testing.

Black-box testing

Input test data

I

Inputs causing
anomalous
behaviour

e

System

Output test results

Oe

Outputs which reveal
the presence of
defects

Black-box testing
Test are designed to answer the following questions:

 How is functional validity tested?
 How is system behavior and performance tested?
 What classes of input behavior will make good test case?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of data class isolated?
 What data rates and data volume can the system
tolerate?
 What effect will specific combinations of data have on
system operations?

Advantages of Black box testing
 Validates whether or not a given system conforms
to
its software specification
 Introduce a series of inputs to a system and
compare
the outputs to a pre-defined test specification.
 Test integration between individual system
components.
 Tests are architecture independent — they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Require no knowledge of the underlying system,
one
need not be a software engineer to design black
box
tests.

Disadvantages of Black box
testing
 Offer no guarantee that every line of code has been
tested.
 Being architecture independent, it cannot determine
the efficiency of the code.
 Will not find any errors, such as memory leaks, that
are not explicitly and instantly exposed by the
application.

Black-box testing techniques







Graph-based testing methods
Equivalence Partitioning,
Boundary Value Analysis (BVA)
Comparison Testing
Orthogonal Array Testing.

Equivalence Partitioning
 Black-box technique that divides the input domain into
classes of data from which test cases can be derived
 An ideal test case uncovers a class of errors( incorrect
processing of all incorrect data) that might require many
arbitrary test cases to be executed before a general error is
observed
 Equivalence class guidelines:
 If input condition specifies a range, one valid and two invalid
equivalence classes are defined
 If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined
 If an input condition specifies a member of a set, one valid
and one invalid equivalence class is defined
 If an input condition is Boolean, one valid and one invalid
equivalence class is defined

Equivalence Partitioning

Invalid inputs

System

Outputs

Valid in puts

Equivalence Partitioning

3
4

Less than 4

7

11
10

Between 4 and 10

More than 10

Number of input values
9999
10000

Less than 10000
Input values

50000

100000
99999

Between 10000 and 99999

More than 99999

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the
input domain rather than its center
 BVA guidelines:
 If input condition specifies a range bounded by values a and
b, test cases should include a and b, values just above and
just below a and b
 If an input condition specifies and number of values, test
cases should be exercise the minimum and maximum
numbers, as well as values just above and just below the
minimum and maximum values
 Apply guidelines 1 and 2 to output conditions, test cases
should be designed to produce the minimum and maxim
output reports
 If internal program data structures have boundaries (e.g.
size limitations), be certain to test the boundaries

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety critical systems ( such
as aircraft avionics, automobile braking system) in
which independently developed implementations of
redundant systems are tested for conformance to
specifications
 Often equivalence class partitioning is used to
develop a common set of test cases for each
implementation.

Orthogonal Array Testing
 Black-box technique that enables the design of a
reasonably small set of test cases that provide
maximum test coverage
 Focus is on categories of faulty logic likely to be
present in the software component (without
examining the code)
 Priorities for assessing tests using an orthogonal
array
 Detect and isolate all single mode faults
 Detect all double mode faults
 Mutimode faults

White-box or Glass Box
testing
Knowing the internal workings of a product,
tests are performed to check the workings of all
independent logic paths.
It derive test cases that:
 Guarantee that all independent paths within a module
have been exercised at least once.
 Exercise all logical decisions on their true and false
sides.
 Execute all loops at their boundaries and within their
operational bounds, and
 Exercise internal data structures to ensure their
validity.

Techniques being used: basic path and control
structure testing.

White-box or Glass Box
testing
Test data

Tests

Derives
Component
code

Test
outputs

Integration Testing
Tests complete systems or subsystems
composed of integrated components
Integration testing should be black-box testing
with tests derived from the specification
Main difficulty is localising errors
Incremental integration testing reduces this
problem.
Incremental integration strategies include
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to
integration testing
Top-down testing
 Start with high-level system and integrate from the
top-down replacing individual components by stubs
where appropriate

Bottom-up testing
 Integrate individual components in levels until the
complete system is created

In practice, most integration involves a combination
of these strategies

Top-down testing

Level 1

Testing
sequence

Level 2
Level 2
stubs

Level 3
stubs

Level 1

Level 2

Level 2

. ..

Level 2

Bottom-up testing

Test
drivers
Level N

Test
drivers

Level N

Level N–1

Level N

Level N–1

Level N

Level N

Level N–1

Testing
sequence

System Testing
 Recovery testing
 Checks the system’s ability to recover from failures.

 Security testing
 Verifies that system protection mechanism prevent improper
penetration or data alteration

 Stress testing
 Program is checked to see how well it deals with abnormal
resource demands – quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.

Object-oriented Testing

The components to be tested are object classes
that are instantiated as objects
Larger gain than individual functions so
approaches to white-box testing have to be
extended
No obvious ‘top’ to the system for top-down
integration and testing

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List
Item
No.
SR-02

Test Item

Sub –
Test-Sub Item
item No.
Staff Review SR-02-01 Program Officer
Review

SR-02-02 Early Decline Report

Test-Level
A- Basic Function, compulsory
B- Enhanced Function, compulsory
C- Enhanced Function, optional

Level
A

A

Testing Details
SR-02 Staff Review
Item No

SR-02-01

Test Date

Item

Staff Review

Sub-item

PO Review
Report: Early Decline

Precondition
Test Procedure

Test Standard
Test description

 Passed
 Failed

Test Result and
Conclusion
Sin of the Tester

Sign of the
Manager

References
 From software engineering, A practitioner’s
approach by Roger S. Pressman
– Chapter 17: Software testing techniques





Software Testing Fundamentals
Test case design
White-box testing- Basic path, Control Structure Testing
Black-box testing

– Chapter 18: Software Testing Strategies
• A strategic approach to software testing
• Unit, Integration, Validation, System testing

 From Software Engineering, Ian Sommerville
– Part5: Verification and Validation
• Chapter 19: Verification and validation
• Chapter 20: Software testing


Slide 26

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing: to find inconsistencies between a program
and its specification.
 Statistical testing: to test a program's performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
[Diagram: static verification is applied to the requirements specification, high-level design, formal specification, detailed design and program; dynamic validation is applied to the prototype and the program.]

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yet-undiscovered error.

Software testing principles
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability – the better it works, the more efficiently it can
be tested
 Observability – what you see is what you test
 Controllability – the better the software can be controlled, the
more testing can be automated and optimized
 Decomposability – by controlling the scope of testing,
problems can be isolated and retested more quickly and
intelligently
 Simplicity – the less there is to test, the more quickly we
can test
 Stability – the fewer the changes, the fewer the
disruptions to testing
 Understandability – the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
[Diagram: the debugging process – test results are used to locate the error, guided by the test cases and the specification; a design error repair is made, the error is repaired and the program is re-tested.]

The defect testing process
[Diagram: design test cases → test cases → prepare test data → test data → run program with test data → test results → compare results to test cases → test reports]

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification
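The distinction between test data and test cases can be made concrete in code. A minimal Python sketch, using a hypothetical classify_age function as the system under test (the function, its inputs and the predicted outputs are all illustrative, not from the slides):

```python
# Hypothetical system under test: classifies an age value.
def classify_age(age: int) -> str:
    if age < 0:
        return "invalid"
    if age < 18:
        return "minor"
    return "adult"

# Test data: just the inputs devised to test the system.
test_data = [-1, 0, 17, 18, 65]

# Test cases: inputs paired with the outputs predicted by the specification.
test_cases = [(-1, "invalid"), (0, "minor"), (17, "minor"),
              (18, "adult"), (65, "adult")]

def run_test_cases(cases):
    # Compare each actual output to the predicted output;
    # return (passed, failed) counts.
    passed = failed = 0
    for given, expected in cases:
        if classify_age(given) == expected:
            passed += 1
        else:
            failed += 1
    return passed, failed
```

Only the test cases, not the bare test data, let us decide mechanically whether the system behaves according to its specification.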

Project Planning
Plan – Description
 Quality Plan – Describes the quality procedures and standards that will be used in a project.
 Validation Plan – Describes the approach, resources and schedule used for system validation.
 Configuration Management Plan – Describes the configuration management procedures and structures to be used.
 Maintenance Plan – Predicts the maintenance requirements of the system, maintenance costs and effort required.
 Staff Development Plan – Describes how the skills and experience of the project team members will be developed.

Verification and Validation Plan
[Diagram: the requirements specification drives the acceptance test plan, the system specification drives the system integration test plan, and the system design drives the sub-system integration test plan; the detailed design leads to module and unit code and tests, followed by sub-system integration testing, system integration testing, the acceptance test and service.]
Test plan as a link between development and testing

Testing Process
[Diagram: unit testing and module testing make up component testing; sub-system testing and system testing make up integration testing; acceptance testing is user testing.]

Testing Process
 Unit testing - Individual components are tested
independently, without other system components
 Module testing - Related collections of dependent
components (classes, ADTs, procedures & functions) are tested,
without other system modules.
 Sub-system testing-Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validating
functional and non-functional requirements & Testing of
emergent system properties.
 Acceptance testing - Testing with customer data to check that
the system is acceptable. Also called alpha testing.

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification
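As a sketch of component (unit) testing in practice, the following Python unittest example exercises a hypothetical Stack component in isolation from any other part of the system; the class and its tests are illustrative only:

```python
import unittest

# Hypothetical component under test: a minimal stack.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def is_empty(self):
        return not self._items

class StackTest(unittest.TestCase):
    # Component test: no other module is involved.
    def test_push_then_pop_returns_last_item(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)

    def test_pop_on_empty_stack_raises(self):
        self.assertRaises(IndexError, Stack().pop)
```

The tests can be run with `unittest.TextTestRunner().run(unittest.defaultTestLoader.loadTestsFromTestCase(StackTest))` or via `python -m unittest`.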

The testing process
Acceptance Testing
Making sure the software works correctly for the
intended user in his or her normal work
environment.
Alpha test - a version of the complete software is
tested by the customer under the supervision of the
developer at the developer's site.
Beta test - a version of the complete software is
tested by the customer at his or her own site without
the developer being present.

Black-box testing
 Also known as behavioral or functional testing.
 The system is a "black box" whose behavior can be
determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform
and demonstrating correct operation based solely on its
specification without regard for its internal logic.
 Focus on the functional requirements of the software
i.e., information domain not the implementation part of
the software and disregards control structure.
 The program test cases are based on the system
specification
 It is performed during later stages of testing like in the
acceptance testing or beta testing.

Black-box testing
[Diagram: the set of input test data I, including the subset Ie of inputs causing anomalous behaviour, is fed to the system; the output test results include the subset Oe of outputs which reveal the presence of defects.]

Black-box testing
Tests are designed to answer the following questions:

 How is functional validity tested?
 How is system behavior and performance tested?
 What classes of input behavior will make good test cases?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of data classes isolated?
 What data rates and data volume can the system
tolerate?
 What effect will specific combinations of data have on
system operations?

Advantages of Black-box testing
 Validates whether or not a given system conforms to
its software specification.
 Introduces a series of inputs to a system and compares
the outputs to a pre-defined test specification.
 Tests integration between individual system
components.
 Tests are architecture independent: they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Requires no knowledge of the underlying system;
one need not be a software engineer to design
black-box tests.

Disadvantages of Black-box testing
 Offers no guarantee that every line of code has been
tested.
 Being architecture independent, it cannot determine
the efficiency of the code.
 Will not find errors, such as memory leaks, that
are not explicitly and instantly exposed by the
application.

Black-box testing techniques
 Graph-based testing methods
 Equivalence Partitioning
 Boundary Value Analysis (BVA)
 Comparison Testing
 Orthogonal Array Testing

Equivalence Partitioning
 Black-box technique that divides the input domain into
classes of data from which test cases can be derived
 An ideal test case uncovers a class of errors (e.g. incorrect
processing of all incorrect data) that might otherwise require many
arbitrary test cases to be executed before a general error is
observed
 Equivalence class guidelines:
 If an input condition specifies a range, one valid and two invalid
equivalence classes are defined
 If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined
 If an input condition specifies a member of a set, one valid
and one invalid equivalence class is defined
 If an input condition is Boolean, one valid and one invalid
equivalence class is defined

Equivalence Partitioning
[Diagram: valid inputs and invalid inputs are partitioned separately before being fed to the system to observe its outputs.]

Equivalence Partitioning
Example – number of input values: the partitions are "less than 4", "between 4 and 10" and "more than 10", with test values 3, 4, 7, 10 and 11.
Example – input values: the partitions are "less than 10000", "between 10000 and 99999" and "more than 99999", with test values 9999, 10000, 50000, 99999 and 100000.
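The 10000–99999 range from the slides can be sketched in Python. The accept function and the exact inclusive bounds are assumptions for illustration; the slides only name the partitions:

```python
# Assumed specification: inputs from 10000 to 99999 inclusive are valid.
def accept(value: int) -> bool:
    return 10000 <= value <= 99999

# One representative test value per equivalence class:
# one valid class and two invalid classes (below and above the range).
partitions = {
    "below range (invalid)": 9999,
    "in range (valid)":      50000,
    "above range (invalid)": 100000,
}

# Any member of a class should behave like every other member,
# so one value per class stands in for the whole class.
results = {name: accept(v) for name, v in partitions.items()}
```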

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the
input domain rather than its center
 BVA guidelines:
 If an input condition specifies a range bounded by values a and
b, test cases should include a and b, and values just above and
just below a and b
 If an input condition specifies a number of values, test
cases should exercise the minimum and maximum
numbers, as well as values just above and just below the
minimum and maximum values
 Apply guidelines 1 and 2 to output conditions; test cases
should be designed to produce the minimum and maximum
output reports
 If internal program data structures have boundaries (e.g.
size limitations), be certain to test the boundaries
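The first guideline can be sketched as a small Python helper that, for a range bounded by a and b, generates the bounds and their immediate neighbours (the helper name is illustrative):

```python
def bva_values(a: int, b: int):
    # For a range bounded by a and b, BVA selects a and b themselves
    # plus the values just below and just above each bound.
    return sorted({a - 1, a, a + 1, b - 1, b, b + 1})

# For the slides' 10000..99999 range:
cases = bva_values(10000, 99999)
```

Errors cluster at boundaries, so these six values typically expose more off-by-one defects than the same number of values picked from the middle of the range.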

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety-critical systems (such
as aircraft avionics or automobile braking systems) in
which independently developed implementations of
redundant systems are tested for conformance to
specifications
 Often equivalence class partitioning is used to
develop a common set of test cases for each
implementation.
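A minimal back-to-back sketch in Python: two independently written (hypothetical) implementations of integer square root are run on the same common test data, and any disagreement flags a defect in at least one of them:

```python
# Implementation 1: Newton's method for the integer square root.
def isqrt_newton(n: int) -> int:
    if n < 0:
        raise ValueError("negative input")
    x = n
    while x * x > n:
        x = (x + n // x) // 2
    return x

# Implementation 2: naive linear search for the same specification.
def isqrt_linear(n: int) -> int:
    if n < 0:
        raise ValueError("negative input")
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r

def back_to_back(common_cases):
    # Run the same test data through both versions; any input on which
    # they disagree signals a defect in at least one implementation.
    return [n for n in common_cases if isqrt_newton(n) != isqrt_linear(n)]

disagreements = back_to_back(range(0, 200))
```

In a real redundant system the common test set would itself be derived by equivalence partitioning, as the slide notes.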

Orthogonal Array Testing
 Black-box technique that enables the design of a
reasonably small set of test cases that provide
maximum test coverage
 Focus is on categories of faulty logic likely to be
present in the software component (without
examining the code)
 Priorities for assessing tests using an orthogonal
array:
 Detect and isolate all single-mode faults
 Detect all double-mode faults
 Multimode faults
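The idea can be illustrated with the classic L4(2^3) orthogonal array: four runs cover every pairwise combination of three two-level factors, instead of all 2^3 = 8 exhaustive combinations. A Python sketch (the array and checker are illustrative, not from the slides):

```python
from itertools import combinations

# L4(2^3) orthogonal array: 4 runs, 3 two-level factors; every pair of
# factor columns sees all four level combinations exactly once.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

def covers_all_pairs(runs):
    # Verify the defining property: for each pair of factor columns,
    # all 2x2 level combinations appear among the runs.
    n_factors = len(runs[0])
    for i, j in combinations(range(n_factors), 2):
        seen = {(run[i], run[j]) for run in runs}
        if seen != {(0, 0), (0, 1), (1, 0), (1, 1)}:
            return False
    return True
```

Because every pair of factors is fully exercised, all single-mode and double-mode faults can be detected with only half the exhaustive test count.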

White-box or Glass Box
testing
Knowing the internal workings of a product,
tests are performed to check the workings of all
independent logic paths.
It derives test cases that:
 Guarantee that all independent paths within a module
have been exercised at least once.
 Exercise all logical decisions on their true and false
sides.
 Execute all loops at their boundaries and within their
operational bounds, and
 Exercise internal data structures to ensure their
validity.

Techniques being used: basis path and control
structure testing.
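The first two goals can be sketched in Python with a hypothetical triangle-classification component: one test case per independent path, so every decision is exercised on both its true and false sides:

```python
# Hypothetical component: classify a triangle from its side lengths.
def triangle_kind(a, b, c):
    if a <= 0 or b <= 0 or c <= 0:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# White-box test set derived from the code's structure:
# one case per independent path through the decisions above.
white_box_cases = [
    ((0, 1, 1), "invalid"),      # first decision taken true
    ((2, 2, 2), "equilateral"),  # second decision taken true
    ((2, 2, 3), "isosceles"),    # third decision taken true
    ((2, 3, 4), "scalene"),      # every decision taken false
]
```

Unlike the black-box examples, these cases were chosen by reading the code, not the specification.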

White-box or Glass Box
testing
[Diagram: the component code derives the tests; test data is run through the component and produces the test outputs.]

Integration Testing
Tests complete systems or subsystems
composed of integrated components
Integration testing should be black-box testing
with tests derived from the specification
Main difficulty is localising errors
Incremental integration testing reduces this
problem.
Incremental integration strategies include
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to
integration testing
Top-down testing
 Start with high-level system and integrate from the
top-down replacing individual components by stubs
where appropriate

Bottom-up testing
 Integrate individual components in levels until the
complete system is created

In practice, most integration involves a combination
of these strategies
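A top-down sketch in Python using unittest.mock: the (hypothetical) high-level build_report function is tested before its lower-level data source exists, with a stub supplying canned data:

```python
from unittest.mock import Mock

# High-level component under test (hypothetical): a report builder that
# depends on a lower-level data source which is not yet implemented.
def build_report(data_source):
    rows = data_source.fetch_rows()
    return {"count": len(rows), "total": sum(rows)}

# Top-down integration: replace the missing lower level with a stub
# that returns canned data, so the upper level can be tested first.
stub_source = Mock()
stub_source.fetch_rows.return_value = [10, 20, 30]

report = build_report(stub_source)
```

Bottom-up testing inverts this: the lower-level components are real and a test driver (here it would be the code calling build_report) is written to exercise them.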

Top-down testing
[Diagram: the testing sequence starts at Level 1 and proceeds downwards; the Level 2 components are first replaced by stubs, then integrated and tested with Level 3 stubs beneath them.]

Bottom-up testing
[Diagram: the testing sequence starts at Level N, exercised by test drivers, then moves up to Level N–1 with new drivers, the already-tested lower levels integrated beneath.]

System Testing
 Recovery testing
 Checks the system’s ability to recover from failures.

 Security testing
 Verifies that system protection mechanisms prevent improper
penetration or data alteration

 Stress testing
 Program is checked to see how well it deals with abnormal
resource demands – quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.

Object-oriented Testing

The components to be tested are object classes
that are instantiated as objects
Object classes are larger grain than individual
functions, so approaches to white-box testing have
to be extended
There is no obvious 'top' to the system for top-down
integration and testing

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List
Item No. SR-02 – Test Item: Staff Review
 Sub-item No. SR-02-01 – Program Officer Review – Level A
 Sub-item No. SR-02-02 – Early Decline Report – Level A
Test-Level
A – Basic Function, compulsory
B – Enhanced Function, compulsory
C – Enhanced Function, optional

Testing Details
SR-02 Staff Review
Item No: SR-02-01 – Test Date:
Item: Staff Review – Sub-item: PO Review, Report: Early Decline
Fields: Precondition / Test Procedure / Test Standard / Test description
Test Result and Conclusion:  Passed  Failed
Sign of the Tester / Sign of the Manager

References
 From Software Engineering, A Practitioner's
Approach by Roger S. Pressman
– Chapter 17: Software Testing Techniques
• Software testing fundamentals
• Test case design
• White-box testing: basis path, control structure testing
• Black-box testing
– Chapter 18: Software Testing Strategies
• A strategic approach to software testing
• Unit, integration, validation, system testing
 From Software Engineering by Ian Sommerville
– Part 5: Verification and Validation
• Chapter 19: Verification and validation
• Chapter 20: Software testing

Slide 27

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
Static
verification

Requirements
specification

Prototype

High-level
design

Formal
specification

Detailed
design

Program

Dynamic
validation

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yetundiscovered error.

Software testing priciples
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability-the better it works the more efficiently it can
be tested
 Observability-what you see is what you test
 Controllability-the better software can be controlled the
more testing can be automated and optimized
 Decomposability-by controlling the scope of testing, the
more quickly problems can be isolated and retested
intelligently
 Simplicity-the less there is to test, the more quickly we
can test
 Stability-the fewer the changes, the fewer the
disruptions to testing
 Understandability-the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
Test
results

Locate
error

Test
cases

Specification

Design
error repair

Repair
error

Re-test
program

The defect testing process
Test
cases

Design test
cases

Test
data

Prepare test
data

Test
results

Run program
with test data

Test
reports

Compare results
to test cases

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification

Project Planning
Plan

Description

Quality Plan

Describes the quality procedure and
standards that will be used in a project.

Validation Plan

Describes the approach, resources and
schedule used for system validation.

Configuration
Management Plan

Describes the configuration management
procedures and structures to be used.

Maintenance Plan

Predicts the maintenance requirements of
the system, maintenance costs and effort
required.

Staff development Plan

Describes how the skills and experience of
the project team members will be developed.

Verification and Validation
Plan
Requirements
specification

System
specification

System
integration
test plan

Acceptance
test plan

Service

System
desig n

Acceptance
test

Detailed
design

Sub-system
integration
test plan

System
integration test

Module and
unit code
and tess

Sub-system
integration test

Test Plan as a link between development and testing

Testing Process

Unit
testing
Module
testing
Sub-system
testing
System
testing
Acceptance
testing

Component
testing

Integration testing

User
testing

Testing Process
 Unit testing - Individual components are tested
independently, without other system components
 Module testing - Related collections of dependent
components( class, ADT, procedures & functions) are tested,
without other system module.
 Sub-system testing-Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validating
functional and non-functional requirements & Testing of
emergent system properties.
 Acceptance testing-Testing with customer data to check that
it is acceptable. Also called Alpha Testing

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for
intended user in his or her normal work
environment.
Alpha test-version of the complete software is
tested by customer under the supervision of the
developer at the developer’s site.
Beta test-version of the complete software is
tested by customer at his or her own site without
the developer being present

Black-box testing
 Also known as behavioral or functional testing.
 The system is a “Blackbox” whose behavior can be
determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform
and demonstrating correct operation based solely on its
specification without regard for its internal logic.
 Focus on the functional requirements of the software
i.e., information domain not the implementation part of
the software and disregards control structure.
 The program test cases are based on the system
specification
 It is performed during later stages of testing like in the
acceptance testing or beta testing.

Black-box testing

Input test data

I

Inputs causing
anomalous
behaviour

e

System

Output test results

Oe

Outputs which reveal
the presence of
defects

Black-box testing
Test are designed to answer the following questions:

 How is functional validity tested?
 How is system behavior and performance tested?
 What classes of input behavior will make good test case?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of data class isolated?
 What data rates and data volume can the system
tolerate?
 What effect will specific combinations of data have on
system operations?

Advantages of Black box testing
 Validates whether or not a given system conforms
to
its software specification
 Introduce a series of inputs to a system and
compare
the outputs to a pre-defined test specification.
 Test integration between individual system
components.
 Tests are architecture independent — they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Require no knowledge of the underlying system,
one
need not be a software engineer to design black
box
tests.

Disadvantages of Black box
testing
 Offer no guarantee that every line of code has been
tested.
 Being architecture independent, it cannot determine
the efficiency of the code.
 Will not find any errors, such as memory leaks, that
are not explicitly and instantly exposed by the
application.

Black-box testing techniques







Graph-based testing methods
Equivalence Partitioning,
Boundary Value Analysis (BVA)
Comparison Testing
Orthogonal Array Testing.

Equivalence Partitioning
 Black-box technique that divides the input domain into
classes of data from which test cases can be derived
 An ideal test case uncovers a class of errors( incorrect
processing of all incorrect data) that might require many
arbitrary test cases to be executed before a general error is
observed
 Equivalence class guidelines:
 If input condition specifies a range, one valid and two invalid
equivalence classes are defined
 If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined
 If an input condition specifies a member of a set, one valid
and one invalid equivalence class is defined
 If an input condition is Boolean, one valid and one invalid
equivalence class is defined

Equivalence Partitioning

Invalid inputs

System

Outputs

Valid in puts

Equivalence Partitioning

3
4

Less than 4

7

11
10

Between 4 and 10

More than 10

Number of input values
9999
10000

Less than 10000
Input values

50000

100000
99999

Between 10000 and 99999

More than 99999

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the
input domain rather than its center
 BVA guidelines:
 If input condition specifies a range bounded by values a and
b, test cases should include a and b, values just above and
just below a and b
 If an input condition specifies and number of values, test
cases should be exercise the minimum and maximum
numbers, as well as values just above and just below the
minimum and maximum values
 Apply guidelines 1 and 2 to output conditions, test cases
should be designed to produce the minimum and maxim
output reports
 If internal program data structures have boundaries (e.g.
size limitations), be certain to test the boundaries

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety critical systems ( such
as aircraft avionics, automobile braking system) in
which independently developed implementations of
redundant systems are tested for conformance to
specifications
 Often equivalence class partitioning is used to
develop a common set of test cases for each
implementation.

Orthogonal Array Testing
 Black-box technique that enables the design of a
reasonably small set of test cases that provide
maximum test coverage
 Focus is on categories of faulty logic likely to be
present in the software component (without
examining the code)
 Priorities for assessing tests using an orthogonal
array
 Detect and isolate all single mode faults
 Detect all double mode faults
 Mutimode faults

White-box or Glass Box
testing
Knowing the internal workings of a product,
tests are performed to check the workings of all
independent logic paths.
It derive test cases that:
 Guarantee that all independent paths within a module
have been exercised at least once.
 Exercise all logical decisions on their true and false
sides.
 Execute all loops at their boundaries and within their
operational bounds, and
 Exercise internal data structures to ensure their
validity.

Techniques being used: basic path and control
structure testing.

White-box or Glass Box
testing
Test data

Tests

Derives
Component
code

Test
outputs

Integration Testing
Tests complete systems or subsystems
composed of integrated components
Integration testing should be black-box testing
with tests derived from the specification
Main difficulty is localising errors
Incremental integration testing reduces this
problem.
Incremental integration strategies include
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to
integration testing
Top-down testing
 Start with high-level system and integrate from the
top-down replacing individual components by stubs
where appropriate

Bottom-up testing
 Integrate individual components in levels until the
complete system is created

In practice, most integration involves a combination
of these strategies

Top-down testing

Level 1

Testing
sequence

Level 2
Level 2
stubs

Level 3
stubs

Level 1

Level 2

Level 2

. ..

Level 2

Bottom-up testing

Test
drivers
Level N

Test
drivers

Level N

Level N–1

Level N

Level N–1

Level N

Level N

Level N–1

Testing
sequence

System Testing
 Recovery testing
 Checks the system’s ability to recover from failures.

 Security testing
 Verifies that system protection mechanism prevent improper
penetration or data alteration

 Stress testing
 Program is checked to see how well it deals with abnormal
resource demands – quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.

Object-oriented Testing

The components to be tested are object classes
that are instantiated as objects
Larger gain than individual functions so
approaches to white-box testing have to be
extended
No obvious ‘top’ to the system for top-down
integration and testing

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List
Item
No.
SR-02

Test Item

Sub –
Test-Sub Item
item No.
Staff Review SR-02-01 Program Officer
Review

SR-02-02 Early Decline Report

Test-Level
A- Basic Function, compulsory
B- Enhanced Function, compulsory
C- Enhanced Function, optional

Level
A

A

Testing Details
SR-02 Staff Review
Item No

SR-02-01

Test Date

Item

Staff Review

Sub-item

PO Review
Report: Early Decline

Precondition
Test Procedure

Test Standard
Test description

 Passed
 Failed

Test Result and
Conclusion
Sin of the Tester

Sign of the
Manager

References
 From software engineering, A practitioner’s
approach by Roger S. Pressman
– Chapter 17: Software testing techniques





Software Testing Fundamentals
Test case design
White-box testing- Basic path, Control Structure Testing
Black-box testing

– Chapter 18: Software Testing Strategies
• A strategic approach to software testing
• Unit, Integration, Validation, System testing

 From Software Engineering, Ian Sommerville
– Part5: Verification and Validation
• Chapter 19: Verification and validation
• Chapter 20: Software testing


Slide 28

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 They do not require the system to be executed.
 These techniques include program inspections, automated
source code analysis and formal verification.
 They cannot check the non-functional characteristics of the
software, such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic technique of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V

[Figure: static verification applies to the requirements specification, high-level design, formal specification, detailed design and program; dynamic validation (testing) applies to the prototype and to the program.]

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yet-undiscovered error.

Software testing principles
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability: the better it works, the more efficiently it can be tested
 Observability: what you see is what you test
 Controllability: the better the software can be controlled, the more testing can be automated and optimized
 Decomposability: by controlling the scope of testing, problems can be isolated and retested more quickly and intelligently
 Simplicity: the less there is to test, the more quickly we can test
 Stability: the fewer the changes, the fewer the disruptions to testing
 Understandability: the more information known, the smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
[Figure: the debugging process. Test results and the specification are used to locate the error; a design error repair is made, the error is repaired, and the program is re-tested against the test cases.]

The defect testing process

Design test cases → Prepare test data → Run program with test data → Compare results to test cases, producing in turn the test cases, test data, test results and test reports.

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification
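The distinction can be made concrete in code. In the sketch below (the `classify` function and its thresholds are hypothetical), the test data are the inputs alone, while each test case pairs an input with the output predicted from the specification:

```python
# Hypothetical system under test: a function classifying an exam score.
def classify(score: int) -> str:
    if score < 0 or score > 100:
        return "invalid"
    return "pass" if score >= 40 else "fail"

# Test data would be just the inputs: [39, 40, 100, 101].
# Test cases add the predicted outputs from the specification:
test_cases = [(39, "fail"), (40, "pass"), (100, "pass"), (101, "invalid")]

for inp, expected in test_cases:
    actual = classify(inp)
    assert actual == expected, f"classify({inp}) = {actual}, expected {expected}"
```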

Project Planning
 Quality Plan: describes the quality procedures and standards that will be used in a project.
 Validation Plan: describes the approach, resources and schedule used for system validation.
 Configuration Management Plan: describes the configuration management procedures and structures to be used.
 Maintenance Plan: predicts the maintenance requirements of the system, maintenance costs and effort required.
 Staff Development Plan: describes how the skills and experience of the project team members will be developed.

Verification and Validation Plan

[Figure: the V-model linking development and testing. The requirements specification drives the acceptance test plan, executed as the acceptance test before the system goes into service; the system specification drives the system integration test plan, executed as the system integration test; the system design drives the sub-system integration test plan, executed as the sub-system integration test; detailed design leads to module and unit code and tests.]

Test plans act as the link between development and testing.

Testing Process

Unit testing → Module testing → Sub-system testing → System testing → Acceptance testing

(Unit and module testing together form component testing; sub-system and system testing form integration testing; acceptance testing is user testing.)

Testing Process
 Unit testing - Individual components are tested
independently, without other system components
 Module testing - Related collections of dependent
components (classes, ADTs, procedures and functions) are tested,
without other system modules.
 Sub-system testing-Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validating
functional and non-functional requirements & Testing of
emergent system properties.
 Acceptance testing - Testing with customer data to check that
it is acceptable. Also called alpha testing.

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for the
intended user in his or her normal work
environment.
Alpha test: a version of the complete software is
tested by the customer under the supervision of the
developer, at the developer's site.
Beta test: a version of the complete software is
tested by the customer at his or her own site, without
the developer being present.

Black-box testing
 Also known as behavioral or functional testing.
 The system is a “Blackbox” whose behavior can be
determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform
and demonstrating correct operation based solely on its
specification without regard for its internal logic.
 Focuses on the functional requirements of the software,
i.e., the information domain rather than the implementation,
and disregards control structure.
 The program test cases are based on the system
specification
 It is performed during later stages of testing like in the
acceptance testing or beta testing.

Black-box testing

[Figure: input test data I is fed to the system; the subset Ie of inputs causing anomalous behaviour produces the output test results Oe, which reveal the presence of defects.]

Black-box testing
Tests are designed to answer the following questions:

 How is functional validity tested?
 How are system behavior and performance tested?
 What classes of input will make good test cases?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of a data class isolated?
 What data rates and data volumes can the system
tolerate?
 What effect will specific combinations of data have on
system operation?

Advantages of Black-box Testing
 Validates whether or not a given system conforms to
its software specification.
 Introduces a series of inputs to a system and compares
the outputs to a pre-defined test specification.
 Tests integration between individual system
components.
 Tests are architecture-independent: they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Requires no knowledge of the underlying system;
one need not be a software engineer to design
black-box tests.

Disadvantages of Black-box Testing
 Offers no guarantee that every line of code has been
tested.
 Being architecture-independent, it cannot determine
the efficiency of the code.
 Will not find errors, such as memory leaks, that
are not explicitly and immediately exposed by the
application.

Black-box testing techniques
 Graph-based testing methods
 Equivalence Partitioning
 Boundary Value Analysis (BVA)
 Comparison Testing
 Orthogonal Array Testing

Equivalence Partitioning
 Black-box technique that divides the input domain into
classes of data from which test cases can be derived
 An ideal test case uncovers a class of errors (e.g. incorrect
processing of all incorrect data) that might otherwise require many
arbitrary test cases to be executed before the general error is
observed
 Equivalence class guidelines:
 If an input condition specifies a range, one valid and two invalid
equivalence classes are defined
 If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined
 If an input condition specifies a member of a set, one valid
and one invalid equivalence class is defined
 If an input condition is Boolean, one valid and one invalid
equivalence class is defined

Equivalence Partitioning

[Figure: valid inputs and invalid inputs form separate partitions; both are fed to the system, which produces outputs.]

Equivalence Partitioning

[Example: for a "number of input values" field with valid range 4 to 10, the partitions are "less than 4", "between 4 and 10" and "more than 10"; representative test values are 3, 4, 7, 10 and 11. For an "input values" field with valid range 10000 to 99999, the partitions are "less than 10000", "between 10000 and 99999" and "more than 99999"; representative test values are 9999, 10000, 50000, 99999 and 100000.]
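The first example can be sketched as code: one representative per partition, plus the boundary values on either side of 4 and 10 (the validator itself is a made-up stand-in for the system under test):

```python
# Equivalence-partitioning sketch for a field whose valid range is 4..10.
def count_is_valid(n: int) -> bool:
    return 4 <= n <= 10

# (test value, partition it represents, expected result)
cases = [
    (3,  "less than 4",      False),
    (4,  "lower boundary",   True),
    (7,  "between 4 and 10", True),
    (10, "upper boundary",   True),
    (11, "more than 10",     False),
]

for value, partition, expected in cases:
    assert count_is_valid(value) == expected, partition
```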

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the
input domain rather than its center
 BVA guidelines:
 If an input condition specifies a range bounded by values a and
b, test cases should include a and b, and values just above and
just below a and b
 If an input condition specifies a number of values, test
cases should exercise the minimum and maximum
numbers, as well as values just above and just below the
minimum and maximum
 Apply guidelines 1 and 2 to output conditions; test cases
should be designed to produce the minimum and maximum
output reports
 If internal program data structures have boundaries (e.g.
size limitations), be certain to test the boundaries
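Guideline 1 is mechanical enough to express directly; a minimal sketch, assuming integer-valued inputs:

```python
# BVA guideline 1: for a range bounded by a and b, test a, b, and the
# values just above and just below each boundary.
def boundary_values(a: int, b: int) -> list[int]:
    return [a - 1, a, a + 1, b - 1, b, b + 1]

print(boundary_values(4, 10))   # [3, 4, 5, 9, 10, 11]
```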

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety-critical systems (such
as aircraft avionics or automobile braking systems) in
which independently developed implementations of
redundant systems are tested for conformance to
specifications
 Often equivalence class partitioning is used to
develop a common set of test cases for each
implementation.
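The idea is that two independent implementations of the same specification should agree on every common test case; a disagreement flags a defect in at least one of them. A minimal sketch, with two sort routines standing in for the redundant implementations:

```python
def impl_a(xs):                       # first team's implementation
    return sorted(xs)

def impl_b(xs):                       # second team's implementation
    ys = list(xs)
    for i in range(len(ys)):          # insertion sort
        j = i
        while j > 0 and ys[j - 1] > ys[j]:
            ys[j - 1], ys[j] = ys[j], ys[j - 1]
            j -= 1
    return ys

# A common set of test cases (e.g. derived by equivalence partitioning):
common_tests = [[3, 1, 2], [], [5, 5, 1], [-1, 0]]
for t in common_tests:
    # Any disagreement indicates a defect in at least one implementation.
    assert impl_a(t) == impl_b(t), f"implementations disagree on {t}"
```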

Orthogonal Array Testing
 Black-box technique that enables the design of a
reasonably small set of test cases that provide
maximum test coverage
 Focus is on categories of faulty logic likely to be
present in the software component (without
examining the code)
 Priorities for assessing tests using an orthogonal
array
 Detect and isolate all single mode faults
 Detect all double mode faults
 Multimode faults
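The smallest orthogonal arrays can be checked by hand. The sketch below shows the standard L4(2^3) array, which covers every pair of values of three two-level factors in only 4 runs (a full factorial would need 8); the verification loop is purely illustrative:

```python
from itertools import combinations

# L4(2^3) orthogonal array: 4 runs, 3 two-level factors.
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

# Pairwise-coverage property: every pair of columns takes on all four
# value combinations (0,0), (0,1), (1,0), (1,1).
for c1, c2 in combinations(range(3), 2):
    pairs = {(row[c1], row[c2]) for row in L4}
    assert pairs == {(0, 0), (0, 1), (1, 0), (1, 1)}
```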

White-box or Glass Box
testing
Knowing the internal workings of a product,
tests are performed to check the workings of all
independent logic paths.
It derives test cases that:
 Guarantee that all independent paths within a module
have been exercised at least once.
 Exercise all logical decisions on their true and false
sides.
 Execute all loops at their boundaries and within their
operational bounds, and
 Exercise internal data structures to ensure their
validity.

Techniques used: basis path testing and control
structure testing.
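A minimal basis-path sketch: the function below has cyclomatic complexity 3 (two decisions plus one), so three independent paths must each be exercised at least once (the function itself is a made-up example):

```python
def grade(score: int) -> str:
    if score < 0:          # decision 1
        return "invalid"
    if score >= 40:        # decision 2
        return "pass"
    return "fail"

# One test case per independent path:
assert grade(-5) == "invalid"   # decision 1 true
assert grade(75) == "pass"      # decision 1 false, decision 2 true
assert grade(10) == "fail"      # both decisions false
```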

White-box or Glass Box testing

[Figure: the tester derives tests from the component code; test data drive the tests, which produce test outputs.]

Integration Testing
Tests complete systems or subsystems
composed of integrated components
Integration testing should be black-box testing
with tests derived from the specification
Main difficulty is localising errors
Incremental integration testing reduces this
problem.
Incremental integration strategies include
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to
integration testing
Top-down testing
 Start with high-level system and integrate from the
top-down replacing individual components by stubs
where appropriate

Bottom-up testing
 Integrate individual components in levels until the
complete system is created

In practice, most integration involves a combination
of these strategies
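A stub, in the top-down case, is a placeholder that returns canned answers in place of a lower-level component that is not yet integrated. A minimal sketch (all names and the 20% rate are illustrative):

```python
# Stub replacing a lower-level lookup that is not yet integrated.
def tax_rate_stub(country: str) -> float:
    return 0.20                     # canned answer

# High-level component under test; depends on the lower-level lookup.
def price_with_tax(net: float, country: str, rate_lookup=tax_rate_stub) -> float:
    return round(net * (1 + rate_lookup(country)), 2)

# The top-level logic can be exercised before the real lookup exists:
assert price_with_tax(100.0, "NP") == 120.0
```

In bottom-up testing the roles reverse: a test driver is written to call the already-built lower-level components.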

Top-down testing

[Figure: the testing sequence starts at Level 1; Level 2 components are integrated next, with Level 2 and Level 3 components replaced by stubs until they are available.]

Bottom-up testing

[Figure: the testing sequence starts at Level N, exercised by test drivers; the Level N–1 components that call them are then integrated and exercised by their own test drivers.]

System Testing
 Recovery testing
 Checks the system’s ability to recover from failures.

 Security testing
 Verifies that system protection mechanisms prevent improper
penetration or data alteration

 Stress testing
 Program is checked to see how well it deals with abnormal
resource demands – quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.

Object-oriented Testing

The components to be tested are object classes
that are instantiated as objects
Object classes are larger-grain than individual
functions, so approaches to white-box testing have
to be extended
There is no obvious 'top' to the system for top-down
integration and testing

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List
Item No.: SR-02   Test Item: Staff Review
  Sub-item No.: SR-02-01   Test Sub-item: Program Officer Review   Level: A
  Sub-item No.: SR-02-02   Test Sub-item: Early Decline Report     Level: A

Test Levels:
A: Basic Function, compulsory
B: Enhanced Function, compulsory
C: Enhanced Function, optional

Testing Details
SR-02 Staff Review
Item No: SR-02-01          Test Date:
Item: Staff Review         Sub-item: PO Review (Report: Early Decline)
Precondition:
Test Procedure:
Test Standard:
Test Description:
Test Result and Conclusion:  Passed /  Failed
Sign of the Tester:          Sign of the Manager:

References
 From Software Engineering: A Practitioner's Approach by Roger S. Pressman
– Chapter 17: Software Testing Techniques
• Software testing fundamentals
• Test case design
• White-box testing: basis path, control structure testing
• Black-box testing
– Chapter 18: Software Testing Strategies
• A strategic approach to software testing
• Unit, integration, validation and system testing
 From Software Engineering by Ian Sommerville
– Part 5: Verification and Validation
• Chapter 19: Verification and validation
• Chapter 20: Software testing


Slide 29

Slide 32

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
Static
verification

Requirements
specification

Prototype

High-level
design

Formal
specification

Detailed
design

Program

Dynamic
validation

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yetundiscovered error.

Software testing priciples
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability-the better it works the more efficiently it can
be tested
 Observability-what you see is what you test
 Controllability-the better software can be controlled the
more testing can be automated and optimized
 Decomposability-by controlling the scope of testing, the
more quickly problems can be isolated and retested
intelligently
 Simplicity-the less there is to test, the more quickly we
can test
 Stability-the fewer the changes, the fewer the
disruptions to testing
 Understandability-the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
Test
results

Locate
error

Test
cases

Specification

Design
error repair

Repair
error

Re-test
program

The defect testing process
Test
cases

Design test
cases

Test
data

Prepare test
data

Test
results

Run program
with test data

Test
reports

Compare results
to test cases

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification

Project Planning
Plan

Description

Quality Plan

Describes the quality procedure and
standards that will be used in a project.

Validation Plan

Describes the approach, resources and
schedule used for system validation.

Configuration
Management Plan

Describes the configuration management
procedures and structures to be used.

Maintenance Plan

Predicts the maintenance requirements of
the system, maintenance costs and effort
required.

Staff development Plan

Describes how the skills and experience of
the project team members will be developed.

Verification and Validation
Plan
Requirements
specification

System
specification

System
integration
test plan

Acceptance
test plan

Service

System
desig n

Acceptance
test

Detailed
design

Sub-system
integration
test plan

System
integration test

Module and
unit code
and tess

Sub-system
integration test

Test Plan as a link between development and testing

Testing Process

Unit
testing
Module
testing
Sub-system
testing
System
testing
Acceptance
testing

Component
testing

Integration testing

User
testing

Testing Process
 Unit testing - Individual components are tested
independently, without other system components
 Module testing - Related collections of dependent
components( class, ADT, procedures & functions) are tested,
without other system module.
 Sub-system testing-Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validating
functional and non-functional requirements & Testing of
emergent system properties.
 Acceptance testing-Testing with customer data to check that
it is acceptable. Also called Alpha Testing

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for
intended user in his or her normal work
environment.
Alpha test-version of the complete software is
tested by customer under the supervision of the
developer at the developer’s site.
Beta test-version of the complete software is
tested by customer at his or her own site without
the developer being present

Black-box testing
 Also known as behavioral or functional testing.
 The system is a “Blackbox” whose behavior can be
determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform
and demonstrating correct operation based solely on its
specification without regard for its internal logic.
 Focus on the functional requirements of the software
i.e., information domain not the implementation part of
the software and disregards control structure.
 The program test cases are based on the system
specification
 It is performed during later stages of testing like in the
acceptance testing or beta testing.

Black-box testing

Input test data

I

Inputs causing
anomalous
behaviour

e

System

Output test results

Oe

Outputs which reveal
the presence of
defects

Black-box testing
Test are designed to answer the following questions:

 How is functional validity tested?
 How is system behavior and performance tested?
 What classes of input behavior will make good test case?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of data class isolated?
 What data rates and data volume can the system
tolerate?
 What effect will specific combinations of data have on
system operations?

Advantages of Black box testing
 Validates whether or not a given system conforms
to
its software specification
 Introduce a series of inputs to a system and
compare
the outputs to a pre-defined test specification.
 Test integration between individual system
components.
 Tests are architecture independent — they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Require no knowledge of the underlying system,
one
need not be a software engineer to design black
box
tests.

Disadvantages of Black box
testing
 Offer no guarantee that every line of code has been
tested.
 Being architecture independent, it cannot determine
the efficiency of the code.
 Will not find any errors, such as memory leaks, that
are not explicitly and instantly exposed by the
application.

Black-box testing techniques







Graph-based testing methods
Equivalence Partitioning,
Boundary Value Analysis (BVA)
Comparison Testing
Orthogonal Array Testing.

Equivalence Partitioning
 Black-box technique that divides the input domain into
classes of data from which test cases can be derived
 An ideal test case uncovers a class of errors( incorrect
processing of all incorrect data) that might require many
arbitrary test cases to be executed before a general error is
observed
 Equivalence class guidelines:
 If input condition specifies a range, one valid and two invalid
equivalence classes are defined
 If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined
 If an input condition specifies a member of a set, one valid
and one invalid equivalence class is defined
 If an input condition is Boolean, one valid and one invalid
equivalence class is defined

Equivalence Partitioning

Invalid inputs

System

Outputs

Valid in puts

Equivalence Partitioning

3
4

Less than 4

7

11
10

Between 4 and 10

More than 10

Number of input values
9999
10000

Less than 10000
Input values

50000

100000
99999

Between 10000 and 99999

More than 99999

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the
input domain rather than its center
 BVA guidelines:
 If input condition specifies a range bounded by values a and
b, test cases should include a and b, values just above and
just below a and b
 If an input condition specifies and number of values, test
cases should be exercise the minimum and maximum
numbers, as well as values just above and just below the
minimum and maximum values
 Apply guidelines 1 and 2 to output conditions, test cases
should be designed to produce the minimum and maxim
output reports
 If internal program data structures have boundaries (e.g.
size limitations), be certain to test the boundaries

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety critical systems ( such
as aircraft avionics, automobile braking system) in
which independently developed implementations of
redundant systems are tested for conformance to
specifications
 Often equivalence class partitioning is used to
develop a common set of test cases for each
implementation.

Orthogonal Array Testing
 Black-box technique that enables the design of a
reasonably small set of test cases that provide
maximum test coverage
 Focus is on categories of faulty logic likely to be
present in the software component (without
examining the code)
 Priorities for assessing tests using an orthogonal
array
 Detect and isolate all single mode faults
 Detect all double mode faults
 Mutimode faults

White-box or Glass Box
testing
Knowing the internal workings of a product,
tests are performed to check the workings of all
independent logic paths.
It derive test cases that:
 Guarantee that all independent paths within a module
have been exercised at least once.
 Exercise all logical decisions on their true and false
sides.
 Execute all loops at their boundaries and within their
operational bounds, and
 Exercise internal data structures to ensure their
validity.

Techniques being used: basic path and control
structure testing.

White-box or Glass Box
testing
Test data

Tests

Derives
Component
code

Test
outputs

Integration Testing
Tests complete systems or subsystems
composed of integrated components
Integration testing should be black-box testing
with tests derived from the specification
Main difficulty is localising errors
Incremental integration testing reduces this
problem.
Incremental integration strategies include
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to
integration testing
Top-down testing
 Start with high-level system and integrate from the
top-down replacing individual components by stubs
where appropriate

Bottom-up testing
 Integrate individual components in levels until the
complete system is created

In practice, most integration involves a combination
of these strategies

Top-down testing

Level 1

Testing
sequence

Level 2
Level 2
stubs

Level 3
stubs

Level 1

Level 2

Level 2

. ..

Level 2

Bottom-up testing

Test
drivers
Level N

Test
drivers

Level N

Level N–1

Level N

Level N–1

Level N

Level N

Level N–1

Testing
sequence

System Testing
 Recovery testing
 Checks the system’s ability to recover from failures.

 Security testing
 Verifies that system protection mechanism prevent improper
penetration or data alteration

 Stress testing
 Program is checked to see how well it deals with abnormal
resource demands – quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.

Object-oriented Testing

The components to be tested are object classes
that are instantiated as objects
Larger gain than individual functions so
approaches to white-box testing have to be
extended
No obvious ‘top’ to the system for top-down
integration and testing

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List
Item
No.
SR-02

Test Item

Sub –
Test-Sub Item
item No.
Staff Review SR-02-01 Program Officer
Review

SR-02-02 Early Decline Report

Test-Level
A- Basic Function, compulsory
B- Enhanced Function, compulsory
C- Enhanced Function, optional

Level
A

A

Testing Details
SR-02 Staff Review
Item No

SR-02-01

Test Date

Item

Staff Review

Sub-item

PO Review
Report: Early Decline

Precondition
Test Procedure

Test Standard
Test description

 Passed
 Failed

Test Result and
Conclusion
Sin of the Tester

Sign of the
Manager

References
 From software engineering, A practitioner’s
approach by Roger S. Pressman
– Chapter 17: Software testing techniques





Software Testing Fundamentals
Test case design
White-box testing- Basic path, Control Structure Testing
Black-box testing

– Chapter 18: Software Testing Strategies
• A strategic approach to software testing
• Unit, Integration, Validation, System testing

 From Software Engineering, Ian Sommerville
– Part5: Verification and Validation
• Chapter 19: Verification and validation
• Chapter 20: Software testing


Slide 33

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
Static
verification

Requirements
specification

Prototype

High-level
design

Formal
specification

Detailed
design

Program

Dynamic
validation

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yetundiscovered error.

Software testing priciples
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability-the better it works the more efficiently it can
be tested
 Observability-what you see is what you test
 Controllability-the better software can be controlled the
more testing can be automated and optimized
 Decomposability-by controlling the scope of testing, the
more quickly problems can be isolated and retested
intelligently
 Simplicity-the less there is to test, the more quickly we
can test
 Stability-the fewer the changes, the fewer the
disruptions to testing
 Understandability-the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
Test
results

Locate
error

Test
cases

Specification

Design
error repair

Repair
error

Re-test
program

The defect testing process
Test
cases

Design test
cases

Test
data

Prepare test
data

Test
results

Run program
with test data

Test
reports

Compare results
to test cases

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification

Project Planning
Plan

Description

Quality Plan

Describes the quality procedure and
standards that will be used in a project.

Validation Plan

Describes the approach, resources and
schedule used for system validation.

Configuration
Management Plan

Describes the configuration management
procedures and structures to be used.

Maintenance Plan

Predicts the maintenance requirements of
the system, maintenance costs and effort
required.

Staff development Plan

Describes how the skills and experience of
the project team members will be developed.

Verification and Validation
Plan
Requirements
specification

System
specification

System
integration
test plan

Acceptance
test plan

Service

System
desig n

Acceptance
test

Detailed
design

Sub-system
integration
test plan

System
integration test

Module and
unit code
and tess

Sub-system
integration test

Test Plan as a link between development and testing

Testing Process

Unit
testing
Module
testing
Sub-system
testing
System
testing
Acceptance
testing

Component
testing

Integration testing

User
testing

Testing Process
 Unit testing - Individual components are tested
independently, without other system components
 Module testing - Related collections of dependent
components( class, ADT, procedures & functions) are tested,
without other system module.
 Sub-system testing-Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validating
functional and non-functional requirements & Testing of
emergent system properties.
 Acceptance testing-Testing with customer data to check that
it is acceptable. Also called Alpha Testing

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for
intended user in his or her normal work
environment.
Alpha test-version of the complete software is
tested by customer under the supervision of the
developer at the developer’s site.
Beta test-version of the complete software is
tested by customer at his or her own site without
the developer being present

Black-box testing
 Also known as behavioral or functional testing.
 The system is a “Blackbox” whose behavior can be
determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform
and demonstrating correct operation based solely on its
specification without regard for its internal logic.
 Focus on the functional requirements of the software
i.e., information domain not the implementation part of
the software and disregards control structure.
 The program test cases are based on the system
specification
 It is performed during later stages of testing like in the
acceptance testing or beta testing.

Black-box testing

Input test data

I

Inputs causing
anomalous
behaviour

e

System

Output test results

Oe

Outputs which reveal
the presence of
defects

Black-box testing
Test are designed to answer the following questions:

 How is functional validity tested?
 How is system behavior and performance tested?
 What classes of input behavior will make good test case?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of data class isolated?
 What data rates and data volume can the system
tolerate?
 What effect will specific combinations of data have on
system operations?

Advantages of Black box testing
 Validates whether or not a given system conforms
to
its software specification
 Introduce a series of inputs to a system and
compare
the outputs to a pre-defined test specification.
 Test integration between individual system
components.
 Tests are architecture independent — they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Require no knowledge of the underlying system,
one
need not be a software engineer to design black
box
tests.

Disadvantages of Black box
testing
 Offer no guarantee that every line of code has been
tested.
 Being architecture independent, it cannot determine
the efficiency of the code.
 Will not find any errors, such as memory leaks, that
are not explicitly and instantly exposed by the
application.

Black-box testing techniques







Graph-based testing methods
Equivalence Partitioning,
Boundary Value Analysis (BVA)
Comparison Testing
Orthogonal Array Testing.

Equivalence Partitioning
 Black-box technique that divides the input domain into
classes of data from which test cases can be derived
 An ideal test case uncovers a class of errors( incorrect
processing of all incorrect data) that might require many
arbitrary test cases to be executed before a general error is
observed
 Equivalence class guidelines:
 If input condition specifies a range, one valid and two invalid
equivalence classes are defined
 If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined
 If an input condition specifies a member of a set, one valid
and one invalid equivalence class is defined
 If an input condition is Boolean, one valid and one invalid
equivalence class is defined

Equivalence Partitioning

Invalid inputs

System

Outputs

Valid in puts

Equivalence Partitioning

3
4

Less than 4

7

11
10

Between 4 and 10

More than 10

Number of input values
9999
10000

Less than 10000
Input values

50000

100000
99999

Between 10000 and 99999

More than 99999

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the
input domain rather than its center
 BVA guidelines:
 If input condition specifies a range bounded by values a and
b, test cases should include a and b, values just above and
just below a and b
 If an input condition specifies and number of values, test
cases should be exercise the minimum and maximum
numbers, as well as values just above and just below the
minimum and maximum values
 Apply guidelines 1 and 2 to output conditions, test cases
should be designed to produce the minimum and maxim
output reports
 If internal program data structures have boundaries (e.g.
size limitations), be certain to test the boundaries

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety critical systems ( such
as aircraft avionics, automobile braking system) in
which independently developed implementations of
redundant systems are tested for conformance to
specifications
 Often equivalence class partitioning is used to
develop a common set of test cases for each
implementation.

Orthogonal Array Testing
 Black-box technique that enables the design of a
reasonably small set of test cases that provide
maximum test coverage
 Focus is on categories of faulty logic likely to be
present in the software component (without
examining the code)
 Priorities for assessing tests using an orthogonal
array
 Detect and isolate all single mode faults
 Detect all double mode faults
 Mutimode faults

White-box or Glass Box
testing
Knowing the internal workings of a product,
tests are performed to check the workings of all
independent logic paths.
It derive test cases that:
 Guarantee that all independent paths within a module
have been exercised at least once.
 Exercise all logical decisions on their true and false
sides.
 Execute all loops at their boundaries and within their
operational bounds, and
 Exercise internal data structures to ensure their
validity.

Techniques being used: basic path and control
structure testing.

White-box or Glass Box
testing
Test data

Tests

Derives
Component
code

Test
outputs

Integration Testing
Tests complete systems or subsystems
composed of integrated components
Integration testing should be black-box testing
with tests derived from the specification
Main difficulty is localising errors
Incremental integration testing reduces this
problem.
Incremental integration strategies include
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to
integration testing
Top-down testing
 Start with high-level system and integrate from the
top-down replacing individual components by stubs
where appropriate

Bottom-up testing
 Integrate individual components in levels until the
complete system is created

In practice, most integration involves a combination
of these strategies

Top-down testing

Level 1

Testing
sequence

Level 2
Level 2
stubs

Level 3
stubs

Level 1

Level 2

Level 2

. ..

Level 2

Bottom-up testing

Test
drivers
Level N

Test
drivers

Level N

Level N–1

Level N

Level N–1

Level N

Level N

Level N–1

Testing
sequence

System Testing
 Recovery testing
 Checks the system’s ability to recover from failures.

 Security testing
 Verifies that system protection mechanism prevent improper
penetration or data alteration

 Stress testing
 Program is checked to see how well it deals with abnormal
resource demands – quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.

Object-oriented Testing

The components to be tested are object classes
that are instantiated as objects
Larger gain than individual functions so
approaches to white-box testing have to be
extended
No obvious ‘top’ to the system for top-down
integration and testing

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List

  Item No.  Test Item      Sub-item No.  Test-Sub Item            Level
  SR-02     Staff Review   SR-02-01      Program Officer Review   A
                           SR-02-02      Early Decline Report     A

Test-Level
  A – Basic Function, compulsory
  B – Enhanced Function, compulsory
  C – Enhanced Function, optional

Testing Details

SR-02 Staff Review
  Item No:    SR-02-01          Test Date:
  Item:       Staff Review
  Sub-item:   PO Review – Report: Early Decline

  Precondition:
  Test Procedure:
  Test Standard:
  Test description:

  Test Result and Conclusion:    Passed    Failed

  Sign of the Tester:            Sign of the Manager:

References
 From Software Engineering: A Practitioner’s Approach by Roger S. Pressman
  – Chapter 17: Software Testing Techniques
    • Software testing fundamentals
    • Test-case design
    • White-box testing: basis path and control structure testing
    • Black-box testing
  – Chapter 18: Software Testing Strategies
    • A strategic approach to software testing
    • Unit, integration, validation, and system testing
 From Software Engineering by Ian Sommerville
  – Part 5: Verification and Validation
    • Chapter 19: Verification and validation
    • Chapter 20: Software testing


Slide 34

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
Static
verification

Requirements
specification

Prototype

High-level
design

Formal
specification

Detailed
design

Program

Dynamic
validation

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yetundiscovered error.

Software testing priciples
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability-the better it works the more efficiently it can
be tested
 Observability-what you see is what you test
 Controllability-the better software can be controlled the
more testing can be automated and optimized
 Decomposability-by controlling the scope of testing, the
more quickly problems can be isolated and retested
intelligently
 Simplicity-the less there is to test, the more quickly we
can test
 Stability-the fewer the changes, the fewer the
disruptions to testing
 Understandability-the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
Test
results

Locate
error

Test
cases

Specification

Design
error repair

Repair
error

Re-test
program

The defect testing process
Test
cases

Design test
cases

Test
data

Prepare test
data

Test
results

Run program
with test data

Test
reports

Compare results
to test cases

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification

Project Planning
Plan

Description

Quality Plan

Describes the quality procedure and
standards that will be used in a project.

Validation Plan

Describes the approach, resources and
schedule used for system validation.

Configuration
Management Plan

Describes the configuration management
procedures and structures to be used.

Maintenance Plan

Predicts the maintenance requirements of
the system, maintenance costs and effort
required.

Staff development Plan

Describes how the skills and experience of
the project team members will be developed.

Verification and Validation
Plan
Requirements
specification

System
specification

System
integration
test plan

Acceptance
test plan

Service

System
desig n

Acceptance
test

Detailed
design

Sub-system
integration
test plan

System
integration test

Module and
unit code
and tess

Sub-system
integration test

Test Plan as a link between development and testing

Testing Process

Unit
testing
Module
testing
Sub-system
testing
System
testing
Acceptance
testing

Component
testing

Integration testing

User
testing

Testing Process
 Unit testing - Individual components are tested
independently, without other system components
 Module testing - Related collections of dependent
components( class, ADT, procedures & functions) are tested,
without other system module.
 Sub-system testing-Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validating
functional and non-functional requirements & Testing of
emergent system properties.
 Acceptance testing-Testing with customer data to check that
it is acceptable. Also called Alpha Testing

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for
intended user in his or her normal work
environment.
Alpha test-version of the complete software is
tested by customer under the supervision of the
developer at the developer’s site.
Beta test-version of the complete software is
tested by customer at his or her own site without
the developer being present

Black-box testing
 Also known as behavioral or functional testing.
 The system is a “Blackbox” whose behavior can be
determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform
and demonstrating correct operation based solely on its
specification without regard for its internal logic.
 Focus on the functional requirements of the software
i.e., information domain not the implementation part of
the software and disregards control structure.
 The program test cases are based on the system
specification
 It is performed during later stages of testing like in the
acceptance testing or beta testing.

Black-box testing

Input test data

I

Inputs causing
anomalous
behaviour

e

System

Output test results

Oe

Outputs which reveal
the presence of
defects

Black-box testing
Test are designed to answer the following questions:

 How is functional validity tested?
 How is system behavior and performance tested?
 What classes of input behavior will make good test case?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of data class isolated?
 What data rates and data volume can the system
tolerate?
 What effect will specific combinations of data have on
system operations?

Advantages of Black box testing
 Validates whether or not a given system conforms
to
its software specification
 Introduce a series of inputs to a system and
compare
the outputs to a pre-defined test specification.
 Test integration between individual system
components.
 Tests are architecture independent — they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Require no knowledge of the underlying system,
one
need not be a software engineer to design black
box
tests.

Disadvantages of Black box
testing
 Offer no guarantee that every line of code has been
tested.
 Being architecture independent, it cannot determine
the efficiency of the code.
 Will not find any errors, such as memory leaks, that
are not explicitly and instantly exposed by the
application.

Black-box testing techniques







Graph-based testing methods
Equivalence Partitioning,
Boundary Value Analysis (BVA)
Comparison Testing
Orthogonal Array Testing.

Equivalence Partitioning
 Black-box technique that divides the input domain into
classes of data from which test cases can be derived
 An ideal test case uncovers a class of errors( incorrect
processing of all incorrect data) that might require many
arbitrary test cases to be executed before a general error is
observed
 Equivalence class guidelines:
 If input condition specifies a range, one valid and two invalid
equivalence classes are defined
 If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined
 If an input condition specifies a member of a set, one valid
and one invalid equivalence class is defined
 If an input condition is Boolean, one valid and one invalid
equivalence class is defined

Equivalence Partitioning

Invalid inputs

System

Outputs

Valid in puts

Equivalence Partitioning

3
4

Less than 4

7

11
10

Between 4 and 10

More than 10

Number of input values
9999
10000

Less than 10000
Input values

50000

100000
99999

Between 10000 and 99999

More than 99999

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the
input domain rather than its center
 BVA guidelines:
 If input condition specifies a range bounded by values a and
b, test cases should include a and b, values just above and
just below a and b
 If an input condition specifies and number of values, test
cases should be exercise the minimum and maximum
numbers, as well as values just above and just below the
minimum and maximum values
 Apply guidelines 1 and 2 to output conditions, test cases
should be designed to produce the minimum and maxim
output reports
 If internal program data structures have boundaries (e.g.
size limitations), be certain to test the boundaries

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety critical systems ( such
as aircraft avionics, automobile braking system) in
which independently developed implementations of
redundant systems are tested for conformance to
specifications
 Often equivalence class partitioning is used to
develop a common set of test cases for each
implementation.

Orthogonal Array Testing
 Black-box technique that enables the design of a
reasonably small set of test cases that provide
maximum test coverage
 Focus is on categories of faulty logic likely to be
present in the software component (without
examining the code)
 Priorities for assessing tests using an orthogonal
array
 Detect and isolate all single mode faults
 Detect all double mode faults
 Mutimode faults

White-box or Glass Box
testing
Knowing the internal workings of a product,
tests are performed to check the workings of all
independent logic paths.
It derive test cases that:
 Guarantee that all independent paths within a module
have been exercised at least once.
 Exercise all logical decisions on their true and false
sides.
 Execute all loops at their boundaries and within their
operational bounds, and
 Exercise internal data structures to ensure their
validity.

Techniques being used: basic path and control
structure testing.

White-box or Glass Box
testing
Test data

Tests

Derives
Component
code

Test
outputs

Integration Testing
Tests complete systems or subsystems
composed of integrated components
Integration testing should be black-box testing
with tests derived from the specification
Main difficulty is localising errors
Incremental integration testing reduces this
problem.
Incremental integration strategies include
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to
integration testing
Top-down testing
 Start with high-level system and integrate from the
top-down replacing individual components by stubs
where appropriate

Bottom-up testing
 Integrate individual components in levels until the
complete system is created

In practice, most integration involves a combination
of these strategies

Top-down testing

Level 1

Testing
sequence

Level 2
Level 2
stubs

Level 3
stubs

Level 1

Level 2

Level 2

. ..

Level 2

Bottom-up testing

Test
drivers
Level N

Test
drivers

Level N

Level N–1

Level N

Level N–1

Level N

Level N

Level N–1

Testing
sequence

System Testing
 Recovery testing
 Checks the system’s ability to recover from failures.

 Security testing
 Verifies that system protection mechanism prevent improper
penetration or data alteration

 Stress testing
 Program is checked to see how well it deals with abnormal
resource demands – quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.

Object-oriented Testing

The components to be tested are object classes
that are instantiated as objects
Larger gain than individual functions so
approaches to white-box testing have to be
extended
No obvious ‘top’ to the system for top-down
integration and testing

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List
Item
No.
SR-02

Test Item

Sub –
Test-Sub Item
item No.
Staff Review SR-02-01 Program Officer
Review

SR-02-02 Early Decline Report

Test-Level
A- Basic Function, compulsory
B- Enhanced Function, compulsory
C- Enhanced Function, optional

Level
A

A

Testing Details
SR-02 Staff Review
Item No

SR-02-01

Test Date

Item

Staff Review

Sub-item

PO Review
Report: Early Decline

Precondition
Test Procedure

Test Standard
Test description

 Passed
 Failed

Test Result and
Conclusion
Sin of the Tester

Sign of the
Manager

References
 From software engineering, A practitioner’s
approach by Roger S. Pressman
– Chapter 17: Software testing techniques





Software Testing Fundamentals
Test case design
White-box testing- Basic path, Control Structure Testing
Black-box testing

– Chapter 18: Software Testing Strategies
• A strategic approach to software testing
• Unit, Integration, Validation, System testing

 From Software Engineering, Ian Sommerville
– Part5: Verification and Validation
• Chapter 19: Verification and validation
• Chapter 20: Software testing


Slide 35

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
Static
verification

Requirements
specification

Prototype

High-level
design

Formal
specification

Detailed
design

Program

Dynamic
validation

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yetundiscovered error.

Software testing priciples
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability-the better it works the more efficiently it can
be tested
 Observability-what you see is what you test
 Controllability-the better software can be controlled the
more testing can be automated and optimized
 Decomposability-by controlling the scope of testing, the
more quickly problems can be isolated and retested
intelligently
 Simplicity-the less there is to test, the more quickly we
can test
 Stability-the fewer the changes, the fewer the
disruptions to testing
 Understandability-the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
Test
results

Locate
error

Test
cases

Specification

Design
error repair

Repair
error

Re-test
program

The defect testing process
Test
cases

Design test
cases

Test
data

Prepare test
data

Test
results

Run program
with test data

Test
reports

Compare results
to test cases

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification

Project Planning
Plan

Description

Quality Plan

Describes the quality procedure and
standards that will be used in a project.

Validation Plan

Describes the approach, resources and
schedule used for system validation.

Configuration
Management Plan

Describes the configuration management
procedures and structures to be used.

Maintenance Plan

Predicts the maintenance requirements of
the system, maintenance costs and effort
required.

Staff development Plan

Describes how the skills and experience of
the project team members will be developed.

Verification and Validation
Plan
Requirements
specification

System
specification

System
integration
test plan

Acceptance
test plan

Service

System
desig n

Acceptance
test

Detailed
design

Sub-system
integration
test plan

System
integration test

Module and
unit code
and tess

Sub-system
integration test

Test Plan as a link between development and testing

Testing Process

Unit
testing
Module
testing
Sub-system
testing
System
testing
Acceptance
testing

Component
testing

Integration testing

User
testing

Testing Process
 Unit testing - Individual components are tested
independently, without other system components
 Module testing - Related collections of dependent
components( class, ADT, procedures & functions) are tested,
without other system module.
 Sub-system testing-Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validating
functional and non-functional requirements & Testing of
emergent system properties.
 Acceptance testing-Testing with customer data to check that
it is acceptable. Also called Alpha Testing

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for
intended user in his or her normal work
environment.
Alpha test-version of the complete software is
tested by customer under the supervision of the
developer at the developer’s site.
Beta test-version of the complete software is
tested by customer at his or her own site without
the developer being present

Black-box testing
 Also known as behavioral or functional testing.
 The system is a “Blackbox” whose behavior can be
determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform
and demonstrating correct operation based solely on its
specification without regard for its internal logic.
 Focus on the functional requirements of the software
i.e., information domain not the implementation part of
the software and disregards control structure.
 The program test cases are based on the system
specification
 It is performed during later stages of testing like in the
acceptance testing or beta testing.

Black-box testing

Input test data

I

Inputs causing
anomalous
behaviour

e

System

Output test results

Oe

Outputs which reveal
the presence of
defects

Black-box testing
Test are designed to answer the following questions:

 How is functional validity tested?
 How is system behavior and performance tested?
 What classes of input behavior will make good test case?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of data class isolated?
 What data rates and data volume can the system
tolerate?
 What effect will specific combinations of data have on
system operations?

Advantages of Black box testing
 Validates whether or not a given system conforms
to
its software specification
 Introduce a series of inputs to a system and
compare
the outputs to a pre-defined test specification.
 Test integration between individual system
components.
 Tests are architecture independent — they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Require no knowledge of the underlying system,
one
need not be a software engineer to design black
box
tests.

Disadvantages of Black box
testing
 Offer no guarantee that every line of code has been
tested.
 Being architecture independent, it cannot determine
the efficiency of the code.
 Will not find any errors, such as memory leaks, that
are not explicitly and instantly exposed by the
application.

Black-box testing techniques







Graph-based testing methods
Equivalence Partitioning,
Boundary Value Analysis (BVA)
Comparison Testing
Orthogonal Array Testing.

Equivalence Partitioning
 Black-box technique that divides the input domain into
classes of data from which test cases can be derived
 An ideal test case uncovers a class of errors( incorrect
processing of all incorrect data) that might require many
arbitrary test cases to be executed before a general error is
observed
 Equivalence class guidelines:
 If input condition specifies a range, one valid and two invalid
equivalence classes are defined
 If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined
 If an input condition specifies a member of a set, one valid
and one invalid equivalence class is defined
 If an input condition is Boolean, one valid and one invalid
equivalence class is defined

Equivalence Partitioning

Invalid inputs

System

Outputs

Valid in puts

Equivalence Partitioning

3
4

Less than 4

7

11
10

Between 4 and 10

More than 10

Number of input values
9999
10000

Less than 10000
Input values

50000

100000
99999

Between 10000 and 99999

More than 99999

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the
input domain rather than its center
 BVA guidelines:
 If input condition specifies a range bounded by values a and
b, test cases should include a and b, values just above and
just below a and b
 If an input condition specifies and number of values, test
cases should be exercise the minimum and maximum
numbers, as well as values just above and just below the
minimum and maximum values
 Apply guidelines 1 and 2 to output conditions, test cases
should be designed to produce the minimum and maxim
output reports
 If internal program data structures have boundaries (e.g.
size limitations), be certain to test the boundaries

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety critical systems ( such
as aircraft avionics, automobile braking system) in
which independently developed implementations of
redundant systems are tested for conformance to
specifications
 Often equivalence class partitioning is used to
develop a common set of test cases for each
implementation.

Orthogonal Array Testing
 Black-box technique that enables the design of a
reasonably small set of test cases that provide
maximum test coverage
 Focus is on categories of faulty logic likely to be
present in the software component (without
examining the code)
 Priorities for assessing tests using an orthogonal
array
 Detect and isolate all single mode faults
 Detect all double mode faults
 Mutimode faults

White-box or Glass Box
testing
Knowing the internal workings of a product,
tests are performed to check the workings of all
independent logic paths.
It derive test cases that:
 Guarantee that all independent paths within a module
have been exercised at least once.
 Exercise all logical decisions on their true and false
sides.
 Execute all loops at their boundaries and within their
operational bounds, and
 Exercise internal data structures to ensure their
validity.

Techniques being used: basic path and control
structure testing.

White-box or Glass Box
testing
Test data

Tests

Derives
Component
code

Test
outputs

Integration Testing
Tests complete systems or subsystems
composed of integrated components
Integration testing should be black-box testing
with tests derived from the specification
Main difficulty is localising errors
Incremental integration testing reduces this
problem.
Incremental integration strategies include
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to
integration testing
Top-down testing
 Start with high-level system and integrate from the
top-down replacing individual components by stubs
where appropriate

Bottom-up testing
 Integrate individual components in levels until the
complete system is created

In practice, most integration involves a combination
of these strategies

Top-down testing

[Figure: the testing sequence starts at Level 1, which is
integrated with the Level 2 components; Level 2 and Level 3
stubs stand in for the lower levels not yet integrated]

Bottom-up testing

[Figure: test drivers exercise the Level N components first,
then the Level N-1 components, with the testing sequence
moving upward until the complete system is assembled]
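A bottom-up test driver is the mirror image of a stub: a throwaway caller exercises the lowest-level component before any real caller exists. The component and its cases below are invented examples.

```python
# Bottom-up integration sketch: a driver stands in for the not-yet-written
# Level N-1 caller of a lowest-level (Level N) component.
def parse_score(text):
    """Level-N component: parse a percentage string like '72%' to a float."""
    return float(text.rstrip("%")) / 100

def run_driver():
    """Test driver replacing the future Level N-1 caller."""
    cases = {"0%": 0.0, "50%": 0.5, "100%": 1.0}
    for raw, expected in cases.items():
        assert parse_score(raw) == expected
    return "all driver cases passed"

assert run_driver() == "all driver cases passed"
```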

System Testing
 Recovery testing
 Checks the system’s ability to recover from failures.

 Security testing
 Verifies that system protection mechanisms prevent improper
penetration or data alteration

 Stress testing
 Program is checked to see how well it deals with abnormal
resource demands – quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.

Object-oriented Testing

The components to be tested are object classes
that are instantiated as objects
Object classes are larger-grain units than individual
functions, so approaches to white-box testing have to
be extended
No obvious ‘top’ to the system for top-down
integration and testing
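The shift in grain can be illustrated with a small invented class: the unit under test is an instantiated object, and the tests check state accumulated across method calls rather than a single function's output.

```python
# Object-oriented testing sketch: the test exercises an object's methods
# in sequence and checks the resulting state. Account is a made-up class.
class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

a = Account()
a.deposit(50)
a.deposit(25)
assert a.balance == 75   # state accumulated across method calls
try:
    a.deposit(-5)
except ValueError:
    pass                 # invalid input rejected...
assert a.balance == 75   # ...and the object's state is unchanged
```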

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List

Item No.  Sub-item No.  Test Item     Test Sub-item           Level
SR-02     SR-02-01      Staff Review  Program Officer Review  A
          SR-02-02                    Early Decline Report    A

Test-Level
 A - Basic Function, compulsory
 B - Enhanced Function, compulsory
 C - Enhanced Function, optional

Testing Details
SR-02 Staff Review

Item No: SR-02-01                  Test Date:
Item: Staff Review
Sub-item: PO Review (Report: Early Decline)
Precondition:
Test Procedure:
Test Standard:
Test Description:
Test Result and Conclusion:  Passed /  Failed
Sign of the Tester:                Sign of the Manager:

References
 Software Engineering: A Practitioner's Approach by Roger S. Pressman
 – Chapter 17: Software Testing Techniques
   • Software testing fundamentals
   • Test case design
   • White-box testing: basis path and control structure testing
   • Black-box testing
 – Chapter 18: Software Testing Strategies
   • A strategic approach to software testing
   • Unit, integration, validation, system testing
 Software Engineering by Ian Sommerville
 – Part 5: Verification and Validation
   • Chapter 19: Verification and Validation
   • Chapter 20: Software Testing

Slide 36

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
Static
verification

Requirements
specification

Prototype

High-level
design

Formal
specification

Detailed
design

Program

Dynamic
validation

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yetundiscovered error.

Software testing priciples
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability-the better it works the more efficiently it can
be tested
 Observability-what you see is what you test
 Controllability-the better software can be controlled the
more testing can be automated and optimized
 Decomposability-by controlling the scope of testing, the
more quickly problems can be isolated and retested
intelligently
 Simplicity-the less there is to test, the more quickly we
can test
 Stability-the fewer the changes, the fewer the
disruptions to testing
 Understandability-the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
Test
results

Locate
error

Test
cases

Specification

Design
error repair

Repair
error

Re-test
program

The defect testing process
Test
cases

Design test
cases

Test
data

Prepare test
data

Test
results

Run program
with test data

Test
reports

Compare results
to test cases

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification

Project Planning
Plan

Description

Quality Plan

Describes the quality procedure and
standards that will be used in a project.

Validation Plan

Describes the approach, resources and
schedule used for system validation.

Configuration
Management Plan

Describes the configuration management
procedures and structures to be used.

Maintenance Plan

Predicts the maintenance requirements of
the system, maintenance costs and effort
required.

Staff development Plan

Describes how the skills and experience of
the project team members will be developed.

Verification and Validation
Plan
Requirements
specification

System
specification

System
integration
test plan

Acceptance
test plan

Service

System
desig n

Acceptance
test

Detailed
design

Sub-system
integration
test plan

System
integration test

Module and
unit code
and tess

Sub-system
integration test

Test Plan as a link between development and testing

Testing Process

Unit
testing
Module
testing
Sub-system
testing
System
testing
Acceptance
testing

Component
testing

Integration testing

User
testing

Testing Process
 Unit testing - Individual components are tested
independently, without other system components
 Module testing - Related collections of dependent
components( class, ADT, procedures & functions) are tested,
without other system module.
 Sub-system testing-Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validating
functional and non-functional requirements & Testing of
emergent system properties.
 Acceptance testing-Testing with customer data to check that
it is acceptable. Also called Alpha Testing

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for
intended user in his or her normal work
environment.
Alpha test-version of the complete software is
tested by customer under the supervision of the
developer at the developer’s site.
Beta test-version of the complete software is
tested by customer at his or her own site without
the developer being present

Black-box testing
 Also known as behavioral or functional testing.
 The system is a “Blackbox” whose behavior can be
determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform
and demonstrating correct operation based solely on its
specification without regard for its internal logic.
 Focus on the functional requirements of the software
i.e., information domain not the implementation part of
the software and disregards control structure.
 The program test cases are based on the system
specification
 It is performed during later stages of testing like in the
acceptance testing or beta testing.

Black-box testing

Input test data

I

Inputs causing
anomalous
behaviour

e

System

Output test results

Oe

Outputs which reveal
the presence of
defects

Black-box testing
Test are designed to answer the following questions:

 How is functional validity tested?
 How is system behavior and performance tested?
 What classes of input behavior will make good test case?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of data class isolated?
 What data rates and data volume can the system
tolerate?
 What effect will specific combinations of data have on
system operations?

Advantages of Black box testing
 Validates whether or not a given system conforms
to
its software specification
 Introduce a series of inputs to a system and
compare
the outputs to a pre-defined test specification.
 Test integration between individual system
components.
 Tests are architecture independent — they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Require no knowledge of the underlying system,
one
need not be a software engineer to design black
box
tests.

Disadvantages of Black box
testing
 Offer no guarantee that every line of code has been
tested.
 Being architecture independent, it cannot determine
the efficiency of the code.
 Will not find any errors, such as memory leaks, that
are not explicitly and instantly exposed by the
application.

Black-box testing techniques







Graph-based testing methods
Equivalence Partitioning,
Boundary Value Analysis (BVA)
Comparison Testing
Orthogonal Array Testing.

Equivalence Partitioning
 Black-box technique that divides the input domain into
classes of data from which test cases can be derived
 An ideal test case uncovers a class of errors( incorrect
processing of all incorrect data) that might require many
arbitrary test cases to be executed before a general error is
observed
 Equivalence class guidelines:
 If input condition specifies a range, one valid and two invalid
equivalence classes are defined
 If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined
 If an input condition specifies a member of a set, one valid
and one invalid equivalence class is defined
 If an input condition is Boolean, one valid and one invalid
equivalence class is defined

Equivalence Partitioning

Invalid inputs

System

Outputs

Valid in puts

Equivalence Partitioning

3
4

Less than 4

7

11
10

Between 4 and 10

More than 10

Number of input values
9999
10000

Less than 10000
Input values

50000

100000
99999

Between 10000 and 99999

More than 99999

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the
input domain rather than its center
 BVA guidelines:
 If input condition specifies a range bounded by values a and
b, test cases should include a and b, values just above and
just below a and b
 If an input condition specifies and number of values, test
cases should be exercise the minimum and maximum
numbers, as well as values just above and just below the
minimum and maximum values
 Apply guidelines 1 and 2 to output conditions, test cases
should be designed to produce the minimum and maxim
output reports
 If internal program data structures have boundaries (e.g.
size limitations), be certain to test the boundaries

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety critical systems ( such
as aircraft avionics, automobile braking system) in
which independently developed implementations of
redundant systems are tested for conformance to
specifications
 Often equivalence class partitioning is used to
develop a common set of test cases for each
implementation.

Orthogonal Array Testing
 Black-box technique that enables the design of a
reasonably small set of test cases that provide
maximum test coverage
 Focus is on categories of faulty logic likely to be
present in the software component (without
examining the code)
 Priorities for assessing tests using an orthogonal
array
 Detect and isolate all single mode faults
 Detect all double mode faults
 Mutimode faults

White-box or Glass Box
testing
Knowing the internal workings of a product,
tests are performed to check the workings of all
independent logic paths.
It derive test cases that:
 Guarantee that all independent paths within a module
have been exercised at least once.
 Exercise all logical decisions on their true and false
sides.
 Execute all loops at their boundaries and within their
operational bounds, and
 Exercise internal data structures to ensure their
validity.

Techniques being used: basic path and control
structure testing.

White-box or Glass Box
testing
Test data

Tests

Derives
Component
code

Test
outputs

Integration Testing
Tests complete systems or subsystems
composed of integrated components
Integration testing should be black-box testing
with tests derived from the specification
Main difficulty is localising errors
Incremental integration testing reduces this
problem.
Incremental integration strategies include
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to
integration testing
Top-down testing
 Start with high-level system and integrate from the
top-down replacing individual components by stubs
where appropriate

Bottom-up testing
 Integrate individual components in levels until the
complete system is created

In practice, most integration involves a combination
of these strategies

Top-down testing

Level 1

Testing
sequence

Level 2
Level 2
stubs

Level 3
stubs

Level 1

Level 2

Level 2

. ..

Level 2

Bottom-up testing

Test
drivers
Level N

Test
drivers

Level N

Level N–1

Level N

Level N–1

Level N

Level N

Level N–1

Testing
sequence

System Testing
 Recovery testing
 Checks the system’s ability to recover from failures.

 Security testing
 Verifies that system protection mechanism prevent improper
penetration or data alteration

 Stress testing
 Program is checked to see how well it deals with abnormal
resource demands – quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.

Object-oriented Testing

The components to be tested are object classes
that are instantiated as objects
Larger gain than individual functions so
approaches to white-box testing have to be
extended
No obvious ‘top’ to the system for top-down
integration and testing

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List
Item
No.
SR-02

Test Item

Sub –
Test-Sub Item
item No.
Staff Review SR-02-01 Program Officer
Review

SR-02-02 Early Decline Report

Test-Level
A- Basic Function, compulsory
B- Enhanced Function, compulsory
C- Enhanced Function, optional

Level
A

A

Testing Details
SR-02 Staff Review
Item No

SR-02-01

Test Date

Item

Staff Review

Sub-item

PO Review
Report: Early Decline

Precondition
Test Procedure

Test Standard
Test description

 Passed
 Failed

Test Result and
Conclusion
Sin of the Tester

Sign of the
Manager

References
 From software engineering, A practitioner’s
approach by Roger S. Pressman
– Chapter 17: Software testing techniques





Software Testing Fundamentals
Test case design
White-box testing- Basic path, Control Structure Testing
Black-box testing

– Chapter 18: Software Testing Strategies
• A strategic approach to software testing
• Unit, Integration, Validation, System testing

 From Software Engineering, Ian Sommerville
– Part5: Verification and Validation
• Chapter 19: Verification and validation
• Chapter 20: Software testing


Slide 37

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
Static
verification

Requirements
specification

Prototype

High-level
design

Formal
specification

Detailed
design

Program

Dynamic
validation

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yetundiscovered error.

Software testing priciples
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability-the better it works the more efficiently it can
be tested
 Observability-what you see is what you test
 Controllability-the better software can be controlled the
more testing can be automated and optimized
 Decomposability-by controlling the scope of testing, the
more quickly problems can be isolated and retested
intelligently
 Simplicity-the less there is to test, the more quickly we
can test
 Stability-the fewer the changes, the fewer the
disruptions to testing
 Understandability-the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
Test
results

Locate
error

Test
cases

Specification

Design
error repair

Repair
error

Re-test
program

The defect testing process
Test
cases

Design test
cases

Test
data

Prepare test
data

Test
results

Run program
with test data

Test
reports

Compare results
to test cases

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification

Project Planning
Plan

Description

Quality Plan

Describes the quality procedure and
standards that will be used in a project.

Validation Plan

Describes the approach, resources and
schedule used for system validation.

Configuration
Management Plan

Describes the configuration management
procedures and structures to be used.

Maintenance Plan

Predicts the maintenance requirements of
the system, maintenance costs and effort
required.

Staff development Plan

Describes how the skills and experience of
the project team members will be developed.

Verification and Validation
Plan
Requirements
specification

System
specification

System
integration
test plan

Acceptance
test plan

Service

System
desig n

Acceptance
test

Detailed
design

Sub-system
integration
test plan

System
integration test

Module and
unit code
and tess

Sub-system
integration test

Test Plan as a link between development and testing

Testing Process

Unit
testing
Module
testing
Sub-system
testing
System
testing
Acceptance
testing

Component
testing

Integration testing

User
testing

Testing Process
 Unit testing - Individual components are tested
independently, without other system components
 Module testing - Related collections of dependent
components( class, ADT, procedures & functions) are tested,
without other system module.
 Sub-system testing-Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validating
functional and non-functional requirements & Testing of
emergent system properties.
 Acceptance testing-Testing with customer data to check that
it is acceptable. Also called Alpha Testing

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for
intended user in his or her normal work
environment.
Alpha test-version of the complete software is
tested by customer under the supervision of the
developer at the developer’s site.
Beta test-version of the complete software is
tested by customer at his or her own site without
the developer being present

Black-box testing
 Also known as behavioral or functional testing.
 The system is a “Blackbox” whose behavior can be
determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform
and demonstrating correct operation based solely on its
specification without regard for its internal logic.
 Focus on the functional requirements of the software
i.e., information domain not the implementation part of
the software and disregards control structure.
 The program test cases are based on the system
specification
 It is performed during later stages of testing like in the
acceptance testing or beta testing.

Black-box testing

Input test data

I

Inputs causing
anomalous
behaviour

e

System

Output test results

Oe

Outputs which reveal
the presence of
defects

Black-box testing
Test are designed to answer the following questions:

 How is functional validity tested?
 How is system behavior and performance tested?
 What classes of input behavior will make good test case?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of data class isolated?
 What data rates and data volume can the system
tolerate?
 What effect will specific combinations of data have on
system operations?

Advantages of Black box testing
 Validates whether or not a given system conforms
to
its software specification
 Introduce a series of inputs to a system and
compare
the outputs to a pre-defined test specification.
 Test integration between individual system
components.
 Tests are architecture independent — they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Require no knowledge of the underlying system,
one
need not be a software engineer to design black
box
tests.

Disadvantages of Black box
testing
 Offer no guarantee that every line of code has been
tested.
 Being architecture independent, it cannot determine
the efficiency of the code.
 Will not find any errors, such as memory leaks, that
are not explicitly and instantly exposed by the
application.

Black-box testing techniques







Graph-based testing methods
Equivalence Partitioning,
Boundary Value Analysis (BVA)
Comparison Testing
Orthogonal Array Testing.

Equivalence Partitioning
 Black-box technique that divides the input domain into
classes of data from which test cases can be derived
 An ideal test case uncovers a class of errors( incorrect
processing of all incorrect data) that might require many
arbitrary test cases to be executed before a general error is
observed
 Equivalence class guidelines:
 If input condition specifies a range, one valid and two invalid
equivalence classes are defined
 If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined
 If an input condition specifies a member of a set, one valid
and one invalid equivalence class is defined
 If an input condition is Boolean, one valid and one invalid
equivalence class is defined

Equivalence Partitioning

Invalid inputs

System

Outputs

Valid in puts

Equivalence Partitioning

3
4

Less than 4

7

11
10

Between 4 and 10

More than 10

Number of input values
9999
10000

Less than 10000
Input values

50000

100000
99999

Between 10000 and 99999

More than 99999

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the
input domain rather than its center
 BVA guidelines:
 If input condition specifies a range bounded by values a and
b, test cases should include a and b, values just above and
just below a and b
 If an input condition specifies and number of values, test
cases should be exercise the minimum and maximum
numbers, as well as values just above and just below the
minimum and maximum values
 Apply guidelines 1 and 2 to output conditions, test cases
should be designed to produce the minimum and maxim
output reports
 If internal program data structures have boundaries (e.g.
size limitations), be certain to test the boundaries

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety critical systems ( such
as aircraft avionics, automobile braking system) in
which independently developed implementations of
redundant systems are tested for conformance to
specifications
 Often equivalence class partitioning is used to
develop a common set of test cases for each
implementation.

Orthogonal Array Testing
 Black-box technique that enables the design of a
reasonably small set of test cases that provide
maximum test coverage
 Focus is on categories of faulty logic likely to be
present in the software component (without
examining the code)
 Priorities for assessing tests using an orthogonal
array
 Detect and isolate all single mode faults
 Detect all double mode faults
 Mutimode faults

White-box or Glass Box testing
Knowing the internal workings of a product,
tests are performed to check the workings of all
independent logic paths.
It derives test cases that:
 Guarantee that all independent paths within a module
have been exercised at least once.
 Exercise all logical decisions on their true and false
sides.
 Execute all loops at their boundaries and within their
operational bounds, and
 Exercise internal data structures to ensure their
validity.

Techniques used: basis path and control
structure testing.
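In basis path testing the number of independent paths is the cyclomatic complexity, V(G) = number of decisions + 1. The toy component below (hypothetical, for illustration only) has two decisions, so a basis set needs three test cases, which together also exercise every decision on both its true and false sides:

```python
def classify(x):
    """Toy component with two decisions:
    V(G) = 2 decisions + 1 = 3, so the basis set has 3 paths."""
    if x < 0:
        return "negative"
    if x == 0:
        return "zero"
    return "positive"

# One test case per independent path
assert classify(-5) == "negative"   # path 1: first decision true
assert classify(0) == "zero"        # path 2: first false, second true
assert classify(7) == "positive"    # path 3: both decisions false
print("all basis paths exercised")
```

Three cases suffice here; adding more inputs would re-execute paths already in the basis set without increasing path coverage.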

White-box or Glass Box testing

[Figure: the component code derives the tests; test data is fed to the tests, which are run against the component code to produce test outputs]

Integration Testing
Tests complete systems or subsystems
composed of integrated components
Integration testing should be black-box testing
with tests derived from the specification
Main difficulty is localising errors
Incremental integration testing reduces this
problem.
Incremental integration strategies include
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to
integration testing
Top-down testing
 Start with the high-level system and integrate
top-down, replacing individual components with stubs
where appropriate

Bottom-up testing
 Integrate individual components in levels until the
complete system is created

In practice, most integration involves a combination
of these strategies
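In top-down integration a stub stands in for a lower-level component that has not been integrated yet, so the high-level logic can be tested first. A minimal sketch, with hypothetical component names:

```python
class TaxServiceStub:
    """Stub replacing the real Level-2 tax component during
    top-down integration; returns a fixed canned result."""
    def tax_for(self, amount):
        return 0.0  # canned response, no real computation

class InvoiceProcessor:
    """Level-1 component under test."""
    def __init__(self, tax_service):
        self.tax_service = tax_service

    def total(self, amount):
        return amount + self.tax_service.tax_for(amount)

# The high-level component is exercised before the real
# tax component exists; the stub satisfies its interface.
processor = InvoiceProcessor(TaxServiceStub())
print(processor.total(100.0))  # 100.0 with the stubbed tax of 0.0
```

When the real tax component is ready, it replaces the stub and the same tests are re-run, which localises any new failure to the newly integrated component.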

Top-down testing

[Figure: the Level 1 component is tested first with Level 2 stubs; Level 2 components are then integrated, supported by Level 3 stubs, and the testing sequence continues downward]

Bottom-up testing

[Figure: Level N components are exercised by test drivers, then integrated under Level N–1 components with new drivers; the testing sequence continues upward]
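Bottom-up integration is the mirror image: a test driver calls the lowest-level component directly, before any higher-level component that will eventually call it exists. A sketch with a hypothetical Level-N component:

```python
def parse_record(line):
    """Hypothetical Level-N component: parse 'name,score' into a tuple."""
    name, score = line.split(",")
    return name.strip(), int(score)

def driver():
    """Test driver: exercises the low-level component directly,
    playing the role of the not-yet-written Level N-1 caller."""
    cases = [
        ("alice, 90", ("alice", 90)),
        ("bob,75", ("bob", 75)),
    ]
    for line, expected in cases:
        assert parse_record(line) == expected
    return "driver: all cases passed"

print(driver())
```

Once Level N–1 is written, it replaces the driver as the caller and integration proceeds one level up.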

System Testing
 Recovery testing
 Checks the system’s ability to recover from failures.

 Security testing
 Verifies that system protection mechanisms prevent improper
penetration or data alteration

 Stress testing
 Program is checked to see how well it deals with abnormal
resource demands – quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.

Object-oriented Testing

The components to be tested are object classes
that are instantiated as objects
Object classes are larger-grain components than
individual functions, so approaches to white-box
testing have to be extended
There is no obvious ‘top’ to the system for top-down
integration and testing

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List

Item No.  Test Item     Sub-item No.  Test Sub-item           Level
SR-02     Staff Review  SR-02-01      Program Officer Review  A
                        SR-02-02      Early Decline Report    A

Test Level:
A - Basic function, compulsory
B - Enhanced function, compulsory
C - Enhanced function, optional

Testing Details
SR-02 Staff Review

Item No: SR-02-01                    Test Date:
Item: Staff Review
Sub-item: PO Review (Report: Early Decline)
Precondition:
Test Procedure:
Test Standard:
Test Description:
Test Result and Conclusion:  [ ] Passed  [ ] Failed
Sign of the Tester:                  Sign of the Manager:

References
 From Software Engineering: A Practitioner’s Approach
by Roger S. Pressman
– Chapter 17: Software Testing Techniques
• Software testing fundamentals
• Test case design
• White-box testing: basis path and control structure testing
• Black-box testing
– Chapter 18: Software Testing Strategies
• A strategic approach to software testing
• Unit, integration, validation, and system testing
 From Software Engineering by Ian Sommerville
– Part 5: Verification and Validation
• Chapter 19: Verification and Validation
• Chapter 20: Software Testing


Slide 39

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
Static
verification

Requirements
specification

Prototype

High-level
design

Formal
specification

Detailed
design

Program

Dynamic
validation

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yetundiscovered error.

Software testing priciples
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability-the better it works the more efficiently it can
be tested
 Observability-what you see is what you test
 Controllability-the better software can be controlled the
more testing can be automated and optimized
 Decomposability-by controlling the scope of testing, the
more quickly problems can be isolated and retested
intelligently
 Simplicity-the less there is to test, the more quickly we
can test
 Stability-the fewer the changes, the fewer the
disruptions to testing
 Understandability-the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
Test
results

Locate
error

Test
cases

Specification

Design
error repair

Repair
error

Re-test
program

The defect testing process
Test
cases

Design test
cases

Test
data

Prepare test
data

Test
results

Run program
with test data

Test
reports

Compare results
to test cases

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification

Project Planning
Plan

Description

Quality Plan

Describes the quality procedure and
standards that will be used in a project.

Validation Plan

Describes the approach, resources and
schedule used for system validation.

Configuration
Management Plan

Describes the configuration management
procedures and structures to be used.

Maintenance Plan

Predicts the maintenance requirements of
the system, maintenance costs and effort
required.

Staff development Plan

Describes how the skills and experience of
the project team members will be developed.

Verification and Validation
Plan
Requirements
specification

System
specification

System
integration
test plan

Acceptance
test plan

Service

System
desig n

Acceptance
test

Detailed
design

Sub-system
integration
test plan

System
integration test

Module and
unit code
and tess

Sub-system
integration test

Test Plan as a link between development and testing

Testing Process

Unit
testing
Module
testing
Sub-system
testing
System
testing
Acceptance
testing

Component
testing

Integration testing

User
testing

Testing Process
 Unit testing - Individual components are tested
independently, without other system components
 Module testing - Related collections of dependent
components( class, ADT, procedures & functions) are tested,
without other system module.
 Sub-system testing-Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validating
functional and non-functional requirements & Testing of
emergent system properties.
 Acceptance testing-Testing with customer data to check that
it is acceptable. Also called Alpha Testing

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for
intended user in his or her normal work
environment.
Alpha test-version of the complete software is
tested by customer under the supervision of the
developer at the developer’s site.
Beta test-version of the complete software is
tested by customer at his or her own site without
the developer being present

Black-box testing
 Also known as behavioral or functional testing.
 The system is a “Blackbox” whose behavior can be
determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform
and demonstrating correct operation based solely on its
specification without regard for its internal logic.
 Focus on the functional requirements of the software
i.e., information domain not the implementation part of
the software and disregards control structure.
 The program test cases are based on the system
specification
 It is performed during later stages of testing like in the
acceptance testing or beta testing.

Black-box testing

Input test data

I

Inputs causing
anomalous
behaviour

e

System

Output test results

Oe

Outputs which reveal
the presence of
defects

Black-box testing
Test are designed to answer the following questions:

 How is functional validity tested?
 How is system behavior and performance tested?
 What classes of input behavior will make good test case?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of data class isolated?
 What data rates and data volume can the system
tolerate?
 What effect will specific combinations of data have on
system operations?

Advantages of Black box testing
 Validates whether or not a given system conforms
to
its software specification
 Introduce a series of inputs to a system and
compare
the outputs to a pre-defined test specification.
 Test integration between individual system
components.
 Tests are architecture independent — they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Require no knowledge of the underlying system,
one
need not be a software engineer to design black
box
tests.

Disadvantages of Black box
testing
 Offer no guarantee that every line of code has been
tested.
 Being architecture independent, it cannot determine
the efficiency of the code.
 Will not find any errors, such as memory leaks, that
are not explicitly and instantly exposed by the
application.

Black-box testing techniques







Graph-based testing methods
Equivalence Partitioning,
Boundary Value Analysis (BVA)
Comparison Testing
Orthogonal Array Testing.

Equivalence Partitioning
 Black-box technique that divides the input domain into
classes of data from which test cases can be derived
 An ideal test case uncovers a class of errors( incorrect
processing of all incorrect data) that might require many
arbitrary test cases to be executed before a general error is
observed
 Equivalence class guidelines:
 If input condition specifies a range, one valid and two invalid
equivalence classes are defined
 If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined
 If an input condition specifies a member of a set, one valid
and one invalid equivalence class is defined
 If an input condition is Boolean, one valid and one invalid
equivalence class is defined

Equivalence Partitioning

Invalid inputs

System

Outputs

Valid in puts

Equivalence Partitioning

3
4

Less than 4

7

11
10

Between 4 and 10

More than 10

Number of input values
9999
10000

Less than 10000
Input values

50000

100000
99999

Between 10000 and 99999

More than 99999

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the
input domain rather than its center
 BVA guidelines:
 If input condition specifies a range bounded by values a and
b, test cases should include a and b, values just above and
just below a and b
 If an input condition specifies and number of values, test
cases should be exercise the minimum and maximum
numbers, as well as values just above and just below the
minimum and maximum values
 Apply guidelines 1 and 2 to output conditions, test cases
should be designed to produce the minimum and maxim
output reports
 If internal program data structures have boundaries (e.g.
size limitations), be certain to test the boundaries

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety-critical systems (such
as aircraft avionics or automobile braking systems) in
which independently developed implementations of
redundant systems are tested for conformance to
specifications
 Often equivalence class partitioning is used to
develop a common set of test cases for each
implementation.
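Back-to-back testing can be sketched as running a common test set through two implementations and flagging any disagreement (illustrative only; both implementations here are toy stand-ins):

```python
# Minimal sketch: back-to-back (comparison) testing. The same inputs go
# to two independently developed implementations; any disagreement is
# reported as a potential defect in one of them.
def compare_implementations(impl_a, impl_b, test_inputs):
    mismatches = []
    for x in test_inputs:
        a, b = impl_a(x), impl_b(x)
        if a != b:
            mismatches.append((x, a, b))
    return mismatches

reference = abs                       # trusted implementation
buggy = lambda x: x if x > 0 else x   # faulty for negative inputs

print(compare_implementations(reference, buggy, [-2, -1, 0, 1, 2]))
```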

Orthogonal Array Testing
 Black-box technique that enables the design of a
reasonably small set of test cases that provide
maximum test coverage
 Focus is on categories of faulty logic likely to be
present in the software component (without
examining the code)
 Priorities for assessing tests using an orthogonal
array
 Detect and isolate all single-mode faults
 Detect all double-mode faults
 Multimode faults
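The idea can be illustrated with the smallest orthogonal array, L4(2^3): four test cases cover every pairwise combination of three two-level factors, where exhaustive testing would need eight. A sketch (the verification helper is an assumption):

```python
from itertools import combinations, product

# L4 orthogonal array: 4 runs covering 3 two-level factors (0/1).
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

# Check the defining property: every pair of factor columns contains
# all four value combinations (0,0), (0,1), (1,0), (1,1).
def covers_all_pairs(array):
    n = len(array[0])
    for i, j in combinations(range(n), 2):
        seen = {(row[i], row[j]) for row in array}
        if seen != set(product((0, 1), repeat=2)):
            return False
    return True

print(covers_all_pairs(L4))
```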

White-box or Glass Box
testing
Knowing the internal workings of a product,
tests are performed to check the workings of all
independent logic paths.
White-box testing derives test cases that:
 Guarantee that all independent paths within a module
have been exercised at least once.
 Exercise all logical decisions on their true and false
sides.
 Execute all loops at their boundaries and within their
operational bounds, and
 Exercise internal data structures to ensure their
validity.

Techniques used: basis path and control
structure testing.
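The second bullet, exercising every decision on both its true and false sides, can be sketched like this (the function and its tests are illustrative, not from the slides):

```python
# Minimal sketch: white-box tests chosen so that every logical decision
# in classify() is exercised on both its true and false sides.
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"

# One test input per independent path through classify():
results = [classify(-5), classify(0), classify(3)]
print(results)
```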

White-box or Glass Box
testing

[Figure: in white-box testing, tests are derived from the component
code; running the tests against the component produces test outputs.]

Integration Testing
Tests complete systems or subsystems
composed of integrated components
Integration testing should be black-box testing
with tests derived from the specification
Main difficulty is localising errors
Incremental integration testing reduces this
problem.
Incremental integration strategies include
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to
integration testing
Top-down testing
 Start with the high-level system and integrate from the
top down, replacing individual components by stubs
where appropriate

Bottom-up testing
 Integrate individual components in levels until the
complete system is created

In practice, most integration involves a combination
of these strategies
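A stub as used in top-down integration can be sketched as follows (the component names are illustrative assumptions):

```python
# Minimal sketch: top-down integration. The high-level module is tested
# first; a stub stands in for a lower-level report generator that has
# not been integrated yet.
def report_stub(orders):
    """Stub: returns a canned answer instead of a real report."""
    return "REPORT-OK"

def process_orders(orders, generate_report=report_stub):
    """High-level component under test."""
    total = sum(orders)
    return total, generate_report(orders)

print(process_orders([10, 20, 30]))
```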

Top-down testing

[Figure: the testing sequence starts at Level 1, tested against Level 2
stubs; Level 2 components are then integrated and tested against
Level 3 stubs, and so on down the hierarchy.]

Bottom-up testing

[Figure: the testing sequence starts at Level N, where components are
exercised by test drivers; Level N–1 components are then integrated
and tested with their own drivers, moving up the hierarchy.]
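Conversely, bottom-up integration relies on test drivers; a minimal sketch (component and driver names are assumptions):

```python
# Minimal sketch: bottom-up integration. A test driver exercises a
# low-level component before the higher-level modules exist.
def word_count(text):
    """Low-level component under test."""
    return len(text.split())

def driver():
    """Test driver: feeds known inputs and checks expected outputs."""
    cases = [("one two three", 3), ("", 0), ("hello", 1)]
    return all(word_count(inp) == expected for inp, expected in cases)

print(driver())
```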

System Testing
 Recovery testing
 Checks the system’s ability to recover from failures.

 Security testing
 Verifies that system protection mechanisms prevent improper
penetration or data alteration

 Stress testing
 Program is checked to see how well it deals with abnormal
resource demands – quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.

Object-oriented Testing

The components to be tested are object classes
that are instantiated as objects
Object classes are of larger grain than individual
functions, so approaches to white-box testing
have to be extended
No obvious ‘top’ to the system for top-down
integration and testing

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List

Item No. | Test Item    | Sub-item No. | Test Sub-item          | Level
SR-02    | Staff Review | SR-02-01     | Program Officer Review | A
         |              | SR-02-02     | Early Decline Report   | A

Test-Level
A - Basic Function, compulsory
B - Enhanced Function, compulsory
C - Enhanced Function, optional

Testing Details
SR-02 Staff Review

Item No: SR-02-01
Test Date:
Item: Staff Review
Sub-item: PO Review (Report: Early Decline)
Precondition:
Test Procedure:
Test Standard:
Test description:
Test Result and Conclusion:  Passed /  Failed
Sign of the Tester:
Sign of the Manager:

References
 Software Engineering: A Practitioner's Approach,
Roger S. Pressman
– Chapter 17: Software Testing Techniques
• Software testing fundamentals
• Test case design
• White-box testing: basis path, control structure testing
• Black-box testing
– Chapter 18: Software Testing Strategies
• A strategic approach to software testing
• Unit, integration, validation, system testing
 Software Engineering, Ian Sommerville
– Part 5: Verification and Validation
• Chapter 19: Verification and Validation
• Chapter 20: Software Testing

Slide 41

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
Static
verification

Requirements
specification

Prototype

High-level
design

Formal
specification

Detailed
design

Program

Dynamic
validation

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yetundiscovered error.

Software testing priciples
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability-the better it works the more efficiently it can
be tested
 Observability-what you see is what you test
 Controllability-the better software can be controlled the
more testing can be automated and optimized
 Decomposability-by controlling the scope of testing, the
more quickly problems can be isolated and retested
intelligently
 Simplicity-the less there is to test, the more quickly we
can test
 Stability-the fewer the changes, the fewer the
disruptions to testing
 Understandability-the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
Test
results

Locate
error

Test
cases

Specification

Design
error repair

Repair
error

Re-test
program

The defect testing process
Test
cases

Design test
cases

Test
data

Prepare test
data

Test
results

Run program
with test data

Test
reports

Compare results
to test cases

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification

Project Planning
Plan

Description

Quality Plan

Describes the quality procedure and
standards that will be used in a project.

Validation Plan

Describes the approach, resources and
schedule used for system validation.

Configuration
Management Plan

Describes the configuration management
procedures and structures to be used.

Maintenance Plan

Predicts the maintenance requirements of
the system, maintenance costs and effort
required.

Staff development Plan

Describes how the skills and experience of
the project team members will be developed.

Verification and Validation
Plan
Requirements
specification

System
specification

System
integration
test plan

Acceptance
test plan

Service

System
desig n

Acceptance
test

Detailed
design

Sub-system
integration
test plan

System
integration test

Module and
unit code
and tess

Sub-system
integration test

Test Plan as a link between development and testing

Testing Process

Unit
testing
Module
testing
Sub-system
testing
System
testing
Acceptance
testing

Component
testing

Integration testing

User
testing

Testing Process
 Unit testing - Individual components are tested
independently, without other system components
 Module testing - Related collections of dependent
components( class, ADT, procedures & functions) are tested,
without other system module.
 Sub-system testing-Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validating
functional and non-functional requirements & Testing of
emergent system properties.
 Acceptance testing-Testing with customer data to check that
it is acceptable. Also called Alpha Testing

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for
intended user in his or her normal work
environment.
Alpha test-version of the complete software is
tested by customer under the supervision of the
developer at the developer’s site.
Beta test-version of the complete software is
tested by customer at his or her own site without
the developer being present

Black-box testing
 Also known as behavioral or functional testing.
 The system is a “Blackbox” whose behavior can be
determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform
and demonstrating correct operation based solely on its
specification without regard for its internal logic.
 Focus on the functional requirements of the software
i.e., information domain not the implementation part of
the software and disregards control structure.
 The program test cases are based on the system
specification
 It is performed during later stages of testing like in the
acceptance testing or beta testing.

Black-box testing

Input test data

I

Inputs causing
anomalous
behaviour

e

System

Output test results

Oe

Outputs which reveal
the presence of
defects

Black-box testing
Test are designed to answer the following questions:

 How is functional validity tested?
 How is system behavior and performance tested?
 What classes of input behavior will make good test case?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of data class isolated?
 What data rates and data volume can the system
tolerate?
 What effect will specific combinations of data have on
system operations?

Advantages of Black box testing
 Validates whether or not a given system conforms
to
its software specification
 Introduce a series of inputs to a system and
compare
the outputs to a pre-defined test specification.
 Test integration between individual system
components.
 Tests are architecture independent — they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Require no knowledge of the underlying system,
one
need not be a software engineer to design black
box
tests.

Disadvantages of Black box
testing
 Offer no guarantee that every line of code has been
tested.
 Being architecture independent, it cannot determine
the efficiency of the code.
 Will not find any errors, such as memory leaks, that
are not explicitly and instantly exposed by the
application.

Black-box testing techniques







Graph-based testing methods
Equivalence Partitioning,
Boundary Value Analysis (BVA)
Comparison Testing
Orthogonal Array Testing.

Equivalence Partitioning
 Black-box technique that divides the input domain into
classes of data from which test cases can be derived
 An ideal test case uncovers a class of errors( incorrect
processing of all incorrect data) that might require many
arbitrary test cases to be executed before a general error is
observed
 Equivalence class guidelines:
 If input condition specifies a range, one valid and two invalid
equivalence classes are defined
 If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined
 If an input condition specifies a member of a set, one valid
and one invalid equivalence class is defined
 If an input condition is Boolean, one valid and one invalid
equivalence class is defined

Equivalence Partitioning
The technique is usually pictured as a system whose input domain is split into valid and invalid partitions, each with its own outputs. Two examples:
 Number of input values (valid range 4 to 10): partitions are less than 4 (e.g. 3), between 4 and 10 (e.g. 7), and more than 10 (e.g. 11); the boundaries are 4 and 10.
 Input values (valid range 10000 to 99999): partitions are less than 10000 (e.g. 9999), between 10000 and 99999 (e.g. 50000), and more than 99999 (e.g. 100000); the boundaries are 10000 and 99999.
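The "number of input values" example above can be sketched in Python: one representative test case per equivalence class is enough to stand in for the whole class. The function name and values here are illustrative, not from the slides.

```python
# Equivalence-partitioning sketch for a valid range of 4..10 input values.

def accepts_count(n):
    """Hypothetical system under test: accepts between 4 and 10 values."""
    return 4 <= n <= 10

# One representative value per equivalence class:
partitions = {
    "below range (invalid)": (3, False),
    "in range (valid)":      (7, True),
    "above range (invalid)": (11, False),
}

for name, (value, expected) in partitions.items():
    assert accepts_count(value) == expected, name
print("all equivalence classes pass")
```

Any other value from the same class (say 5 instead of 7) should, by assumption, behave the same way, which is what keeps the test set small.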

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the input domain rather than its center.
 BVA guidelines:
 If an input condition specifies a range bounded by values a and b, test cases should include a and b, plus values just above and just below a and b.
 If an input condition specifies a number of values, test cases should exercise the minimum and maximum numbers, as well as values just above and just below the minimum and maximum.
 Apply guidelines 1 and 2 to output conditions: test cases should be designed to produce the minimum and maximum output reports.
 If internal program data structures have prescribed boundaries (e.g. size limitations), be certain to test those boundaries.
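Guideline 1 can be sketched as a small generator of boundary test values for an integer range [a, b]; the system under test is the same hypothetical 4..10 example used above.

```python
# Boundary-value sketch: test a and b themselves plus the values just
# outside and just inside the range.

def bva_values(a, b):
    """Classic boundary test values for an integer range [a, b]."""
    return [a - 1, a, a + 1, b - 1, b, b + 1]

def accepts_count(n):          # hypothetical system under test
    return 4 <= n <= 10

for v in bva_values(4, 10):
    expected = 4 <= v <= 10
    assert accepts_count(v) == expected
print(bva_values(4, 10))       # [3, 4, 5, 9, 10, 11]
```

Off-by-one errors (writing `<` for `<=`, for instance) are exactly the faults these six values are most likely to expose.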

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety-critical systems (such as aircraft avionics or automobile braking systems) in which independently developed implementations of redundant systems are tested for conformance to the same specification.
 Often equivalence class partitioning is used to develop a common set of test cases for each implementation.
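A minimal back-to-back sketch: the same shared test cases are fed to two independently written implementations and their outputs compared. Both implementations here are toy examples (sorting), chosen only to illustrate the technique.

```python
# Back-to-back testing sketch: disagreement between two independent
# implementations on any shared test case signals a defect in one of them.

def impl_a(xs):
    """First team's implementation: library sort."""
    return sorted(xs)

def impl_b(xs):
    """Second, independently developed implementation: selection sort."""
    out = list(xs)
    for i in range(len(out)):
        j = min(range(i, len(out)), key=out.__getitem__)
        out[i], out[j] = out[j], out[i]
    return out

common_tests = [[3, 1, 2], [], [5, 5, 1]]   # shared test-case set
for t in common_tests:
    assert impl_a(t) == impl_b(t), f"implementations disagree on {t}"
print("implementations agree on all shared test cases")
```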

Orthogonal Array Testing
 Black-box technique that enables the design of a reasonably small set of test cases that provide maximum test coverage.
 Focus is on categories of faulty logic likely to be present in the software component (without examining the code).
 Priorities for assessing tests using an orthogonal array:
 Detect and isolate all single-mode faults.
 Detect all double-mode faults.
 Detect multimode faults.
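The idea can be sketched with the standard L4(2^3) orthogonal array: three two-level factors are covered pairwise in only four tests instead of the full 2^3 = 8 combinations. The factor names and levels below are illustrative, not from the slides.

```python
# Orthogonal-array sketch: L4 covers every value pair for every pair of
# the three columns, which is what catches double-mode faults cheaply.
from itertools import combinations, product

L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

# Verify the defining property: each column pair shows all four combinations.
for c1, c2 in combinations(range(3), 2):
    seen = {(row[c1], row[c2]) for row in L4}
    assert seen == set(product((0, 1), repeat=2))

# Map the array rows to concrete (hypothetical) test configurations.
factors = ["os", "browser", "locale"]
levels = {"os": ["linux", "windows"],
          "browser": ["firefox", "chrome"],
          "locale": ["en", "ne"]}
for row in L4:
    print({f: levels[f][v] for f, v in zip(factors, row)})
```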

White-box or Glass Box testing
Knowing the internal workings of a product, tests are performed to check the workings of all independent logic paths.
It derives test cases that:
 Guarantee that all independent paths within a module have been exercised at least once.
 Exercise all logical decisions on their true and false sides.
 Execute all loops at their boundaries and within their operational bounds, and
 Exercise internal data structures to ensure their validity.

Techniques used: basis path and control structure testing.
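A minimal basis-path sketch: a function with two decisions has three independent paths, so three test cases suffice to exercise every path at least once. The function and values are illustrative, not from the slides.

```python
# White-box sketch: one test case per independent path through the code.

def classify(n):
    """Toy function with cyclomatic complexity 3 (two decisions)."""
    if n < 0:            # decision 1
        return "negative"
    if n == 0:           # decision 2
        return "zero"
    return "positive"

# Together these take every decision on both its true and false side.
assert classify(-1) == "negative"   # path 1: decision 1 true
assert classify(0) == "zero"        # path 2: decision 1 false, 2 true
assert classify(5) == "positive"    # path 3: both decisions false
print("all independent paths exercised")
```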

White-box or Glass Box testing
The usual model: tests are derived from the component code itself; test data drives those tests against the component, producing test outputs that are checked against the code's internal structure.

Integration Testing
Tests complete systems or subsystems composed of integrated components.
Integration testing should be black-box testing, with tests derived from the specification.
The main difficulty is localising errors; incremental integration testing reduces this problem.
Incremental integration strategies include:
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to integration testing
Top-down testing
 Start with the high-level system and integrate from the top down, replacing individual components with stubs where appropriate.

Bottom-up testing
 Integrate individual components in levels until the complete system is created.

In practice, most integration involves a combination of these strategies.
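Top-down integration can be sketched by testing a high-level component against a stub that stands in for a lower-level component not yet integrated. The class and method names here are hypothetical, invented for illustration.

```python
# Top-down sketch: the Level-1 component is exercised while the Level-2
# component it depends on is replaced by a stub with canned answers.

class PaymentGatewayStub:
    """Stands in for the real, not-yet-integrated Level-2 component."""
    def charge(self, amount):
        return {"status": "ok", "amount": amount}   # canned response

class CheckoutService:
    """Level-1 component under test."""
    def __init__(self, gateway):
        self.gateway = gateway

    def checkout(self, amount):
        result = self.gateway.charge(amount)
        return result["status"] == "ok"

service = CheckoutService(PaymentGatewayStub())
assert service.checkout(100) is True
print("level-1 component verified against stub")
```

When the real gateway is later integrated, the same test is rerun against it, which is where interface mismatches tend to surface.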

Top-down testing
Testing sequence: the Level 1 component is tested first against Level 2 stubs; the real Level 2 components are then integrated and tested against Level 3 stubs, and so on down the hierarchy.

Bottom-up testing
Testing sequence: the Level N components are tested first using test drivers; they are then integrated and tested together with the Level N–1 components, moving up until the complete system is tested.

System Testing
 Recovery testing
 Checks the system's ability to recover from failures.

 Security testing
 Verifies that system protection mechanisms prevent improper penetration or data alteration.

 Stress testing
 The program is checked to see how well it deals with abnormal resource demands in quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software, especially real-time software.

Object-oriented Testing
The components to be tested are object classes that are instantiated as objects.
Object classes are larger-grain components than individual functions, so approaches to white-box testing have to be extended.
There is no obvious 'top' to the system for top-down integration and testing.

Acceptance Test Format
 Test-item List
 Identification of each test item
 Testing Detail
 Detailed testing procedure for each item
 Testing Result
 Summary of each test item's result

Test-item List

Item No.  Test Item      Sub-item No.  Test Sub-Item            Level
SR-02     Staff Review   SR-02-01      Program Officer Review   A
                         SR-02-02      Early Decline Report     A

Test Level:
A - Basic Function, compulsory
B - Enhanced Function, compulsory
C - Enhanced Function, optional

Testing Details
SR-02 Staff Review

Item No: SR-02-01              Test Date:
Item: Staff Review             Sub-item: PO Review
Report: Early Decline
Precondition:
Test Procedure:
Test Standard:
Test description:
Test Result and Conclusion:     Passed    Failed
Sign of the Tester:            Sign of the Manager:

References
 From Software Engineering: A Practitioner's Approach by Roger S. Pressman
– Chapter 17: Software Testing Techniques
• Software testing fundamentals
• Test case design
• White-box testing: basis path and control structure testing
• Black-box testing
– Chapter 18: Software Testing Strategies
• A strategic approach to software testing
• Unit, integration, validation, and system testing
 From Software Engineering by Ian Sommerville
– Part 5: Verification and Validation
• Chapter 19: Verification and validation
• Chapter 20: Software testing

Slide 43

MCA –Software Engineering

Kantipur City College

Topics include
 Validation Planning
 Testing Fundamentals
 Test plan creation
 Test-case generation
 Black-box Testing
 White Box Testing
 Unit Testing
 Integration Testing
 System testing
 Object-oriented Testing

Verification Vs. Validation
 Two questions
 Are we building the right product ? => Validation
 Are we building the product right ? = > Verification

Machines

Building product right

Efficiency
making best use of
resources in achieving
goals

Building the right product
Effectiveness
choosing effective goals and
achieving them

Verification & Validation
 Software V & V is a disciplined approach to assessing software
products throughout the SDLC.
 V & V strives to ensure that quality is built into the software
and that the software satisfies business functional
requirements.
 V & V is to ensures that software conforms to its specification
and meets the needs of the customers.
 V & V employs review, analysis, and testing techniques to
determine whether a software product and its intermediate
deliverables comply with requirements. These requirements
include both business functional capabilities and quality
attributes.
 V & V provide management with insights into the state of the
project and the software products, allowing for timely change
in the products or in the SDLC approach.
 V & V is typically applied in parallel with software development
and support activities.

Verification & Validation
 Verification involves checking that
 The software conforms to its specification.
 System meets its specified functional and non-functional
requirements.

“Are we building the product right ?”
 Validation, a more general process ensure that the
software meets the expectation of the customer.

“Are we building the right product ?”
You can't test in quality. If its not there before you begin
testing, it won’t be there when you’re finished testing.

Techniques of system
checking & Analysis
 Software inspections
 Concerned with analysis of the static system
representation to discover problems (static verification)
such as
• Requirements document
• Design diagrams and
• Program source code
 It do not require the system to be executed.
 This techniques include program inspections, automated
source code analysis and formal verification.
 It can’t check the non-functional characteristics of the
software such as its performance and reliability.

Techniques of system
checking & Analysis
 Software testing
 It involves executing an implementation of the software
with test data and examining the outputs of the software
and its operational behavior to check that it is performing
as required.
 It is a dynamic techniques of verification and validation.
 The system is executed with test data and its operational
behaviour is observed.
 Two distinct types of testing
 Defect testing : to find inconsistencies between a program
and its specification.
 Statistical testing : to test program’s performance and
reliability and to check how it works under operational
conditions

Static and Dynamic V & V
Static
verification

Requirements
specification

Prototype

High-level
design

Formal
specification

Detailed
design

Program

Dynamic
validation

Software Testing
fundamentals
 Testing is a set of activities that can be planned in
advance and conducted systematically.
 Testing is the process of executing a program with the
intent of finding errors.
 A good test case is one with a high probability of finding
an as-yet undiscovered error.
 A successful test is one that discovers an as-yetundiscovered error.

Software testing priciples
 All tests should be traceable to customer
requirements.
 Tests should be planned long before testing
begins.
 The Pareto principle (80% of all errors will likely
be found in 20% of the code) applies to software
testing.
 Testing should begin in the small and progress to
the large.
 Exhaustive testing is not possible.
 To be most effective, testing should be conducted
by an independent third party.

Software Testability Checklist
 Operability-the better it works the more efficiently it can
be tested
 Observability-what you see is what you test
 Controllability-the better software can be controlled the
more testing can be automated and optimized
 Decomposability-by controlling the scope of testing, the
more quickly problems can be isolated and retested
intelligently
 Simplicity-the less there is to test, the more quickly we
can test
 Stability-the fewer the changes, the fewer the
disruptions to testing
 Understandability-the more information known, the
smarter the testing

V&V Vs. Debugging
 Verification and validation
 A process that establishes the existence of defects in a
software system.
 The ultimate goal of the V&V process is to establish
confidence that the software system is “fit for purpose”.

 Debugging
 A process that locates and corrects these defects
Test
results

Locate
error

Test
cases

Specification

Design
error repair

Repair
error

Re-test
program

The defect testing process
Test
cases

Design test
cases

Test
data

Prepare test
data

Test
results

Run program
with test data

Test
reports

Compare results
to test cases

Test data
 Inputs which have been devised to test the system

Test cases
 Inputs to test the system and the predicted outputs from
these inputs if the system operates according to its
specification

Project Planning
Plan

Description

Quality Plan

Describes the quality procedure and
standards that will be used in a project.

Validation Plan

Describes the approach, resources and
schedule used for system validation.

Configuration
Management Plan

Describes the configuration management
procedures and structures to be used.

Maintenance Plan

Predicts the maintenance requirements of
the system, maintenance costs and effort
required.

Staff development Plan

Describes how the skills and experience of
the project team members will be developed.

Verification and Validation
Plan
Requirements
specification

System
specification

System
integration
test plan

Acceptance
test plan

Service

System
desig n

Acceptance
test

Detailed
design

Sub-system
integration
test plan

System
integration test

Module and
unit code
and tess

Sub-system
integration test

Test Plan as a link between development and testing

Testing Process

Unit
testing
Module
testing
Sub-system
testing
System
testing
Acceptance
testing

Component
testing

Integration testing

User
testing

Testing Process
 Unit testing - Individual components are tested
independently, without other system components
 Module testing - Related collections of dependent
components( class, ADT, procedures & functions) are tested,
without other system module.
 Sub-system testing-Modules are integrated into sub-systems
and tested. The focus here should be on interface testing to
detect module interface errors or mismatches.
 System testing - Testing of the system as a whole. Validating
functional and non-functional requirements & Testing of
emergent system properties.
 Acceptance testing-Testing with customer data to check that
it is acceptable. Also called Alpha Testing

The testing process
 Component testing
 Testing of individual program components
 Usually the responsibility of the component developer
(except sometimes for critical systems)
 Tests are derived from the developer’s experience
 Integration testing
 Testing of groups of components integrated to create
a system or sub-system
 The responsibility of an independent testing team
 Tests are based on a system specification

The testing process
Acceptance Testing
Making sure the software works correctly for
intended user in his or her normal work
environment.
Alpha test-version of the complete software is
tested by customer under the supervision of the
developer at the developer’s site.
Beta test-version of the complete software is
tested by customer at his or her own site without
the developer being present

Black-box testing
 Also known as behavioral or functional testing.
 The system is a “Blackbox” whose behavior can be
determined by studying its inputs and related outputs.
 Knowing the specified function a product is to perform
and demonstrating correct operation based solely on its
specification without regard for its internal logic.
 Focus on the functional requirements of the software
i.e., information domain not the implementation part of
the software and disregards control structure.
 The program test cases are based on the system
specification
 It is performed during later stages of testing like in the
acceptance testing or beta testing.

Black-box testing

Input test data

I

Inputs causing
anomalous
behaviour

e

System

Output test results

Oe

Outputs which reveal
the presence of
defects

Black-box testing
Test are designed to answer the following questions:

 How is functional validity tested?
 How is system behavior and performance tested?
 What classes of input behavior will make good test case?
 Is the system particularly sensitive to certain input
values?
 How are the boundaries of data class isolated?
 What data rates and data volume can the system
tolerate?
 What effect will specific combinations of data have on
system operations?

Advantages of Black box testing
 Validates whether or not a given system conforms
to
its software specification
 Introduce a series of inputs to a system and
compare
the outputs to a pre-defined test specification.
 Test integration between individual system
components.
 Tests are architecture independent — they do not
concern themselves with how a given output is
produced, only with whether that output is the
desired and expected output.
 Require no knowledge of the underlying system,
one
need not be a software engineer to design black
box
tests.

Disadvantages of Black box
testing
 Offer no guarantee that every line of code has been
tested.
 Being architecture independent, it cannot determine
the efficiency of the code.
 Will not find any errors, such as memory leaks, that
are not explicitly and instantly exposed by the
application.

Black-box testing techniques

 Graph-based testing methods
 Equivalence partitioning
 Boundary value analysis (BVA)
 Comparison testing
 Orthogonal array testing

Equivalence Partitioning
 Black-box technique that divides the input domain into
classes of data from which test cases can be derived.
 An ideal test case uncovers a class of errors (e.g., incorrect
processing of all incorrect data) that might otherwise require
many arbitrary test cases to be executed before the general
error is observed.
 Equivalence class guidelines:
 If an input condition specifies a range, one valid and two
invalid equivalence classes are defined.
 If an input condition requires a specific value, one valid and
two invalid equivalence classes are defined.
 If an input condition specifies a member of a set, one valid
and one invalid equivalence class are defined.
 If an input condition is Boolean, one valid and one invalid
equivalence class are defined.

Equivalence Partitioning

[Diagram: valid inputs and invalid inputs are partitioned into
equivalence classes before being fed to the system, which produces
the corresponding outputs.]

Equivalence Partitioning

 Number of input values: three partitions, namely less than 4
(e.g. 3), between 4 and 10 (e.g. 7), and more than 10 (e.g. 11),
with boundaries at 4 and 10.
 Input values: three partitions, namely less than 10000
(e.g. 9999), between 10000 and 99999 (e.g. 50000), and more than
99999 (e.g. 100000), with boundaries at 10000 and 99999.
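The range example above can be sketched in code: one representative test value is drawn from each partition. The validator below is hypothetical, standing in for any system rule that accepts between 4 and 10 input values.

```python
def count_is_valid(n):
    """Hypothetical system rule: the number of input values must be 4..10."""
    return 4 <= n <= 10

# One representative test case per equivalence class:
partitions = {
    "less than 4 (invalid)": 3,
    "between 4 and 10 (valid)": 7,
    "more than 10 (invalid)": 11,
}

for name, value in partitions.items():
    print(f"{name}: input {value} -> accepted = {count_is_valid(value)}")
```

If the system mishandles one value in a class, it is assumed to mishandle the whole class, so one test per partition suffices.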

Boundary Value Analysis (BVA)
 Black-box technique that focuses on the boundaries of the
input domain rather than its center.
 BVA guidelines:
 If an input condition specifies a range bounded by values a and
b, test cases should include a and b, and values just above and
just below a and b.
 If an input condition specifies a number of values, test
cases should exercise the minimum and maximum
numbers, as well as values just above and just below the
minimum and maximum values.
 Apply guidelines 1 and 2 to output conditions: test cases
should be designed to produce the minimum and maximum
output reports.
 If internal program data structures have prescribed boundaries
(e.g. size limitations), be certain to test those boundaries.
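The first BVA guideline can be sketched as a small helper that, for a range bounded by a and b, generates the bounds plus the values just below and just above each (a minimal illustration, not a complete BVA tool):

```python
def boundary_values(a, b):
    """For a range bounded by a and b, return a, b, and the
    values immediately below and above each bound."""
    return sorted({a - 1, a, a + 1, b - 1, b, b + 1})

# For the earlier "between 4 and 10" example:
print(boundary_values(4, 10))  # [3, 4, 5, 9, 10, 11]
```

Note how these values coincide with the boundary cases (3, 4, 10, 11) in the equivalence-partitioning example above: BVA deliberately samples the edges of each partition.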

Comparison Testing
 Also called back-to-back testing.
 Black-box testing for safety-critical systems (such
as aircraft avionics or automobile braking systems) in
which independently developed implementations of
redundant systems are tested for conformance to the
specification.
 Often equivalence class partitioning is used to
develop a common set of test cases for each
implementation.
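The idea can be sketched as follows: two independently written implementations of the same specification (absolute value, used here purely as a toy example) are run back-to-back on a shared set of test inputs and their outputs compared.

```python
# Two hypothetical independent implementations of the same spec.
def abs_v1(x):
    return x if x >= 0 else -x

def abs_v2(x):
    return (x * x) ** 0.5

# Shared test cases, e.g. one per equivalence class and boundary.
shared_cases = [-5, -1, 0, 1, 5]

for x in shared_cases:
    assert abs_v1(x) == abs_v2(x), f"implementations disagree on input {x}"
print("all shared test cases agree across implementations")
```

A disagreement between the versions signals a defect in at least one of them, without either needing to be trusted as the oracle.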

Orthogonal Array Testing
 Black-box technique that enables the design of a
reasonably small set of test cases that provide
maximum test coverage
 Focus is on categories of faulty logic likely to be
present in the software component (without
examining the code)
 Priorities for assessing tests using an orthogonal
array:
 Detect and isolate all single-mode faults.
 Detect all double-mode faults.
 Detect multimode faults.
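As a small sketch of why orthogonal arrays give good coverage cheaply: the standard L4 array covers every pairwise combination of three two-level factors in only 4 test runs instead of the 8 exhaustive ones, so all double-mode faults are still detectable.

```python
from itertools import combinations

# L4(2^3) orthogonal array: 4 runs over three two-level factors.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

# Verify pairwise (double-mode) coverage: every pair of factor
# columns exhibits all four level combinations.
for i, j in combinations(range(3), 2):
    pairs = {(row[i], row[j]) for row in L4}
    assert pairs == {(0, 0), (0, 1), (1, 0), (1, 1)}
print("all factor pairs fully covered in 4 runs")
```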

White-box or Glass-box testing
Knowing the internal workings of a product,
tests are performed to check the workings of all
independent logic paths.
It derives test cases that:
 Guarantee that all independent paths within a module
have been exercised at least once.
 Exercise all logical decisions on their true and false
sides.
 Execute all loops at their boundaries and within their
operational bounds, and
 Exercise internal data structures to ensure their
validity.

Techniques used: basis path testing and control
structure testing.
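The criteria above can be sketched on a small example function (hypothetical, not from the slides): the test cases are chosen from its internal structure so that the decision is taken on both its true and false sides, and the loop runs zero, one, and many times.

```python
def count_positives(values):
    """Count the strictly positive numbers in a list."""
    count = 0
    for v in values:        # loop: exercise with 0, 1, and many elements
        if v > 0:           # decision: exercise both true and false sides
            count += 1
    return count

# White-box test cases derived from the code's structure:
assert count_positives([]) == 0           # loop body never entered
assert count_positives([5]) == 1          # one iteration, true branch
assert count_positives([-1, 2, 0]) == 1   # many iterations, both branches
print("all independent paths exercised")
```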

White-box or Glass-box testing

[Diagram: the white-box model. Tests are derived from the component
code; test data drives the tests, which produce test outputs.]

Integration Testing
Tests complete systems or subsystems
composed of integrated components.
Integration testing should be black-box testing,
with tests derived from the specification.
The main difficulty is localising errors;
incremental integration testing reduces this
problem.
Incremental integration strategies include
 Top-down integration
 Bottom-up integration
 Regression testing
 Smoke testing

Approaches to
integration testing
Top-down testing
 Start with the high-level system and integrate from the
top down, replacing individual components by stubs
where appropriate.

Bottom-up testing
 Integrate individual components in levels until the
complete system is created.

In practice, most integration involves a combination
of these strategies.
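A minimal sketch of the top-down case, with hypothetical names: the high-level component is tested first, with the not-yet-integrated lower-level module replaced by a stub that returns a canned answer.

```python
def tax_stub(amount):
    """Stub standing in for the real, not-yet-integrated tax module."""
    return 0.0  # canned answer

def invoice_total(amount, tax_fn):
    """High-level component under test; the tax computation is injected
    so a stub can replace it during top-down integration."""
    return amount + tax_fn(amount)

# Top-down integration test: exercise the high level against the stub.
assert invoice_total(100.0, tax_stub) == 100.0
print("high-level component verified against the stub")
```

In bottom-up testing the roles reverse: the real low-level module is tested first, driven by a test driver that plays the part of `invoice_total`.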

Top-down testing

[Diagram: top-down testing sequence. The Level 1 component is tested
first with Level 2 stubs in place of its subordinates; the Level 2
components are then integrated in turn, each tested with Level 3 stubs
below them, and so on down the hierarchy.]

Bottom-up testing

[Diagram: bottom-up testing sequence. The Level N components are
tested first using test drivers; they are then combined into
Level N–1 components, which are tested with their own drivers, and
so on up the hierarchy.]

System Testing
 Recovery testing
 Checks the system’s ability to recover from failures.

 Security testing
 Verifies that system protection mechanisms prevent improper
penetration or data alteration.

 Stress testing
 The program is checked to see how well it deals with abnormal
resource demands in quantity, frequency, or volume.

 Performance testing
 Designed to test the run-time performance of software,
especially real-time software.

Object-oriented Testing

The components to be tested are object classes
that are instantiated as objects.
Classes are larger grain than individual functions,
so approaches to white-box testing have to be
extended.
There is no obvious 'top' to the system for top-down
integration and testing.

Acceptance Test Format
 Test Item List
 Identification of Test-item
 Testing Detail
 Detailed testing procedure
 Testing Result
 Summary of testing-item

Test-item List

Item No.  Test Item     Sub-item No.  Test Sub-item           Level
SR-02     Staff Review  SR-02-01      Program Officer Review  A
                        SR-02-02      Early Decline Report    A

Test Level:
A - Basic Function, compulsory
B - Enhanced Function, compulsory
C - Enhanced Function, optional
Testing Details
SR-02 Staff Review

Item No:      SR-02-01                Test Date:
Item:         Staff Review
Sub-item:     PO Review (Report: Early Decline)
Precondition:
Test Procedure:
Test Standard:
Test Description:
Test Result and Conclusion:    Passed    Failed
Sign of the Tester:            Sign of the Manager:

References
 From Software Engineering: A Practitioner's
Approach by Roger S. Pressman
– Chapter 17: Software Testing Techniques
• Software testing fundamentals
• Test case design
• White-box testing: basis path and control structure testing
• Black-box testing

– Chapter 18: Software Testing Strategies
• A strategic approach to software testing
• Unit, integration, validation, and system testing

 From Software Engineering by Ian Sommerville
– Part 5: Verification and Validation
• Chapter 19: Verification and Validation
• Chapter 20: Software Testing