Slide 1
Test Analysis
& Design –
good practices
Raluca Gagea
2013, October 17th
We all know the importance of doing the right things
from the very beginning and of putting the right questions when
the impact on the product under development is still minor.
From here to test analysis and test cases design is just a
small step, but one that needs special attention, experience,
intuition and creativity and many good practices.
I won't "teach" you how to do it best; I'll share some tips that helped me during testing, and we'll try to cover:
- some vocabulary we usually don't use, or use incorrectly
- specific activities and their benefits
- ways to measure progress and report the benefits
- test case writing/design styles: advantages and disadvantages
- important properties of test cases
- a few mistakes we all make :)
- the importance of tools and the testing structures we choose
Slide 2
There is an A for everything
Theoretical side of things: definitions, vocabulary, ISTQB.
Practical (realistic) side of things: activities and their benefits, good practices, things to remember, lessons to learn, examples.
Slide 3
Fundamental Test Process
PLANNING → ANALYSIS → DESIGN → IMPLEMENTATION → EXECUTION → EVALUATING EXIT CRITERIA AND REPORTING → CLOSURE
CONTROL (runs across all phases)
Slide 4
Test Analysis & Design
Test Analysis & Design is the activity where general testing
objectives are transformed into tangible test conditions and test
designs.
Test Analysis: the process of looking at something that can be used to derive test information.
Test Design: the process of identifying the associated high-level test cases for a test item.
Slide 5
Test Analysis & Design Vocab
Test Basis: business scenarios, functional specifications, non-functional specifications, technical documents, use cases, emails.
From the Test Basis we derive Test Objects 1..n, then Test Items 1..n, then Test Conditions 1..n, then Test Cases 1..n.
Test case design techniques turn test cases into effective test cases 1..n, grouped into a Test Suite, fed by input & output test data.
Everything traces back to the Test Objectives.
Slide 6
Test Analysis & Design Vocab
Subscription Form (the Test Object)
Test Items: User Name, Age, City, Postal Code, Submit.
Input constraints (the Test Conditions):
- User Name must be between 6 and 12 characters long, must start with a letter, and may include only letters and digits.
- Age must be a number greater than or equal to 18 and less than 65.
- City must be one of Ottawa, Toronto, Montreal or Halifax.
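The test conditions above translate directly into boundary-value test cases. A minimal sketch, assuming a hypothetical validator for the slide's form (the function names are invented for illustration):

```python
ALLOWED_CITIES = {"Ottawa", "Toronto", "Montreal", "Halifax"}

def is_valid_age(age):
    # Test condition from the slide: age >= 18 and age < 65.
    return isinstance(age, int) and 18 <= age < 65

def is_valid_city(city):
    # Test condition: city must be one of the four allowed values.
    return city in ALLOWED_CITIES

# Boundary values derived from the age condition: the two edges of the
# valid range plus the invalid neighbours on each side.
age_cases = [(17, False), (18, True), (64, True), (65, False)]
```

Checking exactly the values on either side of each boundary is what turns a test condition ("18 <= age < 65") into concrete, executable test cases.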
Slide 7
Test Analysis & Design – Why?
Review test basis; examine specifications; evaluate testability → Test Basis.
Benefits: prevents defects from appearing in the code; at execution time, all the requirements are translated into testable items.
Select only the relevant documents; identify gaps and ambiguities in the specifications, because we are trying to identify precisely what happens at each point in the system.
Slide 8
Test Analysis & Design – Why?
Analysis of test items; identify test conditions → Test Conditions.
Benefits: gives us a high-level list of what we are interested in testing; we can start identifying the type of generic test data we might need.
Slide 9
Test Analysis & Design – Why?
Design the tests; use test design techniques → Test Cases.
Benefit: the high-risk areas will be covered by tests before the actual execution phase starts.
Slide 10
Test Analysis & Design – Why?
Identify test data → Test Data.
Benefit: at execution time, the test cases will be executed using data as close as possible to the data in production.
Slide 11
Test Analysis & Design – Why?
Design the environment set-up; identify any infrastructure and tools → Test Environment Availability.
Benefit: at execution time, everything we need to carry out our work is in place.
Slide 12
Test Analysis & Design – Why?
Create traceability between Test Basis & Test Cases.
Benefit: at every moment, we can calculate the requirements testing coverage.
Slide 13
Requirements
A requirement is a singular documented physical or functional need that a particular product or service must be able to perform.
Test Oracle / Test Basis: functional reqs, non-functional reqs, architectural reqs, design reqs, structural reqs, constraint reqs.
Slide 14
When can we really have them all in place?
Slide 15
Lessons I’ve learnt – First Requirements
Some documents coming in → are they addressing requirements, or additional info for me?
- Info: add it to the KT pack, if relevant.
- Reqs: perform static analysis → are they testable?
  - YES: add them to the Test Basis.
  - NO: add them to the KT pack, if relevant.
Slide 16
Lessons I’ve learnt – First Test Cases “Design”
Activities:
- Analyze test basis & test oracle.
- Identify critical functionalities.
- Identify needed testing types.
- Identify automation need.
- Identify ways to keep traceability.
- Create test suites for various purposes.
- Choose an appropriate design and writing style for these test cases.

Questions to answer:
- Which kinds of test suites will be needed (e.g. smoke, regression, per functionality, per component)? How can we prioritize them?
- Do we need separate test suites / cases for each testing type (manual vs automation, functional vs non-functional, etc.)?
- What's the impact in terms of effort when changing the priority for automated test cases (review test cases, execute them, track execution status, revert to initial priority)?
- Do we need different design & writing styles for the test cases?
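The "test suites for various purposes" idea can be sketched with the standard library's unittest suites; a fast smoke suite picks out the critical case, while the regression suite loads everything. The test class and test names below are invented for illustration:

```python
import unittest

class KeywordTests(unittest.TestCase):
    # Hypothetical test cases for the keyword feature used in later slides.
    def test_create_keyword_valid_data(self):
        self.assertTrue(True)  # placeholder body

    def test_create_keyword_duplicate_name(self):
        self.assertTrue(True)  # placeholder body

def smoke_suite():
    # A small, fast suite: only the critical happy-path case.
    suite = unittest.TestSuite()
    suite.addTest(KeywordTests("test_create_keyword_valid_data"))
    return suite

def regression_suite():
    # The full suite: every test in the class.
    return unittest.defaultTestLoader.loadTestsFromTestCase(KeywordTests)
```

The same test cases live once; the suites are just different selections over them, which is what keeps smoke vs. regression maintenance cheap.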
Slide 17
Lessons I’ve learnt – Testing Structure
Factors that shape the Testing Structure: delivery model, estimated level of change, test levels, time, number of testing cycles, risk.
Slide 18
Testing Structures – some examples
Traditional Waterfall
Release cycles are typically several weeks to several months long, and usually have multiple phases of testing (Functional, System Test, Performance, User Acceptance Test, etc.) during any given release cycle. Execution cycles are planned and scheduled based on these phases, and metrics are tracked for each phase as well.
→ Organize by Test Phase, or by Functionality/System and then by Phase.
Slide 19
Testing Structures – some examples
Agile
Products are released in shorter, more frequent release cycles, each consisting of multiple Sprints in which one or more User Stories are targeted for development completion.
→ Organize by Sprints and then by User Stories or functionality, when you have a large number of releases with shorter Sprint cycles and many overlapping release cycles.
Slide 20
Testing Structures – some examples
Agile
Products are released in shorter, more frequent release cycles, each consisting of multiple Sprints in which one or more User Stories are targeted for development completion.
→ Organize by moving Sprints at project level as releases, when you have larger release cycles with many Sprints.
Slide 21
Testing Structures – some examples
Testing of systems of systems
Large, complex platforms may contain multiple systems or sub-systems, each with its own development and QA tracks, and there may be a need to track testing progress on a per-system basis, followed by system-wide testing.
→ Organize releases as projects, followed by system-level testing. This is useful when different teams are tracking progress for different systems.
Slide 22
Testing Structures – some examples
Testing of systems of systems
Large, complex platforms may contain multiple systems or sub-systems, each with its own development and QA tracks, and there may be a need to track testing progress on a per-system basis, followed by system-wide testing.
→ Organize releases under the same project, followed by systems. This is useful when the platform has a larger number of frequent releases and a single PM over all of them.
Slide 23
Test Cases Design – some good practices
Why do we write test cases?
Test cases are more than some sentences used to test various flows. They are our way to prove the level of confidence in what we deliver, by measuring the requirements coverage and its status at every point in the development process: are the requirements covered by enough test cases? Is the testing execution status at a given point the desired or planned one?
Slide 24
Test Cases Design – some good practices
Write Test Cases before the implementation of the
requirements
Write Test Cases for all the requirements
Test Cases should map precisely to the requirements and not
be an enhancement to the requirement
Slide 25
Test Cases Design – some good practices
Use the same naming convention for all the test cases in a project.
Create unique names for your test cases (use "TC" + identifier + title).
Use the "Action – Target – Scenario" method to formulate the title.
Slide 26
Test Cases Design – some good practices
Test Case Title = Action – Target – Scenario
- Action: a verb that describes what you are doing (create, delete, ensure, edit, open, populate, login).
- Target: the focus of your test (screen, object entity, program).
- Scenario: the rest of what your test is about, and how you distinguish multiple test cases for the same Action and Target.
Slide 27
Test Cases Design – some good practices
Action – Target – Scenario
Create – Task – title is not supplied
Create – Task – title is the maximum allowable length
Slide 28
Test Cases Design – some good practices
Write a detailed description of every step of execution.
Define one single action per execution step.
Write clear and precise expected results.
Slide 29
Test Cases Design – some good practices
The Expected Result states: "Verify if error message is displayed."
Issue: while executing the Test Case, what if the error message says "Please provide postcode", while it should say "Your postcode is invalid"?
Solution: the Expected Result states: "Verify that the error message about an invalid postcode is displayed."
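The difference shows up clearly once the check is automated: a vague expected result passes for any error text, while a precise one pins the message. The validator below is a hypothetical stand-in for the application under test:

```python
def validate_postcode(postcode):
    # Hypothetical stand-in for the application under test.
    if not postcode:
        return "Please provide postcode"
    if not postcode.isdigit():
        return "Your postcode is invalid"
    return "OK"

message = validate_postcode("AB!")

# Vague expected result ("verify if an error message is displayed"):
# this passes for ANY error text, including the wrong one.
vague_check = message != "OK"

# Precise expected result: pins the exact invalid-postcode message.
precise_check = message == "Your postcode is invalid"
```

If the application ever returned the wrong message, only the precise check would catch it; the vague one would keep passing.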
Slide 30
Test Cases Design – some good practices
Each Test Case checks only one testing idea, but two or more
expected results are totally acceptable if there is a need to
perform several verifications for that testing idea.
Testing idea: “Payment can be performed by MasterCard credit
card."
Expected results:
1) In DB, cc_transaction table, in MasterCard column, value 1 is
registered.
2) Credit card balance is reduced by the amount equal to the
amount of the payment.
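One testing idea with two expected results looks like this in code. The payment routine and data stores below are simulated stand-ins for the slide's database and credit card:

```python
def pay_with_mastercard(db, card, amount):
    # Hypothetical payment routine: records the transaction and debits the card.
    db["cc_transaction"]["MasterCard"] = 1
    card["balance"] -= amount
    return True

# Simulated DB table and credit card state.
db = {"cc_transaction": {"MasterCard": 0}}
card = {"balance": 100.0}

# One testing idea: "payment can be performed by MasterCard" ...
payment_ok = pay_with_mastercard(db, card, 40.0)
# ... verified by two expected results: the DB flag and the debited balance.
```

Both verifications belong to the same test case because they confirm the same single idea; splitting them would duplicate the setup without adding coverage.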
Slide 31
Test Cases Design – some good practices
Expected results should meet the test case purpose.
Additional steps should be specified separately.
Slide 32
Test Cases Design – some good practices
TC01.01 – Verify Customer User is able to create keyword

Wrong version:
Execution Steps:
1. Login as Customer User.
2. Navigate to Create Keyword page.
3. Complete all fields with valid data. Submit data.
4. Navigate to Keywords List page.
5. Verify that the created keyword is present in the keywords list.
Expected Results:
1. Successful login.
2. Create Keyword page is displayed.
3. Info message is displayed that the keyword is successfully created.
4. Keywords list page is displayed.
5. Newly created keyword is displayed in the keywords list.

Right version:
Execution Steps:
1. Complete all fields with valid data. Submit data.
2. Navigate to Keywords List page. Verify that the created keyword is present in the keywords list.
Expected Results:
1. Info message is displayed that the keyword is successfully created.
2. Newly created keyword is displayed in the keywords list.

Wrong – the Login and Navigate-to steps are not required, as the purpose of the test is to verify that the user is able to successfully create keywords. Login and page displaying should be verified in separate Test Cases.
Slide 33
Test Cases Design – important attributes
- Effective: has a high probability of detecting errors; finds defects.
- Self cleaning: returns the test environment to the clean state it was in before execution.
- Repeatable: the result of the test is always the same, no matter how many times it has been executed.
- Traceable: any test case can be traced back to a particular requirement/use case.
- Non-redundant: low redundancy; no repeated test cases; two test cases should not find the same defect.
- Clear: simple language, no spelling mistakes; any person is able to unambiguously understand the scope of each test case.
- Well structured: practical flow of events, and correspondence between the feature under test and the test case names.
- Short and simple language: rather short than lengthy; neither too simple nor too complex; limit to 15 execution steps.
- Complete: each test case should test a particular function; together they should cover all the features/functionalities, using separated test cases for different functionality, with both positive and negative scenarios.
- Detailed: contains all the detailed steps needed to execute the test, written as exact execution steps and expected results.
- Accurate: no drawbacks like missing steps or unnecessary execution steps.
- Evolvable: maintainable as the system under test changes.
Slide 34
Test Cases Cascading vs Independent design
Cascading style
- Test Cases build on each other; simpler and smaller.
- The output of one Test Case becomes the input of the next. Arranging Test Cases in the right order saves time during test execution.
- If one Test Case fails, the subsequent tests may be invalid or blocked.

Independent style
- Each Test Case is self-contained and does not rely on any other Test Cases.
- Any number of Test Cases can be executed in any order.
- Larger and more complex; harder to design, create and maintain.
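A minimal sketch of the two styles, using an in-memory store as a stand-in for the application (all names are invented for illustration):

```python
store = []

def create_keyword(name):
    store.append(name)
    return len(store) - 1          # id of the new keyword

def delete_keyword(keyword_id):
    store[keyword_id] = None

# Cascading: the delete step consumes the output (the id) of the create
# step; if creation fails, deletion is invalid or blocked.
kid = create_keyword("cascade")    # "test 1": create
delete_keyword(kid)                # "test 2": delete, depends on test 1

# Independent: the delete test builds its own precondition, so it can run
# alone and in any order, at the cost of being larger.
def test_delete_keyword_independent():
    own_id = create_keyword("independent")   # own setup, no shared state
    delete_keyword(own_id)
    return store[own_id] is None
```

The trade-off on the slide is visible here: the cascading pair is shorter, but `kid` only exists if the create step ran first; the independent test duplicates the setup to buy ordering freedom.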
Slide 35
Test Cases High-Level vs Low-Level writing style
High-level style
- Test Cases define what to test in general terms, without specific values for input data and expected results.
- Less time to write; greater flexibility in execution.
- More appropriate when tests are executed by testers with vast knowledge of the application.

Low-level style
- Test Cases with specific values defined for both input and expected results.
- Repetitive, but can be executed even by a tester who is just learning the application.
- Easier to determine pass or fail criteria; easier to automate.
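A low-level case pairs a concrete input with an exact expected result, which is why it is mechanical to execute and to automate; its high-level counterpart would just say "try ages outside the valid range". The validator below is a hypothetical stand-in for the application:

```python
def age_error(age):
    # Hypothetical validation logic under test.
    if age < 18:
        return "Your age must be at least 18"
    if age >= 65:
        return "Your age must be under 65"
    return ""

# Low-level cases: concrete value -> exact expected message. This is what
# makes the style repetitive but trivial to judge as pass/fail.
low_level_cases = [
    (17, "Your age must be at least 18"),
    (18, ""),
    (65, "Your age must be under 65"),
]

results = [age_error(age) == expected for age, expected in low_level_cases]
```

Pass/fail needs no application knowledge at all: the comparison either holds or it doesn't, which is exactly the property that makes low-level cases easy to automate.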
Slide 36
Test Cases Design – some good practices
Test cases must evolve during the entire software
development lifecycle.
Slide 37
Test Cases Design – some good practices
- As requirements change, the testers must adjust the test cases accordingly.
- Test cases must be modified to accommodate the additional information obtained from other phases.
- Each test case modified upon a change request should keep a record describing the change (email, meeting minutes, use case ID).
- As defects are found and corrected, test cases must be updated to reflect the changes and additions to the system.
- When a new scenario is encountered, it must be evaluated, assigned a priority and added to the set of test cases.

Due to changes in requirements, design or implementation, test cases often become obsolete and out-of-date. Given the pressure of having to complete the testing, testers continue their tasks without ever revisiting the test cases. The problem is that if the test cases become outdated, the initial manual work of creating them is wasted, and additional tests executed without a test case in place cannot be repeated.
Slide 38
Test Analysis & Design – Metrics & Measurements
What cannot be measured cannot be managed.
- Percentage of requirements or quality (product) risks covered by test conditions.
- Percentage of test conditions covered by test cases.
- Number of defects found during test analysis and design.
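The first two metrics fall straight out of the traceability created earlier. A sketch over a toy traceability mapping (the requirement, condition and test case IDs are invented):

```python
# Toy traceability: requirement -> test conditions -> test cases.
req_to_conditions = {
    "REQ-1": ["COND-1", "COND-2"],
    "REQ-2": ["COND-3"],
    "REQ-3": [],                      # not yet analyzed
}
condition_to_cases = {
    "COND-1": ["TC-1", "TC-2"],
    "COND-2": ["TC-3"],
    "COND-3": [],                     # condition with no test case yet
}

def pct(part, whole):
    return round(100.0 * part / whole, 1)

# Metric 1: percentage of requirements covered by test conditions.
reqs_covered = pct(
    sum(1 for conds in req_to_conditions.values() if conds),
    len(req_to_conditions),
)
# Metric 2: percentage of test conditions covered by test cases.
conditions_covered = pct(
    sum(1 for cases in condition_to_cases.values() if cases),
    len(condition_to_cases),
)
```

With the mapping maintained, both percentages can be recomputed at any moment, which is exactly the "at every moment, we can calculate the requirements testing coverage" benefit from the traceability slide.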
Slide 39
Thanks for attending this session!
Questions?
Thoughts?
Debates?