Agile Testing Tactics
for WaterFail, FrAgile, ScrumBut, & KantBan
Tim Lyon [email protected]
http://www.linkedin.com/in/timlyon
Epic Fails with Application Lifecycle Management (ALM) and QA
WaterFail Project Example
FrAgile Project Example
ScrumBut Project Example
KantBan Project Example
Learnings: No Perfect Agile QA Process
Several QA steps can help address agile development projects and risk:
Automate, automate, & automate
Make test planning effective instead of extensive
Maximize test cases with probabilistic tests & data
Scheduled session-based exploratory & performance testing
Simple, informative reporting & tracking of testing
Agile Automation Considerations
Why Don’t We Automate?
Human Interface/Human Error Description                                               Error Probability
General rate for errors involving high stress levels                                  0.3
Operator fails to act correctly in the first 30 minutes of an emergency situation     0.1
Operator fails to act correctly after the first few hours in a high stress situation  0.03
Error in a routine operation where care is required                                   0.01
Error in simple routine operation                                                     0.001
Selection of the wrong switch (dissimilar in shape)                                   0.001
Human-performance limit: single operator                                              0.0001
Human-performance limit: team of operators performing a well designed task            0.00001
[Table: expected error counts obtained by applying the probabilities above across work sizes of 1000 and 300 LOCs and 100 and 30 test cases; e.g. a 0.01 error rate over 1000 manual steps implies roughly 10 expected errors]
General Human-Error Probability Data in Various Operating Conditions
Source: "Human Interface/Human Error", Charles P. Shelton, Carnegie Mellon University, 18-849b Dependable Embedded Systems, Spring 1999, http://www.ece.cmu.edu/~koopman/des_s99/human/
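The arithmetic behind these tables is simple: expected manual errors are the human-error probability times the number of manual steps. A minimal sketch (the step counts of 100 test cases and 1000 steps are illustrative, not from the slides):

```python
# Back-of-envelope estimate: expected human errors =
# error probability x number of manual actions performed.
# Rates taken from the CMU table above.
ERROR_RATES = {
    "routine operation where care is required": 0.01,
    "simple routine operation": 0.001,
    "single-operator performance limit": 0.0001,
}

def expected_errors(rate: float, steps: int) -> float:
    """Expected number of human errors over `steps` manual actions."""
    return rate * steps

for label, rate in ERROR_RATES.items():
    # Illustrative workload: 100 manual test-case executions.
    print(f"{label}: ~{expected_errors(rate, 100):g} expected errors per 100 manual runs")
```

Even at the careful-routine rate of 0.01, a hundred manual test executions can be expected to produce about one human error, which is the core argument for automating repetitive checks.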
Can We Afford Not To Automate?
[Charts: baseline existing features plus layers of additional functionality grow over time, while base head count & effort for coverage stays flat, so ever more QA effort is needed to hold coverage; a second chart shows automated test coverage rising over time against overall manual coverage]
Pace of feature addition & complexity far exceeds pace of deprecating systems or functionality
Long-term increasing complexity with more manual testing is not a sustainable model
Automated regression tests optimize the manual QA effort
Building a Functional Automation Library
Automation layers used to develop the automation library over time, building up from development-level checks to system tests:
Sanity Check: application is running and rendering necessary information on key displays
Smoke Tests: simple "Happy Path" functional tests of key features to execute and validate
Positive Functional Tests: in-depth "Happy Path" functional tests of features to execute and validate
Database Validation Tests: non-GUI-based data storage and data handling validation
Alternative Functional Tests: in-depth, alternate but still "Happy Path" functional tests of less common features to execute and validate
Negative Functional Tests: in-depth "UNHappy Path" functional tests of features to execute and validate proper system response and handling
Learnings: No Automation Costs
Start with Simple Confirming Tests and Build Up
Work with Developers to Help Implement, Maintain, and Run
Utilize Available Systems After Hours
Provide Time to Write, Execute, and Code Review Automation
Agile Test Planning
“10 Minute Test Plan” (in only 30 minutes)
Concept publicized on James Whittaker’s blog: http://googletesting.blogspot.com/2011/09/10-minute-test-plan.html
Intended to address issues with test plans, such as:
Difficult to keep up-to-date, so they become obsolete
Written ad hoc, leading to holes in coverage
Disorganized, making it difficult to consume all related information at once
How about ACC Methodology?
Attributes (adjectives of the system): qualities that promote the product & distinguish it from the competition (e.g. "Fast", "Secure", "Stable")
Components (nouns of the system): building blocks that constitute the system in question (e.g. "Database", "API", and "Search")
Capabilities (verbs of the system): tie a specific component to an attribute so that it becomes testable:
Database is Secure: “All credit card info is stored encrypted”
Search is Fast: “All search queries return in less than 1 second”
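The ACC matrix is just components crossed with attributes, each cell holding testable capabilities. A minimal data-structure sketch (any names beyond the slide's own examples would be hypothetical):

```python
# ACC matrix: (component, attribute) -> list of testable capabilities.
# Entries are the two examples given on the slide.
acc = {
    ("Database", "Secure"): ["All credit card info is stored encrypted"],
    ("Search", "Fast"): ["All search queries return in less than 1 second"],
}

def capabilities_for(component: str) -> list[str]:
    """All testable capabilities tied to a given component."""
    return [cap
            for (comp, _attr), caps in acc.items() if comp == component
            for cap in caps]

print(capabilities_for("Search"))
```

Walking the matrix per component (or per attribute) is what turns the vague adjectives and nouns into a concrete, risk-ranked test inventory.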
Google Test Analytics
Google Open Source Tool for ACC Generation: http://code.google.com/p/test-analytics
Learnings: 10 Minutes is Not Enough
Keep things at a high level
Components and attributes should be fairly vague
Do NOT start breaking capabilities into tasks each component has to perform
Generally 5 to 12 components work best per project
The tool is there to help generate coverage and risk focus over the project, not necessarily each and every test case
The tool is still in its infancy: released 10/19/2011
Combinational Testing
What is Combinational Testing?
Combining all test factors to a certain level to increase effectiveness and the probability of discovering failures
Pairwise / Orthogonal Array Testing (OATS) / Taguchi Methods: http://www.pairwise.org/
Most errors are caused by interactions of at most two factors
Efficiently yet effectively reduces test cases rather than testing all variable combinations
Better than “guesstimation” for generating test cases by hand, with much less chance of combination-omission errors
Example OATS
Orthogonal arrays are named L_Runs(Levels^Factors)
Example: L4(2^3): a website with 3 sections (FACTORS), each section having 2 states (LEVELS), results in 4 pair-wise tests (RUNS)

FACTORS   Test 1    Test 2    Test 3    Test 4
TOP       HIDDEN    HIDDEN    VISIBLE   VISIBLE
MIDDLE    HIDDEN    VISIBLE   HIDDEN    VISIBLE
BOTTOM    HIDDEN    VISIBLE   VISIBLE   HIDDEN
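Pair-wise reduction can be sketched with a simple greedy heuristic: repeatedly pick the candidate test covering the most still-uncovered 2-way combinations. This is not a true orthogonal-array construction and is not guaranteed minimal, but it reproduces the 4-run result for the website example:

```python
from itertools import combinations, product

def pairs(test: dict) -> set:
    """All 2-way (factor, level) combinations a single test exercises."""
    return {((f1, test[f1]), (f2, test[f2]))
            for f1, f2 in combinations(sorted(test), 2)}

def pairwise_suite(factors: dict) -> list:
    """Greedily pick tests from the full factorial until every
    2-way combination of factor levels is covered."""
    names = list(factors)
    candidates = [dict(zip(names, vals)) for vals in product(*factors.values())]
    needed = set().union(*(pairs(t) for t in candidates))  # all pairs to cover
    suite = []
    while needed:
        best = max(candidates, key=lambda t: len(pairs(t) & needed))
        suite.append(best)
        needed -= pairs(best)
    return suite

sections = {"TOP": ["HIDDEN", "VISIBLE"],
            "MIDDLE": ["HIDDEN", "VISIBLE"],
            "BOTTOM": ["HIDDEN", "VISIBLE"]}
print(len(pairwise_suite(sections)))  # 4 runs instead of the 8 exhaustive
```

In practice a dedicated tool (PICT, discussed below in the deck) handles constraints and larger models far better than a hand-rolled greedy pass.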
Comparison: Exhaustive Tests
Exhaustive testing increases the test cases by 100% (8 runs vs. 4)
Can still select from the exhaustive set to add “interesting” test cases

FACTORS   Test 1    Test 2    Test 3    Test 4    Test 5    Test 6    Test 7    Test 8
TOP       HIDDEN    HIDDEN    VISIBLE   VISIBLE   HIDDEN    HIDDEN    VISIBLE   VISIBLE
MIDDLE    HIDDEN    VISIBLE   HIDDEN    VISIBLE   HIDDEN    VISIBLE   VISIBLE   HIDDEN
BOTTOM    HIDDEN    VISIBLE   VISIBLE   HIDDEN    VISIBLE   HIDDEN    VISIBLE   HIDDEN
Helpful when Factors Grow
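The gap widens quickly: exhaustive runs grow as levels^factors, while a pair-wise suite needs only at least levels² runs (every level pairing of any two factors must appear somewhere). A quick illustration of the exhaustive count:

```python
# Exhaustive run counts explode as factors grow; the pair-wise
# lower bound for uniform factors stays fixed at levels squared.
def exhaustive_runs(levels: int, factors: int) -> int:
    """Number of runs needed to test every combination."""
    return levels ** factors

for n in (3, 5, 10):
    print(f"{n} factors x 3 levels: {exhaustive_runs(3, n)} exhaustive runs, "
          f"pair-wise lower bound {3 ** 2}")
```

Ten factors with three levels each already means 59,049 exhaustive combinations, which is why generation tools become necessary as factors grow.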
Tool to Help Generate Tables
Microsoft PICT (Freeware) http://msdn.microsoft.com/en-us/library/cc150619.aspx
Web script interface: http://abouttesting.blogspot.com/2011/03/pairwise-test-case-design-part-four.html
What is PICT Good for?
Mixed-strength combinations
Creating parameter hierarchies
Conditional combinations & exclusions
Seeding mandatory test cases
Assigning weights to important values
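As a hedged illustration of two of these features, a small PICT model file might look like the following. The parameter and value names are invented, and the syntax for weights and conditional constraints is as documented for PICT; check the tool's own documentation before relying on it:

```text
Browser: Chrome (5), Firefox, IE
OS:      Windows, Linux, Mac
Payment: Card, PayPal, GiftCard

IF [OS] = "Mac" THEN [Browser] <> "IE";
```

Here `(5)` weights Chrome so it appears more often in the generated cases, and the `IF ... THEN` constraint excludes an impossible combination. Running `pict model.txt` produces pair-wise cases by default; the `/o:N` option raises the combination order, and `/e:<seedfile>` seeds mandatory test cases.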
Learnings: Tables can be BIG!
Choosing the combinations and parameters to use takes some thoughtfulness & planning
Can still generate an unwieldy number of test cases to run
PICT’s statistical optimization does not always generate full pair-wise combinations
Good for test case generation as well as basic test data generation
Session Based / Exploratory Testing
Organized Exploratory Testing
“Exploratory testing is simultaneous learning, test design, and test execution” - James Bach
Session-Based Test Management is a more formalized approach to it: http://www.satisfice.com/articles/sbtm.pdf
Key points:
Have a charter/mission objective for each test session
Time-box it, with “interesting path” extension possibilities
Record findings
Review session runs
Learnings: Explore Options
Make it a regular practice
Keep a visible timer displayed on the desktop
Have some type of recording mechanism to replay sessions if possible
Change mission roles and objectives to give different context
The test lead needs to be actively involved in review
If the implementation is too cumbersome to execute, testers won’t
Tools That Can Help
Screenshot: Greenshot (http://getgreenshot.org/)
Video capture: CamStudio Open Source (http://camstudio.org/)
Web debugging proxy: Fiddler2 (http://www.fiddler2.com)
Network protocol analyzer: WireShark (http://www.wireshark.org/)
Reporting on Agile Testing
Use Test Case Points
Weighted measure of test case execution/coverage
Points provide the risk/complexity of the test cases still to run
Attempts to help answer “We still have not run X test cases; should we release?”
Regrade as a test moves from new feature to regression:
New Feature = +10
Cart/Checkout = +10
Accounts = +5
Regression Item = +1
Negative Test = +3
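A minimal scoring sketch using the slide's weights, assuming (as the "+" notation suggests) that weights are additive across the categories a test case matches:

```python
# Test case point weights from the slide; a case's score is the sum
# of the weights for every category tag it carries (assumed additive).
WEIGHTS = {
    "new_feature": 10,
    "cart_checkout": 10,
    "accounts": 5,
    "regression": 1,
    "negative": 3,
}

def score(test_categories: list[str]) -> int:
    """Total test case points for one test given its category tags."""
    return sum(WEIGHTS[c] for c in test_categories)

# A new negative test on checkout carries far more release risk
# than a plain regression item.
print(score(["new_feature", "cart_checkout", "negative"]))  # 23
print(score(["regression"]))  # 1
```

Summing the unexecuted cases' points gives a single risk-weighted number to discuss at the release decision, rather than a raw test-case count.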
Learnings: Clarify What was QA’d
Tell Them What Your Team Did:
Traceability to requirements
List of Known Issues (LOKI)
Regression efforts (automation & manual)
Exploratory testing, etc.
Tell Them What Your Team Did NOT Do:
Regression sets not run
Environmental constraints on efforts
Third-party dependencies or restrictions