An Introduction to Load Testing:
A Blackboard Primer
Steve Feldman
Director of Performance Engineering and Architecture
Blackboard Inc.
July 18th, 3pm
© Blackboard, Inc. All rights reserved.
What is Software Performance Engineering?
[Diagram: SPE combines two views of a software system, the Software Execution Model and the System Execution Model.]
The Software Execution Model
» Response time performance is the absolutely critical performance measurement in SPE.
» Emphasis on optimizing the application business logic.
» Design pattern implementation is the primary concern.
The System Execution Model
» Response time performance remains a critical performance measurement.
» Emphasis on optimizing the deployment environment.
» System resource utilization is of primary concern.
The System Execution Model and the Performance Maturity Model
[Diagram: Performance Maturity Model pyramid, from Level 1 (Reactive Fire Fighting) up through instrumenting, monitoring, and optimizing performance, process, and business at Levels 2 through 5.]
» Awareness of system performance peaks and valleys.
» Knowledge of capacity planning needs.
» At Level 2, all of the data is available, but little is done other than basic optimization.
» Looking to extend performance management via environmental optimization.
How Do We Optimize our Environment Using the System Execution Model?
» Study existing behavior, adoption, growth, and system resource utilization patterns.
» Measure live system response times during periods of variation for watermark analysis.
» Simulate synthetic load that mimics the usage patterns of the deployment.
Introduction to Load Testing
» What is load testing?
» Why do we load test?
» What tools do we use?
» Preparing for load testing?
» How do we load test?
» What to do with the results of a load test?
What is Load Testing?
» Load testing is a controlled method of exercising an artificial workload against a running system.
» A system can be hardware or software oriented.
» A system can be both.
» Load testing can be executed in a manual or automated fashion.
» Automated load testing can mitigate inconsistencies without compromising the scientific reliability of the data.
Why do we Load Test?
» Most load tests are executed with false intentions (using them as a performance barometer).
» Understanding the impact on response times of predictable behavioral conditions and scenarios.
» Understanding the impact on response times of patterns of adoption and growth.
» Understanding the resource demands of the deployment environment.
What tools do we use?
» Commercial Tools
  » Mercury LoadRunner (currently used at Blackboard)
  » Segue SilkPerformer (formerly used at Blackboard)
  » Rational Performance Studio
» Freeware Tools
  » Grinder (occasionally used at Blackboard)
  » Apache JMeter (occasionally used at Blackboard)
  » OpenSTA
» Great Resources for Picking a Load Testing Tool
  » Performance Analysis for Java Web Sites by Stacy Joines (ISBN: 0201844540)
  » http://www.testingfaqs.org/t-load.html
Preparing for load testing?
» Define Performance Objectives
» Use Case Definition
» Performance Scenarios
» Data Modeling
» Scripting and Parameterization
Define Performance Objectives
» Every load test should have a purpose of measurement.
» Common Objectives
  » Sessions per Hour
  » Transactions per Hour
  » Throughput per Transaction and per Hour
  » Response Time Calibration
  » Resource Saturation Calibration
Define Use Cases
» Use cases should be prioritized based on the following:
  » Criticality of Execution
  » Observation of Execution (Behavioral Modeling)
  » Expectation of Adoption
  » Baseline Analysis
Define Performance Scenarios
» A scenario is a collection of one or more use cases sequenced in a logical manner (a compilation of a user session).
» Scenarios should be realistic in nature and based on recurring patterns identified in session behavior models.
» Avoid simulating extraneous workload.
» Iterate when necessary.
Design an Accurate Data Model
» Uniform in Construction
  » Naming Conventions (Sequential)
    » User000000001, LargeCourse000000001, 25-MC-QBQ-Assessment, XSmallMsg000000001, etc. (see the sketch below)
  » Data Constructions (Uniform for Testability)
    » XSmallMsg000000001 contains 100 characters of text
    » XLargeMsg000000001 contains 1000 characters of text (a factor of 10x)
» Multi-Dimensional Data Model
  » Fragmented in Nature
  » Data Conditions for Testability
  » Scenarios for Testability
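A minimal sketch, not from the original deck, of how such sequential, uniform names could be generated when seeding a test data set; the nine-digit zero padding and the build_name() helper are assumptions based on the examples above.

#include <stdio.h>

/* Hypothetical helper: build sequential, zero-padded entity names
   such as User000000001, XSmallMsg000000001, ... (padding width assumed). */
static void build_name(char *buf, size_t len, const char *prefix, int seq)
{
    snprintf(buf, len, "%s%09d", prefix, seq);
}

int main(void)
{
    char name[64];
    int i;

    /* Generate the first three user names for the seed-data set. */
    for (i = 1; i <= 3; i++) {
        build_name(name, sizeof(name), "User", i);
        printf("%s\n", name);   /* User000000001, User000000002, User000000003 */
    }
    return 0;
}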
Scripting and Parameterization
» Script Programmatically
  » Focus on reusability, encapsulation, and testability
  » Componentize the action steps of the use case
  » Use explicit naming conventions
  » Example: Measure the performance of opening a Word document (sequenced in the sketch below).
    » Authenticate(), NavPortal(), NavCourse(), NavCourseMenu(Docs), ReadDoc()
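A minimal sketch, not from the deck, of how those componentized steps might be sequenced inside a single script Action; NavCourseMenu() taking a menu name as its argument is an assumption for illustration.

/* Sequence the reusable actions for the "open a Word document" use case. */
Action()
{
    Authenticate();          /* log the virtual user in              */
    NavPortal();             /* land on the portal tab               */
    NavCourse();             /* enter the target course              */
    NavCourseMenu("Docs");   /* open the course documents area       */
    ReadDoc();               /* the measured step: open the document */
    return 0;
}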
Scripting and Parameterization: Example
/**
 * Quick Navigation: Click on Course Menu
 * Params Required: course_id
 * Params Saved: none
 */
CourseMenu()    /* reusable action name */
{
    static char *status = "Course Menu: Course Menu";

    /* Echo status and start the named transaction. */
    bb_status(status);
    lr_start_transaction(status);

    /* HTTP representation with parameterization and abandonment. */
    bb_web_url("{bb_target_url}/webapps/blackboard/content/courseMenu.jsp?mini=Y&course_id={bb_course_pk}", 'l');

    lr_end_transaction(status, LR_AUTO);
    lr_think_time(navigational);
}
Scripting and Parameterization
» Parameterize Dynamically
  » Realistic load simulations test against unique data conditions.
  » Avoid hard-coding dynamic or user-defined data elements.
  » Work with uniform, well-constructed data sets (sequences).
  » Example: Parameterize the username for Authenticate() (see the sketch below).
    » student000000001, student000000002, instructor000000001, admin000000001, observer000000001, hs_student000000001
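A hedged sketch, not from the deck, of what a parameterized Authenticate() action might look like; the login form name and field names (user_id, password) are assumptions for illustration, and {username}/{password} are drawn from the sequential data set above.

/**
 * Authenticate: log in as a parameterized user.
 * Params Required: username, password
 * Params Saved: none
 */
Authenticate()
{
    static char *status = "Login: Authenticate";

    bb_status(status);
    lr_start_transaction(status);

    /* The form and field names below are illustrative, not Blackboard's actual login form. */
    web_submit_form("login",
        ITEMDATA,
        "Name=user_id",  "Value={username}", ENDITEM,
        "Name=password", "Value={password}", ENDITEM,
        LAST);

    lr_end_transaction(status, LR_AUTO);
    lr_think_time(navigational);
}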
Scripting and Parameterization: Example
// Save various folder PKs and go to the course menu folder

/* Parameter name: course_assessments_pk
 * If not found:   issues a warning
 * Finds the value between the left boundary (LB) and right boundary (RB). */
web_reg_save_param("course_assessments_pk", "NotFound=Warning",
    "LB=content_id=_", "RB=_1&mode=reset\" target=\"main\">Assessments",
    LAST);

web_reg_save_param("course_documents_pk", "NotFound=Warning",
    "LB=content_id=_", "RB=_1&mode=reset\" target=\"main\">Course Documents",
    LAST);

These are LoadRunner terminology references; however, other scripting tools use the same constructs.
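A hedged follow-on sketch, not in the deck, showing how a correlated value such as {course_documents_pk} might then be used in the next request; the listContent.jsp path is an assumption for illustration.

/* Use the correlated content PK in the follow-up navigation request. */
bb_web_url("{bb_target_url}/webapps/blackboard/content/listContent.jsp?course_id={bb_course_pk}&content_id=_{course_documents_pk}_1", 'l');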
Scripting and Parameterization: Blackboard Gotchas
» RDBMS Authentication (a sketch of the digest sequence follows below)
  » One-Time Token
  » MD5-Encrypted Password
  » MD5(MD5-Encrypted Password + One-Time Token)
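A minimal sketch of how a script might build the login digest described above, assuming a hypothetical md5_hex() helper (not a LoadRunner built-in) and that the one-time token has already been correlated into {login_token}.

/* Hypothetical helper: writes the 32-character hex MD5 digest of 'input' into 'out'. */
extern void md5_hex(const char *input, char *out);

BuildAuthDigest()
{
    char first_pass[64];
    char combined[256];
    char final_pass[64];

    /* Step 1: MD5 of the clear-text password from the data set. */
    md5_hex(lr_eval_string("{password}"), first_pass);

    /* Step 2: MD5 of (MD5-encrypted password + one-time token),
       where {login_token} was saved earlier with web_reg_save_param(). */
    sprintf(combined, "%s%s", first_pass, lr_eval_string("{login_token}"));
    md5_hex(combined, final_pass);

    /* Expose the result as a parameter for the login POST. */
    lr_save_string(final_pass, "auth_digest");
    return 0;
}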
Scripting and Parameterization: Blackboard Gotchas
» Navigational Concerns
  » Dynamic IDs
    » Tab IDs
    » Content IDs
    » Course IDs
    » Tool IDs
  » Modes
    » Reset
    » Quick
    » View
  » Action Steps
    » Manage, CaretManage, Copy, Remove_Proc
    » Family
Scripting and Parameterization: Blackboard Gotchas
» Transactional Concerns
  » HTTP POST
    » Multiple ID submissions
    » Action Steps
    » Data Values
    » Permissions
    » Metadata
Scripting and Parameterization: Blackboard Gotchas (Example)

/**
 * User Modifies Grade and Submits Change
 * Params Required: Random Number from 0 to 100
 * Params Saved: none
 */
ModifyGrade()
{
    static char *status = "Modify Grades";

    bb_status(status);

    /* Transaction timer. */
    start_timer();
    lr_start_transaction(status);

    web_submit_form("itemGrades",
        "Snapshot=t28.inf",
        ITEMDATA,
        /* Dynamic parameterized values. */
        "Name=grade[0]", "Value={randGrade}", ENDITEM,
        "Name=grade[1]", "Value={randGrade}", ENDITEM,
        "Name=grade[2]", "Value={randGrade}", ENDITEM,
        "Name=grade[3]", "Value={randGrade}", ENDITEM,
        "Name=submit.x", "Value=41", ENDITEM,
        "Name=submit.y", "Value=5", ENDITEM, LAST);

    lr_end_transaction(status, LR_AUTO);
    stop_timer();

    /* Explicit abandonment policy and parameterized think time. */
    abandon('h');
    lr_think_time(transactional);
}
How do we load test?
» Initial Configuration
» Calibration
» Baseline
» Environmental Clean-Up
» Collecting Enough Samples
» Optimization
Load Testing: How To Initially Configure
» Optimize the environment from the start
  » Consider it your baseline configuration
  » Knowledge of embedded sub-systems
  » Previous experience with Blackboard and/or the current deployment configuration
  » Think twice about using the out-of-the-box configuration
Load Testing: Calibration
» Definition: the process of identifying an ideal workload to execute against a system.
» Blackboard Performance Engineering uses two types of calibration (a small worked sketch of the threshold logic follows the charts below):
  » Calibrate to Response Time
  » Calibrate to System Saturation
» Identify the peak of concurrency (the key metric for identifying sessions per hour).
Load Testing: Response Time Calibration
[Chart: response time (y-axis) versus iterations (x-axis); the optimal workload is the point just before the curve crosses the response time threshold line.]
Load Testing: Resource Saturation Calibration
[Chart: resource utilization (y-axis, e.g. CPU) versus iterations (x-axis); the optimal workload is the point just before utilization crosses the resource saturation threshold.]
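A minimal worked sketch, with made-up sample numbers, of the response time calibration idea shown in the charts: the optimal workload is the largest step whose measured response time stays under the threshold line.

#include <stdio.h>

#define STEPS 6

int main(void)
{
    /* Hypothetical measurements: average response time (seconds) at each workload step. */
    int    users[STEPS]     = { 50, 100, 150, 200, 250, 300 };
    double resp_time[STEPS] = { 0.8, 1.1, 1.6, 2.4, 3.9, 6.2 };
    double threshold = 3.0;   /* response time threshold line */
    int    optimal = 0;
    int    i;

    /* The optimal workload is the largest step still under the threshold. */
    for (i = 0; i < STEPS; i++) {
        if (resp_time[i] <= threshold)
            optimal = users[i];
    }

    printf("Optimal workload: %d virtual users\n", optimal);
    return 0;
}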
Load Testing: How To Baseline
» The baseline is the starting point or comparative measurement:
  » Defined use cases
  » Arrival rate, departure rate, and run-time iterations
  » Software/system configuration
» Arrival Rate: the rate at which virtual users are introduced onto a system.
» Departure Rate: the rate at which virtual users exit the system.
» Run-Time Iterations: the number of unique, iterative sessions executed during a measured test (see the small worked example below).
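A tiny worked example, with hypothetical numbers not taken from the deck, of how the arrival rate, departure rate, and run-time iterations relate to the workload shape shown in the next chart.

#include <stdio.h>

int main(void)
{
    /* Hypothetical baseline parameters. */
    int    total_vusers        = 300;   /* peak workload                    */
    double arrival_rate        = 0.5;   /* virtual users introduced per sec */
    double departure_rate      = 1.0;   /* virtual users exiting per sec    */
    int    iterations_per_user = 10;    /* run-time iterations per session  */

    double arrival_period   = total_vusers / arrival_rate;    /* seconds */
    double departure_period = total_vusers / departure_rate;  /* seconds */
    long   total_iterations = (long)total_vusers * iterations_per_user;

    printf("Arrival period:   %.0f s\n", arrival_period);
    printf("Departure period: %.0f s\n", departure_period);
    printf("Total run-time iterations: %ld\n", total_iterations);
    return 0;
}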
Load Testing: How To Baseline
[Chart: virtual users (y-axis) versus time (x-axis), showing the arrival period, the sustained workload with its run-time iterations, and the departure period.]
Load Testing: How To Clean-Up between Tests
» Tests should be identical in every way
» Restore the environment to its previous state
  » Remove data added by the test
  » Truncate logs
» Keep the environment pristine
  » Shut down and restart sub-systems
  » Remove all guessing and "what if" questions
» Automate these steps, because you will test more than once and hopefully more than twice
Load Testing: Samples and Measurements
[Chart: virtual users (y-axis) versus time (x-axis); response time measurement begins after arrivals complete and ends before departures begin. Samples = iterations; calibrated data is more reliable.]
Load Testing: How To Optimize
» Measure against the baseline
  » Instrument one data element at a time
  » Never use results from an instrumentation run
» Introduce one change at a time
» Comparative regression against the baseline
» Changes should be based on quantifiable measurement and explanation
  » Avoid guessing
  » Cut down on Googling (not everything you read on the net is true)
  » Validate improvements through repeatability
What to do with the Results of a Load Test
» Advanced Capacity Planning
» Operational Efficiencies
» Business Process Optimization
Advanced Topic: Behavioral Modeling
» Behavioral modeling is a form of trend analysis:
  » Study navigational and transactional patterns of user activity within the application.
  » Session lengths and click-path depths
  » Study patterns of resource utilization and peak usage
  » Deep understanding of seasonal usage versus general usage adoption.
» Many tools exist to analyze the collected data:
  » Sherlog, Webalizer, WebTrends
Advanced Topic: User Abandonment
» User abandonment is the simulation of a user's psychological patience when transacting with a software application.
» Two types of abandonment (a sketch of a simple check follows below):
  » Uniform (all things equal)
  » Utility (an element of randomness)
» Load tests that do not simulate abandonment are flawed.
» Two great reads:
  » http://www-106.ibm.com/developerworks/rational/library/4250.html
  » http://www.keynote.com/downloads/articles/tradesecrets.pdf
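The abandon('h') call in the earlier examples is a Blackboard-internal helper whose body the deck does not show. A hedged sketch of what a uniform abandonment check could look like in a LoadRunner script; the patience threshold and the choice to end the whole virtual-user session are illustrative assumptions.

/* Hypothetical uniform abandonment check: if the open transaction has already
   taken longer than the user's patience threshold, give up like a real user would. */
#define PATIENCE_SECONDS 8.0

CheckAbandonment(char *transaction)
{
    double elapsed = lr_get_transaction_duration(transaction);

    if (elapsed > PATIENCE_SECONDS) {
        lr_end_transaction(transaction, LR_FAIL);
        lr_error_message("User abandoned after %.1f seconds in %s",
                         elapsed, transaction);
        lr_exit(LR_EXIT_VUSER, LR_FAIL);   /* stop this virtual user's session */
        return 1;
    }
    return 0;
}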
Resources and Citations
Joines, Stacey. Performance Analysis for Java™ Web Sites, First Edition, Addison-Wesley, ISBN: 0201844540.
Maddox, Michael. "A Performance Process Maturity Model," 2004 CMG Proceedings.
Barber, Scott. "Beyond Performance Testing Part 4: Accounting for User Abandonment," http://www-128.ibm.com/developerworks/rational/library/4250.html, April 2004.
Savia, Alberto. http://www.keynote.com/downloads/articles/tradesecrets.pdf, May 2001.