Transcript: DevCon_2012_BenchmarkFramework

Alfresco Benchmark Framework
Derek Hulley
Repository and Benchmark Team
Some History
• 2008:
  • Simple RMI-based remote loading of FileFolderService
  • Unisys 100M-document repository-specific test
• Then:
  • QA wrote JMeter scripts for specific scenarios
  • Customers, partners and field engineers provided tests for specific scenarios
  • Hardware shared with QA and dev, as required
• Mid 2011:
  • RackSpace benchmark environment commissioned
• Late 2011:
  • Failed attempt to simulate real Share-repo interaction using JMeter (AJAX, etc)
  • First major proof of the 4.0 architecture started (JMeter with sequential, heaviest calls)
    • Later called BM-0001
    • Client load driver was resource-intensive and results had to be collated from 3 servers
• Early 2012:
  • Benchmark Projects Lead role created
  • Evaluation of benchmarking technology
• Mid 2012:
  • Successful approximation of Share-repo interaction using JMeter
  • Benchmark Projects formalized
  • BM-0002 executed (ongoing for regression testing)
  • BM-0009 started
(Some of the) Benchmark Framework Requirements
• Real browser interaction (sketched below)
  • Javascript, asynchronous calls, resource caching, etc
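The "real browser interaction" requirement is what Selenium WebDriver (the common library named on the architecture slide) provides. A minimal sketch, assuming an illustrative target URL, form field names and credentials rather than the framework's actual test code:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    public class ShareLoginSketch {
        public static void main(String[] args) {
            // A real browser executes the page's Javascript and AJAX calls
            // and caches resources, which a raw HTTP driver such as JMeter
            // can only approximate.
            WebDriver driver = new FirefoxDriver();
            try {
                driver.get("http://localhost:8080/share/");   // illustrative target
                driver.findElement(By.name("username")).sendKeys("admin");
                driver.findElement(By.name("password")).sendKeys("admin");
                driver.findElement(By.name("submit")).click();
                // Wait for the asynchronously rendered dashboard instead of
                // assuming the page is done when the HTTP response arrives.
                new WebDriverWait(driver, 10)
                        .until(ExpectedConditions.titleContains("Dashboard"));
            } finally {
                driver.quit();
            }
        }
    }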
• Scaling (sketched below)
  • Scale to thousands of active sessions
  • Elastic client load drivers
  • Shared thread pools
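A minimal sketch of the shared-thread-pool idea using java.util.concurrent: one bounded pool per driver machine serves every simulated session, so the session count is not capped by the thread count. executeNextEvent is a hypothetical stand-in for real event processing:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class SharedPoolSketch {
        public static void main(String[] args) {
            // One pool shared by every test on the driver machine: the pool
            // size, not the simulated session count, bounds client threads.
            ExecutorService sharedPool = Executors.newFixedThreadPool(200);
            for (int session = 0; session < 5000; session++) {
                final int id = session;
                sharedPool.submit(new Runnable() {
                    public void run() {
                        executeNextEvent(id);
                    }
                });
            }
            sharedPool.shutdown();
        }

        static void executeNextEvent(int sessionId) {
            // Placeholder: would pop one event for this session and run it.
        }
    }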
• Results (sketched below)
  • Durable and searchable results
  • Support real-time observation and analysis of results
  • Every logical action must be recorded
  • Every result (positive or negative) must be recorded
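A sketch of how such recording could look with the MongoDB Java driver; the database, collection and field names are assumptions, not the framework's actual schema. The point is one durable, queryable document per logical action, written on success and on failure alike:

    import com.mongodb.BasicDBObject;
    import com.mongodb.DBCollection;
    import com.mongodb.Mongo;

    public class ResultRecorder {
        public static void main(String[] args) throws Exception {
            DBCollection results = new Mongo("mongo-host")
                    .getDB("bm")
                    .getCollection("test.run.results");

            // One document per logical action, so results can be searched
            // and charted while the test is still running.
            BasicDBObject event = new BasicDBObject()
                    .append("event", "createDocument")
                    .append("startTime", System.currentTimeMillis())
                    .append("durationMs", 42L)
                    .append("success", true);
            results.insert(event);
        }
    }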
• Tests
  • Treated as real software (automated integration testing, major and minor versions, etc)
  • Reusable code or components
  • Aware of server state
  • Different tests can share the same data set-up
• Execution (sketched below)
  • Remote control from desktop
  • Override default test properties
  • Concurrent test execution
  • Start / Stop / Pause / Reload
  • Automated reloading
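A sketch of the property-override requirement using plain java.util.Properties (the property names are made up): the test ships defaults, and a run overrides only what it needs, for example with values supplied remotely from the desktop client:

    import java.util.Properties;

    public class TestRunProperties {
        public static void main(String[] args) {
            // Defaults shipped with the test definition...
            Properties defaults = new Properties();
            defaults.setProperty("workflow.rate", "10");    // illustrative names
            defaults.setProperty("session.count", "1000");

            // ...overridden per run without touching the test code.
            Properties run = new Properties(defaults);
            run.setProperty("workflow.rate", "20");

            System.out.println(run.getProperty("workflow.rate"));  // 20 (override)
            System.out.println(run.getProperty("session.count"));  // 1000 (default)
        }
    }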
Benchmark Server Architecture
[Architecture diagram] A desktop client provides configuration and reporting. ZooKeeper coordinates the cluster, while a MongoDB tier (shown as three nodes) holds the server configuration, test definitions, test run definitions, test run event queues, test run results and data mirror collections. Benchmark Server 1 through Benchmark Server N each carry a thread pool and common libraries (e.g. WebDriver), and together they drive the test target. (A sketch of the event-queue side of this picture follows.)
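A hypothetical sketch of that event-queue side: each benchmark server atomically claims pending events from the shared MongoDB queue, so servers 1..N can drain one queue without double-processing. The collection name and the "state" field are assumptions:

    import com.mongodb.BasicDBObject;
    import com.mongodb.DBCollection;
    import com.mongodb.DBObject;
    import com.mongodb.Mongo;

    public class EventLoopSketch {
        public static void main(String[] args) throws Exception {
            DBCollection queue = new Mongo("mongo-host")
                    .getDB("bm")
                    .getCollection("test.run.events");

            while (true) {
                // findAndModify atomically claims one pending event, so N
                // servers can safely share a single queue.
                DBObject event = queue.findAndModify(
                        new BasicDBObject("state", "pending"),
                        new BasicDBObject("$set",
                                new BasicDBObject("state", "claimed")));
                if (event == null) {
                    Thread.sleep(100);  // queue empty; back off briefly
                    continue;
                }
                // Dispatch the claimed event to the shared thread pool
                // (see the earlier pool sketch).
            }
        }
    }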
Demo: Modifying Test Parameters During Run (one possible mechanism is sketched after the list)
• HTTP connection pool refreshing
• Paused test
• Continued test
• Doubled workflow rate
• Halved workflow rate
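One plausible mechanism for the live rate changes shown in the demo (a sketch under assumptions, not the demo's actual implementation): the run's property document in MongoDB is updated in place, and drivers re-read it between events:

    import com.mongodb.BasicDBObject;
    import com.mongodb.DBCollection;
    import com.mongodb.DBObject;

    public class LiveRateChange {
        // Doubling the workflow rate is just an update to the run's property
        // document; drivers that poll the document pick it up mid-run.
        static void doubleWorkflowRate(DBCollection runProps) {
            DBObject current = runProps.findOne(
                    new BasicDBObject("name", "workflow.rate"));
            if (current == null) {
                return;  // property not defined for this run
            }
            long rate = Long.parseLong(current.get("value").toString());
            runProps.update(
                    new BasicDBObject("name", "workflow.rate"),
                    new BasicDBObject("$set",
                            new BasicDBObject("value", String.valueOf(rate * 2))));
        }
    }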