Update:Remora Load Testing
Goals
- Tests should be distributed, run from multiple nodes
- Tests should return req/s capability
- Tests should return metrics on HTTP error codes and/or success/failure rates
- Tests should run with cache and without, and at varying cache-hit rates in between
- Tests should be configurable to have different levels of concurrency
TODO
morgamic
- wikify meeting notes and todo
- come up with test cases / criteria
- come up with URIs to actually test
- work with IT to come up with testing schedule
lars
- find out how to get and parse results from grinder
- document how to set up test nodes, install grinder on them, connect to aggregate box
- write load tests appropriate for the "Pages We Want To Test" section below
- setup a load test strategy based on the "Test Variance" section below
- gather results and export them from grinder console
Grinder
We concluded that Grinder fulfills these requirements and excels in areas where ApacheBench (ab) and httperf fall short.
The Guessing Game
The Grinder results will not be completely accurate; that is the nature of load testing. However, we can use peak/off-peak load numbers to understand how the test results may be skewed by other applications and by higher overall stress on shared resources.
We discussed gathering cumulative NS/app/db stats to get a better sense of what our load-test numbers mean, and to gain some perspective on the margin of error.
mrz is going to give us some numbers based on cumulative Cacti results.
Test Variance
- By concurrent requests
- 100
- 1000
- 2000
- By multiple input vars (GET)
- All unique vars, causing cache-hit to be zero
- Mixture of unique vars, possibly 50% duplicated
- Only one URI across all requests
- Otherwise differentiated by request URI
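The cache-hit variance above can be scripted by controlling how often a request repeats a URI versus carrying a unique query string. A minimal Python sketch (the `build_request_mix` helper and its parameters are illustrative, not part of Grinder):

```python
import random

def build_request_mix(base_uri, n_requests, duplicate_rate, seed=0):
    """Build a list of request URIs in which roughly `duplicate_rate` of
    requests repeat a single cacheable URI (cache hits) and the rest carry
    unique query strings (cache misses). Hypothetical helper for test setup.
    """
    rng = random.Random(seed)
    cached = base_uri + "?q=farmer"  # one repeated, cacheable URI
    uris = []
    for i in range(n_requests):
        if rng.random() < duplicate_rate:
            uris.append(cached)  # duplicate -> cache hit
        else:
            uris.append("%s?q=unique-%d" % (base_uri, i))  # unique -> cache miss
    return uris

# duplicate_rate=0.0 -> all unique vars, cache-hit rate of zero
# duplicate_rate=0.5 -> roughly 50% duplicated
# duplicate_rate=1.0 -> only one URI across all requests
```

The resulting list can be fed to the worker threads in order or shuffled, depending on whether we want cache hits clustered or spread across the run.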
Pages We Want To Test
We should be able to configure two base URIs:
- https://remora.stage.mozilla.org/ (for public and forum pages)
- https://remora-services.stage.mozilla.org/ (for services)
Test URIs, used against test db (remora.stage.mozilla.com):
- /
- /en-US/search
- /en-US/search/?q=farmer
- /en-US/addons/browse/type:1
- /en-US/addons/browse/type:4
- /en-US/addons/browse/type:1/cat:all
- /en-US/addons/display/7
- /en-US/reviews/display/7
- /en-US/addons/rss/newest
Preview URIs, used against migrated db (preview.addons.mozilla.org):
- /
- /en-US/search
- /en-US/search/?q=farmer
- /en-US/addons/browse/type:1
- /en-US/addons/browse/type:4
- /en-US/addons/browse/type:1/cat:all
- /en-US/addons/display/7
- /en-US/reviews/display/7
- /en-US/addons/rss/newest
Pages we want to be able to test:
- Main page
- Search page
- Category listing
- Add-on main page
- Services
- Update check
- Blocklist
- PFS
- RSS / Feeds
- Vanilla
- Addon-specific discussion page
- Top page
Grinder Installation
Grinder is a distributed Java application scriptable in Jython. A central GUI application known as the Console marshals a herd of Agents running on other machines. The Agents spawn processes which, in turn, run tests in threads. The tests, written in Jython, are intended to hit some resource repeatedly. The Agents collect response statistics from the processes and report back to the Console. The Console aggregates and summarizes the data for presentation within its GUI. The Console also can save the aggregated results as a CSV file.
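To make the structure concrete, here is a minimal sketch of a test script using the standard Grinder 3 Jython API. The `net.grinder` classes are supplied by the Grinder runtime, so this runs inside an Agent worker process, not standalone; the target URL is one of our test URIs, and the test number and name are arbitrary.

```python
# Minimal Grinder test script (Jython). Grinder instantiates TestRunner
# once per worker thread and calls it once per run.
from net.grinder.script import Test
from net.grinder.script.Grinder import grinder
from net.grinder.plugin.http import HTTPRequest

# Wrap an HTTPRequest in a numbered Test so the Console aggregates its
# timing and success/failure statistics under that number.
search_test = Test(1, "Search page")
search_request = search_test.wrap(HTTPRequest())

class TestRunner:
    def __call__(self):
        result = search_request.GET("https://remora.stage.mozilla.org/en-US/search")
        grinder.logger.output("Status: %d" % result.statusCode)
```

Each additional page we want to measure would get its own `Test` number, so its requests show up as a separate row in the Console.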
Setting up Grinder
Setting up the application is the same for the Agents and the Console. We will probably create a custom tar file for distribution. Once untarred, only an environment setup file needs to be edited to adjust paths.
(TO-DO enumerate actual steps)
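Each Agent reads a grinder.properties file at startup; this is where process/thread counts (the concurrency knob from the Test Variance section) and the Console address are set. A sketch, in which the script name and console hostname are placeholders for our actual values:

```
# grinder.properties -- read by each Agent at startup.
grinder.script = remora_tests.py      # hypothetical test script name
grinder.processes = 2                 # worker processes per Agent
grinder.threads = 50                  # threads per process (concurrency knob)
grinder.runs = 0                      # 0 = run until stopped from the Console
grinder.useConsole = true
grinder.consoleHost = console.example.org   # placeholder for the aggregate box
grinder.consolePort = 6372            # Grinder's default console port
```

Total concurrency is Agents x processes x threads, so the 100/1000/2000 concurrent-request levels can be reached by scaling these values across the test nodes.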
Running the tests
Tests are run manually by starting the Agents on each test node and then controlling them from the Console.
Agents are started at a command line by typing:
...