Automated Performance Testing w/NeckoNet (Stone Ridge)
Summary
The goal of this project (Stone Ridge) is to develop a system that can run automated performance tests every day against different network conditions, simulated by NeckoNet. The results of these tests will be pushed to a public graph server.
People
- Nick Hurley (primary developer for NeckoNet) and Josh Aas will own the project.
- Patrick McManus will work on developing the network profiles we test against.
- Mozilla's automation team (including Clint Talbert and Dan Parsons) will help get servers, test automation, and graphing set up.
- Honza Bambas will develop the performance tests.
Schedule
We'll be reporting results to a graph server from all three tier-1 platforms in Q3 2012.
Results
Results are reported at https://datazilla.mozilla.org/stoneridge/.
Infrastructure
Each NeckoNet proxy will run RHEL on its own low-power HP server. NeckoNet proxies will not run in VMs, so as to avoid potential network interference from a VM hypervisor. For security reasons, the NeckoNet proxies will not have internet connections. We initially plan to deploy three NeckoNet proxies.
Test client machines can run any OS and may be VMs. These machines will be configured to run tests against the NeckoNet proxies and report results to a graph server. Test clients will also not have connections to the internet for security purposes.
Supported NeckoNet Profiles
None of these profiles is cutting edge, so they should make reasonable broad-based targets. ISP implementations vary widely, so it's easy to find counterexamples, but I would argue for optimizing for the lower end where we make choices.
- Average Broadband
- An upper bound on things worth measuring, though connections certainly do get faster than this: 90 ms RTT, 10 Mbit/s of bandwidth, 0 ms jitter.
- Modern Mobile
- A semi-advanced 3G or poor 4G network: 150 ms RTT, 1 Mbit/s of bandwidth, and 20 ms of jitter. This technology sometimes performs better than this, but these figures represent a common point of degradation.
- Classic Mobile
- Something like an HSDPA or even EDGE handset: 300 ms RTT, 400 kbit/s of bandwidth, and 40 ms of jitter.
In all cases the bandwidth should be shared across all client IPs. I didn't model packet loss here, even though it can be an issue, because its randomness would introduce too much variability into short tests. As a separate effort we could build tests with deterministic loss.
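As a rough illustration (not the actual NeckoNet implementation), the sketch below shows one way these three profiles could be approximated on a Linux proxy using tc with netem and tbf. The interface name eth0, the tbf buffer/limit values, and the choice to apply the full RTT as one-way delay on a single interface are all assumptions made for the example.

#!/usr/bin/env python
# Illustrative sketch only: approximating the three NeckoNet profiles with
# tc/netem (delay + jitter) and a child tbf qdisc (aggregate bandwidth cap).
# Interface name and tbf buffer/limit values are assumptions, not NeckoNet's
# real configuration.

import subprocess

# Profile parameters taken from the list above. The RTT figures are round
# trip; this sketch simply applies the full value as one-way delay on the
# proxy's client-facing interface.
PROFILES = {
    "average_broadband": {"delay_ms": 90,  "jitter_ms": 0,  "rate": "10mbit"},
    "modern_mobile":     {"delay_ms": 150, "jitter_ms": 20, "rate": "1mbit"},
    "classic_mobile":    {"delay_ms": 300, "jitter_ms": 40, "rate": "400kbit"},
}

def apply_profile(name, dev="eth0"):
    p = PROFILES[name]
    # Clear any existing root qdisc; ignore the error if none is present.
    subprocess.call(["tc", "qdisc", "del", "dev", dev, "root"])
    # Root netem qdisc adds delay and jitter to every packet leaving dev, so
    # the shaping is shared across all client IPs, as required above.
    subprocess.check_call(
        ["tc", "qdisc", "add", "dev", dev, "root", "handle", "1:", "netem",
         "delay", "%dms" % p["delay_ms"], "%dms" % p["jitter_ms"]])
    # A child token-bucket filter caps the aggregate bandwidth of the link.
    subprocess.check_call(
        ["tc", "qdisc", "add", "dev", dev, "parent", "1:1", "handle", "10:",
         "tbf", "rate", p["rate"], "buffer", "3200", "limit", "30000"])

if __name__ == "__main__":
    apply_profile("modern_mobile")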
Supported Test Client Configurations
For starters, we will have:
- One RHEL 6, 64-bit
- One Windows 7 Professional, 64-bit
On the server side, there are three RHEL 6, 64-bit machines, each providing a network under one of the conditions listed above. One of these also doubles as the "master", which talks to the outside world for things like downloading builds of Firefox and reporting results from all clients to the graph server.
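For illustration only, here is a minimal sketch of how the master might relay per-client result files to a graph server. The results directory layout, the JSON file format, and the RESULTS_URL endpoint are hypothetical; this is not the real Stone Ridge reporting code or the Datazilla API.

#!/usr/bin/env python
# Hypothetical sketch: the master gathers result files dropped by the test
# clients and forwards them upstream. Paths and the endpoint are placeholders.

import glob
import json
import urllib2  # Python 2, matching the RHEL 6 era of this project

RESULTS_URL = "https://example.invalid/stoneridge/results"  # placeholder

def upload_results(results_dir="/var/stoneridge/results"):
    # Assume each client drops one JSON file per test run into results_dir;
    # the master validates each file and relays it to the graph server.
    for path in glob.glob(results_dir + "/*.json"):
        with open(path) as f:
            payload = json.dumps(json.load(f))  # re-serialize to validate
        req = urllib2.Request(RESULTS_URL, payload,
                              {"Content-Type": "application/json"})
        urllib2.urlopen(req)

if __name__ == "__main__":
    upload_results()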
Performance Tests
Test development is tracked in bug 728435.