WARNING WARNING WARNING - READ FIRST
Both Stone Ridge and NeckoNet are obsolete and no longer maintained. This information exists here for historical purposes only.
Automated Performance Testing w/NeckoNet (Stone Ridge)
Summary
The goal of this project (Stone Ridge) is to develop a system that can run automated performance tests every day against different network conditions, simulated by NeckoNet. The results of these tests are pushed to a public graph server.
People
- Nick Hurley (primary developer for NeckoNet) and Josh Aas will own the project.
- Patrick McManus will work on developing the network profiles we test against.
- Mozilla's automation team (including Clint Talbert and Dan Parsons) will help get servers, test automation, and graphing set up.
- Honza Bambas will develop the performance tests.
Schedule
We'll be reporting results to a graph server from all three tier-1 platforms in Q3 2012.
Results
Results are reported here. Note: for now you must select Firefox 18 or later in the "Control Panel" to see results.
Infrastructure
We currently have three NeckoNet servers, all HP machines running RHEL 6.2. NeckoNet servers do not run in VMs, to avoid potential network interference from a VM hypervisor. Each server has dual NICs, one on our private test network and one on the outside network. One server doubles as the "master", which handles tasks such as downloading Firefox builds and reporting results from all clients to the graph server.
We currently have three test client machines: one OS X 10.8, one RHEL 6.2, and one Windows 7.
These machines are configured to run tests against the NeckoNet servers and report results to the graph server. Test clients also have two NICs, one on the private test network and one on the outside network.
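As a concrete illustration of that dual-NIC layout, a test harness can bind its sockets to the client's private-network address so measurement traffic never crosses the outside interface. This is only a sketch: the addresses and the raw HTTP request below are hypothetical, not NeckoNet's actual configuration.

 import socket
 
 # Hypothetical addresses for illustration: the client's IP on the private
 # test network and a NeckoNet server reachable on that network.
 PRIVATE_NIC_IP = "10.0.0.12"
 NECKONET_SERVER = ("10.0.0.2", 80)
 
 sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
 sock.bind((PRIVATE_NIC_IP, 0))   # pin outgoing traffic to the private NIC
 sock.connect(NECKONET_SERVER)
 sock.sendall(b"GET / HTTP/1.1\r\nHost: neckonet\r\nConnection: close\r\n\r\n")
 reply = sock.recv(4096)          # first chunk of the server's response
 sock.close()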
Supported NeckoNet Profiles
None of these profiles is cutting edge, so they should make reasonable broad-based targets. Implementations by ISPs vary widely, so it's easy to find counterexamples, but I would argue for optimizing for the lower end where we make choices.
- Average Broadband
- An upper bound on things worth measuring, though connections certainly do get faster than this: 90 ms RTT, 10 Mbit/s of bandwidth, 0 jitter.
- Modern Mobile
- A semi-advanced 3G or a bad 4G network: 150 ms RTT, 1 Mbit/s of bandwidth, and 20 ms of jitter. These technologies sometimes perform better than this, but this seems to be a common point of degradation.
- Classic Mobile
- Something like an HSDPA or even EDGE handset: 300 ms RTT, 400 kbit/s of bandwidth, and 40 ms of jitter.
In all cases the bandwidth should be shared across all IPs. I didn't model loss here, even though it can be an issue, because its randomness would introduce far too much variability into short tests. As a separate effort we could build tests with deterministic loss.
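For reference, here is a minimal sketch of how profiles like these can be realized with the standard Linux traffic-control tools (tc with netem for delay/jitter and tbf for bandwidth) available on RHEL. The interface name, the buffer/limit sizes, and the choice to apply the full RTT as one-way delay on the server's test NIC are assumptions for illustration, not necessarily what NeckoNet itself does.

 import subprocess
 
 TEST_NIC = "eth1"  # assumption: the server's private-test-network interface
 
 # profile name -> (one-way delay, jitter, bandwidth cap), taken from the
 # descriptions above; the full RTT is applied as delay in one direction.
 PROFILES = {
     "broadband":      ("90ms",  "0ms",  "10mbit"),
     "modern_mobile":  ("150ms", "20ms", "1mbit"),
     "classic_mobile": ("300ms", "40ms", "400kbit"),
 }
 
 def apply_profile(name):
     delay, jitter, rate = PROFILES[name]
     # Clear any previous shaping; harmless if none was installed.
     subprocess.call(["tc", "qdisc", "del", "dev", TEST_NIC, "root"])
     # netem adds latency and jitter to everything leaving this interface.
     subprocess.check_call(["tc", "qdisc", "add", "dev", TEST_NIC, "root",
                            "handle", "1:0", "netem", "delay", delay, jitter])
     # A token-bucket filter underneath caps the bandwidth, which is shared
     # across all client IPs because shaping happens at the interface level.
     subprocess.check_call(["tc", "qdisc", "add", "dev", TEST_NIC,
                            "parent", "1:1", "handle", "10:", "tbf",
                            "rate", rate, "buffer", "1600", "limit", "3000"])
 
 apply_profile("modern_mobile")

Note that netem's loss option injects random rather than deterministic loss, which is exactly the variability concern raised above.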
Performance Tests
Test development is tracked in bug 728435.
To-Do List
- Get OS X test client reporting results.
- Ability to push custom builds for performance testing.
- Write more tests.
- Add TLS/OCSP testing capabilities.
- Add ability to measure impact on pageload in addition to network-level results.