Signal From Noise
Making sense of the Talos results
This is a joint project among the A-team, Releng, Webdev, and Metrics.
Overview
Historically we have had an 'acceptable' range of fluctuation in our Talos numbers. Our methods of managing and tracking the numbers have all revolved around running a test multiple times and generating a single number that we can track over time. This is great for long-term tracking, but when we look at what that number represents and why it fluctuates, there is a lot of room for error.
We want to do a better job of generating that one tracking number. We also want to revisit the way we test things and make sure we are running the right tests, and the right number of iterations, to get a reliable data point. Most likely this involves looking at every page we have and tracking each page individually, not as a small piece of a larger set of pages.
Goals
- define what is signal and what is noise
- understand the distribution of numbers and have confidence that our representation is meaningful
Bugs
Signal from Noise bugs are marked with the SfN whiteboard entry
Drivers
- side by side staging : jmaher
- graphserver : jeads + BYK
- pageloader and other tools : jhammel
Background
Most of this project is outlined well on the [Talos Investigation] page.
Meetings
Meetings are every Thursday at 11AM Pacific Time.
Here is our meeting page - take a look for more details and notes from previous meetings.
Datazilla Meetings
The Datazilla project is holding focus group meetings with interested developers to judge our progress toward fixing use cases that developers and tree sheriffs care about.
See our Datazilla Meeting Page for information and notes from those.
Action Items
The goal by March 2012 is to:
- Have the tools (pageloader, talos, graphserver) retooled so we can research new tests and run tests in a more reliable fashion
- Implement and roll out tdhtml using the new toolchain
- Have a process in place for adding new tests and pagesets into the tool set
There are general time estimates throughout this page; they are just placeholders. While the development time for a change might be 2 hours, 2 days are budgeted for it. This accounts for time to develop, test, and document the patch; time for the reviewer to review it and any back and forth; and finally time for staging and coordinating the deployment of a new talos.zip.
All in all there are an estimated 82 work days to achieve success this quarter. These 82 days do not include core development of the graph server, but they do include us meeting with, reviewing for, and helping the UI folks on the graph server.
Milestone 1
25 work days
- discard the first iteration of a page load (2 days to get landed, then SxS for a rollout - good practice) (jhammel - it would be nice to know *why* the difference) (a sketch of this idea follows the list)
- add options to pageloader for alternative page loading and measurements; make it more flexible as to how pages are loaded (the order, etc). 1 week
- add options to talos configuration to support new pageloader requirements. 2 days (jhammel - I wouldn't mind taking this)
- create a v1 of the dhtml test using new methodology. 1 week
- work with rhelmer and jeads to start discussion of what data we want. 2 weeks
- samples of work that :slewchuk did, mixed with initial data from dhtml results
- Initial version of database requirements to host new data. 3 days
- Blog frequently about progress and goals: get the word out, get feedback, cultivate knowledge
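As a rough sketch of the "discard the first iteration" idea above, assuming median-of-replicates reporting (the data is hypothetical and this is not the actual pageloader patch):

def page_value(replicates):
    # Drop the first (cold/warm-up) load of the page, then take the
    # median of the remaining replicates.
    trimmed = sorted(replicates[1:])
    mid = len(trimmed) // 2
    if len(trimmed) % 2:
        return trimmed[mid]
    return (trimmed[mid - 1] + trimmed[mid]) / 2.0

# Example with a hypothetical slow first load:
print(page_value([1135, 852, 920, 865, 809]))  # 858.5 once the first load is dropped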
Milestone 2
20 days of work
- Validate tdhtml data with metrics. 2 days
- Generate a single 'metric' to track tdhtml as we currently do. 2 days
- Ensure core database and input methods for data are deployed. 2 days
- Start rolling out on branches with side by side staging. 4 days
- Beta version of the UI live for initial data from the branches. 1 week
- Start investigating tsvg and a11y for optimal sampling sizes and accuracy (a sample-size sketch follows this list). 1 week
- Continue to blog and post to newsgroups. n/a
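For the tsvg/a11y sampling-size investigation above, one possible framing (a hedged sketch, assuming roughly normal, independent replicates, which is exactly what the investigation needs to verify): estimate from the observed variance how many replicates are needed to pin the mean down to a target relative error.

import math

def needed_runs(replicates, rel_err=0.01, z=1.96):
    # How many runs would keep the mean within rel_err of the true value
    # at ~95% confidence, given the spread we have observed so far?
    n = len(replicates)
    mean = sum(replicates) / float(n)
    var = sum((x - mean) ** 2 for x in replicates) / (n - 1)
    return int(math.ceil((z * math.sqrt(var) / (rel_err * mean)) ** 2))

# Hypothetical replicates for a single page:
print(needed_runs([852, 920, 865, 809, 858, 851, 836, 837]))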
Milestone 3
24 work days
- Continue rolling out tdhtml to other branches. 4 days
- Enhance tools like compare-talos and regression-finder to work with new tdhtml. 1 week
- Write analysis toolchain for investigating new tests and pages (i.e. the work we do on tsvg and a11y should be automated). 1 week
- Integrate analysis toolchain into existing tools as much as possible. 1 week
- Version 1.0 of the new UI should be available: multiple views on the same data, as well as drill-down from a given data point or time window. 1 week
Milestone 3.14 (bonus work if all goes well)
13 work days
- Define requirements for a Version 2.0 of the new UI. 2 days
- Start rolling out tsvg and a11y. 3 days
- Start investigating tp5 (or maybe it is time for tp6 and we start there). 1 week
- Enhance the compare-talos toolchain to show differences from a try server run to the baseline (easier talos development as well as Firefox development). 3 days
Related Work
We need to be considerate of other projects and try to coordinate as much as possible.
- mozbase
- we will be fixing up talos to use mozprocess, mozprofile, and mozrunner. This doesn't intersect with SfN work, but if we are doing a large staging run it would be beneficial to bundle the two together. staging
- mozharness
- again, no impact on this project. staging,SxS
- python 2.4->2.6+
- no real impact on this project. staging,SxS
- jetpack talos
- most likely some changes to talos, primarily focused on ts, maybe some graphserver work required
- AMO maintenance
- no impact on this project
- OSX RSS from pageloader
- small talos and config tweaks for tp5. staging,SxS
Possible Reshuffling
Most of the other work requires staging and side by side (SxS) running to ensure we don't fudge the numbers.
- Can our toolchain make the side by side easier and less painful? (jhammel - this would be a good thing to blog about)
We won't be modifying talos proper much, which means that the work in these other projects shouldn't affect SfN.
- Will we be comfortable doubling our work in staging and SxS? (jhammel - we should probably peg more carefully to versions of mozbase software)
Contacts
- ateam: BYK, jeads, jhammel, jmaher
- metrics: christina
- releng: armenzg ??
- webdev: rhelmer
UI Prototype
The prototype user interface can be reached on Mozilla-MPT by adding:
10.8.73.31 datazilla
to your /etc/hosts file and then directing your browser to:
/datazilla/views
The source code for datazilla can be found at https://github.com/jeads/datazilla
Mockups
A set of user interface mockups can be found at Media:TalosSignalFromNoiseMocks.pdf. This document presents a collection of ideas for extending the graphs-new interface to manage different types of data with multiple visualization strategies.
Use Cases
Currently:
- Firefox developer:
- push patch to mozilla-central, expect green talos results
- all results are green unless the test fails to complete, in which case it is red
- notification to dev.tree-management indicates a regression
- developer goes to graphs-new and looks at the (test, platform, branch) graph
- maybe compares to other platforms or branches
- Talos developer
- adds new feature to talos with expected change in numbers
- run change side by side as a new test name for 1 week
- browse to graphs-new to view new_test vs old_test to look at raw data points over a few days on each platform
Proposed 1 (assuming 1% deviation):
- Firefox developer
- push patch to mozilla-central, expect green talos results
- number outside of 2% from the gold standard, run turns orange (see the sketch after this use case)
- orange run has link on tbpl to graph server
- graph server has a quick line of historical data and other platforms
- then a focused section of what the gold standard is and what that run produced
- it would be nice to see what the previous 5 runs had in terms of numbers, as well as all other platforms
- no need for notification mails to dev.tree-management since this is managed in tbpl
- FLAW: if a Firefox change legitimately moves the number (up or down), how do we make that the new standard?
- maybe the web interface can have a way to change the number on the fly and put a bug/comment for the adjustment
- Talos developer
- adds new feature to talos with expected change in numbers
- while pushing, add an entry to the graph server with the new expected number
- no need for side by side since we are just comparing to a known standard number
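A minimal sketch of the proposed check (the 2% threshold, the key format, and the numbers are illustrative assumptions, not the actual tbpl/graph server logic):

# Gold standard values keyed by (test, platform, branch); illustrative only.
GOLD_STANDARD = {("tdhtml", "win7", "mozilla-central"): 853.5}
THRESHOLD = 0.02  # the 2% band from the use case above

def classify(test, platform, branch, value):
    standard = GOLD_STANDARD[(test, platform, branch)]
    deviation = abs(value - standard) / standard
    return "orange" if deviation > THRESHOLD else "green"

print(classify("tdhtml", "win7", "mozilla-central", 880.0))  # ~3% off -> orange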
Data
Current:
- data from tests (tp5 sample)
- noisy output on test console (coming from pageloader)
NOISE: |i|pagename|median|mean|min|max|runs|
NOISE: |0;thesartorialist.blogspot.com;852;864.3333333333334;809;1135;951;852;920;865;809;858;851;1135;836;837
NOISE: |1;cakewrecks.blogspot.com;264;266.55555555555554;252;651;651;263;252;273;264;275;260;292;268;252
- data sent to the graph server
0,852.00,thesartorialist.blogspot.com
1,264.00,cakewrecks.blogspot.com
Right now we are sending the median value for each page (excluding the highest value in the set) to the graph server. On the graph server, we [calculate our metric] for tp5 by averaging all of the uploaded median values except for the max value (a sketch of both steps follows this list).
- TODO: define the perf counters that we collect and upload.
- how it is stored
- volume
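A hedged sketch of the two aggregation steps described above, using the tp5 sample data; the actual pageloader and graph server code may differ in detail:

def median_without_max(values):
    # Pageloader side: drop the single highest replicate for a page,
    # then take the median of what is left.
    vals = sorted(values)[:-1]
    mid = len(vals) // 2
    return vals[mid] if len(vals) % 2 else (vals[mid - 1] + vals[mid]) / 2.0

def tp5_metric(page_values):
    # Graph server side: average the uploaded per-page values,
    # excluding the highest one.
    vals = sorted(page_values)[:-1]
    return sum(vals) / float(len(vals))

pages = {
    "thesartorialist.blogspot.com": [951, 852, 920, 865, 809, 858, 851, 1135, 836, 837],
    "cakewrecks.blogspot.com": [651, 263, 252, 273, 264, 275, 260, 292, 268, 252],
}
per_page = dict((name, median_without_max(runs)) for name, runs in pages.items())
# per_page matches the values uploaded above: 852 and 264.
# With only two pages the "exclude max" step drops half the data;
# the real tp5 pageset has many more pages.
print(tp5_metric(per_page.values()))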
Proposed
- data from tests
- how it is stored
- volume
Auto-tools/Projects/Signal From Noise/JSON Ingestion (jhammel - I have no idea why the above useless page is linked to; it should be deleted and the link removed)
Links
- https://wiki.mozilla.org/Metrics/Talos_Investigation
- http://shawnwilsher.com/archives/tag/regression
- https://groups.google.com/forum/#!topic/mozilla.dev.planning/nxR6tcDmZWQ
- https://groups.google.com/forum/#!msg/mozilla.dev.platform/kXUFafYInWs/XRCsrapUUGAJ
- The script to pull the talos data out of elastic search: https://github.com/salamand/ESTalosPull
- The Log harvester that helped pull logs directly from pulse: https://github.com/salamand/PulseLogHarvester
- https://wiki.mozilla.org/Perfomatic#Architecture
- bug 706912
- where regression emails go: http://groups.google.com/group/mozilla.dev.tree-management/topics?lnk=srg&pli=1
- http://people.mozilla.org/~jmaher/sxs/sxs.html
- http://k0s.org/mozilla/blog/20120131164249
- https://etherpad.mozilla.org/graphserver-next
- https://plus.google.com/u/0/108996039294665965197/posts/8GyqMEZHHVR on bimodality
- http://www-plan.cs.colorado.edu/diwan/asplos09.pdf
- https://plus.google.com/u/0/108996039294665965197/posts/8E7zHQuTaj1
- http://www.jerrydallal.com/LHSP/LHSP.HTM
- http://www.jerrydallal.com/LHSP/npar.htm
- http://datazilla.readthedocs.org/en/latest/
- https://github.com/mozilla/datazilla-metrics