Auto-tools/Projects/Signal From Noise/Meetings/2012-05-17

= Previous Action Items =
* [ctalbert,carljm,jhammel,jmaher] - look at datazilla VM
* [jmaher] - identify tp5 tests that hit the internet, give christina a set of numbers with those pages missing
* [christina] - analyze reduced pageset for SfN metric


= Metric Calculations =
* 70 pages still are not fitting in the model :(
** will provide top offenders to a*team (see the sketch below)
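The notes do not record how "fitting in the model" is checked; the sketch below is one illustrative way to rank top offenders, assuming a normality test over each page's Talos replicates. The test choice, cutoff, and sample numbers are assumptions for the example, not the team's actual method.

<pre>
# Hypothetical sketch: rank pages whose replicate load times look least like
# the assumed (normal) noise model, so the worst offenders can be reported first.
# Data values are made up for illustration only.
from scipy import stats

replicates = {
    "example-page-1.html": [212.0, 214.5, 213.1, 215.0, 212.8],
    "example-page-2.html": [180.2, 181.0, 340.5, 179.8, 341.1],  # bimodal-looking
}

def top_offenders(pages, alpha=0.05):
    """Return (page, p-value, misfit?) tuples, worst fit first.

    A small Shapiro-Wilk p-value means the replicates are unlikely to come
    from a normal distribution, i.e. the page does not fit this simple model.
    """
    scored = []
    for name, times in pages.items():
        _, p = stats.shapiro(times)
        scored.append((p, name))
    scored.sort()  # lowest p-value (worst fit) first
    return [(name, p, p < alpha) for p, name in scored]

for name, p, misfit in top_offenders(replicates):
    print("%-22s p=%.4f %s" % (name, p, "MISFIT" if misfit else "ok"))
</pre>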


= Datazilla =


= Page Specific Views =


= Compare Talos Functionality =

= Round Table =
* proposal for moving forward
** focus on tdhtml and tsvg as page-centric
*** find a model for pages that can detect regressions with a single new data point (a rough sketch follows this list)
*** display data on graphs for investigation purposes
*** determine a metric to report for pass/fail purposes and for tracking as a page set (mochitests have pass/fail/todo, and if 1 test fails we turn the job orange; maybe we do something similar here)
** if a page doesn't fit a model, then let's not run it and file a bug to investigate the page (80/20 rule here)
** consider a "maybe" flag (probably orange) for results that don't match our model but that we cannot programmatically determine to be a regression or a measurable improvement
* new color for tbpl? ("You might have regressed?")
* killing pages for network access: https://bugzilla.mozilla.org/show_bug.cgi?id=720852 (could just use datazilla)
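The proposal above does not spell out the per-page model or thresholds; the sketch below is one possible reading, assuming a per-page baseline of recent results and simple sigma bands, with a "maybe" state in between. Function names, thresholds, and sample numbers are invented for illustration and are not the team's actual model.

<pre>
# Illustrative sketch only: classify a single new per-page result against a
# per-page baseline, with an intermediate "maybe" state for points that fall
# outside the model but are not clearly a regression, then roll per-page
# outcomes up into one job status (mochitest-style: one failure flips the job).
import statistics

def classify(new_value, baseline, warn_sigma=2.0, fail_sigma=3.0):
    """Return 'pass', 'maybe', or 'regression' for one new data point."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return "pass" if new_value == mean else "maybe"
    sigmas = (new_value - mean) / stdev
    if sigmas >= fail_sigma:
        return "regression"   # clearly slower than the baseline allows
    if abs(sigmas) >= warn_sigma:
        return "maybe"        # outside the model, but not clearly a regression
    return "pass"

def page_set_status(results):
    """Aggregate per-page outcomes into a single job color."""
    if any(r == "regression" for r in results.values()):
        return "orange"        # treat like a mochitest failure
    if any(r == "maybe" for r in results.values()):
        return "maybe-orange"  # the proposed "you might have regressed" state
    return "green"

baseline = [210.0, 212.5, 211.0, 213.2, 209.8, 212.0]  # made-up history (ms)
results = {page: classify(value, baseline)
           for page, value in {"page1.html": 214.0, "page2.html": 260.0}.items()}
print(results, "->", page_set_status(results))
</pre>

The aggregation mirrors the mochitest analogy in the notes: any clear regression turns the whole page set orange, while "maybe" results could map to the proposed new TBPL color.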

= Action Items =
* find what pages we can consider reliable
** define what "reliable" is (one candidate definition is sketched below)
* triangulate
* [christina] - give jeads/ctalbert/jmaher a set of pages (and operating systems) that are just too noisy
* local TBPL (contact: edmorley)
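"Reliable" is not defined in the notes; purely as an illustration of one candidate definition, the sketch below treats a page as reliable when the noise in its replicates (coefficient of variation) stays under a cutoff. The cutoff and the helper name are assumptions for the example.

<pre>
# Hypothetical illustration of one way "reliable" could be defined:
# a page is reliable if stddev/mean of its replicate times is below a cutoff.
# The 5% cutoff is an arbitrary assumption for the example.
import statistics

def is_reliable(replicate_times, max_cv=0.05):
    mean = statistics.mean(replicate_times)
    if mean == 0:
        return False
    return statistics.stdev(replicate_times) / mean < max_cv

print(is_reliable([101.0, 99.5, 100.2, 100.8]))   # low noise  -> True
print(is_reliable([100.0, 160.0, 95.0, 150.0]))   # high noise -> False
</pre>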