Auto-tools/Projects/Stockwell/backfill-retrigger

finding bugs to work on

choosing a config to test

It is best to look at the existing pattern of data you see when looking at all the starred instances. Typically, when adding a comment to a bug while triaging, it is normal to list the configurations that the failures are most frequent on. Usually pick the most frequent configuration; if there is a tie between two, choose both of them.

If there is not a clear winner, then consider a few factors which could help (a rough sketch of this weighing follows the list):

  • debug typically provides more data, but takes longer
  • pgo is harder to backfill and builds take longer; try to avoid this
  • ccov/jsdcov builds/tests are only run on mozilla-central; avoid these configs
  • nightly is only run on mozilla-central; avoid these configs
  • mac osx has a limited device pool; try to pick linux or windows
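
As a rough illustration of how I weigh these factors, here is a minimal Python sketch that ranks candidate configs from a tally of starred failures. The config names, counts, and penalty rules are my own assumptions for the example, not an official tool.

  # Illustrative sketch: rank candidate configs from a tally of starred failures.
  # The avoid/penalty rules encode the guidelines in the list above; the counts
  # in the example are made up.
  AVOID = ("ccov", "jsdcov", "nightly")   # only run on mozilla-central
  PENALIZE = ("pgo", "osx")               # slow builds / limited device pool

  def rank_configs(failure_counts):
      """Return config names, most attractive first.

      failure_counts maps a config (e.g. "linux64/debug") to the number of
      starred failures seen on it.
      """
      def score(item):
          config, count = item
          if any(s in config for s in AVOID):
              return -1                    # push mozilla-central-only configs to the bottom
          penalty = 1 if any(s in config for s in PENALIZE) else 0
          return count - penalty           # prefer frequent, cheap configs
      return [c for c, _ in sorted(failure_counts.items(), key=score, reverse=True)]

  print(rank_configs({"linux64/debug": 12, "osx-10-10/debug": 12, "linux64-ccov/opt": 20}))
  # ['linux64/debug', 'osx-10-10/debug', 'linux64-ccov/opt'] in this made-up example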

choosing a starting point

Ideally you want to pick the first instance of a failure and work backwards in time to find the root cause. In practice this can be confusing as we have multiple branches or sometimes different configs that fail at different times.

I would look at the first 10 failures and weigh:

  • which branch is most common
  • whether the timestamps cluster close together
  • whether the most common config is on the same branch with close timestamps

In many cases you will pick a different failure as the first point; I often like to pick the second instance of the branch/config so I can confirm that multiple revisions show the failure (i.e. show a pattern).
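
A minimal sketch of that weighing in Python, assuming each failure instance is a dict with branch, config, and timestamp fields (those field names are my own for the example):

  # Illustrative sketch: choose a starting failure from the first few reported
  # instances by favoring the most common branch/config pair and then taking
  # its second occurrence so multiple revisions confirm the pattern.
  from collections import Counter

  def pick_starting_failure(failures):
      """failures: list of dicts with 'branch', 'config', and 'timestamp' keys,
      ordered oldest first."""
      sample = failures[:10]
      (branch, config), _ = Counter(
          (f["branch"], f["config"]) for f in sample
      ).most_common(1)[0]
      matching = sorted(
          (f for f in sample if f["branch"] == branch and f["config"] == config),
          key=lambda f: f["timestamp"],
      )
      # Prefer the second instance so the failure is confirmed on more than one revision.
      return matching[1] if len(matching) > 1 else matching[0]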

how to find which job to retrigger

Once you have the revision and config, we need to figure out the job. Typically a test will fail in a specific job name, which often includes a chunk number. We have thousands of tests, and they are split across many jobs in chunks. These are dynamically balanced, which means that if a test is added or removed as part of a commit, the chunks will most likely rebalance and tests often end up running in different chunks.

Picking the first job is easy; it is usually very obvious once you have chosen the config you are running against and pulled up the revision to start with. For example, it might be linux64/debug mochitest-browser-chrome-e10s-3.

As a sanity check, I pull up the log file and search for the test name; it should show up as TEST-START, and then shortly after as TEST-UNEXPECTED-FAIL.

When retriggering on previous revisions, you need to repeat this process to ensure that the test actually exists in the chunk you select (e.g. chunk 3); it is likely that the test has moved to a different chunk number.
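
A minimal sketch of that sanity check in Python, assuming the job log has already been downloaded as text and that the relevant lines contain "TEST-START | <test path>" and "TEST-UNEXPECTED-FAIL | <test path>":

  # Illustrative sketch: confirm the chunk actually runs the test before
  # retriggering it, and report whether the test failed in this log.

  def check_log_for_test(log_text, test_name):
      """Return 'failed', 'ran', or 'not in this chunk' for test_name."""
      started = f"TEST-START | {test_name}" in log_text
      failed = f"TEST-UNEXPECTED-FAIL | {test_name}" in log_text
      if failed:
          return "failed"
      if started:
          return "ran"
      return "not in this chunk"   # the test likely moved to another chunk number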

how many retriggers

There is not an exact science here, but I typically choose 40 as my target. This allows us to easily see a pattern and compute a percentage of pass/fail jobs per revision. If a test is failing 10% of the time, we would need roughly 10 data points to see 1 failure, and we could just as easily see something like 18 green runs and 2 failures, so it is often wise to get at least 20 data points. I choose 40 because, in the case of a 10% failure rate, the extra runs give us stronger evidence that the rate really is about 10%.
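
To make the arithmetic concrete, here is a small Python sketch of why 40 is a comfortable target; the assumption that each retrigger fails independently with a fixed probability is mine, not something the tooling guarantees:

  # Illustrative arithmetic: chance of seeing at least one failure in n runs
  # when each run fails independently with probability p.

  def chance_of_a_failure(p, n):
      return 1 - (1 - p) ** n

  for n in (10, 20, 40):
      print(n, round(chance_of_a_failure(0.10, n), 3))
  # 10 -> 0.651, 20 -> 0.878, 40 -> 0.985: with 40 retriggers a 10% intermittent
  # is very hard to miss, and the pass/fail percentage per revision becomes meaningful.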

If we look at the data from OrangeFactor, it cannot tell us the real failure rate. The data we have will say X failures in Y pushes (e.g. 30 failures in 150 pushes). While that looks like a 20% failure rate, it can be misleading for a few reasons (see the arithmetic sketch after this list):

  • we do not run every job/chunk on every push, so it could be 30 failures in only 75 data points
  • there could already be retriggers in the existing data, and a few pushes could carry 3 or 4 failures each, making the per-push failure rate lower than 20%
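
A small Python sketch of those two caveats, with made-up counts (only the 30-failures-in-150-pushes figure comes from the example above):

  # Illustrative arithmetic: the naive per-push rate can both understate and
  # overstate the real failure rate, depending on how the data was collected.
  failures = 30
  pushes = 150
  naive_rate = failures / pushes                  # 30 / 150 = 0.20

  # The job/chunk did not run on every push (hypothetical count):
  jobs_actually_run = 75
  per_job_rate = failures / jobs_actually_run     # 30 / 75 = 0.40

  # Retriggers cut the other way: if a few pushes carry 3 or 4 failures each,
  # far fewer than 30 pushes actually saw the failure (hypothetical count):
  pushes_with_a_failure = 12
  per_push_rate = pushes_with_a_failure / pushes  # 12 / 150 = 0.08

  print(naive_rate, per_job_rate, per_push_rate)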

what to do with the data

exceptions