Buildbot/Talos/Sheriffing
Overview
The sheriff team does a great job of finding regressions in unittests and getting fixes for them or backing stuff out. This keeps our trees green and usable while thousands of checkins a month take place!
For talos, we run about 50 jobs per push (out of ~400) to measure the performance of desktop and android builds. These jobs are green and the sheriffs have little to do.
Enter the role of a Performance Sheriff. This role looks at the data produced by these test jobs, finds regressions, identifies root causes, and gets bugs on file to track all issues and make interested parties aware of what is going on.
What is an alert
As of January 2015, alerts come in from [graph server] to [dev.tree-alerts]. These are generated by programmatically verifying there is a sustained regression over time (the original data point plus 12 future data points).
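To make the "12 future data points" idea concrete, here is a minimal sketch of that style of check; the real graph server analysis differs in detail, and the window and threshold used here are purely illustrative.

    # Minimal sketch of a sustained-regression check: a data point only
    # generates an alert if the 12 points that follow it stay shifted
    # relative to the points before it. Window/threshold are illustrative.
    def sustained_change(values, index, window=12, threshold=0.02):
        """Return the relative change at values[index] if it is sustained,
        otherwise None."""
        before = values[max(0, index - window):index]
        after = values[index:index + window]
        if len(before) < window or len(after) < window:
            return None  # not enough data on either side yet
        old_avg = sum(before) / len(before)
        new_avg = sum(after) / len(after)
        change = (new_avg - old_avg) / old_avg
        return change if abs(change) >= threshold else None

    # A ~10% step change after the 12th point would be reported.
    data = [100.0] * 12 + [110.0] * 12
    print(sustained_change(data, 12))  # -> 0.1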
The alert will reference:
- [branch]
- platform
- test name
- [% change / values]
- suspected changeset [range], including commit summary
- link to [graph server]
Keep in mind that alerts mention both improvements and regressions, which is valuable for tracking the system as a whole. For filing bugs, we focus mostly on the regressions.
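To illustrate how an alert might be represented and classified while triaging, here is a small sketch; the field names and the lower-is-better flags are assumptions for illustration, not the actual alert format.

    # Sketch of an alert record and of labeling it as a regression or an
    # improvement. Field names and lower_is_better flags are assumptions.
    from dataclasses import dataclass

    # Most Talos tests report times where lower is better; a few report
    # scores where higher is better.
    LOWER_IS_BETTER = {"tp5o": True, "ts_paint": True, "v8_7": False}

    @dataclass
    class Alert:
        branch: str
        platform: str
        test: str
        pct_change: float    # positive means the reported number went up
        revision_range: str  # suspected changeset range from the alert
        graph_url: str

        def kind(self):
            lower_is_better = LOWER_IS_BETTER.get(self.test, True)
            worse = self.pct_change > 0 if lower_is_better else self.pct_change < 0
            return "regression" if worse else "improvement"

    a = Alert("Mozilla-Inbound", "Windows 8", "tp5o", 4.2,
              "abc123:def456", "http://graphs.mozilla.org/...")
    print(a.kind())  # regression -> worth filing a bug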
Finding the root cause
There are many reasons for an alert and different scenarios to be aware of:
- backout (usually within 1 week causing a similar regression/improvement)
- pgo/non-pgo (some regressions are PGO-only and might be a side effect of PGO). We only ship PGO builds, so these are the most important.
- test/infrastructure change - once in a while we change big things about our tests or infrastructure and it affects our tests
- Coalesced - we don't run every job on every platform on every push, so an alert can point at a coalesced range of changes rather than a single push
- Regular regression - the normal case where we get an alert and we see it merge from branch to branch
Backout
Backouts happen every day, but backouts that generate performance regressions are what add noise to the system.
Here is an example of a backout which affected many tests. [AlertManager] [related coalesced]
This example is interesting because one change was quickly identified as the culprit, but one job was coalesced. The coalescing is easy to detect: the suspected [changeset] is a range rather than a single push, that range includes our backed-out changeset, and the graph shows the backout pattern. On top of that, this is on Windows 8, the platform which showed a regression from the backout. We have high confidence in mapping the backout as the root cause of this coalesced alert.
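One quick way to do that range check by hand is to expand the suspected range with the hg pushlog JSON API and look for the backed-out changeset in it. A rough sketch; the repository URL and revisions are placeholders.

    # Sketch: expand a suspected (coalesced) push range via the hg pushlog
    # JSON API and check whether the backed-out changeset is inside it.
    # Repository and revisions below are placeholders.
    import json
    from urllib.request import urlopen

    def revisions_in_range(repo_url, fromchange, tochange):
        url = (f"{repo_url}/json-pushes?fromchange={fromchange}"
               f"&tochange={tochange}")
        pushes = json.load(urlopen(url))
        revs = []
        for push in pushes.values():
            revs.extend(push["changesets"])
        return revs

    # revs = revisions_in_range(
    #     "https://hg.mozilla.org/integration/mozilla-inbound",
    #     "aaaaaaaaaaaa", "bbbbbbbbbbbb")
    # print("cccccccccccc" in revs)  # is the backed-out changeset in the range?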
Verifying an alert
Once we have identified a suspected push, it is good manners to retrigger the job a few times on that push and the surrounding pushes. Many tests are noisy, and a single oddball result can end up misidentifying the changeset.
Likewise, we should verify all the other platforms to see what the scope of this regression is.
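Once the retriggers finish, a quick sanity check is to compare the retriggered numbers on the suspected push against the pushes just before it and ask whether the shift is bigger than the run-to-run noise. A minimal sketch with made-up numbers:

    # Sketch: is the shift on the suspected push larger than the run-to-run
    # noise of the surrounding pushes? All numbers here are made up.
    from statistics import mean, stdev

    def looks_real(before_runs, after_runs, noise_factor=2.0):
        delta = mean(after_runs) - mean(before_runs)
        noise = max(stdev(before_runs), stdev(after_runs))
        return abs(delta) > noise_factor * noise

    before = [251.0, 249.5, 252.3, 250.1]  # retriggers on the parent push
    after = [263.8, 265.1, 262.9, 264.4]   # retriggers on the suspected push
    print(looks_real(before, after))       # True: well outside the noise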
Filing a bug
A lot of work is being done inside Alert Manager to make filing a bug easier. As each bug has unique attributes, it is hard to handle this programmatically, but we can do our best. In fact, there is a clickable 'File bug' link underneath each revision in Alert Manager; clicking it brings up a popup with a suggested summary and description for the bug.
Here are some guidelines for filing a bug:
- Product/Component - this should be the same as the bug which is the root cause; if more than one bug is involved, file in "[Testing :: Talos]"
- Dependent/Block bugs - For a new bug, add the tracking bug and root cause bug(s) as blocking this bug
- CC list - cc :jmaher, :avih, patch author(s) and reviewer(s), and owner of the tests as documented on the [talos tests wiki]
- Summary of the bug should follow this pattern (the suggested summary should already match it; see the sketch after this list):
xx% <platform> <test> regression on <branch> (v.<xx>) <date>, from push <revision>
- The description is auto-suggested as well; it should be good, but do make sure it makes sense
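For reference, here is a tiny sketch of how the suggested summary maps alert fields onto that pattern; all field values are placeholders and Alert Manager's exact wording may differ.

    # Sketch of filling in the summary pattern above; values are placeholders.
    def bug_summary(pct, platform, test, branch, version, date, revision):
        return (f"{pct}% {platform} {test} regression on {branch} "
                f"(v.{version}) {date}, from push {revision}")

    print(bug_summary(4.2, "Windows 8", "tp5o", "Mozilla-Inbound", 38,
                      "Jan 20 2015", "abcdef123456"))
    # -> 4.2% Windows 8 tp5o regression on Mozilla-Inbound (v.38) Jan 20 2015, from push abcdef123456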
Additional Resources
- [Alert FAQ]
- [Noise FAQ]
- [GraphServer FAQ]
- [Tree FAQ]