Sheriffing/Job Visibility Policy
This page clarifies the policy for how jobs reporting to Treeherder are managed. Common sense applies in cases where some of the requirements are not applicable to a particular platform/build/test type.
To propose changes to this policy, please speak to the sheriffs and/or send a message to dev.tree-management.
Overview of the Job Visibility Tiers
Jobs reporting to Treeherder can fall into three tiers.
- Tier 1: Jobs that run on a Tier-1 platform, are shown by default on Treeherder, and are sheriff-managed. Bustage will cause a tree closure and is expected to result in a quick follow-up push or a backout (at the discretion of the sheriff on duty). Bugs will be filed for new intermittent test failures and are subject to the Test Disabling Policy if not addressed in a timely fashion.
- Tier 2: Jobs are shown by default on Treeherder, but are not sheriff-managed. Results will be shown on Treeherder "for information only". New test failures/bustage will not result in a backout, but a tracking bug will be filed when observed.
- Tier 3: Jobs are not shown by default on Treeherder. All responsibilities for monitoring the results will fall upon the owner of the job.
Requirements for jobs shown in the default Treeherder view
The section below applies to both Tier 1 and Tier 2 jobs. Owners of non-sheriff-managed project/disposable repos do not need to meet these requirements on those repos; however, the requirements must be satisfied before the jobs are enabled in production.
Has an active owner
- Who is committed to ensuring the other requirements are met not just initially, but over the long term.
- Who will ensure the new job type is switched off to save resources should we stop finding it useful in the future.
Usable job logs
- Full logs should be available for both successful and failed runs in either raw or structured formats.
- The crash reporter should be enabled, mini-dumps processed correctly (ie: with symbols available) & the resultant valid crash stack visible in the log (it is recommended to use mozcrash rather than reinventing the wheel; see the sketch after this list).
- Failures must appear in the Treeherder failure summary in order to avoid having to open the full log for every failure.
- Failure output must be in the format expected by Treeherder's bug suggestion generator, otherwise sheriffs have to manually search Bugzilla when classifying/annotating intermittent failures (see the example after this list):
- For in-tree/product issues (eg: test failures, crashes):
- Delimiter: ' | '
- 1st token: One of {TEST-UNEXPECTED-FAIL, TEST-UNEXPECTED-PASS, PROCESS-CRASH}.
- 2nd token: A unique test name/filepath (not a generic test loader that runs 100s of other test files, since otherwise bug suggestions will return too many results).
- 3rd token: The specific failure message (eg: the test part that failed, the top frame of a crash or the leaked objects list for a leak).
- For non test-specific issues (eg: infra/automation/harness):
- Treeherder falls back to searching Bugzilla for the entire failure line (excluding mozharness logging prefix), so it should be both unique to that failure type & repeatable (ie: no use of process IDs or timestamps, for which there will rarely be a repeat match against a bug summary).
- Exceptions & timeouts must be handled with appropriate log output (eg: the failure line must state in which test the timeout occurred, not just that the entire run has timed out).
- The sheriffs will be happy to advise regarding the above.
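As an illustration, failure lines in the expected format look roughly like this (the test path, message and crash frame below are made up for the example):

 TEST-UNEXPECTED-FAIL | dom/tests/mochitest/general/test_example.html | Test timed out waiting for the "load" event
 PROCESS-CRASH | dom/tests/mochitest/general/test_example.html | application crashed [@ mozilla::dom::Example::Method()]

And a minimal sketch (in Python, with assumed variable names) of how a harness might use mozcrash so that a symbolised crash stack ends up in the log:

 # After the browser process exits, check the minidump directory; if a crash
 # is found, mozcrash logs a PROCESS-CRASH line with a symbolised stack.
 # dump_directory, symbols_path and test_name are assumed to be set by the harness.
 import mozcrash

 if mozcrash.check_for_crashes(dump_directory, symbols_path, test_name=test_name):
     # Treat the run as failed so the crash is surfaced in the job result.
     run_failed = True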
Has sufficient documentation
- Has a wiki page with:
- An overview of the test-suite.
- Instructions for running locally.
- How to disable an individual failing test.
- The current owner/who to contact for help.
- The Bugzilla product/component where bugs should be filed (GitHub issues are not discoverable enough and prevent the use of bug dependencies with the rest of the project).
- That wiki page is linked to from https://developer.mozilla.org/docs/Mozilla/QA/Automated_testing
Additional requirements for Tier 1 jobs
Breakage is expected to be followed by tree closure or backout
- Failures visible in the default view (other than known intermittent/transient failures) must have their cause backed out in a timely fashion, or else the tree closed until diagnosed.
- Sheriffs will generally ping in #developers on irc.mozilla.org when such a situation arises. If sufficient time passes without acknowledgement (typically ~5min), the regressing patch(es) will be backed out in order to minimize the length of the closure for other developers.
- If acknowledged, sheriffs will decide in conjunction with the developer whether backing out or fixing in-place is the most reasonable resolution. However, the sheriff retains the right to back out if necessary.
Runs on mozilla-central and all trees that merge into it
- Necessary because job failures when tree X merges into mozilla-central will not be attributable to a single changeset, resulting in either tree closure or backout of the entire merge (see the previous requirement).
- When filing the release engineering bug to enable your job on all the required trees, ask to enable it on "mozilla-central based trees" and release engineering will enable it in the default config from which all trunk trees inherit (unless the various tree owners have explicitly opted out). As a rough guide, mozilla-central based trees include mozilla-inbound, autoland, as well as many of the other project/disposable repositories.
Scheduled on every push
- Otherwise job failures will not be attributable to a single changeset, resulting in either tree closure or backout of multiple pushes (see the first requirement in this section).
- An exception is made for nightly builds that have a virtually equivalent non-nightly variant built on every push, and for tests run on PGO builds (given that PGO builds take an inordinate amount of time, we still schedule them every 3/6 hours depending on the tree, and relatively speaking there are not too many PGO-only test failures). Periodic builds have also been granted an exception, since they don't run tests and have sufficient coverage on other platforms that the odds of unique bustage are small and relatively easy to diagnose.
- Note also that coalescing (buildbot queue collapsing when there is more than one queued job of the exact same tree/type) may mean that not all scheduled jobs actually get run. Whilst coalescing makes sheriffing harder, it's a necessary evil given that automation infrastructure demand frequently outstrips supply.
Must avoid patterns known to cause non-deterministic failures
- Must avoid pulling the tip of external repositories as part of the build - since landings there can cause non-obvious failures. If an external repository is absolutely necessary, instead reference the desired changeset from a manifest in mozilla-central (like talos or gaia do).
- Must not rely on resources from sites whose content we do not control or with which we have no SLA:
- Since these will cause failures when the external site is unavailable, as well as impacting end to end times & adding noise to performance tests.
- eg: Emulator/driver binaries downloaded directly from a vendor's site, package downloads from PyPI, or page assets for unit/performance tests.
- Ensure MOZ_DISABLE_NONLOCAL_CONNECTIONS is defined in the automation environment (see bug 995417) & use a list of automation prefs to switch off undesirable behaviour (eg automatic updates, telemetry pings; see bug 1023483 for where these are set). A sketch is shown after this list.
- Must not contain time bombs, e.g. tests that will fail after a certain date or when run at certain times (e.g., the day summer time starts or ends, or when the test starts before midnight and finishes after midnight).
- See the best practices for avoiding intermittent failures (oranges).
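A minimal sketch of how a Python-based harness might apply both of these safeguards (the mozprofile usage and the specific prefs shown here are illustrative; see bug 1023483 for the canonical pref list):

 import os
 import mozprofile

 # Make Gecko abort any attempt to contact a non-local host (see bug 995417).
 os.environ["MOZ_DISABLE_NONLOCAL_CONNECTIONS"] = "1"

 # Switch off undesirable background behaviour via automation prefs
 # (illustrative prefs only, not the canonical list).
 profile = mozprofile.Profile(preferences={
     "app.update.enabled": False,         # no automatic updates mid-test
     "toolkit.telemetry.enabled": False,  # no telemetry pings
 })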
Low intermittent failure rate
- A high failure rate:
- Causes unnecessary sheriff workload.
- Affects the ability to sheriff the trees as a whole, particularly during times of heavy coalescing.
- Undermines developers' confidence in the platform/test-suite, which (as demonstrated by Firefox for Android) has a lasting effect on their willingness to believe any future failures, even once the intermittent-failure rate is lowered.
- A mozilla-central push results in ~400 jobs. The typical OrangeFactor across all trunk trees (excluding the recent spike) is 3-4, ie: a failure rate of ~1%.
- Therefore, as a rough guide, a new platform/test-suite must have at most a 5% per-job failure rate initially, and ideally <1% longer term.
- However, sheriffs will make the final determination of whether a job type has too many intermittent failures. This will be based on a combination of factors including failure rate, length of time the failures have been occurring, owner interest in fixing them & whether Treeherder is able to make bug suggestions.
Easily run on try server
- Needed so that developers who have had their landing backed out for breaking the job type are able to debug the failures/test the fix, particularly if they only reproduce on our infrastructure.
- Developers should not be expected to guess try chooser options, so http://trychooser.pub.build.mozilla.org/ should be updated if appropriate.
Optional, but helpful
Easy for a dev to run locally
- Supported by mach (if appropriate); see the example after this list.
- Ideally part of mozilla-central (legacy exceptions being Talos, gaia).
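For example, for a mach-supported suite a developer would typically run something along these lines (the suite and test path are illustrative):

 ./mach mochitest dom/tests/mochitest/general/test_example.html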
Supports the disabling of individual tests
- It must be possible for sheriffs to disable an individual test per platform or entirely, by either annotating the test or editing a manifest/moz.build/Makefile in the relevant gecko repository.
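For example, harnesses using manifestparser-style .ini manifests let a sheriff disable a test on a single platform with an annotation along these lines (test name and bug number are illustrative):

 [test_example.html]
 skip-if = os == "android"  # Bug 1234567 - intermittent failures on Android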
Requesting changes in visibility
- Jobs that are marked as tier 3 will be hidden in Treeherder by default.
- To adjust the tier for a Taskcluster job, file a bug in either the Taskcluster Task Graph component or a component related to the type of task being adjusted, then edit the in-tree task definition (see the example after this list).
- For legacy buildbot jobs, the tier is set via a hardcoded whitelist of job signatures. File a bug in the Treeherder Data ingestion component and follow the steps here: https://treeherder.readthedocs.io/common_tasks.html#hide-jobs-with-tiers
- CC :sheriffs when adjusting a job's tier, so they are aware of the change and can confirm the criteria have been met.
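For Taskcluster jobs the tier is part of the task's Treeherder metadata in the in-tree definition; a rough sketch of the relevant fragment of a task/kind.yml (the exact layout varies by kind and is defined by the in-tree taskgraph code, and the symbol shown is illustrative):

 treeherder:
     symbol: X(x)
     tier: 2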
Adding a new test task, or a new test platform?
- Be sure to demonstrate an acceptable intermittent failure rate for your new test tasks on try, and include the try links in the bug which adds the new tasks. Usually that means repeating each new test task at least 10 times (try: --rebuild 10; see the example after this list).
- For each known intermittent failure, check the expected frequency from recent comments in the bug, or by looking up the failure in Treeherder's Intermittent Failures view; if you see higher failure rates in your try push, consider fixing or disabling the test(s) before enabling your new task(s).
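For example, using the classic trychooser syntax (the platform and suite names are illustrative):

 try: -b o -p linux64 -u mochitest-e10s-1 -t none --rebuild 10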
My platform/test-suite does not meet the base requirements, what now?
- Your platform/test-suite will still be run, just not shown in the default view. This model has worked well for many projects/build types (eg jetpack, xulrunner, spidermonkey).
- To see it, click the "show/hide hidden jobs" checkbox to the left of the quick filter input field in the Treeherder UI. Alternatively, &exclusion_profile=false can be added to the URL to show all hidden jobs (see the example after this list).
- To filter the jobs displayed, under the 'Filters' menu use the 'job name' field.
- For Try specifically, you can request that the job type be made non-default (ie it requires explicit opt-in when using trychooser syntax and won't be scheduled with '-u all' or similar), in order for it to be shown in the default view on Try.
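For example, a Treeherder URL showing hidden jobs might look like this (the repository name is illustrative):

 https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&exclusion_profile=false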