Sheriffing/Job Visibility Policy
This page exists to clarify the policy for how jobs reporting to Treeherder are managed. Common sense applies in cases where some of the requirements are not applicable to a particular platform/build/test type.
To propose changes to this policy, please speak to the sheriffs and/or send a message to the sheriffs group.
Contents
- 1 Overview of the Job Visibility Tiers
- 2 Requirements for jobs shown in the default Treeherder view
- 3 Additional requirements for Tier 1 jobs
- 4 Optional, but helpful
- 5 Requesting changes in visibility
- 6 Adding a new test task or a new test platform?
- 7 My platform/test-suite does not meet the base requirements, what now?
Overview of the Job Visibility Tiers
Jobs reporting to Treeherder can fall into three tiers.
- Tier 1: Jobs that run on a Tier-1 platform, are shown by default on Treeherder, and are sheriff-managed. Bustage will cause a tree closure and is expected to result in a quick follow-up push or a backout (at the discretion of the sheriff on duty). Bugs will be filed for new intermittent test failures and are subject to the Test Disabling Policy if not addressed in a timely fashion.
- Tier 2: Jobs are shown by default on Treeherder and sheriffs will file bugs for new failures, even permanent ones. New test failures/bustage will not result in a backout, but a tracking bug will be filed when they are observed. These new issues are expected to be fixed within 2 business days.
- Tier 3: Jobs are not shown by default on Treeherder. All responsibilities for monitoring the results will fall upon the owner of the job.
Requirements for jobs shown in the default Treeherder view
The section below applies to both Tier 1 and Tier 2 jobs. Owners of non-sheriff-managed project/disposable repos do not need to meet these requirements; however, the requirements must be satisfied before the jobs are enabled in production.
Has an active owner
- Who is committed to ensuring the other requirements are met not just initially, but over the long term.
- Who will ensure the new job type is switched off to save resources should we stop finding it useful in the future.
Usable job logs
- Full logs should be available for both successful and failed runs in either raw or structured formats.
- The crash reporter should be enabled, mini-dumps processed correctly (ie: with symbols available) & the resulting valid crash stack visible in the log (it is recommended to use mozcrash rather than reinventing the wheel; a minimal sketch follows this list).
- Failures must appear in the Treeherder failure summary in order to avoid having to open the full log for every failure.
- Failure output must be in the format expected by Treeherder's bug suggestion generator, otherwise sheriffs have to search Bugzilla manually when classifying/annotating intermittent failures (illustrative example lines follow this list):
- For in-tree/product issues (eg: test failures, crashes):
- Delimiter: ' | '
- 1st token: One of {TEST-UNEXPECTED-FAIL, TEST-UNEXPECTED-PASS, PROCESS-CRASH}.
- 2nd token: A unique test name/filepath (not a generic test loader that runs 100s of other test files, since otherwise bug suggestions will return too many results).
- 3rd token: The specific failure message (eg: the test part that failed, the top frame of a crash or the leaked objects list for a leak).
- For non test-specific issues (eg: infra/automation/harness):
- Treeherder falls back to searching Bugzilla for the entire failure line (excluding mozharness logging prefix), so it should be both unique to that failure type & repeatable (ie: no use of process IDs or timestamps, for which there will rarely be a repeat match against a bug summary).
- Exceptions & timeouts must be handled with appropriate log output (eg: the failure line must state in which test the timeout occurred, not just that the entire run has timed out).
- Documentation for the mozlog library: https://firefox-source-docs.mozilla.org/mozbase/mozlog.html (a minimal usage sketch follows this list).
- The sheriffs will be happy to advise regarding the above.
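A minimal sketch of the mozcrash usage recommended above, assuming a Python-based harness; the dump directory, symbols path and test name are hypothetical placeholders, and argument details may differ between mozcrash versions:

  import mozcrash

  # After the run, scan the minidump directory; if a crash is found, mozcrash
  # symbolicates it and emits a PROCESS-CRASH line with the crash stack.
  if mozcrash.check_for_crashes(dump_dir, symbols_path, test_name="test_example.html"):
      print("Crash detected; see the PROCESS-CRASH output above.")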
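For illustration, failure lines in the expected format look like the following (the test paths and messages are hypothetical):

  TEST-UNEXPECTED-FAIL | dom/tests/mochitest/general/test_example.html | Expected 'foo', got 'bar'
  PROCESS-CRASH | dom/media/test/test_playback.html | application crashed [@ mozilla::ExampleFrame]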
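A minimal mozlog sketch showing how a harness can emit structured results in the format Treeherder's log parser expects; the suite and test names are hypothetical, and the API should be checked against the mozlog documentation linked above:

  import sys
  from mozlog import structuredlog
  from mozlog.formatters import TbplFormatter
  from mozlog.handlers import StreamHandler

  # Log TBPL-style output for a hypothetical suite to stdout.
  logger = structuredlog.StructuredLogger("example-suite")
  logger.add_handler(StreamHandler(sys.stdout, TbplFormatter()))

  logger.suite_start(["test_example.html"])
  logger.test_start("test_example.html")
  # An unexpected subtest failure; mozlog formats this as a TEST-UNEXPECTED-FAIL line.
  logger.test_status("test_example.html", subtest="check-foo",
                     status="FAIL", expected="PASS",
                     message="Expected 'foo', got 'bar'")
  logger.test_end("test_example.html", status="OK")
  logger.suite_end()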
Has sufficient documentation
- Has a wiki page with:
- An overview of the test-suite.
- Instructions for running locally.
- How to disable an individual failing test.
- The current owner/who to contact for help.
- The Bugzilla product/component where bugs should be filed (GitHub Issues are not discoverable enough and prevent the use of bug dependencies with the rest of the project).
- That wiki page is linked to from https://firefox-source-docs.mozilla.org/testing/automated-testing/index.html
Additional requirements for Tier 1 jobs
Breakage is expected to be followed by tree closure or backout
- Failures visible in the default view (other than known intermittent/transient failures) must have their cause backed out in a timely fashion, or else the tree will be closed until the failure is diagnosed.
- Sheriffs will generally ping in #developers on chat.mozilla.org when such a situation arises. If sufficient time passes without acknowledgement (typically ~5min), the regressing patch(es) will be backed out in order to minimize the length of the closure for other developers.
- If acknowledged, sheriffs will decide in conjunction with the developer whether backing out or fixing in place is the most reasonable resolution. However, the sheriff retains the right to back out if necessary.
Runs on mozilla-central and autoland
- Necessary because otherwise job failures that first appear when autoland merges into mozilla-central will not be attributable to a single changeset, resulting in either a tree closure or a backout of the entire merge (see the previous requirement).
Scheduled on every push
- Otherwise job failures will not be attributable to a single changeset, resulting in either a tree closure or a backout of multiple pushes (see the closure/backout requirement above).
- An exception is made for nightly builds with a virtually equivalent non-nightly variant that is built on every push & for tests run on shippable builds (relatively speaking there are not too many shippable-only test failures). Periodic builds have also been granted an exception as they don't run tests and have sufficient coverage on other platforms such that the odds of unique bustage are small and relatively easy to diagnose.
- Note also that scheduling optimization may mean that not all scheduled jobs actually get run. Whilst such coalescing makes sheriffing harder, it's necessary to keep automation infrastructure demand at reasonable levels.
Must avoid patterns known to cause non-deterministic failures
- Must avoid pulling the tip of external repositories or their latest release as part of the build - since landings there can cause non-obvious failures. If an external repository/dependency is absolutely necessary, instead reference the desired changeset or version from a manifest in mozilla-central.
- Must not rely on resources from sites whose content we do not control/have no SLA:
- Since these will cause failures when the external site is unavailable, as well as impacting end to end times & adding noise to performance tests.
- eg: Emulator/driver binaries direct from a vendor's site, package downloads from PyPI or page assets for unit/performance tests.
- Ensure MOZ_DISABLE_NONLOCAL_CONNECTIONS is defined in the automation environment (see bug 995417) & use a list of automation prefs for switching off undesirable behavior (e.g. automatic updates, telemetry pings; see bug 1023483 for where these are set). A minimal sketch follows this list.
- Must not contain time bombs, e.g. tests that will fail after a certain date or when run at certain times (e.g., the day summer time starts or ends, or when the test starts before midnight and finishes after midnight). A brief illustration follows this list.
- See the best practices for avoiding intermittent failures (oranges).
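A minimal sketch of the MOZ_DISABLE_NONLOCAL_CONNECTIONS point above, assuming a Python-based harness that launches the browser itself; the binary and profile variables are hypothetical placeholders:

  import os
  import subprocess

  # With this set, Firefox aborts if automation code attempts a network
  # connection to anything other than localhost.
  env = dict(os.environ)
  env["MOZ_DISABLE_NONLOCAL_CONNECTIONS"] = "1"

  subprocess.run([firefox_binary, "-profile", profile_dir], env=env)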
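As an illustration of the time-bomb point above (hypothetical test code), prefer expectations computed relative to the current time over hard-coded dates:

  import datetime

  # Risky: starts failing once the hard-coded date passes.
  assert certificate_expiry > datetime.date(2030, 1, 1)

  # Better: the expectation keeps the same meaning regardless of when it runs.
  assert certificate_expiry > datetime.date.today() + datetime.timedelta(days=30)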
Low intermittent failure rate
- A high failure rate:
- Causes unnecessary sheriff workload.
- Affects the ability to sheriff the trees as a whole, particularly during times of heavy coalescing.
- Undermines confidence in the platform/test-suite - which permanently affects developers' willingness to believe any future failures, even once the intermittent-failure rate is lowered.
- A mozilla-central push results in 4000-10000 jobs. The typical intermittent failure rate (OrangeFactor) across all trunk trees is normally 2-4%.
- Therefore as a rough guide a new platform/test suite must have at most a 5% per job failure rate initially, and ideally <1% longer term.
- However, sheriffs will make the final determination of whether a job type has too many intermittent failures. This will be based on a combination of factors including failure rate, length of time the failures have been occurring, owner interest in fixing them & whether Treeherder is able to make bug suggestions.
Easily run on try server
- Needed so that developers who have had their landing backed out for breaking the job type are able to debug the failures/test the fix, particularly if they only reproduce on our infrastructure.
- The job should be visible to |./mach try fuzzy| and |./mach try chooser| without having to use the --full option.
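- For example (with a hypothetical query string), |./mach try fuzzy -q "'example-suite 'linux64"| should find and schedule the job without needing --full.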
Optional, but helpful
Easy for a dev to run locally
- Supported by mach (if appropriate).
- Ideally part of mozilla-central (a legacy exception being Talos).
Supports the disabling of individual tests
- It must be possible for sheriffs to disable an individual test per platform or entirely, by either annotating the test or editing a manifest in the relevant gecko repository.
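For example, for suites using manifestparser-style manifests, a hypothetical per-platform disable looks roughly like the following (the test name and condition are placeholders; other suites have their own annotation syntax):

  [test_example.html]
  # hypothetical: disabled on Windows for an intermittent failure
  skip-if = os == 'win'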
Requesting changes in visibility
- Jobs that are marked as tier 3 will be hidden in Treeherder by default.
- To adjust the tier for a Taskcluster job, file a bug in either the Firefox Build System :: Task Configuration component or a component related to the type of task being adjusted, then edit the in-tree task definition (see the illustrative fragment after this list).
- CC :sheriffs when adjusting a job's tier, so they are aware of the change and can confirm the criteria have been met.
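As a rough illustration (the exact file and surrounding keys depend on the task kind), the tier is typically set in the task's treeherder configuration in the in-tree YAML definition; the platform and symbol below are hypothetical:

  treeherder:
      platform: linux64/opt  # hypothetical
      symbol: X(ex)          # hypothetical
      tier: 2                # 1, 2 or 3; tier 3 jobs are hidden by default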
Adding a new test task or a new test platform?
- Be sure to demonstrate an acceptable intermittent failure rate for your new test tasks on try, and include the try links in the bug which adds the new tasks. Usually that means repeating each new test task at least 10 times (try: --rebuild 10); an example command follows this list.
- For each known intermittent failure, check the expected frequency from recent comments in the bug, or by looking up the failure in Treeherder's Intermittent Failures view; if you see higher failure rates in your try push, consider fixing or disabling the test(s) before enabling your new task(s).
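- For example (with a hypothetical query string), |./mach try fuzzy -q "'example-suite" --rebuild 10| runs each selected task ten times so the intermittent failure rate can be assessed.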
My platform/test-suite does not meet the base requirements, what now?
- Your platform/test-suite will still be run, just not shown in the default view. This model has worked well for many projects/build types (e.g. spidermonkey).
- To see it, click the "3" button to the left of the quick filter input field in the second toolbar of the Treeherder UI.
- To filter the jobs displayed, under the 'Filters' menu use the 'job name' field.
- For Try specifically, you can instead request that the job type be made non-default (i.e. it requires explicit opt-in when the tasks to run are selected), which allows it to be shown in the default view on Try - see UNCOMMON_TRY_TASK_LABELS in taskcluster/taskgraph/target_tasks.py (an illustrative sketch follows).
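A heavily simplified sketch, assuming UNCOMMON_TRY_TASK_LABELS is still a list of task-label patterns (check target_tasks.py for the current structure; the added pattern is hypothetical):

  # Task labels matching these patterns are excluded from the default try
  # selection and must be requested explicitly.
  UNCOMMON_TRY_TASK_LABELS = [
      # ... existing patterns ...
      r"example-suite",  # hypothetical pattern for the new job type
  ]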