Security/Firefox/Security Bug Life Cycle
Security bugs in our product put millions of people at risk. To fulfill Mozilla's mission we must discover those bugs, fix them, and ship the fixes. This process involves multiple teams across the organization. This page describes a bug-centric view of the tasks that are part of that process, serving almost as a checklist to make sure we are executing on each step. There are also handy bugzilla queries that will be helpful for people as they work on each task.
Since this is a bug-centric view, many important activities performed by Mozilla security teams are not mentioned here, or are mentioned only briefly. Fuzzing, static analysis, and other research feed into this process as sources of bug discovery, which is much preferred to bugs being found in the wild. The analysis step described on this page can in turn feed into efforts to harden Firefox against exploits (for example, sandboxing, site isolation, and mitigating XSS in privileged UI code).
Note: The bugzilla links in this document are intended for the people performing the tasks described in the sections where they are found. Most of them will yield empty or incomplete results unless you are logged in to bugzilla.mozilla.org and have access to security bugs.
A Bug is Born
Reports of security vulnerabilities come from many different sources. Many are directly filed as security bugs by various groups including:
- Our security teams (via fuzzing, static analysis, security reviews and audits)
- External security researchers (including bounty hunters)
- Engineers who notice vulnerabilities while developing, reviewing, or testing code for non-security bugs
- QA and others looking at raw crashes on Socorro
- Users noticing something that worries them
Some issues are found outside of the Mozilla community. The security team or other Mozilla community members file bugs for these issues when they come to our attention via:
- Concerns or incidents mailed to security@mozilla.org
- Blogs and social media of known security researchers
- Security advisories from libraries we incorporate into our products
- Tech press
Security Triage
Incoming
The main goal at this stage is to get security bugs rated appropriately and into the purview of the engineers who manage the relevant areas of code. Only a limited number of people can see security bugs by default so we need to ensure that each bug is in the right security group and CC additional people as necessary. When triaging, consider the following for each bug:
- Is the bug well formed and reproducible?
  - If it is, make sure it’s NEW rather than UNCONFIRMED.
  - If not, “needinfo?” the reporter until it is, or until the bug is closed (potentially as INCOMPLETE or WORKSFORME).
- Is it in the right Product and Component?
- Is it in the right security group for the component (especially if it’s in the “Core” product)? See Security teams and components for a mapping between security groups and components.
- Are the appropriate developers CC’d so they can see the bug and needinfo'd so they are aware of it? (A scripted sketch of this step follows the queries below.)
- If you can't select an appropriate security severity rating, needinfo? someone for help. This is typically either a senior security team member or a senior engineer in that area of the code.
Incoming (untriaged) security bugs
Client security bugs filed in the last week
Client security bugs filed in the last month
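For anyone scripting part of this triage (for example a personal dashboard), the CC and needinfo changes described in the checklist above can also be made through the standard Bugzilla REST “update bug” endpoint. The following is a minimal sketch, not an official tool: the bug id, email address, and comment text are placeholders, and it assumes an API key with access to the relevant security group.

```python
# Minimal sketch: CC a developer on a security bug and needinfo them so they
# become aware of it, via the Bugzilla REST API (PUT /rest/bug/<id>).
# All concrete values below are placeholders.
import os
import requests

BUG_ID = 1234567  # placeholder bug id
URL = f"https://bugzilla.mozilla.org/rest/bug/{BUG_ID}"
HEADERS = {"X-BUGZILLA-API-KEY": os.environ["BUGZILLA_API_KEY"]}

payload = {
    "cc": {"add": ["developer@example.com"]},  # let them see the hidden bug
    "flags": [
        # needinfo so the bug shows up in their requests queue
        {"name": "needinfo", "status": "?", "requestee": "developer@example.com"}
    ],
    "comment": {
        "body": "CCing you for a severity rating and a component sanity check.",
        "is_private": True,  # keep triage discussion within the security group
    },
}

response = requests.put(URL, json=payload, headers=HEADERS, timeout=30)
response.raise_for_status()
```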
VulnSmash
We must make sure the most severe security bugs (critical and high) are kept on track. For these bugs (a scripted sketch of these settings follows the queries below):
- Set the priority to P1
  - This matches the Firefox project's definition of "Fix in this release", which is also roughly our required time-to-fix for security bugs of this severity. See the triage guide.
  - It may be appropriate for engineers to lower the priority later, after consulting with their manager and the security team. P1 is the default, absent an explanation of why it's necessary to keep our users at severe risk.
- Set the appropriate version status flags to “affected”
- Set the version tracking flags to “+”
- Assign to an appropriate owner (if there’s no better person use the Triage Owner)
Open sec-critical and sec-high bugs (include stalled)
Unassigned sec-critical/sec-high bugs (include stalled)
Sec-critical/sec-high bugs without a priority
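Taken together, the settings above amount to a single Bugzilla REST update. The sketch below is illustrative only: the bug id, assignee, and the Firefox version numbers embedded in the status/tracking field names are placeholders, so check the actual cf_status_firefox*/cf_tracking_firefox* fields on the bug you are updating.

```python
# Minimal sketch: apply the settings above (P1, status "affected",
# tracking "+", and an owner) in one Bugzilla REST update.
# Version numbers and other concrete values are placeholders.
import os
import requests

BUG_ID = 1234567  # placeholder bug id
URL = f"https://bugzilla.mozilla.org/rest/bug/{BUG_ID}"
HEADERS = {"X-BUGZILLA-API-KEY": os.environ["BUGZILLA_API_KEY"]}

payload = {
    "priority": "P1",
    "cf_status_firefox999": "affected",  # hypothetical version number
    "cf_tracking_firefox999": "+",       # hypothetical version number
    "assigned_to": "owner@example.com",  # or the component's Triage Owner
}

requests.put(URL, json=payload, headers=HEADERS, timeout=30).raise_for_status()
```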
Administrivia
Once a fix lands, the security group on that bug should be changed to the “Release-track” group (core-security-release) so that QA can see and verify the bug.
Fixed security bugs that need to be moved to "release track"
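For one-off changes the bugzilla UI is the natural place to do this; for bulk moves, the same group change can be made through the REST API. The sketch below is illustrative only: the bug id is a placeholder, and the group being removed must match the security group actually set on the bug (it is not always core-security).

```python
# Minimal sketch: move a fixed bug from its hidden security group into the
# release-track group so QA can see and verify it. Group names are examples;
# use the group that is actually set on the bug.
import os
import requests

BUG_ID = 1234567  # placeholder bug id
URL = f"https://bugzilla.mozilla.org/rest/bug/{BUG_ID}"
HEADERS = {"X-BUGZILLA-API-KEY": os.environ["BUGZILLA_API_KEY"]}

payload = {
    "groups": {
        "add": ["core-security-release"],
        "remove": ["core-security"],  # replace with the bug's current group
    },
}

requests.put(URL, json=payload, headers=HEADERS, timeout=30).raise_for_status()
```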
Analysis
Once the cause of a security bug has been identified, the security team and the engineers involved must look for similar patterns elsewhere. Was it a misunderstanding or oversight by a particular engineer? A foot-gun API we need to change? Code that was correct at one time but depended on other parts of the code that changed out from under it? Is there a mitigation or hardening we can put in place so that similar mistakes are less harmful in the future, or are caught (by tests or linting) before they are checked in?
Protecting our Users
Fixing Vulnerabilities
Severe security bugs need to be fixed with deliberate speed to protect our users. In addition, some external reporters set a disclosure deadline, such as 60 or 90 days, after which they will report the issue publicly.
- Within three days the assignee of the bug should comment on the bug's status, acknowledging receipt and giving a rough ETA for a patch. Even an ETA of “can’t look at it until after bugs X, Y, and Z” is helpful for planning and, if necessary, for finding a different assignee.
- Sec-critical bugs are highest priority and should be fixed within two weeks. If that can’t be accomplished because of other priorities, check with the security team and your manager to resolve the conflict.
- Sec-high bugs should be fixed within a few weeks: 6 weeks maximum is a good goal. 60 days is a common disclosure deadline, and in addition to writing the patch, we have to account for time spent on QA and the release process as a whole.
Overdue sec-critical bugs
Overdue sec-high bugs
Untouched for more than two weeks
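The saved searches above are the canonical view of overdue bugs. As a rough illustration of the time-to-fix targets, the sketch below compares each open sec-critical/sec-high bug's age against the two-week and six-week goals; the search parameters and thresholds are simplifications, not the definition used by the official queries.

```python
# Rough sketch: print open sec-critical/sec-high bugs older than the
# time-to-fix targets described above (2 weeks / ~6 weeks). The search
# parameters are illustrative; the official queries linked above are canonical.
from datetime import datetime, timezone
import os
import requests

URL = "https://bugzilla.mozilla.org/rest/bug"
HEADERS = {"X-BUGZILLA-API-KEY": os.environ["BUGZILLA_API_KEY"]}
TARGETS = {"sec-critical": 14, "sec-high": 42}  # target time-to-fix, in days

now = datetime.now(timezone.utc)
for keyword, limit in TARGETS.items():
    params = {
        "keywords": keyword,
        "resolution": "---",  # open bugs only
        "include_fields": "id,summary,creation_time,assigned_to",
    }
    bugs = requests.get(URL, params=params, headers=HEADERS, timeout=60).json()["bugs"]
    for bug in bugs:
        created = datetime.strptime(bug["creation_time"], "%Y-%m-%dT%H:%M:%SZ")
        age_days = (now - created.replace(tzinfo=timezone.utc)).days
        if age_days > limit:
            print(f"Overdue ({age_days}d > {limit}d): bug {bug['id']} - {bug['summary']}")
```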
Landing Fixes and Tests
External parties watch check-ins in order to identify security patches [1][2], and we have both documented and suspected cases of this happening to Firefox patches. We don’t want to 0-day ourselves by landing obvious fixes that sit in the tree for a long time before they ship in an update, and we especially don't want to land test cases that demonstrate how to trigger the vulnerability. The Security Bug Approval Process is designed to prevent that. Part of the approval process is evaluating which bugs need to be uplifted to Beta, which are risky and need to ride the trains, and whether or not the patch is needed on supported ESR branches.
Testcases for vulnerability fixes should be split into a separate patch for this "sec-approval" process. These testcases should land only after we have shipped the fix in a Release, usually a few weeks later, to give users time to apply the update. We must track the task of landing these patches later. You have two main options, and either is fine: a task bug is more upfront work but more straightforward; the flag is easy to set but requires more follow-up.
- Option 1: Create a task bug assigned to yourself ("Land tests for bug XXXX") that depends on the vulnerability bug. It must be a hidden security bug like the main vulnerability. Add the keyword sec-other.
- Option 2: Track it in the original bug using the in-testsuite? flag. If you go this route you must remember to check for un-landed tests (queries below). Once the tests have landed, change the flag to in-testsuite+.
"My" security testcases that need landing (personalized)
All unlanded testcases for fixed security bugs
Pending sec-approval requests
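If you take the in-testsuite? route, the queries above are the way to find tests that still need to land. As a scripted equivalent, the sketch below fetches a hand-picked list of fixed bugs and reports the ones whose in-testsuite flag is still set to "?". The bug ids are placeholders.

```python
# Rough sketch: given some fixed security bug ids, report the ones whose
# in-testsuite flag is still "?" (i.e. the testcase has not landed yet).
# Bug ids are placeholders; the queries above are the authoritative source.
import os
import requests

URL = "https://bugzilla.mozilla.org/rest/bug"
HEADERS = {"X-BUGZILLA-API-KEY": os.environ["BUGZILLA_API_KEY"]}
BUG_IDS = [1234567, 1234568]  # placeholders: your fixed security bugs

params = {
    "id": ",".join(str(b) for b in BUG_IDS),
    "include_fields": "id,summary,flags",
}
bugs = requests.get(URL, params=params, headers=HEADERS, timeout=60).json()["bugs"]

for bug in bugs:
    pending = [
        f for f in bug.get("flags", [])
        if f.get("name") == "in-testsuite" and f.get("status") == "?"
    ]
    if pending:
        print(f"Test still needs to land: bug {bug['id']} - {bug['summary']}")
```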
Verifying Fixes
It's generally important to have bug fixes tested by fresh eyes, who might catch incorrect assumptions made in the original fix. This is especially important for security fixes because we announce these fixes in our advisories: if the fix doesn't work, we put people at risk. Verification matters even more when we uplift/back-port patches to the Beta or, worse, the ESR branches, since those branches are more likely to suffer from subtle dependencies on code changes from normal trunk development that weren't back-ported.
The QA team's process for verifying security bugs for release is described in the “Post CritSmash” document.
ESR
We have committed to supporting Extended Support Release (ESR) branches for roughly a year each with a two release overlap between ESR branches. “Support” primarily means security fixes. Security bugs labeled sec-critical or sec-high are automatic candidates for back-porting. Some less-severe security bugs are also included after evaluating their impact, risk, and visibility. See the ESR landing process page for additional release-management triage queries.
Security Advisories
Fixed bugs that were present in a shipped release need to have a CVE assigned and to be written up in our release advisories. Security fixes for recent regressions that only affected Nightly or Beta don’t need an advisory. Advisory instructions
For historical write-ups see our Published advisories.
The Pit of Despair
Sometimes we can't make much progress on finding and fixing a security bug, especially if we don't have a reliable way to reproduce it. This is a particular problem with crashes filed from crash-stats reports. They are real bugs, they may even happen fairly often, and the crash stacks show memory corruption that is likely exploitable if it can be triggered reliably. These should be filed and treated as security vulnerabilities, because we do manage to fix a significant number of them when we investigate. However, many others are generic, and the crash is detected so long after the actual corruption occurred that we can't make progress. These bugs are given the keyword "stalled" and removed from active work. There are sometimes ways to make further progress (e.g., diagnostic asserts might be added to narrow down theories about what is going wrong), but once all ideas are exhausted and there is no longer any hope of further progress, many of these bugs will eventually have to be closed as INCOMPLETE.
Triage tools
The Open Selected Links extension can be helpful for opening multiple bugs at once from a buglist during triage. It also has a "View link source" context menu item that can be useful for inspecting testcases.
The "Bug Age" bookmarklet can be run on any buglist for basic age stats. Find it at this gist.