Security/Firefox/Security Bug Life Cycle
A Bug is Born
Reports of security vulnerabilities come from many different sources. Many are directly filed as security bugs by various groups:
- Our security teams (e.g. fuzzing, security reviews and audits)
- External security researchers (including bounty hunters)
- Engineers who notice vulnerabilities while developing, reviewing, or testing non-security bugs
- QA and others looking at raw crashes on Socorro
- Users noticing something that worries them
Some issues are found outside the Mozilla community. The security team or other Mozilla community members file bugs for these issues when they come to our attention:
- Concerns or incidents mailed to security@mozilla.org
- Blogs and social media of known security researchers
- Security advisories from libraries we incorporate into our products
- Tech press
Security Triage
Note: the bug query links in the following sections are intended for members of the security team and will yield empty or incomplete results if you don't have access to security bugs.
Incoming
The main goal at this stage is to get security bugs rated appropriately and into the purview of the engineers who manage that area of code. Only a limited number of people can see these security bugs, so we need to ensure that group includes the right people. (A sketch of pulling the untriaged list through the Bugzilla API follows the query links below.) For each bug:
- Is the bug well formed and reproducible?
- If it is, make sure it’s NEW rather than UNCONFIRMED.
- If not, “needinfo?” the reporter until it is or the bug is closed (potentially as INCOMPLETE or WORKSFORME).
- Is it in the right Product and Component?
- Is it in the right security group for the component (especially if it’s in the “Core” product)? See Security teams and components
- Are appropriate developers CC’d so they can see the bug and needinfo'd so they are aware of it?
- If you can't select an appropriate security severity rating, needinfo? someone who can (either a senior security team member or a senior engineer in the appropriate component)
Incoming (untriaged) security bugs
Client security bugs filed in the last week
Client security bugs filed in the last month
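The shared queries above live in Bugzilla, but the same lists can be pulled programmatically. Below is a minimal sketch using the Bugzilla REST API; the group filter, keyword list, and one-week window are assumptions meant to approximate the triage queries, not reproduce them exactly.

```python
# Sketch: recently filed security bugs that have no severity rating keyword yet.
from datetime import datetime, timedelta, timezone

import requests

BUGZILLA = "https://bugzilla.mozilla.org/rest/bug"
API_KEY = "..."  # needs an account that can see the security groups

# Bugs filed in roughly the last week.
since = (datetime.now(timezone.utc) - timedelta(days=7)).strftime("%Y-%m-%d")

params = {
    "api_key": API_KEY,
    "resolution": "---",        # open bugs only
    "creation_time": since,     # created on or after this date
    # Custom-search clause 1: the bug is in a security group (assumed name filter)...
    "f1": "bug_group", "o1": "substring", "v1": "security",
    # ...clause 2: and carries no severity rating keyword yet.
    "f2": "keywords", "o2": "nowords",
    "v2": "sec-critical sec-high sec-moderate sec-low",
    "include_fields": "id,summary,product,component,creation_time",
}

resp = requests.get(BUGZILLA, params=params, timeout=30)
resp.raise_for_status()
for bug in resp.json()["bugs"]:
    print(f'{bug["id"]:>8}  {bug["product"]}::{bug["component"]}  {bug["summary"]}')
```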
VulnSmash
We must make sure the most severe security bugs are kept on track. For these bugs:
- Set the priority to P1
- Set the appropriate version status flags to “affected”
- Set the version tracking flags to “+”
- Assign to an appropriate owner. If there’s no better person, use the Triage Owner. (A sketch of making these updates through the Bugzilla API follows the query links below.)
Open sec-critical and sec-high bugs (include stalled)
Unassigned sec-critical/sec-high bugs (include stalled)
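These updates are normally made in the Bugzilla UI, but as a rough illustration of the bookkeeping involved, here is a sketch using the REST API's update endpoint. The bug number, assignee, and Firefox version numbers are placeholders, and setting tracking flags to "+" requires the appropriate permissions; without them, request tracking with "?" instead.

```python
# Sketch: put a newly triaged sec-critical/sec-high bug on track.
import requests

BUGZILLA = "https://bugzilla.mozilla.org/rest/bug"
API_KEY = "..."

def mark_on_track(bug_id: int, assignee: str, affected_versions: list[int]) -> None:
    """Set P1, mark the given Firefox versions affected, and set tracking."""
    changes = {"priority": "P1", "assigned_to": assignee}
    for version in affected_versions:
        # cf_status_firefoxNN / cf_tracking_firefoxNN are the per-version flags;
        # the versions passed in are placeholders for whatever the bug affects.
        changes[f"cf_status_firefox{version}"] = "affected"
        changes[f"cf_tracking_firefox{version}"] = "+"
    resp = requests.put(
        f"{BUGZILLA}/{bug_id}",
        params={"api_key": API_KEY},
        json=changes,
        timeout=30,
    )
    resp.raise_for_status()

# Hypothetical example: bug 1234567 affects Firefox 124 through 126.
# mark_on_track(1234567, "triage-owner@example.com", [124, 125, 126])
```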
Administrivia
Once a fix lands, the security group on that bug should be changed to the “Release-track” group (core-security-release) so QA folks can see and verify the bug.
Fixed security bugs that need to be moved to "release track"
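A sketch of that group change via the REST API's groups add/remove support is below. The group the bug starts out in varies by component, so treating core-security as the group to remove is an assumption; core-security-release is the release-track group named above.

```python
# Sketch: move a fixed security bug onto the release track.
import requests

BUGZILLA = "https://bugzilla.mozilla.org/rest/bug"
API_KEY = "..."

def move_to_release_track(bug_id: int, current_group: str = "core-security") -> None:
    """Swap the bug from its triage security group into core-security-release."""
    resp = requests.put(
        f"{BUGZILLA}/{bug_id}",
        params={"api_key": API_KEY},
        json={"groups": {"add": ["core-security-release"], "remove": [current_group]}},
        timeout=30,
    )
    resp.raise_for_status()
```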
Analysis
Once the cause of a security bug has been identified, the security team and the engineers involved must look for similar patterns elsewhere. Was it a misunderstanding or oversight by a particular engineer? A foot-gun API we need to change? Code that was correct at one time but depended on other parts of the code that changed out from under it? Is there a mitigation or hardening we can put in place so that similar mistakes in the future are less harmful, or are caught (by tests or linting) before they are checked in?
Protecting our Users
Fixing Vulnerabilities
Severe security bugs need to be fixed with deliberate speed. Some external reporters impose a 60-day deadline, after which they will report the issue publicly.
- Within three days the assignee should comment on the bug’s status, acknowledging receipt and giving a rough ETA for a patch. Even an ETA like “can’t look at it until after bugs X, Y, and Z” is helpful for planning and, if necessary, for finding a different assignee.
- Sec-critical bugs are the highest priority and should be fixed within two weeks. If that can’t be accomplished because of other priorities, check with the security team and your manager to resolve the conflict.
- Sec-high bugs should be fixed within a few weeks; six weeks is a good maximum goal. 60 days is a common disclosure deadline, and in addition to writing the patch we have to account for time spent on QA and the release process as a whole. (A rough sketch of these targets follows the query links below.)
Overdue sec-critical bugs
Overdue sec-high bugs
Untouched for more than two weeks
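For concreteness, here is a back-of-the-envelope sketch of what “overdue” means under the targets above: roughly two weeks for sec-critical and six weeks for sec-high, counted from when the bug was filed. The exact thresholds used by the shared queries may differ.

```python
# Sketch: is a rated security bug past its fix target?
from datetime import datetime, timedelta, timezone
from typing import Optional

# Assumed targets from the guidance above; not an official policy encoding.
TARGETS = {
    "sec-critical": timedelta(weeks=2),
    "sec-high": timedelta(weeks=6),
}

def is_overdue(severity_keyword: str, filed_at: datetime,
               now: Optional[datetime] = None) -> bool:
    now = now or datetime.now(timezone.utc)
    target = TARGETS.get(severity_keyword)
    return target is not None and now - filed_at > target

# Example: a sec-high bug filed 50 days ago has blown past the six-week goal.
filed = datetime.now(timezone.utc) - timedelta(days=50)
print(is_overdue("sec-high", filed))  # True
```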
Landing Fixes
We know people watch our check-ins, and we don’t want to 0-day ourselves by landing obvious fixes and test cases that demonstrate how to trigger the vulnerability. The Security Bug Approval Process is designed to prevent that. Part of the approval process is deciding which fixes need to be uplifted to Beta, which are risky and need to ride the trains, and whether the patch is needed on supported ESR branches.
Verifying Fixes
It's generally important to have bug fixes verified by fresh eyes, who might catch problems caused by incorrect assumptions in the original fix. This is especially important for security fixes because we announce them in our advisories: if the fix doesn't work, we have put people at risk. Verification matters most when we uplift (back-port) patches to the Beta or, worse, the ESR branches, since back-ported patches are more likely to suffer from subtle dependencies on code changes from normal trunk development that weren't brought along to those branches.
The QA team's process for verifying security bugs for release is described in the “Post CritSmash” document.
ESR
We have committed to supporting each Extended Support Release (ESR) branch for roughly a year, with a two-release overlap between ESR branches. “Support” primarily means security fixes. Security bugs rated sec-critical or sec-high are automatic candidates for back-porting; some less severe security bugs are also included after evaluating their impact, risk, and visibility. See the ESR landing process page, which has additional triage queries.
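As an illustration of how back-port candidates might be surfaced, the sketch below asks Bugzilla for fixed sec-critical/sec-high bugs whose ESR status flag is still “affected”. The ESR version in the flag name (115 here) is a placeholder for whichever branch is currently supported.

```python
# Sketch: fixed high-severity bugs still marked affected on an ESR branch.
import requests

BUGZILLA = "https://bugzilla.mozilla.org/rest/bug"
API_KEY = "..."

params = {
    "api_key": API_KEY,
    "keywords": "sec-critical, sec-high",
    "keywords_type": "anywords",
    "resolution": "FIXED",
    # cf_status_firefox_esrNN is the per-branch status flag; 115 is a placeholder.
    "f1": "cf_status_firefox_esr115", "o1": "equals", "v1": "affected",
    "include_fields": "id,summary,keywords",
}

resp = requests.get(BUGZILLA, params=params, timeout=30)
resp.raise_for_status()
for bug in resp.json()["bugs"]:
    print(bug["id"], bug["keywords"], bug["summary"])
```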
Security Advisories
Fixed bugs that were present in a shipped release need to have a CVE assigned and be written up in our release advisories. Security fixes for recent regressions that only affected Nightly or Beta don’t need an advisory. [link to Al's doc]
For historical write-ups see our Published advisories.
The Pit of Despair
Sometimes we can't make much progress on finding and fixing a security bug, especially if we don't have a reliable way to reproduce it. This is a particular problem with crashes filed from crash-stats evidence: they are real bugs, they may even happen fairly often, and the crash stacks show memory corruption that is likely exploitable if it can be triggered reliably. These are worth filing and treating as security vulnerabilities because we do manage to fix a significant number of them when we investigate. Sadly, many others are so generic, or crash so far after the actual cause of the corruption, that we can't make progress. These bugs are given the keyword "stalled" and removed from active work. There are sometimes ways to make further progress (e.g. diagnostic asserts might be added to narrow down theories about what is going wrong). If there is no longer any hope of, or ideas for, further progress, these bugs eventually have to be closed as INCOMPLETE.
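One low-effort way to keep an eye on this pile is a periodic query for stalled security bugs that nobody has touched in a long time, as candidates for another look or for closing as INCOMPLETE. The sketch below assumes a one-year cutoff, which is an arbitrary choice rather than policy.

```python
# Sketch: stalled security bugs untouched for over a year.
import requests

BUGZILLA = "https://bugzilla.mozilla.org/rest/bug"
API_KEY = "..."

params = {
    "api_key": API_KEY,
    "keywords": "stalled",
    "keywords_type": "allwords",
    "resolution": "---",
    # Clause 1: the bug is in a security group (assumed name filter).
    "f1": "bug_group", "o1": "substring", "v1": "security",
    # Clause 2: nothing has changed on it for over a year (assumed cutoff).
    "f2": "days_elapsed", "o2": "greaterthan", "v2": "365",
    "include_fields": "id,summary,last_change_time",
}

resp = requests.get(BUGZILLA, params=params, timeout=30)
resp.raise_for_status()
for bug in resp.json()["bugs"]:
    print(bug["id"], bug["last_change_time"], bug["summary"])
```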
Triage tools
The Open Selected Links extension can be helpful for opening multiple bugs at once from a buglist during triage. It also has a "View link source" context menu item that can be useful for inspecting testcases and what-not.
The "Bug Age" bookmarklet can be run on any buglist for basic age stats. Find it at this gist.