Security/RiskRatings
Revision as of 02:22, 18 June 2013
This document is under construction
Calculating Risk Ratings
The Security Assurance team calculates risk ratings using a basic methodology that captures the business importance of the action being requested and the impact should that action not be adequately completed.
Because all of our work is tracked in Bugzilla, we use the importance flags that are already available to classify this work so it can be properly prioritized.
When assessing an item using the tables below, we consider the request in the context of each heading and score it against the matching values.
Importance to the business
Rank | Priority | Type | Definition |
5 | P1 | Incident | An active threat vector to Mozilla |
4 | P2 | Mozilla Initiative | A project-wide initiative to achieve a specific goal (e.g. k9o, basecamp) |
3 | P3 | Overall Mozilla Quarterly Goal | Specific quarterly goal that spans functional areas (includes ongoing goals like "keep Firefox safe") |
2 | P4 | Team Quarterly Goal (Any Team) | The specific action or bug is directly related to the publicly stated quarterly goals of that area |
1 | P5 | Age | Reviews that were previously uncategorized but are now older than 'x' (value of 'x' still to be defined) |
0 | None | Other | Any other request that does not fall into the above categories |
Impact
The impact of an item is the potential outcome if the threat or negative action is realized. The table below indicates the severity of the impact and, as examples, what that means across several domains.
Rank | Impact | Operational | User | Privacy | Engineering | Reputation |
5 | blocker | down until fixed or permanently removed | Complete control over the user's device | Violation of Privacy Policy with production data | Complete redesign and rewrite | Negative press in mainstream media |
4 | critical | Significant Outage (intl store) | The ability to execute scripts and code that is sandboxed on the user's device | Violation of Privacy Policy | Reimplementation of core components required. | Negative press in industry media |
3 | major | Moderate Outage, complaints from users | Specific information about specific users can be obtained | Moderate concerns over Privacy issues | New development required to resolve issues. | Negative comments from user base |
2 | normal | Minor Outage, in line with SLAs | User behaviour can be trended | Minor concerns over Privacy issues | Multiple bug fixes and changes required. | Negative comments from community members |
1 | minor | Ops Team Notified | Browser crashes | Unresolved privacy issues in line with Privacy Policy | Platform or Application configuration changes needed. | Negative comments from stakeholders |
Total Priority Score = ((sum of impact scores × priority value) / maximum impact score) × 100
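As a hedged illustration, the formula above can be sketched in Python. Note one assumption: the wiki text does not define "maximum impact score", so this sketch reads it as the highest attainable total across all impact domains (5 per domain); the function and variable names are ours.

```python
def total_priority_score(impact_scores, priority_value, max_impact_per_domain=5):
    """Total Priority Score = ((sum of impact scores * priority value)
    / max impact score) * 100.

    Assumption: "max impact score" is interpreted as the maximum
    attainable total across all impact domains (5 per domain).
    """
    max_total = max_impact_per_domain * len(impact_scores)
    return (sum(impact_scores) * priority_value) / max_total * 100

# Hypothetical example: impacts of 0, 2, 4, 1, 3, 3 across the six
# domains, priority value 4 (P2, Mozilla Initiative):
score = total_priority_score([0, 2, 4, 1, 3, 3], 4)
print(round(score, 1))  # 173.3
```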
What Scores Mean
 | Critical (100+) | High (99-76) | Medium (75-26) | Low (25-0) |
Effort Estimation | 1 Month | 2 Weeks | 2 Days | <1 Day |
Review Type | Group (Scheduled on SecReview Calendar) | Group (Scheduled on SecReview Calendar) | Individual Reviewer | Individual Reviewer |
Artefacts Required | Architecture Diagram, Application Diagram, Data Flow Enumeration, Threat Model, Testing Plan Required | Architecture Diagram, Application Diagram, Data Flow Enumeration, Threat Model, Testing Plan Required | Architecture Diagram attached to bug, Testing Plan Required | Architecture Diagram attached to bug if more than one system is involved, Testing Plan Required |
How Documented | SecReview Wiki | SecReview Wiki | SecReview Wiki -or- in SecReview bug (with indication of no-wiki) | In SecReview Bug |
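The score bands in this table can be bucketed with a small helper; a minimal sketch, with the function name being our own invention:

```python
def score_band(total_priority_score):
    """Map a Total Priority Score onto the bands in the table above:
    Critical (100+), High (99-76), Medium (75-26), Low (25-0)."""
    if total_priority_score >= 100:
        return "Critical"
    if total_priority_score >= 76:
        return "High"
    if total_priority_score >= 26:
        return "Medium"
    return "Low"

print(score_band(173.3))  # Critical
print(score_band(40))     # Medium
```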
Previous Working Copy
Calculating Risk Ratings
The infrastructure security team calculates risk ratings using a basic methodology capturing the likelihood of a threat becoming a successful attack, and the impact should the attack be completed.
When assessing a threat using the tables below, consider the threat in the context of each of the headings, and score each threat for each column. Select the highest score and record that as the impact or likelihood.
Example
Consider the threat "URL Shorteners get a copy of URLs shared by F1 Users" from the Mozilla F1 security review.
Looking at the Likelihood table we see:
- Probability is 5 since it is already happening (Ongoing Issue)
- Technical is also 5 since URL shorteners are relatively easy to enumerate
Going to the Impact table we see that:
- Operational impact is zero since it has no effect on the stability of the service
- User impact is 2 since user behaviour can be trended.
- Privacy impact is 4 since sharing information with 3rd parties is a violation of our privacy policies.
- Financial impact is 1 since it is extremely low cost to resolve the issue
- Engineering impact is 3 since replacing the functionality requires authoring new software.
- Reputation impact is 3 since there may be negative comments from our users who do not wish to use the shortening service
The highest Likelihood score is 5, and the highest impact score is 4 (Privacy).
To calculate the risk score, multiply the likelihood by the impact. For the issue discussed above, the Risk Rating is 5 × 4 = 20.
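The calculation above (highest likelihood score times highest impact score) can be sketched as follows; the function and variable names are ours:

```python
def risk_rating(likelihood_scores, impact_scores):
    """Risk = highest likelihood score * highest impact score."""
    return max(likelihood_scores) * max(impact_scores)

# The F1 URL-shortener example from above:
likelihood = {"probability": 5, "technical": 5}
impact = {"operational": 0, "user": 2, "privacy": 4,
          "financial": 1, "engineering": 3, "reputation": 3}
print(risk_rating(likelihood.values(), impact.values()))  # 20
```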
Likelihood
Likelihood | Probability | Technical |
1 | Shouldn't happen | Advanced Attack with requirement of multiple vulnerabilities to exploit |
2 | Once every few years | Advanced Attack |
3 | Once a year | Moderate difficulty attack vector |
4 | Multiple times a year | Common attack vector, requires manual exploit creation |
5 | Ongoing issue | Common attack vector, easy to mount with available tools |
Impact
The impact of a finding is the potential outcome if the threat is realized. The table below indicates the severity of the impact and what that means across several domains within an organization.
Impact | Operational | User | Privacy | Financial | Engineering | Reputation |
1 | Ops Team Notified | Browser crashes | Unresolved privacy issues in line with Privacy Policy | Low cost to remediate | Platform or Application configuration changes needed. | Negative comments from stakeholders |
2 | Minor Outage, in line with SLAs | User behaviour can be trended | Minor concerns over Privacy issues | Director approval to pay cost to remediate | Multiple bug fixes and changes required. | Negative comments from community members |
3 | Moderate Outage, complaints from users | Specific information about specific users can be obtained | Moderate concerns over Privacy issues | Requires budget changes to remediate | New development required to resolve issues. | Negative comments from user base |
4 | Significant Outage (intl store) | The ability to execute scripts and code that is sandboxed on the user's device | Violation of Privacy Policy | Requires Board review to pay for remediation | Reimplementation of core components required. | Negative press in industry media |
5 | Service will be mothballed. | Complete control over the user's device | Violation of Privacy Policy with Production Data | Extreme cost for remediation (e.g. MoCo/MoFo can't afford to) | Complete redesign and rewrite | Negative press in mainstream media |
Risk Rating Methodologies Used Elsewhere
DREAD from Microsoft (blog post) Uses five categories:
- Damage - how bad would an attack be?
- Reproducibility - how easy is it to reproduce the attack?
- Exploitability - how much work is it to launch the attack?
- Affected users - how many people will be impacted?
- Discoverability - how easy is it to discover the threat?
When a given threat is assessed using DREAD, each category is given a rating. For example, 3 for high, 2 for medium, 1 for low and 0 for none. The sum of all ratings for a given exploit can be used to prioritize among different exploits.
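The DREAD sum described above can be sketched as a simple lookup-and-total; a minimal illustration, with the names being our own:

```python
DREAD_CATEGORIES = ("damage", "reproducibility", "exploitability",
                    "affected_users", "discoverability")

def dread_score(ratings):
    """Sum the per-category ratings (3 high, 2 medium, 1 low, 0 none).
    Higher totals indicate threats to prioritize."""
    return sum(ratings[category] for category in DREAD_CATEGORIES)

# Hypothetical threat rated against the five categories:
threat = {"damage": 3, "reproducibility": 2, "exploitability": 2,
          "affected_users": 3, "discoverability": 1}
print(dread_score(threat))  # 11
```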
OWASP Risk Rating Methodology Similar to Yvan's in that it uses Risk = Likelihood × Impact, but it produces a rating from 0 to 9 (or three groups, 1-3, 4-6, and 7-9, which equate to Low, Medium, and High).
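The three-group bucketing described for the OWASP rating can be sketched as below. This follows the grouping as stated in the text (1-3, 4-6, 7-9); treating a rating of 0 as Low is our assumption, since the stated groups start at 1.

```python
def owasp_group(rating):
    """Bucket a 0-9 OWASP-style rating into the three groups given
    above: 1-3 Low, 4-6 Medium, 7-9 High (0 treated as Low here)."""
    if rating <= 3:
        return "Low"
    if rating <= 6:
        return "Medium"
    return "High"

print(owasp_group(2))  # Low
print(owasp_group(8))  # High
```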