Foundation/2021/OKRs

From MozillaWiki

Mozilla Foundation 2021 OKRs

Mozilla Foundation is fueling a movement for a healthy internet.

This document outlines the Mozilla Foundation’s organizational objectives and key results (OKRs) for 2021.

  • Objectives = “What do we want to do and why?”
  • Key Results = “How will we know if we’re successful?”

These objectives build on our work in 2020 and are linked to our multi-year Trustworthy AI theory of change, with links to specific short-term outcomes.

In 2021, the Foundation will also continue to increase its focus on Diversity, Equity and Inclusion (DEI), both internally and externally. Most OKRs include DEI-related activities. The Foundation is also developing a multi-year DEI strategy that builds on past efforts, including our 2020 Racial Justice Commitments.


Related documents:

Org-wide OKRs

2021 OKRs are informed by Mozilla Foundation's three-year narrative arcs.

OKR 1: Making AI Transparency the Norm: Test AI transparency "best practices" to increase adoption by builders and policymakers.

Responsible: Ashley Boyd

Key Results:
1.1 X# of citations of Mozilla’s AI transparency best practices by builders.

Motivation: Our work on misinfo and political ads established us as a champion of AI transparency. In 2021, we will broaden this work by (a) working with builders to create a list of AI transparency best practices and (b) creating a transparency rating rubric for Privacy Not Included.

Sample Activities:

  • Develop a taxonomy + gap analysis of ‘AI + transparency’ best practices in consumer internet tools and platforms (H1).
  • Involve builders in the research, iteration and sharing of best practices.
  • Develop an AI transparency rating rubric for Privacy Not Included based on these best practices (H2).
KR Lead: Eeva Moore
1.2 25 citations of Mozilla data/models by policymakers as part of AI transparency work.

Motivation: Projects like Regrets Reporter and Firefox Rally show citizens will engage in efforts to make platforms more transparent. In 2021, we want to test whether evidence gathered from this type of research is effective in driving enforcement and policy change related to AI transparency.

Sample Activities:

  • Recruit additional YouTube Regrets users with movement partners to generate additional data and reporting, particularly in regions where AI transparency is gaining momentum.
  • Use YouTube Regrets findings to engage policymakers in key jurisdictions to demonstrate the need for AI transparency policies.
  • Use the Rally platform to run up to five in-depth research studies by Mozilla and others that demonstrate the value of transparency in guiding decisions re: misinfo + AI.
KR Lead: Brandi Geurkink
1.3 5 ‘best practice’ reports published to provide evidence about the value of AI transparency to consumers.

Motivation: Our hope is that more AI transparency will give people more agency -- and that this is something people want. However, we don’t know that this is true. In 2021, we want to fund or run a set of experiments to see whether consumers value AI transparency in practice.

Sample Activities:

  • Generate best practice reports based on investments such as the PNI rubric (developed in H1) and Umang Bhatt’s research on transparency in a consumer product (e.g., Spotify).
  • Feature findings from these and other reports in Internet Health Report, MozFest, D+D, etc.
  • Partner with cities to test efficacy of AI registry and transparency tools (pending funding).
  • Model + test transparent recommendation engine designs based on what we learned from YouTube Regrets (pending funding).
KR Lead: Becca Ricks

OKR 2: Modeling Good Data Stewardship: Accelerate more equitable data governance alternatives to advance trustworthy AI.

Responsible: J Bob Alotta

Key Results:
2.1 7 projects tested with real users to identify building blocks for viable data stewardship models.

Motivation:
We have a number of projects funded or underway to test our alternative data stewardship models. In 2021, we want to (a) design, implement, test and advance these projects; and (b) establish a set of ‘success criteria’ for these projects in the process.

Sample Activities:

  • Develop and document success criteria for Data Futures Lab, Common Voice, MoFo CRM.
  • Document Data Futures Lab grantee partners’ successes and failures, and feed this into the development of criteria for replicability.
  • Take over stewardship of the Common Voice project, modeling and documenting our thinking on how citizen-built data commons for AI can work.
  • Use the CRM update project to develop new MoFo data governance processes, a Pan-Mozilla data sharing framework, and ways to model citizen-centric approaches to data stewardship.
KR Lead: Mehan Jayasuriya
2.2 5 regulatory jurisdictions utilize our input to enable collective data rights for users.

Motivation:
While many jurisdictions are giving people new data rights, there are few places where people can pursue these rights collectively or are protected from collective harm. In 2021, we want to develop -- and advocate for -- concrete policy proposals related to collective data rights.

Sample Activities:

  • Work with Data Futures Lab grantees to use existing regulatory frameworks collectively on behalf of their constituents (e.g., labour + consumers).
  • Set up a data rights policy working group (team/fellows) to develop a position on -- and advocate for -- collective data rights in regulations in EU, UK, Canada, and India.
  • Develop recommendations on collective data rights for inclusion in U.S. platform accountability approaches being considered by the new administration.
KR Lead: Mathias Vermeulen
2.3 6 stakeholder groups established as constituents of the Data Futures Lab.

Motivation:
We now have a ‘proto’ Data Futures Lab in place. In 2021, we will fully launch the Lab, creating a kinetic point of connection across many disciplines and geographies. As more researchers, policymakers, activists, designers, developers, legal experts and funders join, the Lab’s momentum, funding, expertise and impact will grow.

Sample Activities:

  • Launch Lab, hire staff, establish stakeholder engagement.
  • Make Infrastructure Fund grants to stakeholders with high motivation for alternative data governance models.
  • Source a second cohort of Prototype Fund grantee partners.
  • Core funders engage other funders, who join the collaborative; non-tech funders invest in the Lab.
  • Convene developers, builders, journalists, researchers and activists.
KR Lead: Kasia Odrozek

OKR 3: Mitigating AI bias: Accelerate the impact of people working to mitigate bias in AI.

Responsible: Ashley Boyd

Key Results:

3.1 X% increase in investments for AI + bias grantees. (KR Lead: Jessica Gonzales)
3.2 X# people participate (share stories, donate data, etc.) in projects on mitigating bias in AI as a result of Mozilla promotion. (KR Lead: Xavier Harding)
3.3 Pipeline of additional projects Mozilla can support to mitigate bias in AI established. (KR Lead: Roselyn Odoyo)

OKR 4: Growing Across Movements: Strengthen partnership with diverse movements to deepen intersections between their primary issues and ours, including trustworthy AI.

Responsible: J Bob Alotta

Key Results:

4.1 Phase 1 landscape analysis is complete and we have identified partner movements. (KR Lead: Hanan Elmasu)
4.2 The Foundation’s African Mradi workstream, centering local expertise, is developed. (KR Lead: TBD, pending hire)
4.3 Internal operations are synchronized to strengthen our ability to partner strategically with external organizations. (KR Lead: Lindsey Frost Dodson)

OKR 5: Organizational Effectiveness: Enhance our organizational systems and capabilities to support more data-informed decision-making.

Responsible: Angela Plohman

Key Results:

5.1 2022 planning and budget decisions are driven by systematic evaluation of our work in 2021. (KR Lead: Lainie DeCoursy)
5.2 100% of teams have workflows and reports supported by our integrated CRM. (KR Lead: Jackie Lu)
5.3 Complete data analysis that reveals the best approaches for converting ‘subscribers’ to ‘donors’. (KR Lead: Will Easton)