Foundation/AI

'''NOTE: this page has been made obsolete and is no longer maintained. For up to date information on Mozilla's strategy, AI work, OKRs and more, please visit: [https://wiki.mozilla.org/Foundation https://wiki.mozilla.org/Foundation]'''

'''In 2019, Mozilla Foundation decided that a significant portion of its internet health programs would focus on AI topics.''' This wiki provides an overview of the issue as we see it, our theory of change and Mozilla's programmatic pursuits for 2020. Above all, it opens the door to collaboration from others.

Mozilla's most recent work on this topic is a [https://mzl.la/MozillaWhitePaper trustworthy AI white paper] released in May 2020. Accompanying the paper is a [https://docs.google.com/forms/d/e/1FAIpQLSemkMhbjhtugjHUjxVwS0XlAkBlaP-prOm3pUsELPKjkXjupQ/viewform?usp=sf_link request for comments] that is open to the public. You can also watch our January 2020 [https://youtu.be/zzkGhH-4FDs All Hands Plenary] for additional background on this work.

<div style="display:block;-moz-border-radius:10px;background-color:#595cf3;padding:20px;margin-top:20px;">
<div style="display:block;-moz-border-radius:10px;background-color:#FFFFFF;padding:20px;margin-top:20px;">

= Trustworthy AI Brief V0.9 =

''A downloadable version of the issue brief is available here: [https://mzl.la/AIIssueBrief https://mzl.la/AIIssueBrief]. For earlier versions see [https://mzl.la/IssueBriefv01 v0.1], [https://drive.google.com/file/d/1o8bK5qmMYzABk9aEO21bjW3_y1vKuXgB/view?usp=sharing v0.6], and [https://mzl.la/IssueBriefV061 v0.61].''

In 2019, Mozilla Foundation decided that a significant portion of its internet health programs would focus on AI topics. This brief offers an update and opens the door to collaboration from others.
 
'' Summary ''
 
Current debates about AI often skip over a critical question: is AI enriching the lives of human beings?
 
AI has immense potential to improve our quality of life: teeing up the perfect song; optimizing the delivery of goods; solving medical mysteries. But adding AI to the digital products we use every day can equally compromise our security, safety and privacy. Time and again, concerning stories about AI, big data and targeted marketing are hitting the news. The public is losing trust in big tech yet doesn’t have any alternatives. There is much at stake.
 
Mozilla believes we need to ensure that the use of AI in consumer technology enriches the lives of human beings rather than harms them. We need to build more trustworthy AI. For us, this means two things: personal agency is a core part of how AI is built and integrated, and corporate accountability is real and enforced. This will take AI in a direction different from where it’s headed now.
 
The best way to make this happen is to work like a movement: collaborating with citizens, companies,
technologists, governments and organizations around the world working to make ‘trustworthy AI’ a
reality. This is Mozilla’s approach. We already have collaborative projects underway in four areas:
 
* Helping developers build more trustworthy AI, collaborating with Pierre Omidyar and others to put $3.5 million behind professors integrating ethics into computer science curriculum.
 
* Generating interest and momentum around trustworthy AI technology, backing innovators working on ideas like data trusts and working on open source voice technology.
 
* Building consumer demand -- and encouraging consumers to be demanding, starting with resources like our Privacy Not Included guide and pushing platforms to tackle misinformation.
 
* Encouraging governments to promote trustworthy AI, including work by Mozilla Fellows to map out a policy and litigation agenda that taps into current momentum in Europe.
 
These projects are just a sample -- and just a start -- of how we hope to move the ball forward through this collaborative strategy. We have more in the works.

Mozilla’s roots are as a community-driven organization that works with others. We are constantly looking for allies and collaborators to work with on our trustworthy AI efforts. As a part of this, we are looking for AI experts to join our program advisory board.
 
 
'' What is trustworthy? ''
 
Our definition of trustworthy AI is encompassed by two key concepts: agency and accountability. We
will know we have built and designed AI that is serving rather than harming humanity when:
 
All AI is designed with personal agency in mind. Privacy, transparency and human wellbeing are key considerations.
 
and
 
Companies are held to account when their AI systems make discriminatory decisions, abuse data, or make people unsafe.
 
Mozilla is a part of a growing chorus of voices calling for a better direction for AI. Dozens of groups
have put out principles and guidelines describing what this might look like. We’re excited to see this
momentum and to work with others to make this vision a reality. See AI goals framework in appendix.
 
 
'' What’s at stake? ''
 
AI is playing a role in everything from directing our attention to deciding who gets mortgages to
solving complex human problems. This will have a big impact on humanity. The stakes include:
 
* Privacy: Our personal data powers everything from traffic maps to targeted advertising.
Trustworthy AI should let people decide how their data is used and what decisions are made with it.
 
* Fairness: We’ve seen time and again that historical bias can show up in automated decision making. To effectively address discrimination, we need to look closely at the goals and data that fuel our AI.
 
* Trust: Algorithms on sites like YouTube often push people towards extreme, misleading content.
Overhauling these content recommendation systems could go a long way to curbing misinformation.


* Safety: Experts have raised the alarm that AI could increase security risks and cyber crime. Platform developers will need to create stronger measures to protect our data and personal security.


* Transparency: Automated decisions can have huge personal impact, yet the reasons for decisions
are often opaque. We need breakthroughs in explainability and transparency to protect users.


Many people do not understand how AI regularly touches our lives and feel powerless in the face of these systems. Mozilla is dedicated to making sure the public understands that we can and must have a say in when machines are used to make important decisions – and shape how those decisions are made.
</div>
</div>


<div style="display:block;-moz-border-radius:10px;background-color:#b7b9fa;padding:20px;margin-top:20px;">
<div style="display:block;-moz-border-radius:10px;background-color:#FFFFFF;padding:20px;margin-top:20px;">
= Background: Mozilla and Trustworthy AI =


In 2019, Mozilla Foundation decided that a significant portion of its internet health programs would focus on AI topics. We launched that work a little over a year ago, with a post arguing that [https://marksurman.commons.ca/2019/03/06/mozillaaiupdate/ if we want a healthy internet -- and a healthy digital society -- we need to make sure AI is trustworthy]. AI, and the large pools of data that fuel it, are central to how computing works today. If we want apps, social networks, online stores and digital government to serve us as people -- and as citizens -- we need to make sure the way we build with AI has things like privacy and fairness built in from the get go.

Since writing that post, a number of us at Mozilla -- along with literally hundreds of partners and collaborators -- have been exploring the questions: What do we really mean by ‘trustworthy AI’? And what do we want to do about it?


'''How do we collaboratively make trustworthy AI a reality?'''

We think part of the answer lies in collaborating and gathering input. In May 2020, we launched a request for comment on v0.9 of Mozilla’s Trustworthy AI Whitepaper -- and on the accompanying theory of change (see below) that outlines the things we think need to happen.


'' What is trustworthy AI and why? ''

We have chosen to use the term AI because it resonates with a broad audience, is used extensively by industry and policymakers, and is currently at the center of critical debate about the future of technology. However, we acknowledge that the term has come to represent a broad range of fuzzy, abstract ideas. Mozilla’s definition of AI includes everything from algorithms and automation to complex, responsive machine learning systems and the social actors involved in maintaining those systems.

Mozilla is working towards what we call trustworthy AI, a term used by the European High Level Expert Group on AI. '''Mozilla defines trustworthy AI as AI that is demonstrably worthy of trust. Privacy, transparency, and human well-being are key considerations, and there is accountability for harms.'''

Mozilla’s theory of change (below) is a detailed map for arriving at more trustworthy AI. It focuses on AI in consumer technology: general purpose internet products and services aimed at a wide audience. This includes products and services from social platforms, apps, and search engines, to e-commerce and ride sharing technologies, to smart home devices, voice assistants, and wearables.


'' How do we move the ball forward? ''

1. Help developers build more trustworthy AI.

Goal: developers increasingly build things using trustworthy AI guidelines and technologies.

What we’re doing now: working with professors at 17 universities across the US to develop curriculum on ethics and responsible design for computer science undergraduates.

Where we need help: we are looking for partners to scale this work in Europe and Asia, and to find ways to work with developers, designers and project managers already working in the industry.

2. Generate interest and momentum around trustworthy AI technology.

Goal: trustworthy AI products and services (personal agents, data trusts, offline data, etc.) are increasingly embraced by early adopters and attract investment.

What we’re doing now: developing open source voice technology for others to build on, and supporting Mozilla Fellows and others doing early pilot work on concepts like data trusts.

Where we need help: we’re looking for people with novel yet pragmatic ideas on how to make trustworthy AI a reality. We also want to meet and learn from investors in this space.

3. Build consumer demand -- and encourage consumers to be demanding.

Goal: consumers choose trustworthy products when available and call for them when they aren’t.

What we’re doing now: highlighting trustworthy products through our Privacy Not Included buyer’s guide, and pushing platforms like YouTube and PayPal for AI and data related product changes.

Where we need help: we’re looking for more trustworthy products to highlight, and for people both inside and outside major tech companies who can help us drive product improvements.

4. Encourage governments to promote trustworthy AI.

Goal: new and existing laws are used to make the AI ecosystem more trustworthy.

What we’re doing now: building more momentum for trustworthy AI and better data protection in Europe through Mozilla Fellows, partner orgs and lobbying across the region.

Where we need help: we’re looking for additional partners to help us sharpen our thinking on where we can have the most impact on the current political window of opportunity in Europe.




'' About Mozilla ''

Mozilla exists to guard the open nature of the internet and to ensure it remains a global public resource, open and accessible to all. Founded as a community open source project in 1998, Mozilla currently consists of two organizations: the 501(c)3 Mozilla Foundation, which leads our movement building work; and its wholly owned subsidiary, the Mozilla Corporation, which leads our market-based work. The two organizations work in concert with each other and a global community of tens of thousands of volunteers under the single banner: Mozilla.

The ‘trustworthy AI’ activities outlined in the white paper are primarily a part of the movement activities housed at the Mozilla Foundation -- efforts to work with allies around the world to build momentum for a healthier digital world. These include: thought leadership efforts like the Internet Health Report and the annual Mozilla Festival; $7M in fellowships and awards for technologists, policymakers, researchers and artists; and advocacy campaigns to mobilize public awareness and demand for more responsible tech products. Approximately 60% of the $25M/year invested in these efforts is focused on trustworthy AI.

Mozilla’s roots are as a collaborative, community driven organization. We are constantly looking for allies and collaborators to work with on our trustworthy AI efforts.

For more on Mozilla’s values, see: [https://www.mozilla.org/en-US/about/manifesto/]. Our Trustworthy AI goals framework builds on key manifesto principles, including agency (principle 5), transparency (principle 8) and building an internet that enriches the lives of individual human beings (principle 3).

For more on Trustworthy AI programs, see [https://wiki.mozilla.org/Foundation/AI https://wiki.mozilla.org/Foundation/AI].
 
<div style="display:block;-moz-border-radius:10px;background-color:#666666;padding:20px;margin-top:20px;">
<div style="display:block;-moz-border-radius:10px;background-color:#FFFFFF;padding:20px;margin-top:20px;">


= Theory of Change =

The Theory of Change update will enable Mozilla and our allies to take both coordinated and decentralized action in a shared direction, towards collective impact on trustworthy AI.

It seeks to define:
* Tangible changes in the world we and others will pursue (aka long term outcomes)
* Strategies that we and others might use to pursue these outcomes
* Results we will hold ourselves accountable to

Many people have tried to come up with the right word to describe what 'good AI' looks like -- ethical, responsible, healthy. The term we find most useful is 'trustworthy AI', as used by the European High Level Expert Group on AI. Mozilla's simple definition is:

"AI that is demonstrably worthy of trust. Privacy, transparency and human well-being are key design considerations - and there is accountability for any harms that may be caused. This applies not just to AI systems themselves, but also the deployment and results of such systems."

We plan to use this term extensively, including in our theory of change and strategy work.

[[File:MoFo AI Theory of Change (ToC) – Landscape Design.jpg|thumb|center|alt|800px]]
</div>
</div>


<div style="display:block;-moz-border-radius:10px;background-color:#cccccc;padding:20px;margin-top:20px;">
<div style="display:block;-moz-border-radius:10px;background-color:#FFFFFF;padding:20px;margin-top:20px;">


= 2020 OKRs =

MoFo 2020 OKRs [draft - March 27, 2020]

The following outlines the organization wide objectives and key results (OKRs) for Mozilla Foundation for 2020.
Theory of change

These objectives have been developed as a part of a year long strategy process that included the creation of a multi-year theory of change for Mozilla’s trustworthy AI work. The majority of objectives are tied directly to one or more short term (1 - 3 year) outcomes in the theory of change.

Partnerships

Mozilla Foundation’s overall focus is on growing the movement of organizations around the world committed to building a healthier internet. A key assumption behind this work is that Mozilla maintains a small staff that is skilled at partnering, with most of its resources going into networking and supporting individuals and organizations within the movement. The 2020 OKRs include a strong focus on deepening our partnership practice, both through the movement-building objective (OKR 4) and through a focus on partnership across all other objectives.
 
Below is a bulleted list of our OKRs. You can read more about them [https://wiki.mozilla.org/Foundation/2020/OKRs here].
 
1. Thought Leadership <br />
'''Short Term Outcome:''' Clear "Trustworthy AI" guidelines emerge, leading to new and widely accepted industry norms. <br />
'''2020 Objective:''' Test out our theory of change in ways that both give momentum to other orgs taking concrete action on trustworthy AI and establish Mozilla as a credible thought leader. <br />
'''Key Results:''' <br />
* Publish a whitepaper and theory of change
* 250 people and organizations participate in mapping to show who is working on key elements of trustworthy AI and offer feedback on the whitepaper
* 25 collaborations with partners working on concrete projects that align with short term outcomes in the theory of change
 
2. Data Stewardship <br />
'''Short Term Outcome:''' More foundational trustworthy AI technologies emerge as building blocks for developers (e.g. data trusts, edge data, data commons). <br />
'''2020 Objective:''' Increase the number of data stewardship innovations that can accelerate the growth of trustworthy AI. <br />
'''Key Results:''' <br />
* $3 million raised to support bold, multi-year, cross movement initiatives on data stewardship as an indicator of growing philanthropic support in this area.
* 10 awards or fellowships for prototypes or other concrete exploration re: data stewardship.
* 4 concentric “networks of practice” engaging with Data Futures Lab
 
3. Consumer Power <br />
'''Short Term Outcome:''' Citizens are increasingly willing and able to pressure and hold companies accountable for the trustworthiness of their AI. <br />
'''2020 Objective:''' Mobilize an influential consumer audience using pivotal moments to pressure companies to make ‘consumer AI’ more trustworthy. <br />
'''Key Results:''' <br />
* 3m page views to content on Mozilla channels, a majority of which focuses on trustworthy AI.
* 75k new subscribers drawn from sources (partnerships, contextual advertising, etc.) oriented towards people ages 18-35.
* 25k people share information with us (stories, browsing data, etc.) in order to gather evidence about how AI currently works and what changes are needed.
 
4. Movement Building <br />
'''Short Term Outcome:''' A growing number of civil society actors are promoting trustworthy AI as a key part of their work. <br />
'''2020 Objective:''' Partner with diverse movements to deepen intersections between their primary issues and internet health, including trustworthy AI, so that we increase shared purpose. <br />
'''Key Results:''' <br />
* 30% increase in partners with whom we have jointly published, launched, or hosted something that includes shared approaches to their issues and internet health (e.g. shared language, methodologies, resources or events).
* 75% of partners from these diverse movements report deepening intersection between their issues and internet health/AI.
* 4 new partnerships in the Global South report deepened intersection between their work and ours.
 
'''Narratives'''

'''1. Thought Leadership:''' In a world awash in AI ethics guidelines, charters and manifestos, more and more people are asking: how do we turn all this talk into action?

In 2019, Mozilla focused on a) identifying actionable trustworthy AI patterns across a wide variety of guidelines plus the Mozilla Manifesto and b) mapping out these patterns plus a set of concrete technical, advocacy and policy steps that could be taken to make trustworthy AI a reality. In 2020, we will share the output of this work, map the work already underway that aligns with the theory of change, and support the work of people who are taking action in the areas we describe in our research. We will model moving from concept to action ourselves by collaborating with others on concrete actions related to the short-term outcomes in the theory of change. ‘Concrete action’ is work that will result in outputs that enable others to build and innovate, including code, data, curriculum, law, litigation or other real world activity. The work will not only allow us to move from talk to action -- it will also provide feedback and learning that will let us hone the thinking in the theory of change and related research so it can be used more widely.

'''2. Data Stewardship:''' In the era of machine learning, who controls our data and what they do with it is a big deal. It determines not only what is possible with AI but also who knows what, who can innovate, who makes money, and what decisions get made. Right now, the biggest pools of data sit with the tech platforms and other companies that underpin our digital lives. What if this wasn’t the case? What if large pools of data were stewarded in a way that would benefit and protect the people who created the data in the first place? Or that would collectively benefit the general public? Emerging ideas like data trusts, data cooperatives and data commons aim to do exactly this: to shift the power dynamic around data.

In 2020, Mozilla and its partners will explore whether data stewardship models like these have the potential to accelerate the growth of trustworthy AI, offering everyone from developers to policy makers a new set of tools to use in their work. In the immediate term, Mozilla’s role in this work will be to a) provide an overview of trends and opportunities and b) connect and fund people working on innovations in this space. These innovations may include laws, contracts, software, services and business models that put data stewardship concepts into concrete action. Over the longer term, Mozilla could enter into the business of being a data steward itself, helping members of the public collectively manage their relationships with platforms and others who use their data.

'''3. Consumer Power:''' As AI-enabled technology becomes increasingly pervasive, we have a critical window in which to educate and more deeply engage people to advocate for trustworthy AI. In their role as consumers, people can illustrate the demand for trustworthy AI and its economic potential, accelerating action by developers, investors and policymakers. Younger adults (18-35 yrs) have disproportionate power to influence company behavior given their current and projected purchasing power. Mozilla’s current audience (given our existing measures) is predominantly older, and we need to diversify our audience as we expand our reach.

In 2020, we’ll increase awareness of trustworthy AI among key consumer audiences and then mobilize this cohort into deeper engagement on the issue. We’ll use pivotal moments (elections, holidays, etc.) among other tactics to show how AI impacts people and direct those who seek change to a ‘hub’ for information, action and connection around ‘trustworthy AI’. We’ll focus our consumer mobilization on companies that produce AI-enabled consumer technologies widely available in the US/EU, including recommendation engines, targeted advertising and voice assistants. To deepen engagement, we’ll recruit people to gather evidence with us about the role and influence of algorithms.

'''4. Movement Building:''' The internet health movement cannot succeed if it is siloed. Internet health and the likelihood of trustworthy AI increase as the need for both is prioritized by greater numbers of people. Internet growth rates in the global south measure over 10,000% over the last decade; in Europe and North America, where growth may now be slower, penetration rates hover around 90%. Regardless of region, digital platforms have become essential tools for 21st-century social movements.

Interdependence is geographically inherent to the internet and a tenet upon which the efficacy of social movements relies. Building models of engagement that value geographic and social interdependence, reaching users in the fastest growing regions, and engaging those users already self-organized for purpose-driven activity increases the likelihood of internet health becoming a priority more broadly, thus ensuring our success. In 2020 Mozilla will prioritize partnering with constituencies where we may deepen our understanding and action toward common cause. Diverse movements can originate from expanded geographies, particularly the global south and east, and, per the theory of change, from human rights and consumer rights movements and those sectors historically excluded from the progression of internet health or artificial intelligence. “Partnership” is a synchronous engagement for growth, learning, benefit and change. Partnering can include funding, training, resourcing, research collaboration, united campaigning and convening.

'''5. Org Effectiveness:''' Our 2020 objective is to update our organizational models and capabilities so that our strategy and people can succeed, and our ambition can grow over multiple years. Over the last 5 years, MoFo has moved through a long period of growth and change. Through this change, we have built a solid set of programs focused on internet health and movement building, and have mapped out a vision for our work around trustworthy AI. As we follow through on this work, we need to build an increasingly high performing, effective organization with the supports and resources required to drive impact through these programs.

We have strong foundations in place, but in many cases we are still living with systems and models from a previous era. It is imperative that we understand the needs and shape of the Foundation now, and put updated approaches in place to confidently set us up to execute on our strategy for years to come. These include a long term funding model to ensure sustainability, new systems for gaining organizational insight and measuring performance, support for ensuring our people have the skills needed to excel in their roles and deliver on our goals, and a clear and transparent framework for decision-making.

= Timeline =

Trustworthy AI fits within the internet health movement building strategy Mozilla launched in 2016. Over the last 18 months we've been working to figure out how trustworthy AI can be a central focus in driving this movement forward.

Below is a timeline of key steps and documents from this process. They collectively tell the story of how Mozilla got to this goal and why.

January 2016 - [https://wiki.mozilla.org/MoFo_2020 Movement Building Strategy Launch]
* In January 2016, Mozilla launched a movement building strategy. The goal was to combine our programmatic work to catalyze a movement for a healthier internet.

January 2018 - [https://drive.google.com/file/d/1Bl-h9d1IrhBXOskacm8eYkPteNMdxlN2/view Mozilla Strategy Brief & Theory of Change]
* After two years of implementing the internet health movement building strategy, Mozilla released updated language and proposed a rough theory of change.

September 2018 - [https://drive.google.com/file/d/14izVPHwpy4hZGt4xclHpmaXr15pME5PI/view Mozilla 2016-2018 Program Evaluation]
* The detailed strategy review resulted in many key learnings about what was working and what was not. Chief among them was that the movement building theory of change was too broad and that specific, measurable direction underneath this umbrella strategy was necessary. The takeaway was that Mozilla would choose an impact goal to drive this work forward.

November 2018 - [https://drive.google.com/file/d/1cExHEoEpaHKJhgTKWp2h50xvXMWeY76E/view Short Listed Impact Goals]
* Summarizes our recommendation to Mozilla's Board of Directors on why the impact goal focus should be 'better machine decision making' (now, trustworthy AI).
* Based on the strategy evaluation and a program review, Mozilla looked at a number of options for impact goals. We narrowed the short list to four goals outlined in this document, one of which was "better machine decision making" (which grew into trustworthy AI).

November 2018 - [https://mzl.la/IssueBriefv01 Better machine decision making issue brief]
* This is the issue brief we wrote to describe what we meant by better machine decision making at the time; we used it to start consulting with our allies and partners. It describes both the issue and the beginning of a roadmap on areas for improvement to get us to 'better'.

November 2018 - [https://medium.com/read-write-participate/slowing-down-asking-questions-looking-ahead-265f6b99810d Slowing Down, Asking Questions, Looking Ahead]
* In November we announced 'better machine decision making' as our goal. This kicked off a period of engagement and consultation.

March 2019 - [https://marksurman.commons.ca/2019/03/06/mozillaaiupdate/ Mozilla, AI and internet health: an update]
* By March, we had adapted the language of our impact goal from better machine decision making to trustworthy AI. This blog post draws direct connections between our movement building theory of change and how trustworthy AI fits into it. It answers the question: how will we shape the agenda, rally citizens and connect leaders around trustworthy AI?

April 2019 - [https://marksurman.commons.ca/2019/04/23/why-ai-consumer-tech/ Why AI + consumer tech?]
* In April 2019 we narrowed in on consumer technology as the key area where Mozilla can have the biggest impact in the AI field.

May 2019 - [https://marksurman.commons.ca/2019/05/13/consider-this-ai-and-internet-health/ Consider this: AI and Internet Health]
* Though the focus had been narrowed to consumer technology, we wanted to get even more specific about the impact we wanted to make. This blog explores which aspects of consumer technology Mozilla considered focusing on. The list included: accountability, agency, rights and open source.

August 2019 - [https://marksurman.commons.ca/2019/08/28/update-digging-deeper-on-trustworthy-ai/ Update: Digging Deeper on ‘Trustworthy AI’]
* By August 2019 we had shared our long term outcomes and our long term trustworthy AI goal. We had landed on agency and accountability as our outcomes and "in a world of AI, consumer technology enriches the lives of human beings" as our goal.

January 2020 - [https://youtu.be/zzkGhH-4FDs All Hands Plenary]
* By January 2020 we were starting to see this work show up in our programs. Here, Mozilla staff and fellows talk about how their work is helping us towards our trustworthy AI goal.

March 2020 - [https://foundation.mozilla.org/en/blog/privacy-pandemics-and-ai-era/ Privacy, Pandemics and the AI Era]
* This blog explores the connection between the COVID-19 pandemic and the technological solutions being proposed. These issues are central to the long term impact of AI.

April 2020 - [https://marksurman.commons.ca/2020/04/22/privacy-norms-and-the-pandemic/ Privacy Norms and the Pandemic]
* Similar to the post above, here we explore the long term data governance implications of technology deployment during the pandemic.

May 2020 - [https://drive.google.com/file/d/1LD8pBC-cu7bkvU-9v-DZEyCmpWED7W7Z/view Mozilla v0.9 White Paper on Trustworthy AI]
* In May 2020 Mozilla released a white paper on our approach to trustworthy AI. The paper talks about how industry, regulators and citizens of the internet can work together to build more agency and accountability into our digital world. It also talks briefly about some of the areas where Mozilla will focus, knowing that Mozilla is only one small actor in the bigger picture of shifting the AI tide.

May 2020 - [https://marksurman.commons.ca/2020/05/14/request-for-comment-how-to-collaboratively-make-trustworthy-ai-a-reality/ Request for comment: how to collaboratively make trustworthy AI a reality]
* Following the white paper launch, we opened a request for comments inviting our allies and community to give feedback on this thinking. We welcome you as a part of that process. You can add your voice by writing a response to what you read, reaching out to us, or [https://docs.google.com/forms/d/e/1FAIpQLSemkMhbjhtugjHUjxVwS0XlAkBlaP-prOm3pUsELPKjkXjupQ/viewform filling out this form].

You can read more about the background for this project [https://wiki.mozilla.org/Foundation/AIBackgroundWork here].
