'''NOTE: this page has been made obsolete and is no longer maintained. For up to date information on Mozilla's strategy, AI work, OKRs and more, please visit: [https://wiki.mozilla.org/Foundation https://wiki.mozilla.org/Foundation]'''
<div style="display:block;-moz-border-radius:10px;background-color:#595cf3;padding:20px;margin-top:20px;">
<div style="display:block;-moz-border-radius:10px;background-color:#FFFFFF;padding:20px;margin-top:20px;">
'''In 2019, Mozilla Foundation decided that a significant portion of its internet health programs would focus on AI topics.''' This wiki provides an overview of the issue as we see it, our theory of change and Mozilla's programmatic pursuits for 2020. Above all, it opens the door to collaboration from others.

Mozilla's most recent work on this topic is a [https://mzl.la/MozillaWhitePaper trustworthy AI white paper] released in May, 2020. Accompanying the paper is a [https://docs.google.com/forms/d/e/1FAIpQLSemkMhbjhtugjHUjxVwS0XlAkBlaP-prOm3pUsELPKjkXjupQ/viewform?usp=sf_link request for comments] that is open to the public. You can also watch our January, 2020 [https://youtu.be/zzkGhH-4FDs All Hands Plenary] for additional background on this work.
</div>
</div>
<div style="display:block;-moz-border-radius:10px;background-color:#b7b9fa;padding:20px;margin-top:20px;">
<div style="display:block;-moz-border-radius:10px;background-color:#FFFFFF;padding:20px;margin-top:20px;">
= Background: Mozilla and Trustworthy AI =
In 2019, Mozilla Foundation decided that a significant portion of its internet health programs would focus on AI topics. We launched that work a little over a year ago, with a post arguing that [https://marksurman.commons.ca/2019/03/06/mozillaaiupdate/ if we want a healthy internet -- and a healthy digital society -- we need to make sure AI is trustworthy]. AI, and the large pools of data that fuel it, are central to how computing works today. If we want apps, social networks, online stores and digital government to serve us as people -- and as citizens -- we need to make sure the way we build with AI has things like privacy and fairness built in from the get-go.

Since writing that post, a number of us at Mozilla -- along with literally hundreds of partners and collaborators -- have been exploring the questions: What do we really mean by ‘trustworthy AI’? And what do we want to do about it?

'''How do we collaboratively make trustworthy AI a reality?'''

We think part of the answer lies in collaborating and gathering input. In May 2020, we launched a request for comment on v0.9 of Mozilla’s Trustworthy AI Whitepaper -- and on the accompanying theory of change (see below) that outlines the things we think need to happen.

'' What is trustworthy AI and why? ''

We have chosen to use the term AI because it is a term that resonates with a broad audience, is used extensively by industry and policymakers, and is currently at the center of critical debate about the future of technology. However, we acknowledge that the term has come to represent a broad range of fuzzy, abstract ideas. Mozilla’s definition of AI includes everything from algorithms and automation to complex, responsive machine learning systems and the social actors involved in maintaining those systems.

Mozilla is working towards what we call trustworthy AI, a term used by the European High Level Expert Group on AI. '''Mozilla defines trustworthy AI as AI that is demonstrably worthy of trust. Privacy, transparency, and human well-being are key considerations and there is accountability for harms.'''

Mozilla’s theory of change (below) is a detailed map for arriving at more trustworthy AI. It focuses on AI in consumer technology: general purpose internet products and services aimed at a wide audience. This includes products and services from social platforms, apps, and search engines, to e-commerce and ride sharing technologies, to smart home devices, voice assistants, and wearables.

'' About Mozilla ''

The ‘trustworthy AI’ activities outlined in the white paper are primarily a part of the movement activities housed at the Mozilla Foundation -- efforts to work with allies around the world to build momentum for a healthier digital world. These include thought leadership efforts like the Internet Health Report and the annual Mozilla Festival, fellowships and awards for technologists, policymakers, researchers and artists, and advocacy to mobilize public awareness and demand for more responsible tech products.

Mozilla’s roots are as a collaborative, community driven organization. We are constantly looking for allies and collaborators to work with on our trustworthy AI efforts.

For more on Mozilla’s values, see the [https://www.mozilla.org/en-US/about/manifesto/ Mozilla Manifesto]. Our Trustworthy AI goals framework builds on key manifesto principles, including agency (principle 5), transparency (principle 8) and building an internet that enriches the lives of individual human beings (principle 3).

For more on Trustworthy AI programs, see [https://wiki.mozilla.org/Foundation/AI https://wiki.mozilla.org/Foundation/AI].
<div style="display:block;-moz-border-radius:10px;background-color:#666666;padding:20px;margin-top:20px;">
<div style="display:block;-moz-border-radius:10px;background-color:#FFFFFF;padding:20px;margin-top:20px;">
= Theory of Change =
The Theory of Change update will enable Mozilla & our allies to take both coordinated and decentralized action in a shared direction, towards collective impact on trustworthy AI.

It seeks to define:
* Tangible changes in the world we and others will pursue (aka long term outcomes)
* Strategies that we and others might use to pursue these outcomes
* Results we will hold ourselves accountable to

Many people have tried to come up with the right word to describe what 'good AI' looks like -- ethical, responsible, healthy.

The term we find most useful is 'trustworthy AI', as used by the European High Level Expert Group on AI. Mozilla's simple definition is:

"AI that is demonstrably worthy of trust. Privacy, transparency and human well being are key design considerations - and there is accountability for any harms that may be caused. This applies not just to AI systems themselves, but also the deployment and results of such systems."

We plan to use this term extensively, including in our theory of change and strategy work.
[[File:MoFo AI Theory of Change (ToC) – Landscape Design.jpg|thumb|center|alt|800px]]
</div>
</div>
<div style="display:block;-moz-border-radius:10px;background-color:#cccccc;padding:20px;margin-top:20px;">
<div style="display:block;-moz-border-radius:10px;background-color:#FFFFFF;padding:20px;margin-top:20px;">
= 2020 OKRs =
MoFo 2020 OKRs [draft - March 27, 2020]

The following outlines the organization-wide objectives and key results (OKRs) for Mozilla Foundation for 2020.

'' Theory of change ''

These objectives have been developed as a part of a year-long strategy process that included the creation of a multi-year theory of change for Mozilla’s trustworthy AI work. The majority of objectives are tied directly to one or more short-term (1 - 3 year) outcomes in the theory of change.

'' Partnerships ''

Mozilla Foundation’s overall focus is on growing the movement of organizations around the world committed to building a healthier internet. A key assumption behind this work is that Mozilla maintains a small staff that is skilled at partnering, with most of its resources going into networking and supporting individuals and organizations within the movement. The 2020 OKRs include a strong focus on deepening our partnership practice.

Below is a bulleted list of our OKRs. You can read more about them [https://wiki.mozilla.org/Foundation/2020/OKRs here].

1. Thought Leadership <br />
'''Short Term Outcome:''' Clear "Trustworthy AI" guidelines emerge, leading to new and widely accepted industry norms. <br />
'''2020 Objective:''' Test out our theory of change in ways that both give momentum to other orgs taking concrete action on trustworthy AI and establish Mozilla as a credible thought leader. <br />
'''Key Results:''' <br />
* Publish a whitepaper theory of change
* 250 people and organizations participate in mapping to show who is working on key elements of trustworthy AI and offer feedback on the whitepaper
* 25 collaborations with partners working on concrete projects that align with short-term outcomes in the theory of change
2. Data Stewardship <br />
'''Short Term Outcome:''' More foundational trustworthy AI technologies emerge as building blocks for developers (e.g. data trusts, edge data, data commons). <br />
'''2020 Objective:''' Increase the number of data stewardship innovations that can accelerate the growth of trustworthy AI. <br />
'''Key Results:''' <br />
* $3 million raised to support bold, multi-year, cross-movement initiatives on data stewardship as an indicator of growing philanthropic support in this area.
* 10 awards or fellowships for prototypes or other concrete exploration re: data stewardship.
* 4 concentric “networks of practice” engaging with Data Futures Lab
3. Consumer Power <br />
'''Short Term Outcome:''' Citizens are increasingly willing and able to pressure and hold companies accountable for the trustworthiness of their AI. <br />
'''2020 Objective:''' Mobilize an influential consumer audience using pivotal moments to pressure companies to make ‘consumer AI’ more trustworthy. <br />
'''Key Results:''' <br />
* 3m page views to content on Mozilla channels, a majority of which focuses on trustworthy AI.
* 75k new subscribers drawn from sources (partnerships, contextual advertising, etc.) oriented towards people ages 18-35.
* 25k people share information with us (stories, browsing data, etc.) in order to gather evidence about how AI currently works and what changes are needed.
4. Movement Building <br />
'''Short Term Outcome:''' A growing number of civil society actors are promoting trustworthy AI as a key part of their work. <br />
'''2020 Objective:''' Partner with diverse movements to deepen intersections between their primary issues and internet health, including trustworthy AI, so that we increase shared purpose. <br />
'''Key Results:''' <br />
* 30% increase in partners with whom we have jointly published, launched, or hosted something that includes shared approaches to their issues and internet health (e.g. shared language, methodologies, resources or events).
* 75% of partners from these diverse movements report deepening intersection between their issues and internet health/AI.
* 4 new partnerships in the Global South report deepened intersection between their work and ours.
= Timeline =
Trustworthy AI fits within the internet health movement building strategy Mozilla launched in 2016. Over the last 18 months we've been working to figure out how trustworthy AI can be a central focus in driving this movement forward.

Below is a timeline of key steps and documents from this process. They collectively tell the story of how Mozilla got to this goal and why.

January 2016 - [https://wiki.mozilla.org/MoFo_2020 Movement Building Strategy Launch]
* In January, 2016 Mozilla launched a movement building strategy. The goal was to combine our programmatic work to catalyze a movement for a healthier internet.

January 2018 - [https://drive.google.com/file/d/1Bl-h9d1IrhBXOskacm8eYkPteNMdxlN2/view Mozilla Strategy Brief & Theory of Change]
* After two years of implementing the internet health movement building strategy, Mozilla released updated language and proposed a rough theory of change.

September 2018 - [https://drive.google.com/file/d/14izVPHwpy4hZGt4xclHpmaXr15pME5PI/view Mozilla 2016-2018 Program Evaluation]
* The detailed strategy review resulted in many key learnings regarding what was working and what was not. Chief among them was that the movement building theory of change was too broad and that specific, measurable direction underneath this umbrella strategy was necessary. The takeaway was that Mozilla would choose an impact goal to drive this work forward.

November 2018 - [https://drive.google.com/file/d/1cExHEoEpaHKJhgTKWp2h50xvXMWeY76E/view Short Listed Impact Goals]
* Summarizes our recommendation to Mozilla's Board of Directors on why the impact goal focus should be 'better machine decision making' (now, trustworthy AI).
* Based on the strategy evaluation and a program review, Mozilla looked at a number of options for impact goals. We narrowed the short list to four goals outlined in this document, one of which was "better machine decision making".

November 2018 - [https://mzl.la/IssueBriefv01 Better machine decision making issue brief]
* The issue brief we wrote to describe what we meant by 'better machine decision making' at the time; we used it to start consulting with our allies and partners. It describes both the issue and the beginning of a roadmap on areas for improvement to get us to 'better'.

November 2018 - [https://medium.com/read-write-participate/slowing-down-asking-questions-looking-ahead-265f6b99810d Slowing Down, Asking Questions, Looking Ahead]
* In November we announced 'better machine decision making' as our goal. This kicked off a period of engagement and consultation.

March 2019 - [https://marksurman.commons.ca/2019/03/06/mozillaaiupdate/ Mozilla, AI and internet health: an update]
* By March, we'd adapted the language of our impact goal from better machine decision making to trustworthy AI. This blog post draws direct connections between our movement building theory of change and how trustworthy AI fits into that. It answers the question: how will we shape the agenda, rally citizens and connect leaders around trustworthy AI?

April 2019 - [https://marksurman.commons.ca/2019/04/23/why-ai-consumer-tech/ Why AI + consumer tech?]
* In April, 2019 we narrowed in on consumer technology as the key area where Mozilla can have the biggest impact in the AI field.

May 2019 - [https://marksurman.commons.ca/2019/05/13/consider-this-ai-and-internet-health/ Consider this: AI and Internet Health]
* Though the focus had been narrowed to consumer technology, we wanted to get even more specific about the impact we wanted to make. This blog explores which aspects of consumer technology Mozilla considered focusing on. The list included: accountability; agency; rights; and open source.

August 2019 - [https://marksurman.commons.ca/2019/08/28/update-digging-deeper-on-trustworthy-ai/ Update: Digging Deeper on ‘Trustworthy AI’]
* By August, 2019 we had shared our long term outcomes and long term trustworthy AI goal. We had landed on agency and accountability as our outcomes and "in a world of AI, consumer technology enriches the lives of human beings" as our goal.

January 2020 - [https://youtu.be/zzkGhH-4FDs All Hands Plenary]
* By January 2020 we were starting to see this work show up in our programs. Here, Mozilla staff and fellows talk about how their work is helping us towards our trustworthy AI goal.

March 2020 - [https://foundation.mozilla.org/en/blog/privacy-pandemics-and-ai-era/ Privacy, Pandemics and the AI Era]
* This blog explores the connection between the COVID-19 pandemic and the technological solutions being proposed. These issues are central to the long term impact of AI.

April 2020 - [https://marksurman.commons.ca/2020/04/22/privacy-norms-and-the-pandemic/ Privacy Norms and the Pandemic]
* Similar to the post above, here we explore the long term data governance implications of technology deployment during the pandemic.

May 2020 - [https://drive.google.com/file/d/1LD8pBC-cu7bkvU-9v-DZEyCmpWED7W7Z/view Mozilla v0.9 White Paper on Trustworthy AI]
* In May of 2020 Mozilla released a white paper on our approach to trustworthy AI. The paper talks about how industry, regulators and citizens of the internet can work together to build more agency and accountability into our digital world. It also talks briefly about some of the areas where Mozilla will focus, knowing that Mozilla is only one small actor in the bigger picture of shifting the AI tide.

May 2020 - [https://marksurman.commons.ca/2020/05/14/request-for-comment-how-to-collaboratively-make-trustworthy-ai-a-reality/ Request for comment: how to collaboratively make trustworthy AI a reality]
* Following the white paper launch, we opened a request for comments inviting our allies and community to give feedback on this thinking. We welcome you as a part of that process. You can add your voice by writing a response to what you read, reaching out to us or [https://docs.google.com/forms/d/e/1FAIpQLSemkMhbjhtugjHUjxVwS0XlAkBlaP-prOm3pUsELPKjkXjupQ/viewform filling out this form].

You can read more about the background for this project [https://wiki.mozilla.org/Foundation/AIBackgroundWork here].