# Collaborative Translation: 'Creating Trustworthy AI: A summary of our white paper'
###### tags: `Trustworthy AI`
Source: https://foundation.mozilla.org/en/blog/trustworthy-ai-abridged-version/
---
# Creating Trustworthy AI: A summary of our white paper
<!-- header image with two people working on a computer -->
Mozilla's strategy and the programs behind it are focused on building a healthier internet. Since 2019, we've layered in a focus on making artificial intelligence more trustworthy. In May of this year, we released a white paper that outlines our thinking and theory of change on trustworthy AI. It is a comprehensive document, but it may not be accessible to everyone. Below, we've created a more digestible, abridged version to share.
You can read Mozilla's Trustworthy AI White Paper here.
## Background
Mozilla’s theory of change is a detailed map for arriving at more trustworthy AI. We developed our theory of change over a one-year period, during which we consulted with scores of AI domain experts from industry, civil society, academia, and the public sphere. We conducted a thorough literature review. And we learned by doing, running advocacy campaigns that scrutinized AI, funding art projects that illuminated AI’s impact on society, and publishing research in our Internet Health Report.
Mozilla’s theory of change focuses on AI in consumer technology: internet products and services aimed at a wide audience. This includes products and services from social platforms, apps, and search engines, to e-commerce and ride sharing technologies, to smart home devices, and loan algorithms used by banks.
## The challenges with AI
AI has enormous potential to improve our quality of life, but integrating complex computational systems into the platforms and products we use every day can raise serious security and privacy concerns. Unless deliberate steps are taken to make these systems more trustworthy, the growth of AI risks deepening existing power imbalances.
The key challenges include:
* **Centralization of power**: Only a handful of tech giants have the resources to build AI, which stifles innovation and competition.
* **Data privacy**: Developing complex AI systems requires vast amounts of data, and many AI systems are built on personal data collected through invasive techniques.
* **Bias and discrimination**: Stereotypes and discrimination are already embedded in AI's computational models, data, and underlying frameworks, producing biased results that disproportionately harm marginalized groups.
* **Accountability and transparency**: AI systems are opaque for many reasons, sometimes because of the inherent nature of machine learning systems, and other times because of corporate trade-secret restrictions. In short, many companies do not clearly disclose how their AI systems work, which blurs accountability and hinders oversight by third parties.
* **Industry norms**: Because companies develop and deploy AI so rapidly, the values and assumptions embedded in many AI systems go unquestioned when products are built.
* **Exploitation of workers and the environment**: Producing AI demands enormous amounts of energy and human labor, yet these costs are often overlooked and therefore frequently abused. The tech workers who maintain AI systems are especially prone to being overworked beyond what they can bear. AI also contributes to climate change, consuming vast amounts of energy and straining natural resources.
* **Safety and security**: Malicious actors may exploit AI systems to mount more sophisticated attacks.
Based on this analysis, Mozilla offers a theory for making AI more trustworthy. The solutions and changes it describes should be applied across many different fields.
## The path forward
While these challenges are daunting, we imagine a world in which AI systems are designed in ways that strengthen human agency and accountability. We should not assume that AI can do everything that people claim, and we should question whether such systems should be researched, built, or deployed at all under certain circumstances.
In order to make this shift, we believe industry, civil society, and governments need to work together to make four things happen:
<!-- shifting-industry-norms-2@2x-80.jpg -->
### A shift in industry norms
Many of the teams building consumer-facing AI products are developing processes and tools to ensure greater accountability and responsibility. We need to encourage investment in this approach at every stage in the product research, development, and deployment pipeline. At the same time, organizational culture and industry norms will need to change.
We’ll know we’re having a positive impact when:
* Best practices emerge in key areas of trustworthy AI, driving changes to industry norms.
* The people building AI are trained to think more critically about their work and they are in high demand in the industry.
* Diverse stakeholders are meaningfully involved in designing and building AI.
* There is increased investment in trustworthy AI products and services.
There are a number of ways that Mozilla is already working on these issues. We’re supporting the development of undergraduate curricula on ethics in tech with computer science professors at 17 universities across the US. We’re also seeking partnerships to meaningfully scale the development of trustworthy AI applications in Africa, in part because we see early signs that African researchers are seeking a different approach to AI that is independent of the US and Chinese companies that dominate the field. In addition, we’re supporting research that will develop and test methods to explain AI processes within consumer products and services.
We are and will continue to seek out partnerships with: a broader set of AI practitioners (data scientists, developers, designers, project managers) working in the industry; people and organizations who are working to translate broad AI principles into actionable frameworks and best practices; and experts in participatory design and development, including non-technical stakeholders.
<!-- new-tech-and-products@4x.png -->
### New tech and products are built
To move toward trustworthy AI, we will need to see everyday internet products and services come to market that have features like stronger privacy, meaningful transparency, and better user controls. In order to get there, we need to build new trustworthy AI tools and technologies and create new business models and incentives. We’ll know we’re having a positive impact when:
* New technologies and data governance models are developed to serve as building blocks for more trustworthy AI.
* Transparency is a feature of many AI-powered products and services.
* Entrepreneurs and investors support alternative business models.
* Artists and journalists help people critique and imagine trustworthy AI.
As a first step towards action in this area, Mozilla is investing significantly in the development of new approaches to data governance. Our new Data Futures Lab will connect and fund people around the world who are building product and service prototypes using collective data governance models like data trusts and data co-ops. It also includes our own efforts to create AI building blocks that can be used and improved by anyone, starting with Common Voice, a collection of voice technology training data that is increasingly focused on underserved languages.
We are actively seeking additional collaborations with: practitioners who are planning or already utilizing new governance models; investors who seek to offer information and guidance regarding data governance models to portfolio companies; and creatives who seek to demonstrate the value of new models through art, investigation and speculative design.
<!-- consumer-demand@4x.png -->
### Consumer demand rises
Citizens and consumers can play a critical role in pressuring companies that make everyday products like search engines, social networks, and e-commerce sites to develop their AI differently. We’ll know we’re having a positive impact when:
* Trustworthy AI products emerge to serve new markets and demographics.
* Consumers are empowered to think more critically about which products and services they use.
* Citizens pressure and hold companies accountable for their AI.
* Civil society groups are addressing AI in their work.
We have yet to see trustworthy AI design integrated into consumer products at scale. Mozilla seeks to increase the consumer demand for products with trustworthy AI features by providing people with information to evaluate AI-related product features, as we have done with our *Privacy Not Included Guide. We are also organizing people who want to push companies to change their products and services through large-scale, grassroots campaigns directed at Facebook, YouTube, Amazon, Venmo, Zoom, and other industry leaders. Together, these actions not only increase consumer awareness and demand, but they also show the potential for future trustworthy innovations and investments.
To support and strengthen our work around consumer demand, we seek collaborations with organizations that represent consumers and civil society organizations globally whose constituents are directly impacted by AI-enabled products.
<!-- effective-regulations@4x.png -->
### Effective regulations and incentives are created
Market incentives alone will not produce tech that fully respects the needs of individuals and society. New laws and regulations, grounded in technical and social realities, may need to be created and existing laws enforced to make the AI ecosystem more trustworthy. We’ll know we’re having a positive impact when:
* Governments develop the vision, skills, and capacities needed to regulate AI.
* There is wider enforcement of existing laws like the GDPR.
* Regulators have access to the data and expertise they need to scrutinize AI.
* Governments develop programs to invest in and procure trustworthy AI.
Mozilla has a long history of working with governments to come up with pragmatic, technically informed policy approaches to complex issues. Specifically, we are supporting policy fellows who are developing model legislation, including AI procurement guidelines for governments; running advocacy campaigns that demonstrate the limits of the current self-regulatory frameworks; and launching a European AI Fund with partners to spark investment across civil society.
We will continue to seek collaborations with organizations and individuals who are working to inform, engage and empower governments and policymakers to create effective, technically-specific regulation of AI systems that will support innovation, empower consumers and hold companies accountable for the societal impact of their products.
## Conclusion
As noted throughout this summary, Mozilla is already starting to work in these areas through direct investments and high-impact partnerships like:
* Responsible Computer Science Challenge
* Data Futures Lab
* *Privacy Not Included Buyers Guide
* European AI Fund
* Fellowships
* Advocacy Campaigns
We also know that developing a trustworthy AI ecosystem will require a major shift in the norms that underpin our current computing environment and society. The changes we want to see are ambitious, but they are possible. We saw it happen 15 years ago as the world shifted from a single desktop computing platform to the open platform that is the web today. And, there are signs that it is already starting to happen again. Online privacy has evolved from a niche issue to one routinely in the news. Landmark data protection legislation has passed in Europe, California, and elsewhere around the world. And consumers are increasingly demanding that companies treat them — and their data — with more care and respect. All of these trends bode well for the kind of shift that we believe needs to happen.
The best way to make this happen is to work like a movement: collaborating with citizens, companies, technologists, governments, and organizations around the world. As the actions we list above show, Mozilla sees itself as part of this. We hope that you do, too. With a focused, movement-based approach, we can make trustworthy AI a reality.
We need and want to work alongside a network of people and organizations striving towards the same goals in order to make trustworthy AI a reality. To learn more about our work and explore potential collaborations, please contact Sarah Watson at sarahw@mozillafoundation.org.