# Policy Note Draft
Early on Friday morning, an anonymous Discord user leaked a memo credited to a Google employee. [The leaked memo](https://www.semianalysis.com/p/google-we-have-no-moat-and-neither) paints the AI race in a different light, one with major implications for those concerned with the development and governance of this new technology. Freely available Open Source AI, though likely behind the curve, is developing at a breakneck pace.
The story so far has been one of a clash between two industry giants - Microsoft and Google - and the [race to be the market leader](https://www.reuters.com/technology/google-ai-chatbot-bard-offers-inaccurate-information-company-ad-2023-02-08/) in large language models (LLMs) such as ChatGPT, built by Microsoft-backed OpenAI, with each stressing its credentials as a responsible custodian of a profoundly consequential technology.
* The AI Race (200 words)
* Sovereignty and geopolitics: how industry and state actors interact? (200 words)
* Consequences for governments looking to control AI?
It would be a mistake to imagine Open Source AI as a third or fourth runner in the AI race, competing with big technology companies like Google or Microsoft, or with states. In reality, the overlaps between Open Source technologies and communities and those wrapped up in commercial products are enormous. Managing an Open Source environment - the Android operating system, for instance - allows companies to set roadmaps, control access, and benefit from the expertise of contributors to the project. Open Source itself [emerged from the hunt for a business-friendly alternative](https://www.computer.org/csdl/magazine/co/2021/02/09353517/1r8kwgBjU9W) to Free Software. Somewhat paradoxically, the memo highlights the probable benefits reaped by Meta after its LLaMA model was leaked:
*"They have effectively garnered an entire planet's worth of free labor. Since most open source innovation is happening on top of their architecture, there is nothing stopping them from directly incorporating it into their products."*
But in spite of the complicated relationship between commercial actors and Open Source technology, make no mistake: this is a paradigm shift. What for months has been a debate about which technology company would capture the market can now turn to what free access and low barriers to entry might mean.
If this memo is to be believed, one coder, with one laptop, anywhere in the world, could freely apply this groundbreaking and powerful new technology. Perhaps these applications lag behind commercial offerings today. Responses to the leak have questioned whether any of today's Open Source models come close to the most polished commercial offerings like GPT-4, but the speed at which Open Source AI has developed in recent months hints that things might be different tomorrow.
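To make the "one coder, one laptop" point concrete, the sketch below shows roughly what that looks like in practice. It is only an illustration, assuming the Hugging Face `transformers` library is installed and using EleutherAI's openly licensed GPT-Neo model as a stand-in for any freely downloadable model.

```python
# A minimal sketch of how little code separates a single laptop from a working
# language model. Assumes the Hugging Face `transformers` library is installed;
# EleutherAI/gpt-neo-1.3B is used here purely as an example of an open model.
from transformers import pipeline

# Download the open model weights and wrap them in a text-generation pipeline.
generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

# Generate a short continuation of a prompt, entirely on local hardware.
result = generator("Open Source AI matters because", max_new_tokens=40)
print(result[0]["generated_text"])
```

The particular model is beside the point; what matters is that the barrier to entry is a download and a few lines of code, not a licensing negotiation.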
The author of the Google memo hints that the cat may be further out of the bag than we might have expected, and it is clear that AI-enabled technologies are now available to those who might misuse them. In a parallel with the long-running encryption debate, powerful new technologies can be applied for ill as often as good, but once the technology is in the wild [you can't ban maths](https://uk.finance.yahoo.com/news/wiki-boss-encryption-ban-banning-maths-085045861.html).
Good. Open Source has its challenges. Underfunded, underappreciated, unfairly maligned. But at its core, it demands that new technologies are not locked away behind a paywall or an intellectual property clause, but are free to be used by governments, universities, civil society, journalists and anyone else capable of getting their head around GitHub. Open Source enables technology built with non-commercial mandates, built to meet the demands of equal access, or democratic participation, or social good.
Open Source AI demands vigilance. It will underpin new threats and security challenges that governments will have to respond to. But this vigilance should extend to preserving access. Open resources are vulnerable to capture. Commons need [commons governance](https://earthbound.report/2018/01/15/elinor-ostroms-8-rules-for-managing-the-commons/) to remain a commons. [Knee-jerk policy responses](https://www.gfmag.com/magazine/may-2023/italy-temporarily-bans-chatgpt) driven only by the risks posed by new technology will get us nowhere: good governance balances harm mitigation with preserving access for those who might apply AI in solving the world's problems.