An AI system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.
Decades of Open Source experience have demonstrated that massive benefits accrue to everyone when the barriers to learning, using, sharing and improving software systems are removed. These benefits are the result of using licenses that comply with the Open Source Definition. They can be distilled to autonomy, transparency, and collaborative improvement.
Everyone needs these benefits in AI. We need essential freedoms to enable users to build and deploy AI systems that are fair, reliable, transparent, trustworthy, secure and safe.
A precondition for a system to be Open Source software is that developers must have unrestricted access to the “preferred form to make modifications to the work”.
For AI systems, the preferred form to make modifications to the work depends on the specific kind of AI.
For example, for a system built with machine learning, the preferred form to make modifications would plausibly include the training and inference code, the model architecture, the trained model parameters (weights), and sufficiently detailed information about the data used for training to allow a skilled person to recreate a substantially equivalent system.
The Open Source AI Definition doesn’t say how to develop and deploy an AI system that is ethical, trustworthy or responsible, although it doesn’t prevent anyone from doing so. What makes an AI system ethical, responsible or trustworthy is a separate discussion.
To be Open Source, an AI system needs to make its components available under licenses that individually grant the freedoms to:

- Use the system for any purpose and without having to ask for permission.
- Study how the system works and inspect its components.
- Modify the system for any purpose, including to change its output.
- Share the system for others to use, with or without modifications, for any purpose.
Being Open Source does not by itself mean that an AI system will also be safe, secure, trustworthy, explainable and fair, but it won’t be an impediment to those goals either.
For example, a machine-learning model released under Open Source terms can still produce biased or unreliable output if it was trained on flawed data; openness makes such problems easier to discover, audit and correct, but does not prevent them.
TODO