AI Ethics & Governance Scoping Workshop (1)
===

###### tags: `Scoping Workshops`

:::info
**Using HackMD**
You can add comments to this document by selecting the relevant portion, and then selecting 'comment' in the pop-up box. You do not need to sign in to leave comments.
:::

This document provides an overview of the modules for the AI Ethics and Governance (AEG) course. When providing feedback, please consider the following points:

- Do the proposed modules and topics meet your expectations for what a course on AI Ethics & Governance should cover?
- Are there any gaps that need to be addressed?
- Are there any foreseeable challenges with delivering these modules and content in an online setting?

**Document Navigation**

[toc]

## AEG Modules

:::warning
**Summary**
This course is designed to introduce participants to the basic concepts, frameworks, and methods of critical reflection and deliberation needed to understand the ethical and social issues surrounding current and developing uses of data and data-driven technologies. The course provides a brief introduction to basic concepts in data science and AI, which help ground the respective ethical concepts by making it clear how they apply to practical issues in data science and AI. With these foundations in place, the participants will then explore some of the central ethical and social issues facing modern, data-driven societies, and how a process-based form of participatory governance can ensure that data-driven technologies promote an inclusive conception of the social good.
:::

### 1) Course Introduction

#### Overview of the Course

In this module we will motivate the need for this course by exploring some of the challenges facing modern, data-driven societies, including a need for broad reflection on and consideration of the global impact of data science and AI technologies.
We will present a picture of some of the prominent responses in the field of AI ethics and governance to these challenges, and provide an overview of how the course will delve deeper into the most relevant of these responses.

### 2) Basic Concepts of Data Science and AI

#### Designing, Developing, and Deploying a Predictive Algorithm

Rather than simply offering a list of definitions, this module will present a hypothetical project and explore what is involved in designing, developing, and deploying a predictive algorithm. By grounding our conceptual understanding of the basic concepts of data science and AI in a concrete example, students will gain a clearer understanding of the significance of ethical reflection for responsible research and innovation.

### 3) An Introduction to Moral Philosophy

#### The Role of Values, Principles, and Normative Theories in Moral Deliberation

This module will introduce students to the general role that ethical values, principles, and moral theories play in guiding ethical reflection and deliberation. We will widen the lens of AEG in this session to provide a starting point for students to think in a structured way about ethical dilemmas and moral choices.

#### The Psychology of Moral Deliberation

Although moral philosophy is typically concerned with the domain of the normative (i.e. what individuals or groups _ought_ to do), there is also a need to consider facts and theories about _how_ individuals and groups actually reason and decide. By considering the descriptive alongside the normative, students will gain a better understanding of the role that _human factors_ or _cognitive biases_ can play in the design, development, and deployment of data-driven technologies.

:::info
**Guest Lecture (Title TBC)**
*Dr Edward Brooks (University of Oxford)*

Dr Edward Brooks is Executive Director of the Oxford Character Project at the University of Oxford.
His research is currently focused on virtue ethics, hope, and character and leadership development. Particular interests include the relationship between character and culture in commercial organisations, leadership for human flourishing, exemplarist moral theory, and character and leadership development in universities and businesses.
:::

### 4) An Introduction to AI Ethics

#### Understanding the potential harms of AI and data-intensive research and innovation

In this module, we will motivate the need for using tools of ethical reflection to anticipate the impacts and potential harms of AI and data-intensive research and innovation. We will explore how issues such as the loss of individual agency and social connection, the entrenchment of bias and discrimination, and the infringement of freedoms of expression and association can help us to better understand the need for deliberate ethical approaches to designing, developing, and using socially sustainable AI models.

#### A starting point in human rights and bioethics

This module will explore how bioethics and human rights have shaped widespread understandings of the fundamental values and principles of AI ethics. Working from the previous module's presentation of AI ethics as emerging out of dynamics of real-world harm, we will be better able to understand the appeal of human rights and bioethics. The values and principles that have emerged from both traditions found their origins in moral claims that responded directly to tangible, technologically inflicted harms and atrocities. That is, both traditions emerged out of concerted public acts of resistance against violence done to disempowered or vulnerable people. In this session we will interrogate how such responses to moral injury have provided a starting point for thinking about values that support, underwrite, and motivate responsible AI research and innovation.
#### Thinking interculturally, communicating inclusively

In this module, we will move beyond the Western frame of human rights and bioethics to consider how some of the non-Western and Indigenous systems of values and beliefs might inform and widen the lens of AI ethics. We will touch upon a range of horizon-expanding ethical perspectives such as Neo-Confucianism, Ubuntu beliefs, and Indigenous and Native American ontologies and practices. We will also begin examining how to create an inclusive communication environment in which a plurality of beliefs and values can be incorporated into meaningful dialogues about the stakes and impacts of AI research and innovation.

#### Fostering sustainable AI research and innovation through stakeholder analysis and impact assessment

To engage in sustainable AI research and innovation, scientists and technologists need methods that help them understand the individuals and communities likely to be impacted by their projects. Undertaking this kind of anticipatory reflection involves first gaining a contextually informed understanding of the social environment and human factors that may be impacted by the tools or models under development. It then involves assessing the ways that stakeholders could be negatively or positively affected by the potential project. This is the purpose of stakeholder analysis and impact assessment. In this module, we will examine the ethical logic and methods underlying stakeholder analysis and impact assessment, and we will go through some concrete examples of how these can be put into practice.

#### Weighing and balancing values through meaningful dialogue

Processes of assessing the potential social and ethical impacts of prospective AI projects often raise issues about how to weigh values against one another and how to consider trade-offs when values come into tension with each other in specific use cases.
For instance, there may be circumstances where the use of an AI system could optimally advance the public interest only at the cost of safeguarding the wellbeing or the autonomy of a given individual. In other cases, the use of an AI system could preserve the wellbeing of a particular individual only at the cost of the autonomy of another or of the public welfare more generally. In this module, we will examine the issue of adjudicating between conflicting values. This has long been a thorny dimension of collective life, and the problem of discovering reasonable ways to overcome the disagreements that arise from the plurality of human values has occupied thinkers for just as long. To work through this, we will explore several useful procedural and deliberative approaches to managing the tension between conflicting values that have emerged over the course of the development of modern democratic and plural societies.

#### Fairness, equity, and bias mitigation

In this module, we will explore a range of topics related to the role that considerations of fairness, equity, and bias mitigation should play across the AI research and innovation lifecycle. We will examine how social and historical forms of bias and discrimination can have cascading effects in datasets, model production, and implementation; how formal approaches to fairness can inform design choices; and how wider, non-ideal considerations about equity and justice can be incorporated into AI research and innovation projects from start to finish.

#### Accountability and Transparency

Principles of accountability and transparency are end-to-end governing principles. They provide procedural mechanisms and means through which AI systems can be justified and by which their producers and implementers can be held responsible. Accountability entails that humans are answerable for the roles they play across the entire AI design and implementation workflow.
It also demands that the results of this work are traceable from start to finish. The principle of transparency entails that design and implementation processes are justifiable through and through. It also demands that an algorithmically influenced outcome is interpretable and made understandable to affected parties. In this module we will explore the nature and relationship of accountability and transparency, and we will examine some real-world AI use cases where the application of these principles would have helped system users to avoid harm.

#### Interpretable and explainable AI

The ethical desiderata of interpretable and explainable AI have been widely articulated. Putting the priority of model interpretability into practice enables the data scientist to build an explanatory bridge to users, implementers, and affected data subjects. It also enables gains in the objectivity and robustness of the forms of human reasoning it supports, by making accessible to humans a greater range of patterns that would otherwise have been unavailable to human-scale deliberation. Moreover, a high degree of interpretability not only allows data scientists and end users to better understand why things go wrong with a model when they do, but also helps them to continually evaluate its limitations while scoping future improvements. In this module, we will explore the importance of interpretable and explainable AI for responsible research and innovation, and we will broach some methods and techniques for putting a holistic and comprehensive approach to interpretable and explainable AI into practice.

#### Responsible research and innovation

This module will consider what 'responsibility' looks like in the domain of AI research and innovation.
Students will explore several interlocking components of an ethical framework for data science and AI research and innovation, including a) ethical values and principles that support reflection and deliberation, b) a process-based form of project governance that emphasises the socially situated nature of research and innovation, and c) a procedural method for inclusive and participatory engagement that seeks to establish moral legitimacy for the decisions and actions taken throughout a project's lifecycle.

#### AI ethics, power, and sociotechnical systems

Although it is helpful to approach AI research and innovation from a high-level, normative perspective, there are myriad ethical issues that only become visible when we consider elements of AI research and innovation from a more fine-grained perspective that takes account of the dynamics of power at play in the production and use of sociotechnical systems. For instance, the ethics of data extraction, transformation, and use raises questions that are perhaps best understood only when the asymmetrical power relations between technology producers and impacted parties are sufficiently considered. This module will therefore focus on some specific topics related to the intertwinement of AI ethics, power, and sociotechnical systems, examining in particular how legacies of structural injustice and discrimination manifest in forms of sociotechnical power, and how collective actions of resistance to these can shed light on the practicability of AI ethics.

:::info
**Guest Lecture (TBC)**
:::

### 5) Ethical Governance

#### Mapping the Project Lifecycle

In the previous section, students were introduced to an ethical framework comprising values and principles, and a process-based form of participatory governance. In this module, we will develop the latter component by mapping the ethical issues previously covered onto a typical project lifecycle.
For instance, we will explore what actions ought to be undertaken during project design, model development, or system deployment, in order to ensure that a variety of ethical goals can be justifiably claimed to have been considered and supported.

#### Roles and Responsibilities

The individuals who make up a project team are key to the effective and ethical governance of any AI project. In this course we consider how their interconnected roles and responsibilities ought to be defined, and why it is necessary to acknowledge the inextricable collective responsibility that the project team have to ensure that their research or innovation supports an inclusive vision of the public good.

#### Ethical Assurance

Concluding this course, we present a practical mechanism, known as 'ethical assurance', which has been designed to offer project teams a systematic and structured means of developing and communicating a justifiable argument that their system has been designed, developed, and deployed in a manner that meets clearly specified ethical goals. This methodology also enables affected stakeholders to actively enquire into the constitutive claims of the ethical argument, in order to contest specific claims and ensure accountability.