Responsible Research & Innovation Scoping Workshop (1)
===

###### tags: `Scoping Workshops`

:::info
**Using HackMD**

You can add comments to this document by selecting the relevant portion and then selecting 'comment' in the pop-up box. You do not need to sign in to leave comments.
:::

This document provides an overview of the modules for the Responsible Research and Innovation (RRI) course. When providing feedback, please consider the following points:

- Do the proposed modules and topics meet your expectations for what a course on RRI should cover?
- Are there any gaps that need to be addressed?
- Are there any foreseeable challenges with delivering these modules and content in an online setting?

**Document Navigation**

[toc]

## RRI Modules

:::warning
**Summary**

This course will explore what it means to take (individual and collective) responsibility for (and over) the processes and outcomes of research and innovation in data science and AI. The notion of 'responsibility' employed throughout this course will be grounded in an understanding of the relationship between science and society, exploring both historical and contemporary examples of RRI practices. As well as looking at the theoretical basis of RRI, this course will also take a hands-on approach by exploring a variety of tools and procedures that can help operationalise and implement a robust notion of responsibility within research and innovation practices.
:::

### 1) Course Introduction

#### Overview of the Course

In this module we will motivate the need for this course by exploring some of the challenges facing modern, data-driven societies, and why it is vital to embed practices of RRI in projects that make use of or develop data-driven technologies.

#### A History of (Ir)responsible Research and Innovation

Many of the extant ethical and legal frameworks or regulatory mechanisms, which are today used to monitor the impacts of research and innovation on society or to promote best practices, originally emerged as a response to serious misconduct by researchers or failures to consider the wider impact of technological innovation. Understanding these failures helps to establish the background context for subsequent topics in this course. We will therefore provide a potted history of some notable failures in scientific research and technological innovation, and ask where they went wrong.

### 2) Technological Research, Innovation, and Society

#### Defining Responsibility

There are many ways of defining the term 'responsibility', and it is easy to confuse the concept with close neighbours, such as 'accountability'. In this module, we will look at popular frameworks for defining responsibility with respect to research and innovation (e.g. [EPSRC's Framework for Responsible Innovation](https://epsrc.ukri.org/research/framework/)) and collectively evaluate whether they provide an adequate conception of 'responsibility'.

#### Scientists, Science, and Society

In this module we will look at the relationship between individual scientists (as well as researchers and developers), research and development teams, scientific practice, and society more generally. We will ask what it means to exercise responsibility from each of these perspectives, and also explore the difference between a risk-based and a values-based approach to measuring the impact of technological research and innovation.

#### Use Cases and Context

The context in which a project is situated can challenge some of the principles of RRI.
For instance, a rigid hierarchical structure can create a culture in which it is difficult for junior researchers to challenge or question problematic or unethical decisions made by senior project managers. Or, structural inequalities across the globe can make it difficult to create collaborative projects between developed and developing countries, due to unequal funding opportunities. In this module, we will investigate some of these issues in order to better understand how use cases and context shape our understanding of RRI.

:::info
**Guest Lecture (TBC)**
:::

### 3) Responsible Data Science and AI

#### The Project Lifecycle

This section of the course will introduce and explore a model of a typical data science or AI project, focusing on the design, development, and deployment of an automated technology (e.g. a predictive algorithm). This module will start by introducing the model and explaining how it will be used in the subsequent modules. Each subsequent module will then take a closer look at one of the three main stages from the perspective of RRI, including hands-on group activities that are designed to help participants reflect on what it means to act responsibly in a practical setting.

#### (Project) Design

(Project) design involves the initial planning of the project, and involves a process of anticipatory reflection in order to identify specific actions or choices that have a bearing on whether the project and associated technology are delivered in a responsible manner. We will explore what can (and ought to) be done to exercise responsibility throughout the stages of 'project planning', 'problem formulation', 'data extraction or procurement', 'data analysis', and 'preprocessing and feature engineering'.

#### (Model) Development

(Model) development involves a series of technical and computational activities, many of which require high levels of expertise and access to sufficient resources (e.g. adequate compute power to run advanced learning algorithms). In spite of this, we are still able to explore what can (and ought to) be done to exercise responsibility throughout the stages of 'preprocessing and feature engineering', 'model selection', 'model training', 'model testing and validation', 'model reporting', and 'model productionalization'.
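To make these stages a little more concrete, the sketch below walks through a deliberately minimal scikit-learn workflow that maps onto the stages named above; the dataset, model choice, and metrics are placeholder assumptions for illustration only, not part of the course materials, and 'model productionalization' and deployment are omitted.

```python
# Illustrative sketch only: a minimal pipeline mapping onto the stages named
# above ('preprocessing and feature engineering', 'model selection', 'model
# training', 'model testing and validation', and 'model reporting').
# The dataset, model, and metrics are placeholder assumptions, not course content.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Data extraction (placeholder dataset) and a held-out test split
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Preprocessing and feature engineering + candidate model wrapped in one pipeline
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Model selection and training: a small hyperparameter search with cross-validation
search = GridSearchCV(pipeline, {"clf__C": [0.01, 0.1, 1.0, 10.0]}, cv=5)
search.fit(X_train, y_train)

# Model testing and validation on the held-out data
y_pred = search.predict(X_test)

# Model reporting: record what was selected and how it performed
print("Selected hyperparameters:", search.best_params_)
print(classification_report(y_test, y_pred))
```

Even in a toy workflow like this, each step involves choices (what data to use, which metrics to report, how to document the selected model) where the notion of responsibility explored in this module applies.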
#### (System) Deployment

(System) deployment is where the model that has been trained and validated in the previous stages is implemented in the system that users will interact with (e.g. a decision support tool). There are a variety of human and organisational factors that require consideration; it is not sufficient to simply develop a model and then hand it over to others to implement. We will explore a variety of these factors to understand how an organisation's readiness for algorithmic systems affects whether a system is deployed and used responsibly. The stages we will consider are 'model productionalization', 'user training', 'system use and monitoring', and 'model updating and deprovisioning'.

### 4) Demonstrating and Communicating RRI

<!-- Research inequities and preconditions for RRI at team level. Revise section 2 as well. -->

#### Transparency and Trust

Even if a project team has acted responsibly throughout the design and development of an AI system, user or public trust may be undermined if the activities the project team has undertaken are not communicated appropriately. This includes ensuring sufficient transparency over the relevant processes, or providing justification for why transparency is inappropriate (e.g. non-disclosure of sensitive information). This module will explore how to evaluate principles like transparency and accountability when communicating research or innovation outputs, including the organisation of stakeholder engagement activities.

#### Argument-Based Assurance

Drawing all of the prior modules together, we will end with a discussion of a procedural approach, known as argument-based assurance, that can be used to communicate how a project team has acted responsibly. We will explore current methods in this domain, including the production of safety or security cases, as well as methods for communicating and evaluating ethical properties of a project or technological system.
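For readers unfamiliar with argument-based assurance, the sketch below shows one possible way to represent the skeleton of an assurance case (a top-level goal, supporting property claims, and linked evidence) as plain data. The structure, field names, and example content are illustrative assumptions only, not a standard notation or a method prescribed by the course.

```python
# Illustrative sketch only: one way to represent the skeleton of an assurance
# case (top-level goal, supporting property claims, and linked evidence).
# The structure and example content are assumptions for illustration.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Evidence:
    description: str  # e.g. a test report, audit log, or dataset datasheet
    reference: str    # where the supporting artefact can be found


@dataclass
class PropertyClaim:
    statement: str    # a claim about a property of the project or system
    evidence: List[Evidence] = field(default_factory=list)


@dataclass
class AssuranceCase:
    goal: str         # the top-level claim the case argues for
    claims: List[PropertyClaim] = field(default_factory=list)


# Example: a fragment of a hypothetical fairness-focused assurance case
case = AssuranceCase(
    goal="The deployed decision support tool treats service users fairly.",
    claims=[
        PropertyClaim(
            statement="Model performance has been evaluated across relevant demographic groups.",
            evidence=[Evidence("Disaggregated evaluation report", "reports/evaluation.md")],
        ),
        PropertyClaim(
            statement="Users receive training on the tool's limitations before use.",
            evidence=[Evidence("User training materials and attendance records", "training/")],
        ),
    ],
)

for claim in case.claims:
    print(f"Claim: {claim.statement} ({len(claim.evidence)} piece(s) of evidence)")
```

The point of the sketch is simply that an assurance case makes the chain from goal to claims to evidence explicit, so that others can inspect and challenge it; the course module will discuss established approaches (such as safety and security cases) in more depth.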