:::info
**Key info**
Audience: undergrad interns (7)
Date and time: 15th July 1-5 pm
Duration: 4h
Format: workshop
Resources: [Turing Commons RRI Skills Track](https://alan-turing-institute.github.io/turing-commons/skills-tracks/rri/index.html)
Instructor: Mishka Nemes (Skills Manager)
:::
### Description
This half-day workshop explores what it means to take (individual and collective) responsibility for (and over) the processes and outcomes of research and innovation in data science and AI. It offers participants discussion opportunities to reflect on moral responsibilities and to identify leverage points within the project lifecycle where ethically significant questions can be raised.
The workshop is a *taster* of a wider curriculum within Responsible AI & Ethics, and participants will be signposted to further learning and upskilling opportunities should they wish to explore any of the topics and concepts in more detail.
### Learning outcomes
1. Understand what is meant by the term ‘responsible research and innovation’, including the motivation and historical context for its increasing relevance.
2. Gain familiarity with the related concepts of ‘moral agent’ and ‘moral subject’, and how to identify them within a particular scenario.
3. Learn how to identify moral responsibilities, as well as when (and why) they may come into conflict.
4. Identify and evaluate the ethical issues associated with the key stages of a typical data science or AI project lifecycle: (project) design, (model) development, (system) deployment.
~~Explore practical tools and mechanisms for operationalising the ethical principles designed to guide the responsible design of data science and AI projects.~~
~~Understand the importance of responsible communication in the design, development, and deployment of data science and AI projects, and explore ways to exercise this responsibility.~~
### Agenda & Topics covered
#### INTRO - 15 mins
- Ice-breaker exercise
- Motivation for running the session
- Overview / agenda
#### PART 1 - 1h 15 mins
- What is RRI
- Understanding responsibility, i.e. responsibility vs accountability (excluding the Manhattan Project & harmless torturer examples)
- Defining RRI
- Introduction to the AREA framework
#### BREAK - 15 mins
#### PART 2 - 1h 30 mins including breakout discussion
- What is the project lifecycle?
- Project design
- Model development
- System deployment
**Activity / breakout discussion** (30 mins)
- Materials: the project lifecycle interactive PDF
- Example prompt: in groups, discuss why we call this model a heuristic, and identify how you could employ ethical considerations within your own Turing project at different stages
#### BREAK - 15 mins
#### WRAP-UP - 30 mins
- Conclusion & wrap-up
- Intro to TREx (Dennae) (15 mins)
- Further reading, resources & signposting
- SAFE-D principles (Fairness & Transparency including bias cards)
- Public guidance workbooks (AIethics platform)
- Responsible AI courses on OLP
- Actor cards interactive deck
- Turing Commons case study repository
- more to be added!
### Questions for Chris
- Is it feasible to run through everything here in the time allocated, given we want to make it interactive?
- Are there existing slide decks I could reuse?
- Have the remaining 3/5 SAFE-D modules been completed yet?
- How to use the case studies?