# CC10: Making teaching relevant to real-world problems
### Agenda
| Agenda | Speaker | Time |
| ------------------------------------------------------- | ------------- | ------------- |
| Intro and overview | Mishka | 13:30 - 13:45 |
| Making teaching relevant to real-world problems <br> inc. breakout activities and Q&A | Dr Zachary Goldberg | 13:45 - 14:45 |
| Wrap-up | Mishka | 14:45 |
---
## Feedback and Q&A
### Questions from participants :question:
Place your questions here and we will pick these up in the session :smile_cat:
- Should applied ethics, as it is taught to industry, also be taught in CS classes? I find our curriculum is weak on the social and ethical aspects, and educators may feel that applied ethics is not for CS... (I disagree). Should we add this to the requirements for all CS courses?
### Useful Links/references :bookmark_tabs:
- [My talk (Sam Ahern) on SLAM, autonomous vehicles and mapping](https://mediacentral.ucl.ac.uk/Play/4428)
- [From ethical AI frameworks to tools: a review of approaches](https://link.springer.com/article/10.1007/s43681-023-00258-9)
- [Upcoming workshop: Educating Engineers for Safe AI, Newcastle August 15th & 16th](https://www.eventbrite.co.uk/e/educating-engineers-for-safe-ai-tickets-646998809857)
- [Trusted AI: translating AI ethics from theory into practice](https://realworlddatascience.net/ideas/posts/2023/07/03/trusted-AI.html)
- [Datopolis - ODI's boardgame about open data and data ethics](https://learning.theodi.org/courses/datopolis)
- [Moral machine](https://www.moralmachine.net/)
- [Course: Operationalising Ethics AI, intermediate](https://www.turing.ac.uk/courses/operationalising-ethics-ai-intermediate)
- [Course: Operationalising Ethics AI, expert](https://www.turing.ac.uk/courses/operationalising-ethics-ai-expert)
- [Voicing code in STEM: recontextualisation and transitional othering](https://ieeexplore.ieee.org/document/9388095)
- [Book: Automating Inequality, Virginia Eubanks (review)](https://blogs.lse.ac.uk/lsereviewofbooks/2018/07/02/book-review-automating-inequality-how-high-tech-tools-profile-police-and-punish-the-poor-by-virginia-eubanks/)
- [Book: Race After Technology, Ruha Benjamin](https://www.ruhabenjamin.com/race-after-technology)
- [AI incident database](https://incidentdatabase.ai/)
- [QAA Education for Sustainable Development guidance](https://www.qaa.ac.uk/the-quality-code/education-for-sustainable-development)
---
:::success
## Game
Imagine you work for a think tank advising a startup on how best to develop an algorithm for a self-driving vehicle.
Imagine there’s a malfunction and a subsequent car crash.
Stakeholders: Developer, Driver, Legislator, Insurer
Values: Traceability, explainability, accountability, privacy, trust, human decision-making, fairness
Using the sizes XL, L, M, S, prioritize whose interests matter most and for what purpose.
Formulate using the user story method: As a… (stakeholder), I want… (values)… in order to… (interests), given… (context)
Ex. As a developer, I want to increase traceability in order to track the system error and avoid future collisions.
Ex. As a legislator, I want to increase explainability in order to impose stricter requirements on the system in case of a collision.
**Notes:**
- [name=Samantha]
  - Developer (understand system) = XL traceability, L explainability, M accountability, S fairness, XS privacy
  - Driver (trust vehicle) = XL trust, L human decision-making, M privacy, S fairness, XS accountability
  - Legislator (public safety and regulation) = XL accountability, L explainability, M traceability, S fairness, XS human decision-making
  - Insurer (liability and culpability) = XL accountability, L traceability, M explainability, S human decision-making, XS trust
- [name=INSERT/anonymous] Legislator = XL fairness, accountability, trust; L traceability; M explainability, privacy; S human decision-making. As a legislator, they are concerned with a balancing act in which fairness must be the first main concern.
- [name=Mishka] As a pedestrian (XL), I want algorithmic fairness in order to feel confident (L) that the SDV is trained on humans who represent me.
- [name=INSERT/anonymous]
  - Developer: traceability XL, explainability L, accountability M, fairness S, trust XS
  - Driver: trust XL, human decision-making L, privacy M, fairness S, accountability XS
  - Legislator: explainability XL, trust L, accountability M, fairness S, privacy XS
  - Insurer: accountability XL, traceability L, explainability M, human decision-making S, fairness XS
:::
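The T-shirt sizing exercise above can also be tallied programmatically. Below is a minimal sketch, not part of the session itself, that maps the XL–XS sizes to hypothetical numeric weights and aggregates one participant's rankings (Samantha's, from the notes) across stakeholders; the weights and function names are assumptions chosen for illustration.

```python
# Hypothetical weights for the T-shirt sizes used in the game.
SIZE_WEIGHT = {"XL": 5, "L": 4, "M": 3, "S": 2, "XS": 1}

# One participant's rankings, transcribed from the notes above.
rankings = {
    "Developer":  {"traceability": "XL", "explainability": "L",
                   "accountability": "M", "fairness": "S", "privacy": "XS"},
    "Driver":     {"trust": "XL", "human decision-making": "L",
                   "privacy": "M", "fairness": "S", "accountability": "XS"},
    "Legislator": {"accountability": "XL", "explainability": "L",
                   "traceability": "M", "fairness": "S",
                   "human decision-making": "XS"},
    "Insurer":    {"accountability": "XL", "traceability": "L",
                   "explainability": "M", "human decision-making": "S",
                   "trust": "XS"},
}

def aggregate(rankings):
    """Sum the size weights for each value across all stakeholders."""
    totals = {}
    for sizes in rankings.values():
        for value, size in sizes.items():
            totals[value] = totals.get(value, 0) + SIZE_WEIGHT[size]
    # Sort highest-weighted value first.
    return dict(sorted(totals.items(), key=lambda kv: -kv[1]))

print(aggregate(rankings))
# For this participant, accountability comes out on top (14),
# followed by traceability (12) and explainability (11).
```

A tally like this is only a conversation starter: the point of the game is the reasoning in the user stories, not the aggregate score.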
## Whole cohort discussion :speaking_head_in_silhouette:
- What challenges have you faced bridging the gap between academia and industry?
- Different terminologies, differing viewpoints/lenses, competing priorities, different ways of working, and difficulty relating and distilling theory into practice
- Terminology as was mentioned has been a big challenge
- Data/problem sets used in academia often don't raise many ethical questions (e.g. a cat/dog classifier).
- Issues really arise with/at deployment, which is usually not part of academia.
- What additional solutions would you add to those discussed today?
- ...
- More testing before deployment. Being more thoughtful.
- What other questions do you have?
- ...