# AIUK Fringe event - Melbourne - Panel Briefing
## Event blurb
### Data-Driven Futures: Responsible AI in Climate and Health Policy
Join us at the AIUK Fringe Event, in collaboration with CAIDE & CSIRO's Data61, for a focused exploration of responsible AI's role in developing climate-resilient health policies. This session will delve into the ethical challenges and governance issues of AI in the context of climate change and its effects on public health. The event also features a case study activity focused on practising skills in identifying and managing risks associated with AI-supported decision making. This is an opportunity to deepen your understanding and enhance your ability to identify and manage risks associated with AI in health and environmental contexts.
[Registration Link](https://docs.google.com/forms/d/e/1FAIpQLSdZmN6CcUtuQ8-wunCdpw4E5tQNAN5wSMZSN8edp1NBoJTADQ/viewform)
Location: [The Woodward Centre](https://unihouse.org.au/functions-at-university-house/), Law Building, 10th floor, 106/185 Pelham St, Carlton VIC 3053
## Agenda
Feel free to arrive as early as you like (from 3:30 pm onwards), but please make sure to be there at least 15 minutes before the panel discussion starts. If you have to leave before the end of the event, please do so no earlier than 5:20 pm.
15:30 – 16:00 Set-up
16:00 – 16:30 Welcome & Networking
16:30 – 17:00 Panel Discussion
17:00 – 17:20 Audience Q&A
17:30 – 19:00 Interactive activity
19:00 – 19:30 Drinks
## Panel details
### Goals
- multidisciplinary
- illustrative examples from your work
- conversational
### Speaker profiles
[Prof. Gavin Shaddick](https://www.turing.ac.uk/people/researchers/gavin-shaddick) - Executive Dean of Engineering, Physical and Mathematical Sciences at Royal Holloway University of London
- Data Science and AI for Environmental Intelligence
- Spatio-temporal modelling, microsimulation
- Health applications of AI (air pollution)
[Prof. Didar Zowghi](https://people.csiro.au/z/D/Didar-Zowghi) - Senior Principal Research Scientist and Diversity & Inclusion in AI Lead, CSIRO's Data61
- Diversity and Inclusion in AI
- operationalising AI ethics & principles
- Technical background / software engineering
[Prof. Jacqueline Peel](https://findanexpert.unimelb.edu.au/profile/8713-jacqueline-peel) - Director of Melbourne Climate Futures, University of Melbourne
- Environmental and climate change law
- risk regulation
- Topics of expertise: climate change and the links to human health, growth of a human rights based approach in international climate policy.
[Prof. Jeannie Paterson](https://law.unimelb.edu.au/about/staff/jeannie-paterson) - Director of the Centre for AI and Digital Ethics; Digital Access and Equity Research Program, Melbourne Social Equity Institute
- Consumer and data protection law and ethics in the context of emerging digital technologies.
- Regulation perspective - Can law be set up in a way to prompt responsiveness rather than being prescriptive?
### Discussion roadmap
1) Context Setting: Understanding AI's Role in Climate and Health
What connects climate & health?
Why do we talk about this now?
What type of AI solutions are promising in this challenge?
[See, for example, a research project modelling environmental hazards](https://www.turing.ac.uk/research/research-projects/impacts-climate-change-and-heat-health)
2) Responsible AI: Bias correction and risk mitigation in AI
What is meant when we say Responsible AI?
What are the prevalent biases and risks in the context of climate & health?
*Would be great to hear not only the types of biases you find relevant but also some explicit examples of how they apply to / how you address them in your work*
3) Opening the dialogue: What voices should be included & how to involve people successfully?
What are good examples of how responsible AI can be operationalised?
What does participatory design look like in practice?
### Identified Themes
(Importance of responsible AI)
- People think there are no ethical issues when we talk about the environment; but if we want to use these models to change policy, then there are
- The purpose of microsimulation is to estimate exposures for subgroups in order to make policy decisions; this requires gathering information such as where individuals are at any given point in time: sensitive information!
- It requires engaging with the real limitations on the ground (not everyone can afford access; data are sparse for marginalised subgroups)
- Bias, uncertainty, and missing data will all influence the decisions made at the end; we need to understand these deeply
(AI Maturity and Societal Readiness)
- Bias and risk are the outcome of developing without inclusivity in mind, whereas inclusivity is the process.
- Are there some aspects of AI that we are not ready to make decisions/policy on? Are we ready (not just technically)? How do we globalise our approach?
- Optimising for minimising global error rate vs individual group error rate
- ==Risk-based approach to regulation?==
- Design of AI: if using tools with a high degree of uncertainty, how do we communicate this to decision makers? Risk-based approaches would capture this: the greater the uncertainty around accuracy and bias, the more precautions are needed.
- The ideal state with regard to equity and inclusion has not been reached. If you do make an effort, you can build inclusive AI which can then inform future efforts.
- The goal is a positive feedback loop, while avoiding a negative feedback loop that reinforces existing biases.
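The error-rate trade-off raised above can be made concrete with a small sketch. All numbers and group labels here are invented for illustration: a model can achieve a low global error rate while performing far worse on a small subgroup, which is exactly what per-group evaluation surfaces.

```python
# Toy illustration (made-up data): low overall error can mask
# much higher error on a small subgroup.
groups = ["majority"] * 90 + ["minority"] * 10
# 1 = prediction correct, 0 = prediction wrong
correct = [1] * 87 + [0] * 3 + [1] * 5 + [0] * 5

def error_rate(flags):
    """Fraction of incorrect predictions."""
    return 1 - sum(flags) / len(flags)

overall = error_rate(correct)
by_group = {
    g: error_rate([c for g2, c in zip(groups, correct) if g2 == g])
    for g in set(groups)
}
print(f"overall error:  {overall:.2f}")                # 0.08
print(f"majority error: {by_group['majority']:.2f}")   # 0.03
print(f"minority error: {by_group['minority']:.2f}")   # 0.50
```

Optimising only the global figure would leave the minority group's 50% error rate invisible; reporting per-group rates makes the disparity explicit.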
(Inclusivity during data collection / data justice)
- When people submit data, are they aware that the data is being used for all sorts of purposes it wasn't necessarily designed for, and may affect their life in ways they may not agree with?
- Example: banning SUVs around schools based on surveys; at the population level it's better for everyone, but at a personal level respondents may not realise that, down the line, their input will shut down their own route to school
- ==Human rights-based approach?==
- a risk-based approach needs to be grounded in something, e.g. in human rights
- if responding to policy that decides how we use spaces, a human rights-based approach would allow affected population groups to be included (an equity approach to who is consulted)
- we cannot think about the environment and climate change independently from the people who are affected by it.
(Inclusivity & participatory design more broadly)
- Is participatory design == inclusive?
- inclusion is broader: don't just be proactive, but advocate that diversity & inclusion is a human rights issue. It is the bedrock of responsible AI and needs to be thought of end-to-end throughout the project lifecycle; participatory design is one aspect (but inclusion also matters after design)
- human rights perspective intertwined with engineering perspective
(Explainability)
- If you pay attention to RRI principles and make efforts to apply them at different stages of the AI lifecycle, you're bound to build an inclusive AI which can help us interrogate and learn from our mistakes
- Gavin: how can we establish a positive feedback loop for improving and avoid a negative feedback loop that reinforces existing bias?
- Didar: an example is aircraft accidents, where flight-recorder data is used to improve safety; we should take the negative harms from AI and learn from them in the same way
- Tracing back is a huge challenge (esp for generative AI) / explainability
- Microsimulation models are probabilistic, so you can trace each individual back through states; it's not a "black box" per se: technically complicated, but possible to explain
- Covid exposure project example (the original DyME for epidemics)
- However, most people don't necessarily build the explanation into the model, even though it's feasible, because it's "less exciting"
- Explainability as a response to opaque AI systems, rather than regulation: is that the question we want to ask? We can understand outcomes without touching on the impact of a system
- Transparency, rather than explainability, is more important in relation to governance: how is a system being developed? Technical work on explainability is more about causation and weighting in the models.
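The point about microsimulation traceability can be sketched in a few lines. This is a minimal toy, not any specific model from the panelists' work: the states, transition probabilities, and seeding scheme are all invented. It shows why such models are not black boxes: each individual's state history is explicitly simulated and can be replayed step by step.

```python
import random

# Toy microsimulation sketch (invented states and probabilities):
# every individual carries an explicit state history, so any outcome
# can be traced back through the sequence of states that produced it.
TRANSITIONS = {
    "home":    {"home": 0.6, "commute": 0.4},
    "commute": {"work": 0.8, "home": 0.2},
    "work":    {"work": 0.7, "commute": 0.3},
}

def simulate(person_id, steps, seed=0):
    """Simulate one individual's state trajectory, reproducibly per person."""
    rng = random.Random(seed + person_id)  # fixed seed -> replayable trace
    state, history = "home", ["home"]
    for _ in range(steps):
        states, probs = zip(*TRANSITIONS[state].items())
        state = rng.choices(states, weights=probs)[0]
        history.append(state)
    return history  # the full, inspectable trace for this individual

trace = simulate(person_id=42, steps=6)
print(trace)  # e.g. ['home', 'commute', 'work', ...]
```

Because the random stream is seeded per individual, the same trace can be regenerated on demand, which is the basis for the "technically complicated but possible to explain" point above.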
### Resources / Papers
- [A dynamic microsimulation model for epidemics](https://pubmed.ncbi.nlm.nih.gov/34717286/)
- [A Data Integration Approach to Estimating Personal Exposures to Air Pollution](https://research.manchester.ac.uk/en/publications/a-data-integration-approach-to-estimating-personal-exposures-to-a)
- [AI and the quest for diversity and inclusion: a systematic literature review](https://link.springer.com/article/10.1007/s43681-023-00362-w)
- [Diversity and Inclusion in Artificial Intelligence](https://arxiv.org/abs/2305.12728)