# Collaborations Workshop Delivery

:::warning
:information_source: From the organisers:

### General Guidance
60 minutes may not feel like a long time for a workshop or demonstration, so it is important to plan how you will use the time. If you are able to incorporate a combination of presentation and collaborative group activities, discussions, audience Q&A or feedback, we find this is a great way to engage an audience. You can also provide information about how participants can become involved in follow-up activities.

### Coaching Sessions for Facilitators
The recording of the session is available to watch here: https://drive.google.com/file/d/1JXLkIxZMxX-bBLsf3UH9B9Ag_-F6WS7r/view

You can find the otter.ai transcript of the session at https://otter.ai/u/px0NzAjHFEpedk4qLY4LML7nb-w?utm_source=copy_url - it might help you narrow the 40 minutes down to the parts you'd like to catch up on.

Notes from the session (including links to a floor plan of the venue and their virtual tour) can be found at: https://docs.google.com/document/d/1-VAe9MsO6j9nN1npVo0JRWvJL9TOoiR7tJrC815Q8SE/edit

### CW24 Agenda
You can view the [full agenda](https://www.software.ac.uk/cw24-agenda) and the [mini-workshop and demo session abstracts](https://www.software.ac.uk/cw24-mini-workshops-and-demo-sessions). Each block has 4 mini-workshop tracks in parallel, each comprising one 60-minute session. Please let me know if you want any updates/changes made to your abstract, or if you have prerequisites for participants that need to be shared.

### Registration
If you or any of your co-facilitators have not yet registered for CW24, please do so as soon as possible via Eventbrite. Registration for in-person tickets closes on 5 April 2024; the deadline to register for remote attendance is 11 April 2024.

### Zoom
CW24 will take place as a hybrid event, and remote participation will be facilitated using Zoom Events. This allows each of the parallel mini-workshop tracks to happen in an independent Zoom room, so that you are able to create breakout rooms in your sessions if you need to.

### Help from the Organising Team
The CW24 team will help with instructions or hands-on support to manage the video connection between the in-person room and the associated Zoom room, and helpers from the CW24 Organising Committee will be present to assist during the session.

### Session Structure and Collaborative Notes Documents
We will be creating a collaborative note-taking Google document for each of your workshops. The document templates will have some general guidance at the top and a space for participant roll call. We aim to send you the links to your respective documents by Thursday, 11 April 2024, and then you are free to populate the rest of the document however you see fit in order to best facilitate your session (e.g. provide an agenda, space for questions and notes, any links, etc.). You can view examples from last year at CW23 [here](https://docs.google.com/document/d/1Rn26lgF7MgN8LfQirrkcJ2zfzu3LRNHj6Hvc_KT--KU/edit) and [here](https://docs.google.com/document/d/19KqRg6BP6dKCk2FoD5pVumFkABqoD0hNKim6r0VPYQo/edit). If this is not useful to your workshop, no problem: the documents are optional and there for your convenience. We just want to keep the ones that are used centralised with all the documents generated as part of CW24.
By default, these documents will be licensed [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) as per the [Participation Guidelines](https://www.software.ac.uk/cw24-participation-guidelines) unless requested otherwise. We will provide guidance and advice around creating these documents during the Coaching Sessions for Facilitators.

### Slido
You have access to a Slido here: https://admin.sli.do/event/9uSraHcksyWGVz5dwfGB5P/polls

==provide others with access==

![image](https://hackmd.io/_uploads/SyeXX0clR.png)

### Session materials
If you would like to share any session materials such as presentation slides, you can submit them to the CW24 Zenodo community following these guidelines: https://zenodo.org/doi/10.5281/zenodo.10848450
:::

## Workshop Agenda proposal

==**30 April, 15:45-16:45**==

| Time | Length (min) | Lead | Agenda point |
| ----------- | ------------ | ------------ | ----------------------------------------- |
| 15:45-15:55 | 10 | Chris | <h4>Introduction</h4>Brief introduction: outline the goals + significance of the TEA platform in promoting ethical AI practices. Also ==~~plug the survey?~~==<br/><br/>Follow this with a quick intro to the platform's purpose, its development by the Turing + University of York, and its role in the AI/ML community. Then showcase the platform's interface and core functionalities, highlighting how it facilitates the creation and management of assurance cases. (CemrgApp as example + QR code to the film.) |
| 15:55-16:05 | 5-10 | James | <h4>Ethics in AI/DT data science</h4>A brief talk by James on the importance of ethical assurance in environmental data science, particularly for Digital Twins, to set the context for why these practices are critical. |
| 16:05-16:15 | ~~10~~ | ~~Marina/Jose~~ | ~~<h4>Real-World Application</h4>Video walkthrough of CemrgApp as a real-world case study developed using the TEA platform.~~ ==_We have decided that CemrgApp will be part of Chris's intro instead and we will then share the video from Marina and José with the participants._== |
| 16:15-16:20 | 5 | Chris | <h4>Interactive Q&A</h4>Using ==a Slido poll==, open the floor for questions from attendees to probe deeper into concerns or suggestions about the platform. |
| 16:20-16:40 | 20 | Chris (in-person) / Sophie (online) | <h4>Hands-On Training</h4>Attendees engage in a ==hands-on group activity== where they will create a mini assurance case using the TEA platform.<br /><br />The activity will focus on safety-critical autonomous vehicles (in-person, led by Chris) and facial recognition (online, led by Sophie). Preparations for this can be found [below](#Preparations-for-hands-on-part-of-workshop). |
| 16:40-16:45 | 2 | Chris | <h4>Wrap-up</h4>Summarise key takeaways from the session + encourage ongoing engagement with the TEA platform through our usual channels.<br /><br />We may also want to direct attendees to ==a (Slido?) poll== to gauge satisfaction with the session + gather suggestions for platform improvements. ==**2-pager shared as CTA**== |

## Additional stuff/questions/considerations

- [X] Perhaps we could have James + Cemrg prep a longer version of what they want to say as a pre-recorded video that we can share with participants, as 10 mins each isn't a lot of time...
- [ ] ==*prep*== Prep demo videos of the platform as well? Too much to do? But since we only have 10 mins of demo of capabilities, we might want to share pre-recorded material to make sure we're on time...
    - We also only have access to the old platform.
- [ ] ==*prep*== Do we need to / should we update the 2-pager? Print more for this event + have one version up on the repo or something?
    - [ ] Print a few more
- [ ] ==*prep*== Prepare a follow-up email with all resources + links etc.
- [ ] Make sure all the tech has been run through beforehand as we have a tight schedule

## Agenda for April 23

- [ ] Finalise preparations for the ["hands-on part of workshop"](#Preparations-for-hands-on-part-of-workshop) below - i.e. agreement on use-cases (autonomous vehicles/facial recognition) => formulating goals, strategies, property claims
- [ ] Coordinate the Collaborations Workshop with all the organisers

## Meeting notes -- 16 April

- :heavy_check_mark: Intro by Chris
- :heavy_check_mark: Presentations by Cemrg + James
- Facilitate a group creation of an assurance case (Chris in-person, Sophie remote) through structured questions
    - goals + strategies
    - identify a claim
    - think of potential evidence
- come up with a potential scenario
    - autonomous driving: a proposal to the local council, which is trying to approve a licence to trial the deployment of a fleet of these vehicles for public use (San Francisco did this). They want an assurance case to say that it is safe.
        1. ![image](https://hackmd.io/_uploads/Sy6UfW3eC.png)
        2. ![image](https://hackmd.io/_uploads/HySdf-neR.png)
        3. create a goal: ![image](https://hackmd.io/_uploads/BkesGW2lA.png)
        4. create context: ![image](https://hackmd.io/_uploads/r1KizZhxC.png)
        5. create context: ![image](https://hackmd.io/_uploads/Hkf6Gb2g0.png)
        6. high-level ways that you would approach safety
            - ![image](https://hackmd.io/_uploads/ByNgmW2gR.png)
            - ![image](https://hackmd.io/_uploads/B1yQmW2g0.png)
        - could change the goal description: mitigating risk to a level that is as low as reasonably practicable (ALARP)
    - move from technical/safety towards ethics... Safe for whom? (so we talk about *fairness in safety* here)
        - cyclists / elderly people
    - (racial) bias in facial recognition
    - fairness in a Decision Support System: patient assessed in a routine psychiatric appointment (add LLM?)
        - voice-to-text => annotated for the psychiatrist ("Your patient said X, maybe this is something you want to follow up on.")
        - How would you evaluate this and produce an assurance case? What would you do?
    - LLMs in general
- do two different cases for the online/in-person groups
- By the 23rd, 10 min presentation from Chris
- Chris will invite James, Marina, and José for the 23rd
- Aim: coordinate the Collaborations Workshop. Agreement on use-cases (autonomous vehicles/facial recognition) => formulating goals, strategies, property claims

---

## Preparations for hands-on part of workshop

**Aim: Facilitate a group creation of an assurance case through structured questions**

Chris will lead in-person, Sophie will lead remote. In advance, think of:

- goals + strategies
- identify a claim
- think of potential evidence

(A minimal structural sketch of these elements is included after the two examples below.)

**Two selected examples**:

1. [Autonomous vehicles](#Autonomous-vehicles)
2. [Facial recognition technologies](#Facial-recognition-technologies)
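To keep the goal/claim/strategy/evidence vocabulary concrete while facilitating, here is a rough sketch of the elements as Python dataclasses. This is not the TEA platform's data model: the class and field names are hypothetical, and it is only one possible way to represent the structured questions above.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Evidence:
    """An artefact that backs up a property claim (e.g. a test report or audit)."""
    name: str
    description: str


@dataclass
class PropertyClaim:
    """A specific, assessable claim that supports the overall goal."""
    statement: str
    evidence: List[Evidence] = field(default_factory=list)


@dataclass
class AssuranceCase:
    """Top-level structure mirroring the structured questions for the hands-on activity."""
    goal: str                                           # the overall goal claim
    context: List[str] = field(default_factory=list)    # scope, stakeholders, standards
    strategy: List[str] = field(default_factory=list)   # how the goal will be argued
    claims: List[PropertyClaim] = field(default_factory=list)
```

During the session the same shape can simply be sketched on the collaborative notes document; the code form is just a prompt for thinking about which element each suggestion from participants belongs to.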
### Autonomous vehicles

By structuring the elements below, Brookhaven's local council can build a robust and transparent assurance case that effectively communicates the safety and reliability of their autonomous vehicle fleet to all stakeholders. This approach not only aids regulatory compliance but also enhances public trust in the deployment of new technologies.

#### Goal Claim

The goal claim for Brookhaven's autonomous vehicle project is:

**"The deployment of our autonomous vehicle fleet ensures fair and equitable safety for all road users, including pedestrians, cyclists, and the elderly."**

This goal emphasises not just the safety but the fairness of the technology's impact on diverse community members.

#### Property Claims

To support this ethically oriented goal claim, the property claims are expanded to specifically address the needs of all road users:

- **Inclusive Safety Protocols**: Safety measures account for the diverse capabilities and needs of all users, including the elderly and those with disabilities.
- **Cyclist Detection and Response**: The vehicles are equipped with advanced detection systems that specifically recognise cyclists and tailor vehicle behaviour to ensure their safety.
- **Pedestrian Priority Systems**: Implement systems that prioritise pedestrian safety in all scenarios, particularly focusing on the elderly and children in urban and crosswalk areas.
- **Equity in Safety Features**: Ensure that safety features do not disproportionately benefit one user group over another, maintaining equity across different demographics, including socio-economic backgrounds.

#### Strategy

The strategy to demonstrate that the autonomous vehicles promote fairness in safety involves:

- **Detailed User Scenarios**: Developing scenarios that include a variety of road users, such as cyclists and elderly pedestrians, to test vehicle responses in diverse situations.
- **Inclusive Design Principles**: Applying inclusive design principles in the development of vehicle systems to ensure accessibility and fairness.
- **Community Engagement**: Engaging with community groups, including cycling clubs and senior citizen centres, to gather feedback and incorporate it into system refinement.
- **Equity Audits**: Conducting periodic equity audits to assess and address any disparities in safety outcomes among different user groups.

#### Evidence

The types of evidence to support these claims include:

- **User Feedback and Consultation Reports**: Documentation of consultations with stakeholders representing cyclists, the elderly, and other vulnerable groups, along with their feedback on safety features.
- **Inclusive Testing Results**: Results from testing scenarios that demonstrate vehicle performance in situations involving diverse road users.
- **Design and Feature Documentation**: Detailed descriptions of the inclusive design features implemented in the vehicles.
- **Audit Outcomes**: Reports from equity audits that analyse the fairness of safety outcomes across different demographics.

#### Context

The context considerations also focus on ethical implications:

- **Ethical Standards and Guidelines**: Adherence to ethical standards that mandate equal treatment and non-discrimination in automated systems.
- **Demographic Diversity of Brookhaven**: Understanding the specific demographic composition of Brookhaven to tailor vehicle systems to the community's unique needs.
- **Legal Frameworks for Equity**: Compliance with local and national laws that require equitable treatment of all citizens in public services.
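As a quick reference for facilitators, the same example can be captured as plain Python data. The keys and wording below are lifted from the section above; the shape is illustrative only and is not the TEA platform's actual schema. It can be handy for checking that every property claim elicited during the session ends up with at least one supporting piece of evidence.

```python
# Worked instance of the Brookhaven autonomous-vehicle example (hypothetical scenario).
brookhaven_av_case = {
    "goal": (
        "The deployment of our autonomous vehicle fleet ensures fair and "
        "equitable safety for all road users, including pedestrians, "
        "cyclists, and the elderly."
    ),
    "context": [
        "Ethical standards mandating non-discrimination in automated systems",
        "Demographic diversity of Brookhaven",
        "Legal frameworks for equity in public services",
    ],
    "strategy": [
        "Detailed user scenarios covering cyclists and elderly pedestrians",
        "Inclusive design principles",
        "Community engagement",
        "Periodic equity audits",
    ],
    "property_claims": [
        {
            "claim": "Cyclist detection and response is tailored to cyclist safety",
            "evidence": ["Inclusive testing results", "Design and feature documentation"],
        },
        {
            "claim": "Safety features do not disproportionately benefit one user group",
            "evidence": ["Equity audit outcomes", "User feedback and consultation reports"],
        },
    ],
}

# Sanity check: every property claim should be backed by at least one item of evidence.
for pc in brookhaven_av_case["property_claims"]:
    assert pc["evidence"], f"Claim lacks evidence: {pc['claim']}"
```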
### Facial recognition technologies

:::info
**Hypothetical Case study**

You are a tech company developing facial recognition software to empower law enforcement agencies in swiftly scanning crowds and comparing faces against a comprehensive database. This technology assists authorities in identifying individuals on watchlists, including suspects, missing persons, and other persons of interest to the police, enhancing the efficiency of investigations. By providing real-time matches, your software aims to revolutionise public safety efforts, aiding in the prompt apprehension of suspects and the swift location of missing persons.
:::

#### Goal Claim

The goal claim for the project to implement facial recognition technology is:

**"The facial recognition system is designed and operated to minimise bias and ensure equitable treatment of all demographic groups."**

This goal focuses on combating bias and promoting fairness in technological applications.

#### Property Claims

To support the goal claim, the following property claims are vital:

- **Biased Identification**: If the facial recognition system consistently fails to accurately identify certain demographic groups, such as people of a particular race or gender, it could lead to unfair treatment, for example through false matches that subject innocent people to police attention or missed matches that leave genuine cases unresolved.
- **Data Security Breaches**: Unfair treatment could occur if the system fails to adequately protect personal data, leading to breaches or misuse of personal information. For example, if the system's database of facial biometrics is compromised, it could result in identity theft or unauthorised access to sensitive information, disproportionately affecting certain individuals.
- **Lack of Transparency**: If the operation of the facial recognition system is not transparent to those affected, it could lead to mistrust and suspicion, especially if individuals feel their privacy is being violated without their knowledge or consent. This lack of transparency could contribute to a sense of unfair treatment, even if biases are not present in the system's algorithms.
- **Ineffective Redress Mechanisms**: If the deploying agency does not provide adequate avenues for individuals to challenge incorrect identification or address concerns about the facial recognition system, it could result in unfair treatment by denying people the opportunity to rectify errors or misunderstandings.

#### Strategy

The strategy to demonstrate how the facial recognition system counters bias includes:

- **Diverse Dataset Acquisition and Validation**: Acquiring and continually updating the dataset to include a broad representation of demographic groups, and validating the dataset for balance and diversity.
- **Implementation of Bias Detection Tools**: Using statistical tools and techniques to identify and correct biases in the system's outputs.
- **Feedback Mechanisms**: Establishing mechanisms for collecting and incorporating user feedback to refine and update the model.
- **Documentation and Reporting**: Creating comprehensive reports detailing the system's design, operational protocols, and the steps taken to ensure transparency.

#### Evidence

Evidence necessary to substantiate these claims includes:

- **Dataset Description and Metrics**: Detailed information on the composition and diversity of the training datasets, along with metrics demonstrating their representative nature.
- **Bias Audit Reports**: Independent audits of the system's performance, focusing on bias metrics across different demographic groups.
- **User Feedback Documentation**: Summaries of user feedback, particularly concerning perceived biases and the system's responsiveness to such feedback.
- **Transparency Logs**: Logs and records that document the decision-making process of the system, accessible to stakeholders for review.

#### Context

Contextual elements that influence the assurance case include:

- **Ethical Guidelines and Compliance**: Adherence to ethical guidelines and compliance with legal standards related to discrimination and privacy.
- **Technological Landscape**: The state of technology in facial recognition, including common pitfalls and challenges in minimising bias.
- **Societal and Cultural Dynamics**: The societal and cultural dynamics of the system's operational environment, which might affect the perception and effectiveness of the technology.
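To make the "Bias Audit Reports" evidence item concrete for the online group, here is a toy sketch of the per-group error analysis such an audit might report. The data, group names, and disparity measure are all hypothetical and purely illustrative; a real audit would use a labelled benchmark dataset and agreed fairness metrics.

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic_group, correctly_identified).
# In a real audit these would come from a labelled benchmark, not hard-coded values.
evaluation_records = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

# Count outcomes per group.
totals = defaultdict(int)
errors = defaultdict(int)
for group, correct in evaluation_records:
    totals[group] += 1
    if not correct:
        errors[group] += 1

# Misidentification rate per group; large gaps between groups would flag potential bias.
rates = {group: errors[group] / totals[group] for group in totals}
for group, rate in sorted(rates.items()):
    print(f"{group}: misidentification rate = {rate:.2f}")

# A simple disparity measure: the gap between the worst- and best-performing groups.
disparity = max(rates.values()) - min(rates.values())
print(f"max between-group disparity = {disparity:.2f}")
```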
### Meeting notes - 23rd April

- Walk through the agenda
- James: compound assurance of a digital twin (DT) system. This demands that we think about it on a larger scale -- systematic claims and their validation. How to scale this to digital twins.

### Actions

- [name=Kalle] share the Slido login
- [name=Kalle] find out what Marina & José will present in the video