# Assuring Digital Twins for Resilient and Human-Centred Cyber-Physical Infrastructure

:::info
ℹ️ **About this Document**

This document is a briefing note regarding a funding proposal for the UKRI BRAID Responsible AI Demonstrator call. The working title of this proposal is 'Assuring Digital Twins for Resilient and Human-Centred Cyber-Physical Infrastructure'.

**NB: this document is for internal use only.** A separate version will be created to share with project partners, which omits some of the material.

- Original call document: [https://www.ukri.org/opportunity/braid-responsible-ai-demonstrators/](https://www.ukri.org/opportunity/braid-responsible-ai-demonstrators/)
- Maximum award: £1,100,000
- Maximum duration: 36 months
- Dates:
  - Expression of Interest: 16th May 2024, 4:00pm UK time
  - Final Submission: 27 June 2024, 4:00pm UK time
  - Notification: December 2024
  - Fixed Start Date: 1 February 2025
:::

## Background

The [Trustworthy and Ethical Assurance of Digital Twins](https://www.turing.ac.uk/research/research-projects/trustworthy-and-ethical-assurance-digital-twins-tea-dt) project commenced in February 2024, for a duration of six months, and is funded as part of the UKRI BRAID programme for scoping research. The project is a collaboration between the Alan Turing Institute, the University of York, and the Department for Science, Innovation and Technology. The three interlocking objectives of this project are:

1. To conduct multi-disciplinary scoping research that identifies how a novel RRI tool, known as the Trustworthy and Ethical Assurance (TEA) Platform, can be used by project teams to guide the development of structured arguments that demonstrate how a variety of goals and ethical principles (e.g. security, interoperability, explainability, fairness) have been assured across the lifecycle of digital twinning research and innovation.
2. To co-create accessible and reproducible standards for assuring digital twin technologies (including ML- or AI-enabled components).
3. To cultivate an inclusive and fit-for-purpose assurance ecosystem, building on the work of the UK Government's Department for Science, Innovation and Technology.

Further information about this project can be found here: https://www.turing.ac.uk/research/research-projects/trustworthy-and-ethical-assurance-digital-twins-tea-dt

:::success
✅ **Notable Impact**

- The TEA platform was featured in UK Government policy guidance on AI assurance: https://www.gov.uk/government/publications/introduction-to-ai-assurance
- The TEA-DT project ran a demonstrator stand at AIUK 2024, supported by a case study from the CemrgApp team (Imperial College London). Jean Innes brought Saqib Bhatti MP (Parliamentary Under-Secretary of State for Tech) to the stand to speak with the project lead and discuss alignment with current policy initiatives.
- Current stakeholder engagement and project partners providing case studies, presentations, and other forms of support include: Met Office, British Antarctic Survey, Heartflow, NHS England, MHRA, Pinsent Masons, CemrgApp Team (Imperial College London), DT Hub, and DTNet+. Google and Microsoft have also provisionally agreed to support the project.
- The project and platform have previously received funding from UKRI BRAID (£310,000), UKRI TAS Hub (£150,000), and Lloyd's Register Foundation/Assuring Autonomy International Programme (£110,000).
:::

## Project Outline

### Project Goals

The following interlocking goals have been identified:

1. Co-create open, community-centred infrastructure for assuring digital twins (e.g. TEA platform, open documentation and guidance, repository of assurance cases and patterns).
2. Carry out meaningful multi-stakeholder engagement and interaction, to build capabilities within the UK's emerging AI assurance ecosystem in an equitable manner.
3. Clearly demonstrate the impact and value of responsible research and innovation, provide empirical evidence to validate adoption of assurance techniques, and co-produce best practices and standards (with associated metrics or indicators) that can be reused and extended by others.
4. Produce high-quality research outputs and other deliverables that demonstrate open, responsible, and collaborative leadership within the AI assurance and digital twinning communities.

### Work Packages

The following work packages will ensure the multi-disciplinary and cross-organisational project team have sufficient autonomy to leverage their expertise in pursuit of the goals, and to engage project partners in a manner that is tailored to their respective needs and capabilities, while also ensuring that the project team is governed by a complementary set of objectives.

#### Work Package 1—An Open and Community-Centred Approach to Resilient Innovation

##### Objective 1.1—Produce robust empirical evidence and measures of how RRI practices contribute to resilient innovation and infrastructure

- Develop a study to evaluate the impact of trustworthy and ethical assurance in DT project(s) at multiple stages of the project lifecycle.
- Design the study to be as open as possible to promote reproducibility.
- Identify robust measures (possibly via a Delphi study) to inform assurance practices for DTs.
- Share insights and collaborate with others across the BRAID network to promote generalisable research and improve resilience (e.g. by addressing knowledge gaps).

##### Objective 1.2—Co-develop a sustainable and multi-disciplinary community of practice for assurance of digital twins

- Co-develop a community narrative that represents a diverse and inclusive range of members.
- Expand the structured needs analysis and capability-building exercises from the TEA-DT project.

##### Objective 1.3—Establish an open skills and training curriculum to help build capabilities for the assurance ecosystem

- Develop accessible and freely available materials to help upskill key actors within the assurance ecosystem.
- Adhere to the FAIR principles when designing these materials to promote reuse and development of content across the community (e.g. the BRAID network).
- Implement materials within the TEA platform to help improve usability (e.g. during development of an assurance case), as well as within the Turing's Online Learning Environment in line with its strategic mission.

##### Objective 1.4—Demonstrate the value and impact of the TEA platform for diverse stakeholders and users within the assurance ecosystem

- Continue to engage with policy-makers and regulators in specific domains (e.g. MHRA, CQC, HRA), as well as more generally, to help demonstrate the value of RRI in informing practical mechanisms for assuring the resilience of the UK's cyber-physical infrastructure.
- Strengthen the case for adoption of the TEA platform as a novel approach to responsible research and innovation, by widening the scope of our case studies and partners beyond those currently participating in the TEA-DT project.
#### Work Package 2—Legal and Regulatory Resilience for Cyber-Physical Infrastructure

##### Objective 2.1—Carry out "penetration testing" on current assurance techniques and practices to build legal and regulatory resilience

- Working with Pinsent Masons and the University of York, establish an open framework for "penetration testing" of current DT assurance techniques, with an emphasis on enhancing the UK's legal and regulatory resilience (e.g. identifying gaps in current laws, such as liability for harms caused by autonomous DTs).
- Publish "rulings" that can serve as a precedent for others to build upon, as well as making transparent the gaps that exist in currently adopted governance mechanisms.

##### Objective 2.2—Design and conduct a mock trial for digital twins in public sector decision-making to meaningfully engage members of the public and other stakeholders

- Working with Pinsent Masons and the University of York, inform and engage members of the public in a mock trial involving the use of a digital twin to inform public sector decision-making.
- Provide a model for meaningful public engagement, by being transparent about how and where the findings of this mock trial will actively shape ongoing research.

##### Objective 2.3—Expand the set of project stakeholders to better engage members of the professions (e.g. law, public policy, medicine, engineering)

- Continue needs analysis and capacity-building with new stakeholders, to further enhance the impact and sustainability of the assurance ecosystem.

##### Objective 2.4—Publish an edited collection or handbook on the governance of digital twins

- ==To draft with Phillip, Stefano, and Sue==

#### Work Package 3—Human-Centred Assurance of Digital Twins

##### Objective 3.1—Convene a series of multi-disciplinary research and innovation workshops focused on a human-centred approach to the assurance of digital twins

- Workshop on formal reasoning and augmentation of DT-based explanations (e.g. use of LLMs for decision support).
- Special issue (or journal article) on modelling and representation of arguments for assurance of DTs.

##### Objective 3.2—Stress test the TEA platform's formal schema by exposing it to critical evaluation from across the disciplines

- Multi-disciplinary workshop (e.g. philosophy, mathematics, systems and software engineering, human factors, AI safety and evaluation, cognitive science) focused on ensuring the design of the TEA platform's schema is informed by cutting-edge science and innovation.

##### Objective 3.3—Develop FAIR case studies showing how assurance can enhance the human-centred explainability of digital twins, as well as other data-driven technologies (e.g. genAI)

- Ensure all case studies (and assurance cases) are findable, accessible, interoperable, and reusable, to improve the scalability and extensibility of the TEA platform (a minimal illustrative sketch of a machine-readable assurance case follows below).
- Widen the scope of existing cases and patterns to new domains and contexts to enhance value and impact.
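To make the FAIR objective above more concrete, the following is a minimal sketch of what a machine-readable assurance case could look like, written in TypeScript (the TEA platform's front end currently uses Next.js). The interface names, fields, and example values are illustrative assumptions inspired by goal-structured assurance arguments; they do not represent the TEA platform's actual schema.

```typescript
// Minimal illustrative sketch only. The element names below (AssuranceCase,
// PropertyClaim, Evidence) and their fields are assumptions, not the TEA
// platform's actual schema.

interface Evidence {
  id: string;
  description: string;
  url?: string; // link to a FAIR-archived artefact (e.g. dataset, audit report)
}

interface PropertyClaim {
  id: string;
  statement: string;           // claim about a property of the system
  evidence: Evidence[];        // evidence items supporting this claim
  subClaims?: PropertyClaim[]; // claims can be decomposed into sub-claims
}

interface AssuranceCase {
  id: string;
  goal: string;            // top-level goal the argument is assuring
  context: string[];       // scope, system description, operating assumptions
  claims: PropertyClaim[]; // argument structure linking the goal to evidence
}

// Hypothetical fairness-focused fragment, serialisable as JSON so it can be
// published, indexed, and reused in line with the FAIR principles.
const example: AssuranceCase = {
  id: "example-fairness-case",
  goal: "Outputs of the digital twin support equitable decision-making",
  context: ["Intended use: clinical decision support (illustrative only)"],
  claims: [
    {
      id: "PC1",
      statement: "Training data are representative of the target population",
      evidence: [{ id: "E1", description: "Demographic audit of training dataset" }],
    },
  ],
};

console.log(JSON.stringify(example, null, 2));
```

Serialising cases in a structure along these lines would make them straightforward to index (findable), publish under open licences (accessible), exchange between tools (interoperable), and adapt to new domains (reusable).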
##### Objective 3.4—Build on Paul Grice's cooperative principle to enhance the conversational and communicative aspects of assurance

- ==To draft with Ibrahim==
- Ensure effective communication as a basis for, or prerequisite of, evidence-based assurance.
- Empirically and analytically assess the impact of implementing assurance as a conversation (via user studies, literature reviews, and workshops).

## Project Partners

The following list of partners represents a mixture of organisations that are either a) existing partners of the TEA-DT project, b) current stakeholders and partners for the TRIC-DT or TPS programme, or c) partners who have already expressed interest in collaboration.

- Pinsent Masons: have committed to co-designing and organising legal/governance "penetration testing" of a real-world digital twin. They will help us set up a robust process/environment for testing a hypothetical decision informed by the DT in a public sector context. The specific DT used in this process is still to be determined.
- Microsoft: have offered their expertise and support for developing the technical infrastructure required by the project, and have discussed providing Azure credits. They are interested in co-designing patterns for the TEA platform based on their Responsible AI Standard.
- BSI: [name=Shakir @chris how do you specifically see BSI contributing to this work as they are also partners within the AI Standards Hub - Please speak to us before reaching out to them] are currently exploring SMART standards that could be integrated into assurance cases. In collaboration with the AI Standards Hub, we are currently testing integration of their API as a proof of concept for how AI and DT standards can inform assurance cases.
- Google: initial discussions have been held to identify shared interest in conducting empirical research into using and testing the TEA platform within Google DeepMind on some active projects. An unrestricted grant has been mentioned as one form of support.
- Imperial College London: researchers within Professor Steve Niederer's team (CemrgApp) have already co-developed a partial assurance case focused on fairness and health equity. The TEA platform and assurance research are also included in the currently embargoed EPSRC project (CVD-Net), which involves the TRIC-DT team.
- MHRA: initial discussions have been held with their CTO and the Director of the Clinical Practice Research Datalink to agree on support for and involvement with the upcoming TEA-DT workshops, based on mutual value and alignment with MHRA's current strategic objectives and projects (e.g. AI Airlock).
- BAS: current project partner of TEA-DT and TRIC-DT, and participating in engagement workshops to co-develop an assurance case for one of their DT projects.
- Met Office: current project partner of TEA-DT and the Turing, and participating in engagement workshops to co-develop an assurance case for one of their DT projects.
- University of Bristol: support from Dr Jason Konek with developing the formal schema (incl. design and validation) used by the TEA platform, based on insights from formal epistemology (e.g. modelling arguments using knowledge graphs).
- NHS England: current project partner of TEA-DT, providing time and expertise to ensure alignment with emerging policy and strategic areas.
- Heartflow: current project partner of TEA-DT, and participating in engagement workshops to co-develop an assurance case for one of their DT projects.
- DTNet+: support from the theme leads (e.g. Chris Dent) to help design a community survey (co-badged with the DT Hub) exploring the current needs and capabilities for assurance of DTs.
- Institute for Safe Autonomy (University of York): project collaborator and possible case study based on their own digital twin (smart building).
- Data61 (CSIRO): possible international partner. Engaged at the February workshop in Australia, and have significant expertise around the use of microsimulations for mitigating the effects of climate hazards on public health.
- CAIDE: possible international partner. Engaged at the February workshop in Australia, and have good connections to the New South Wales government, who have published their own guidance on assurance.
- Responsible Technology Adoption Unit (Department for Science, Innovation and Technology): existing partner and invaluable connection for ensuring national policy impact.
- Ada Lovelace Institute: existing project partner for the embargoed CVD-Net project and interested in assurance.
- Ufonia: provide clinical and AI expertise in the use of AI-based conversational agents in healthcare.
- Lloyd's Register Foundation: providing advice on policy, particularly in the maritime sector.

### What are we looking for?

- Collaboration with the project team by providing access to real-world case studies of digital twins (or digital twinning technology) to inform the development of assurance cases or argument patterns. Skills and training for how to use the TEA platform will be provided by the project team.
- Expertise and resources to support the technical development of the TEA platform. This can be full-stack web development (currently Next.js) or systems engineering (e.g. Azure deployment, LLM evaluation, MLOps).
- Funding for workshops or other public events.
- Funding to support the creation of educational materials, which will be released freely under a Creative Commons license.
- Active involvement in community-building practices and widening participation in the assurance ecosystem.
- Expertise to steer and inform the direction of research and innovation practices, based on public or private sector experience.

### What we can offer

==To draft==

## Strategic Value for the Turing and Alignment with Turing 2.0

This second phase of UKRI funding will enable the following goals to be realised:

1. Further develop the Turing's research and development around Trustworthy and Ethical Assurance, including the continued development of our open-source tooling and platform, as well as ensuring the sustainability of the emerging community of practice—supported by the TRIC-DT and the Turing 2.0 core capabilities (e.g. TPS, REG, and Data Wrangling).
2. Build on the existing impact of the project, and extend the network of partners and stakeholders to a broader range of public and private sector partners, including international organisations (e.g. CSIRO in Australia).
3. Build closer ties between the TRIC-DT and the grand challenges, by co-designing and implementing cross-cutting infrastructure and best practices for responsible research and innovation, in line with the Turing 2.0 strategy and theory of change.
4. Provide clear opportunities for building on the existing work of the Turing's research programmes (e.g. Digital Society and Policy's Public Sector Guidance, the AI Standards Hub, the Turing Way's Practitioner's Hub, and BridgeAI).