# Trustworthy and Ethical Assurance Platform

## What is the TEA platform?
The TEA platform is an open-source, community-centred tool that helps users and project teams define, operationalise, and implement ethical principles throughout a project's lifecycle (e.g. a digital twin or AI system).
The goal of the TEA platform is to provide a structured and systematic approach to the collaborative deliberation and assessment of how goals such as fairness, explainability, safety, and openness can be realised at key stages of a project's lifecycle, and to communicate how these goals have been assured in an open and reproducible manner for others to learn from.
The TEA platform achieves this by guiding individuals and project teams to identify the relevant set of claims and evidence that justify their chosen ethical principles, using a participatory approach that can be embedded throughout a project's lifecycle.
The output of the platform—a user-generated assurance case—can be co-designed and vetted by various stakeholders, fostering trust through open, clear, and accessible communication.
:::success
🛠️ **Resources**
- GitHub Repository: https://github.com/alan-turing-institute/AssurancePlatform
- Project Roadmap: https://github.com/orgs/alan-turing-institute/projects/240
- Research Preview: https://staging-assuranceplatform.azurewebsites.net/
:::
## Co-Designing Argument Patterns
> Argument patterns are reusable templates for assurance cases, which address the types of strategies and claims that must be covered to justify a claim pertaining to a particular normative goal.
As part of our UKRI/BRAID-funded project on [Trustworthy and Ethical Assurance of Digital Twins]() we have co-created three (draft) argument patterns for digital twins with ~45 researchers and developers of digital twin systems.
1. A pattern for explainable digital twins in health research and health care
2. A pattern for safe and fair digital twins in infrastructure
3. A pattern for accurate digital twins in the natural environment
These patterns will help set standards and best practices for others to learn from and contribute towards.
When these patterns are ready, they will be available to all users as templates to start from when developing their own assurance cases using the TEA platform. We will also have [documentation](https://github.com/alan-turing-institute/AssurancePlatform) available to allow others in the community to develop, upload, and review argument patterns in a public repository.
:::info
📩 **Invitation**
Having reviewed the Model Openness Framework and Tool, we would like to extend an invitation to co-develop an argument pattern using the requirements outlined in the paper and tool (e.g. completeness).
This will allow our users and stakeholders (e.g. public sector organisations, academic researchers, commercial developers) to justify how their systems assure openness (as well as related goals, such as transparency or explainability) by direct reference to the MOF.
As the content of the MOF has already been established, this should not require a significant time investment, but it would help drive engagement with the MOF/MOT through the stakeholder engagement we continue to organise with key partners and organisations.
:::