# Safety Argument Pattern
:::info
ℹ️ **About this Document**
This is a work-in-progress argument for the second infrastructure workshop for the TEA-DT project.
Please read the instructions below before reviewing the content of the argument pattern.
The pattern's `.json` file can be accessed here: https://raw.githubusercontent.com/alan-turing-institute/AssurancePlatform/main/examples/Safety%20(TEA-DT%20Workshop)-2024-5-19T11-41-48.json
:::
## Instructions
1. Review the [goal and context](#goal-and-context) for this pattern.
2. Discuss the questions. If you answer "no" to any of the questions, please make adjustments as required.
3. Review the [strategies and property claims](#strategies-and-property-claims) for this pattern.
4. Discuss the questions and make revisions as required.
5. Optional:
- Flag any evidence types (e.g. randomised controlled trial, risk assessment report) that you would use to support any claims.
- Flag any claims that you are unsure how you would evidence.
:::success
📝 **Cheat Sheet**
A cheat sheet with descriptions of each of the core elements as well as some general tips for developing clear and concise assurance cases is available.
👉 [Access Cheat Sheet](https://hackmd.io/@tea-platform/BJyX12aSA)
:::
## Goal and Context
The *goal claim* adopted for this pattern is as follows:
> The digital twin is safe for use in its intended operational environment.

The *context* for this pattern is as follows:

> {Use case of module or system as intended by project team}.

Some more specific context claims to consider:
- **C1:** Description of the digital twin system and its purpose.
- **C2:** Operational environment and intended use cases.
- **C3:** Applicable regulations and standards for safety and security.
- **C4:** Assumptions about external systems and user interactions.
:::warning
❓ **Questions**
1. Is this goal claim appropriately specified? Or do you need to change it to focus on a specific module, component, or model?
2. Is the goal claim clear?
3. Does the context statement capture all intended use cases?
:::
## Strategies and Property Claims
### Strategy 1: Argument Over System Safety
- **Property Claims**:
1. All potential hazards associated with the system have been identified.
2. Risks associated with identified hazards are assessed and mitigated to acceptable levels.
3. Failure modes for all critical components and functions have been identified.
4. Appropriate corrective actions for identified failure modes are implemented and effective.
5. Compliance with safety requirements is verified through testing and analysis.
6. Safety controls and measures to mitigate identified risks are implemented.
### Strategy 2: Argument Over Component Safety
- **Property Claims**:
- The system components are safe.
  - All potential hazards associated with each component have been identified.
  - Risks associated with identified hazards are assessed and mitigated to acceptable levels.
  - Failure modes for each critical component and its functions have been identified.
  - Appropriate corrective actions for identified failure modes are implemented and effective.
  - Compliance with safety requirements is verified through component-level testing and analysis.
  - Safety controls and measures to mitigate identified risks are implemented.
- The system components are secure.
  - {Component N} maintains confidentiality of sensitive information.
  - {Component N} maintains integrity and accuracy of operational data.
  - {Component N} monitors actions to enable transparency and accountability.
### Strategy 3: Argument Over Fair Impacts
- **Property Claims**:
1. The impacts of the system are fair for all users within the intended operational environment.
   - Positive impacts of the system's use and deployment do not unfairly accrue to specific sub-groups of users.
   - Negative impacts of the system's use and deployment do not disproportionately affect specific sub-groups of users.
2. All relevant sub-groups of users have been meaningfully consulted across the lifecycle of the system.
- Users were consulted during the system design phase.
- Users were consulted during the system development phase.
- Users were consulted during the system deployment phase.
3. All relevant biases have been identified and mitigated across the project lifecycle.
- Cognitive biases have been assessed and addressed.
- Statistical and data biases have been assessed and addressed.
- Social biases have been assessed and addressed.
### Strategy 4: Argument Over Continuous Monitoring
- **Property Claims**:
1. A system for continuous monitoring of safety performance is established.
- Appropriate thresholds have been set to identify undesirable model drift.
- Automated alerts are triggered when thresholds for model drift are exceeded.
2. Procedures for updating the safety case and system design in response to monitoring feedback are in place.
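The monitoring claims above can be sketched as a simple threshold check. This is an illustrative sketch only: the metric (e.g. model accuracy), the baseline, and the threshold value are assumptions for the example, not values prescribed by this pattern.

```python
# Illustrative sketch of a threshold-based drift monitor (Strategy 4).
# The metric, baseline, and threshold are hypothetical example values.
from dataclasses import dataclass, field


@dataclass
class DriftMonitor:
    """Flag an alert when a monitored metric drifts beyond a fixed threshold."""

    baseline: float  # expected value of the safety-relevant metric
    threshold: float  # maximum tolerated absolute deviation from baseline
    alerts: list = field(default_factory=list)

    def record(self, step: int, value: float) -> bool:
        """Record one observation; return True if it triggers an alert."""
        drifted = abs(value - self.baseline) > self.threshold
        if drifted:
            self.alerts.append((step, value))
        return drifted


monitor = DriftMonitor(baseline=0.95, threshold=0.05)
observations = [0.96, 0.93, 0.88, 0.97]  # e.g. periodic model accuracy checks
flags = [monitor.record(i, v) for i, v in enumerate(observations)]
print(flags)  # [False, False, True, False]
print(monitor.alerts)  # [(2, 0.88)]
```

In a real deployment the alert would feed the update procedures described in the second claim (revising the safety case and system design), rather than just being printed.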
:::warning
❓ **Questions**
1. Do the strategies collectively support (and help operationalise) the goal claim?
2. Do the strategies help identify all necessary requirements for the project or system, which can be developed into property claims?
- If you answered "no" to this question, which property claims are missing?
- NB: when first approaching this pattern, you should assume that many relevant property claims are missing.
3. Are new strategies required? Do existing strategies need to be revised (e.g. merged or split)?
4. Do any of the current strategies or claims fall outside of the scope of your responsibilities (e.g. assurance for property X would be delivered by a third-party)?
5. (Optional) Do any of the property claims suggest a specific type of evidence (e.g. results of model testing, user evaluation report)?
:::