# Ethereum Metrics Repository Organization Strategy
## Overview
This strategy establishes a parallel structure between **Beacon Metrics (CL)** and **Execution Metrics (EL)** repositories, following the proven organizational patterns used across Ethereum repos, including but not limited to [consensus-specs](https://github.com/ethereum/consensus-specs), [pm](https://github.com/ethereum/pm/blob/master/Network-Upgrade-Archive/Dencun/4844-readiness-checklist.md), and [EIPs](https://github.com/ethereum/EIPs).
:::info
**Open Questions:**
1. Should metrics be organized by EIP (e.g., `eip-7594-metrics.md`) or by functional scope (e.g., `peerdas-metrics.md`) for better spec alignment, client-team support, and maintainability? (See Tier 2 for additional context.)
2. Should `beacon-metrics` and `execution-metrics` repositories use the same structure?
3. If we are just starting this process, why not rename `beacon-metrics` to `consensus-metrics`?
4. Will there be instances of metrics that span both EL and CL?
5. Should client teams be required to create tracking issues, and if so, should those live in their own repos or in the metrics repos?
6. Is there a specification for how metrics need to be implemented?
:::
## Repository Structure Design
>Applied identically to both `beacon-metrics` and `execution-metrics` repos
### Three-Tier Hierarchy
```
README.md (Landing Page)
├── Hard Fork Index Table ( <-- Tier 1a )
│ ├── Fork N/ ( <-- Tier 2 )
│ │ ├── eip-A-metrics.md ( <-- Tier 3 )
│ │ ├── eip-B-metrics.md
│ │ └── eip-C-metrics.md
│ ├── Fork N+1/
│ │ ├── eip-D-metrics.md
│ │ ├── eip-E-metrics.md
│ │ └── eip-F-metrics.md
│ └── Future Forks...
├── Global Metrics Glossary/ ( <-- Tier 1b )
│ └── README.md (full alphabetical index of all metrics in repo)
└── Templates/
└── request-a-metric.md
```
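The three-tier layout above is mechanical enough to scaffold with a short script. A minimal sketch, assuming the fork directory name (`fulu`, taken from the specs link later in this doc) and the `eip-*-metrics.md` file naming are placeholders to be replaced with the real fork and EIP names:

```python
from pathlib import Path

# Hypothetical fork -> Tier 3 metric pages mapping, used only to
# illustrate the layout; real names come from the fork index table.
LAYOUT = {
    "fulu": ["eip-7594-metrics.md"],
}

def scaffold(repo_root: str) -> None:
    """Create the three-tier directory skeleton under repo_root."""
    root = Path(repo_root)
    for fork, metric_pages in LAYOUT.items():
        fork_dir = root / fork
        fork_dir.mkdir(parents=True, exist_ok=True)
        (fork_dir / "README.md").touch()      # Tier 2: fork overview
        for page in metric_pages:             # Tier 3: per-EIP metric pages
            (fork_dir / page).touch()
    glossary = root / "glossary"              # Tier 1b: global glossary
    glossary.mkdir(parents=True, exist_ok=True)
    (glossary / "README.md").touch()
    templates = root / "templates"
    templates.mkdir(parents=True, exist_ok=True)
    (templates / "request-a-metric.md").touch()
```

Because the layout lives in one dictionary, applying the same structure to both `beacon-metrics` and `execution-metrics` is just two calls with different roots.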
## Landing Page (README.md) Design
### Repository Purpose Section
Clear explanation of:
- What metrics are tracked and why
- If this is `beacon-metrics`, a link to `execution-metrics`, and vice versa
- Relationship to Ethereum protocol upgrades
- How client teams should use this repository
- Links to EthPandaOps dashboards and tooling
### Hard Fork Index Table ( Tier 1a )
| Fork | Epoch | Status | Metrics |
|------|-------|--------|------|
| Fork N | xxxxxx | ✅ Active | [15 metrics] |
| Fork N+1 | xxxxxx | 🚧 Testing | [23 metrics] |
| Fork N+2 | TBD | 📋 Planning | [TBD] |
### Global Metrics Glossary Link ( Tier 1b )
Prominent link to the comprehensive metrics dictionary, with cross-references to the relevant hard fork, EIP, and dashboard.
## Fork-Level Organization ( Tier 2 )
### Fork README Template
Each hard fork directory contains:
#### Fork Overview
- **Activation Details**: Epoch, timestamp, block numbers
- **Summary**: Key protocol changes requiring metrics
- **Blog Link**: Direct link to the ethereum.org Mainnet Announcement ([sample](https://blog.ethereum.org/2025/04/23/pectra-mainnet))
#### OPTION 1: Page Index Table (by EIP)
| EIP | Title | EIP Metrics | EIP Dashboard |
|-----|-------|-------------|---------------|
| EIP-A | Title of EIP | 8 | [link] |
| EIP-B | Title of EIP | 3 | [link] |
#### OPTION 2: Page Index Table (by Scope*)
| Scope Name | EIPs | Metrics | Dashboard |
|-----|-------|-------------|---------------|
| PeerDAS | EIP-A, EIP-B, EIP-C | 8 | [link] |
| Gas Changes | EIP-D, EIP-E | 3 | [link] |
*The "scope" could align with the specs ([like here](https://github.com/ethereum/consensus-specs/tree/dev/specs/fulu))
## Metrics & Implementation Tracking ( Tier 3 )
### EIP Metrics File Template
Each EIP or scope `.md` file tracks:
- **EIP Reference**: Direct link(s) to EIP(s)
- **Dashboards**: Direct link to applicable PandaOps dashboards
- **Client Reference Issue**: Direct link to issue created by each client team to track implementation progress for applicable metrics
#### Client Reference Issues
| Client 1 | Client 2 | Client 3 | Client 4 | Client 5 | Client 6 |
| -------- | ------- | -------- | -------- | -------- | -------- |
| [link] | [link] | [link] | [link] | [link] | [link] |
#### Metrics and Implementation Status
| Definition | Metric | Client 1 | Client 2 | Client 3 | Client 4 | Client 5 | Client 6 |
| ---------- | ------ | -------- | -------- | -------- | -------- | -------- | -------- |
| [link] | `metric_name_1` | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| [link] | `metric_name_2` | ✅ | 📝 | ✅ | ✅ | 📝 | □ |
| [link] | `metric_name_3` | ✅ | ✅ | □ | ✅ | 📝 | ✅ |
- ✅ implemented
- 📝 in progress, requiring adjustments
- □ not implemented
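A status table in this shape is easy to machine-check, e.g. for the fork index's metric counts or for readiness reports. A minimal sketch, assuming the table's markdown source is available as a string, that counts ✅ cells per client:

```python
def implementation_coverage(table_md: str) -> dict[str, str]:
    """Count implemented (✅) metrics per client from a status table."""
    rows = [
        [cell.strip() for cell in line.strip().strip("|").split("|")]
        for line in table_md.strip().splitlines()
    ]
    header, body = rows[0], rows[2:]   # rows[1] is the |---| separator
    clients = header[2:]               # skip Definition and Metric columns
    done = {client: 0 for client in clients}
    for row in body:
        for client, status in zip(clients, row[2:]):
            if status == "✅":
                done[client] += 1
    return {client: f"{n}/{len(body)}" for client, n in done.items()}
```

Running this over every Tier 3 page would let CI keep the per-fork metric counts in Tier 1a from drifting out of date.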
## Global Metrics Glossary Design ( Tier 1b )
### Comprehensive Metric Definitions
Each metric entry includes:
- **Categorization**:
- **Network Metrics**: Propagation, bandwidth, peer discovery
- **Consensus Metrics**: Attestations, finality, committee performance
- **Execution Metrics**: Transaction processing, state transitions
- **Fork-Specific Metrics**: Unique to particular upgrades
- **Technical Definition**: Precise specification of what's measured
- **Cross-References**:
- Back-links to Tier 3 page
- **Unit of Measure**:
- The unit the metric should be measured in
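The alphabetical index itself could be generated rather than hand-maintained. A sketch, assuming metric names appear as backticked snake_case identifiers inside the Tier 3 `*-metrics.md` pages (that file-name pattern is an assumption taken from the tree earlier in this doc):

```python
import re
from collections import defaultdict
from pathlib import Path

# Matches backticked snake_case names such as `blob_propagation_time`;
# requiring at least one underscore avoids ordinary backticked words.
METRIC_RE = re.compile(r"`([a-z][a-z0-9]*(?:_[a-z0-9]+)+)`")

def build_glossary_index(repo_root: str) -> str:
    """Scan Tier 3 pages and emit an alphabetical markdown index."""
    by_letter = defaultdict(set)
    for page in Path(repo_root).rglob("*-metrics.md"):
        for name in METRIC_RE.findall(page.read_text(encoding="utf-8")):
            by_letter[name[0].upper()].add(name)
    lines = []
    for letter in sorted(by_letter):          # one ### heading per letter
        lines.append(f"### {letter}")
        lines.extend(f"- `{name}`" for name in sorted(by_letter[letter]))
    return "\n".join(lines)
```

The categorization, definition, and unit fields would still be written by hand; the script only guarantees the index in `Global Metrics Glossary/README.md` stays complete.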
## Metrics Glossary (Sample Entries)
### A
- `attestation_inclusion_delay` - [Consensus] Time between attestation creation and inclusion (measured in seconds)
- `availability_score` - [Network] Node's data availability performance (measured in percentage)
### B
- `blob_propagation_time` - [Network] Time for blob to reach 90% of network (measured in seconds)
- `block_processing_time` - [Execution] Time to process and validate block (measured in seconds)
### D
- `das_sample_success_rate` - [PeerDAS] Percentage of successful DAS samples (measured in percentage)