## Architecture

The image shows the high-level architecture of EigenDA (specifically the disperser, data plane, and control plane). I've written a high-level breakdown of it, piece by piece:
### End User
- Submit Blob / Query Blob Status → the user (e.g., an L2 sequencer or rollup client) submits a blob (a batch of data) or asks for its status
- For dispersal, they interact only with the API Server
- For retrieval, end users pull blobs from relays or chunks from validators, depending on their strategy (fast path vs. higher availability and correctness)
### Disperser
This is the service that takes in blobs, erasure-codes them into chunks, and disperses those chunks across validators. It has four main parts:
#### API Server
- Entry point for clients
- Receives blob submissions and status queries
- Passes blobs and metadata to the rest of the system (a hypothetical sketch of this surface follows this list)
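To make the entry point concrete, here is a hypothetical Go sketch of the surface the API Server exposes. The real disperser speaks gRPC; every name and status value below is my own invention for illustration, not EigenDA's actual API:

```go
package disperser

import "context"

// BlobStatus tracks a blob's progress through the pipeline; the exact set of
// states here is an assumption made for this sketch.
type BlobStatus int

const (
	Queued BlobStatus = iota
	Encoded
	Dispersed
	Certified
	Failed
)

// APIServer is a hypothetical view of the disperser's client-facing surface.
type APIServer interface {
	// SubmitBlob accepts raw blob bytes plus the quorums it must be stored
	// under, and returns a key the client uses to poll for a certificate.
	SubmitBlob(ctx context.Context, blob []byte, quorums []uint8) (blobKey []byte, err error)
	// GetBlobStatus reports how far the blob has progressed.
	GetBlobStatus(ctx context.Context, blobKey []byte) (BlobStatus, error)
}
```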
#### Blob/Chunk Store
- Storage layer inside the disperser
- Holds blobs and their erasure-coded chunks before sending them out
- Talks to Encoders (to get data encoded) and Relays (to distribute blobs); a storage-interface sketch follows this list
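A minimal sketch of what that storage surface might look like, with all names invented for illustration:

```go
package disperser

// BlobChunkStore is a hypothetical interface over the disperser's storage
// layer: Encoders write chunks in, Relays read whole blobs out, and per-chunk
// reads back the dispersal to validators.
type BlobChunkStore interface {
	PutBlob(blobKey []byte, blob []byte) error
	PutChunks(blobKey []byte, chunks [][]byte) error
	GetBlob(blobKey []byte) ([]byte, error)             // fast path, served via Relays
	GetChunk(blobKey []byte, index int) ([]byte, error) // per-validator dispersal
}
```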
#### Encoders
- Turn raw blobs into encoded chunks using erasure coding (see the Reed–Solomon sketch after this list)
- Send chunk headers (commitments, metadata) back to the Controller
- Push chunks into the Blob/Chunk Store
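A runnable sketch of the erasure-coding step, using the off-the-shelf `github.com/klauspost/reedsolomon` library. Note this library works over GF(2^8), while EigenDA's encoder operates over the BN254 scalar field so chunks can carry KZG openings; the mechanics (split, encode, survive lost shards) illustrate the same idea:

```go
package main

import (
	"bytes"
	"fmt"
	"log"

	"github.com/klauspost/reedsolomon"
)

func main() {
	// 4 data shards + 2 parity shards: any 4 of the 6 suffice to rebuild
	// the blob, so up to 2 chunk holders can vanish without data loss.
	enc, err := reedsolomon.New(4, 2)
	if err != nil {
		log.Fatal(err)
	}

	blob := bytes.Repeat([]byte("rollup batch "), 64)

	shards, err := enc.Split(blob) // cut the blob into 4 data shards
	if err != nil {
		log.Fatal(err)
	}
	if err := enc.Encode(shards); err != nil { // fill in the 2 parity shards
		log.Fatal(err)
	}

	// Simulate two validators going offline, then reconstruct.
	shards[1], shards[4] = nil, nil
	if err := enc.Reconstruct(shards); err != nil {
		log.Fatal(err)
	}
	ok, _ := enc.Verify(shards)
	fmt.Println("blob reconstructable from surviving chunks:", ok)
}
```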
#### Controller & Metadata Store
- **Controller**: Orchestrates the whole process. Pulls headers from Encoders, manages dispersal, and coordinates validators
- **Metadata Store**: Keeps blob metadata (commitments, quorum parameters, cert references)
- Together, they keep the data plane (blobs/chunks) consistent with the control plane (certs, quorum state); a sketch of a metadata record follows
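As a rough mental model, the per-blob record in the Metadata Store might look like the struct below. The field names are my guesses for illustration, not EigenDA's actual schema:

```go
package disperser

// BlobMetadata is a hypothetical record the Metadata Store keeps per blob,
// linking the data plane (where chunks live) to the control plane (what was
// promised on-chain).
type BlobMetadata struct {
	BlobKey         [32]byte // lookup key, e.g. a hash of the blob header
	KZGCommitment   []byte   // commitment that each chunk opens against
	QuorumNumbers   []uint8  // which quorums must custody the blob
	QuorumThreshold uint8    // stake % required for a valid certificate
	CertTxHash      [32]byte // reference to the on-chain certificate, once posted
}
```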
### Data Plane
Handles data movement (fast retrieval & redundancy)
#### Relays
- Optimized nodes for fast blob retrieval
- Pull chunks from the Blob/Chunk Store and serve entire blobs quickly to end users
- Provide a low-latency path for retrieval
#### Validators
- Long-term storage & high-availability path
- Hold encoded chunks and serve them when relays fail, or when a client wants to reconstruct the blob itself
- End users can pull chunks directly from validators
- Validators also receive headers from Controller and return signatures (attestations)
### Control Plane
Handles membership, certificates, and security guarantees.
#### Validators (again, in the control plane context)
- Return signatures over headers, attesting that they store their assigned chunks (see the signing sketch after this list)
- Provide quorum membership data
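Here is the attest/verify exchange, sketched with the stdlib `ed25519` package as a stand-in. EigenDA actually uses BLS signatures (so many validator signatures can be aggregated into one), but the shape of the exchange is the same: the validator signs the header, and the aggregator verifies before counting its stake:

```go
package main

import (
	"crypto/ed25519"
	"crypto/sha256"
	"fmt"
)

func main() {
	// Validator key pair. (EigenDA uses BLS so signatures aggregate;
	// ed25519 is a stdlib stand-in for this sketch.)
	pub, priv, err := ed25519.GenerateKey(nil)
	if err != nil {
		panic(err)
	}

	// The header a validator attests to: in reality a structured message
	// over the blob commitment and its chunk assignment.
	header := sha256.Sum256([]byte("kzg commitment || chunk assignment || quorum params"))

	sig := ed25519.Sign(priv, header[:]) // the validator's attestation

	// The Controller (or an aggregator) checks the signature before
	// counting this validator's stake toward the quorum.
	fmt.Println("attestation valid:", ed25519.Verify(pub, header[:], sig))
}
```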
#### Churner
- Module that manages validator membership (joins, leaves, slashing, stake tracking)
- Syncs with Ethereum (L1 smart contracts) for stake weight and membership proofs
#### Ethereum
- Anchors validator set, quorum membership, and finality on-chain
- Acts as the root of trust for EigenDA’s quorums and certificate validity
So an E2E flow would look something like this:
1. **User submits a blob** to API Server
2. Blob is **encoded** → split into chunks → stored in Blob/Chunk Store
3. Encoders send headers (commitment, metadata) to Controller
4. Controller distributes headers to validators
5. Validators sign headers (attesting that they received and stored their chunks)
6. Signatures are aggregated → **producing a DA Certificate** (a stake-weighted check, sketched after this list)
7. Relays and Validators serve data:
- **Relays**: fast blob access
- **Validators**: high-availability chunk retrieval
8. Ethereum contract holds validator membership and quorum registry → ensures certificates are tied to stake-weighted signatures
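Step 6's "stake-weighted signatures" boil down to a threshold check like the sketch below: what matters is the fraction of stake that signed, not the number of validators. All names here are illustrative:

```go
package main

import "fmt"

// Attestation records whether a validator signed the header and how much
// stake backs it. Illustrative only.
type Attestation struct {
	Validator string
	Stake     uint64
	Signed    bool
}

// quorumReached returns true when the signed stake clears thresholdPct
// percent of total stake; this is the condition a DA certificate encodes.
func quorumReached(atts []Attestation, thresholdPct uint64) bool {
	var total, signed uint64
	for _, a := range atts {
		total += a.Stake
		if a.Signed {
			signed += a.Stake
		}
	}
	return total > 0 && signed*100 >= total*thresholdPct
}

func main() {
	atts := []Attestation{
		{Validator: "v1", Stake: 40, Signed: true},
		{Validator: "v2", Stake: 35, Signed: true},
		{Validator: "v3", Stake: 25, Signed: false},
	}
	// 75% of stake signed, clearing a 67% threshold.
	fmt.Println("quorum reached:", quorumReached(atts, 67))
}
```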
TL;DR:
- **Disperser** = preprocessing (encode, split, send chunks, gather sigs)
- **Data Plane** = how clients retrieve blobs (Relays = fast, Validators = reliable)
- **Control Plane** = governance, signatures, and validator set management anchored to Ethereum
## Blobs from relays vs chunks from validators
The end user (which in practice = an L2 sequencer, rollup node, or any client that needs the data) has two retrieval paths in EigenDA:
### Pull blobs from Relays (fast path)
- Relays exist specifically to give users low-latency full blob retrieval
- A relay talks to the disperser’s blob/chunk store and re-serves entire blobs directly
- Useful for sequencer → derivation pipelines or for light clients that don’t want to do chunk reconstruction
- Relays are not trusted for correctness; they're just a performance optimization. If a relay lies, the client can fall back to validators
### Pull chunks from Validators (authoritative path)
- Every validator stores its assigned chunks of the encoded blob
- A client can query enough validators (holding a $\ge \gamma$ fraction of stake, the reconstruction threshold) to retrieve a sufficient set of chunks
- The client verifies each chunk’s **KZG opening** against the on-chain commitment, then runs Reed–Solomon decoding to reconstruct the blob
- This cryptographic fallback guarantees safety even if relays are Byzantine or offline (see the sketch below)
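Here is a sketch of a client that tries the fast path first and falls back. Commitment verification is modeled with a plain SHA-256 comparison to keep the code self-contained; in EigenDA the relay-path blob is checked against a KZG commitment, and the fallback reconstructs the blob via Reed–Solomon decoding of verified chunks:

```go
package main

import (
	"crypto/sha256"
	"errors"
	"fmt"
)

// fetchFn abstracts "get me the blob from somewhere": a relay, or a
// reconstruct-from-validator-chunks routine. Names are illustrative.
type fetchFn func() ([]byte, error)

// retrieveBlob tries the relay first (fast path) and falls back to
// validators; either way the result is checked against the commitment,
// so a lying relay costs latency, never correctness.
func retrieveBlob(commitment [32]byte, relay, validators fetchFn) ([]byte, error) {
	if blob, err := relay(); err == nil && sha256.Sum256(blob) == commitment {
		return blob, nil
	}
	blob, err := validators()
	if err != nil {
		return nil, err
	}
	if sha256.Sum256(blob) != commitment {
		return nil, errors.New("reconstructed blob does not match commitment")
	}
	return blob, nil
}

func main() {
	blob := []byte("rollup batch data")
	commitment := sha256.Sum256(blob)

	lyingRelay := func() ([]byte, error) { return []byte("garbage"), nil }
	validators := func() ([]byte, error) { return blob, nil }

	got, err := retrieveBlob(commitment, lyingRelay, validators)
	fmt.Printf("got %q, err=%v\n", got, err) // falls back: got "rollup batch data"
}
```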
#### Why I think both exist
- **Relays** = performance (fast path, user-friendly, but not trust-critical)
- **Validators** = correctness & availability (slower, chunk-based, but trust-anchored in the DA cert)
EigenDA’s design gives you both: fast blob serving in the common case (via relays), and cryptographic assurance plus liveness guarantees in the fallback (via validators).