# Proto-Danksharding (Deneb & Electra)
## Introduction to Blobs
[Proto-Danksharding](https://www.eip4844.com) introduces a new concept to Ethereum: **blobs**.
> EIP-4844 introduces a new kind of transaction type to Ethereum which accepts "blobs" of data to be persisted in the beacon node for a short period of time. These changes are forwards compatible with Ethereum's scaling roadmap, and blobs are small enough to keep disk use manageable.
Blobs can hold any type of data but **cannot** be interpreted by Ethereum's execution layer. Instead, they exist on the consensus layer and are stored by Ethereum beacon nodes.
## Characteristics of Blobs
- **Cost-efficient**: Blobs are cheaper to use than CALLDATA.
- **Block limit**: A maximum of 6 blobs (Deneb) and 9 blobs (Electra) can be included per block.
- **Data capacity**: Each blob can store up to 128 KB of data.
- **Limited lifespan**: Blobs are retained for 4096 epochs (~18 days).
## How Ethereum Rollups Use Blobs
Instead of posting data directly via CALLDATA, rollup batch posters now use blobs to store and share data more efficiently.

## General topic structure
On the consensus layer network, each node connects to other nodes, referred to as **peers**.
Nodes can exchange information with their peers on various topics. To ensure it receives all messages on a specific topic, a node must subscribe to that topic.
For instance, the `beacon_block` topic is used to transmit beacon blocks between the block proposer and other nodes in the network.
In Prysm, all subscribed topics are recorded in the logs when the node starts.
```
INFO sync: Subscribed to topic=/eth2/69ae0e99/beacon_block/ssz_snappy
INFO sync: Subscribed to topic=/eth2/69ae0e99/beacon_aggregate_and_proof/ssz_snappy
INFO sync: Subscribed to topic=/eth2/69ae0e99/voluntary_exit/ssz_snappy
INFO sync: Subscribed to topic=/eth2/69ae0e99/proposer_slashing/ssz_snappy
INFO sync: Subscribed to topic=/eth2/69ae0e99/attester_slashing/ssz_snappy
INFO sync: Subscribed to topic=/eth2/69ae0e99/sync_committee_contribution_and_proof/ssz_snappy
INFO sync: Subscribed to topic=/eth2/69ae0e99/bls_to_execution_change/ssz_snappy
INFO sync: Subscribed to topic=/eth2/69ae0e99/blob_sidecar_0/ssz_snappy
INFO sync: Subscribed to topic=/eth2/69ae0e99/blob_sidecar_1/ssz_snappy
INFO sync: Subscribed to topic=/eth2/69ae0e99/blob_sidecar_2/ssz_snappy
INFO sync: Subscribed to topic=/eth2/69ae0e99/blob_sidecar_3/ssz_snappy
INFO sync: Subscribed to topic=/eth2/69ae0e99/blob_sidecar_4/ssz_snappy
INFO sync: Subscribed to topic=/eth2/69ae0e99/blob_sidecar_5/ssz_snappy
INFO sync: Subscribed to topic=/eth2/69ae0e99/beacon_attestation_54/ssz_snappy
INFO sync: Subscribed to topic=/eth2/69ae0e99/beacon_attestation_55/ssz_snappy
```
If the beacon node is connected to a validator client managing validators currently assigned to sync committee duty, the following log(s) will be displayed:
```
INFO sync: Subscribed to topic=/eth2/69ae0e99/sync_committee_{n}/ssz_snappy
```
With `n=0..3`.
All nodes subscribe to:
- `beacon_block`,
- `beacon_aggregate_and_proof`,
- `voluntary_exit`,
- `proposer_slashing`,
- `attester_slashing`,
- `sync_committee_contribution_and_proof`,
- `bls_to_execution_change`,
- `blob_sidecar_{n}` with `n=0..5`, and
- `beacon_attestation_{n}`
The `beacon_attestation_{n}` topics are unique, as there are a total of `64` beacon attestation topics. By default, each beacon node subscribes to exactly `2` attestation topics. However, if the `--subscribe-all-subnets` flag is enabled, the beacon node subscribes to all `64` topics.

:::warning
**Warning**
The following figure illustrates a network mesh of nodes connected through the `beacon_attestation_{n}` subnets. For simplicity, this example includes only `3` beacon attestation topics.
:::
:::info
**Note**
Subscribing to a topic means that a node takes on the responsibility of re-broadcasting messages it receives on that topic to its peers who are also subscribed.
However, a node can still send messages related to a topic **without** being subscribed to it.
:::
:::info
**Note**
The number of attestation topics a beacon node subscribes to **does not** depend on the number of validators managed by any connected validator client. (This was the case in the past but is no longer true.)
:::
## Peer ID
*How does a peer determine which `beacon_attestation_{n}` topics it should subscribe to?*
Each node has a private key, which is either:
- randomly generated at boot, or
- read from a file if the `--p2p-priv-key` flag is used.
A public key is deterministically derived from this private key, and a peer ID is then deterministically derived from the public key.
The base58-encoded peer ID looks like this:
```
16Uiu2HAm7uTzeVgFAB3M3gdKHcRYeSNLcagvKVBXKTwws3CKFQ52
```
and is displayed in the beacon node logs.
```
INFO p2p: Running node with peerId=16Uiu2HAm7uTzeVgFAB3M3gdKHcRYeSNLcagvKVBXKTwws3CKFQ52
```
The peer ID is also included in the peer multi-address.
The following log shows that our node is TCP reachable on IP address `192.168.1.3` on port `13000`, and our node ID is `16Uiu2HAm7uTzeVgFAB3M3gdKHcRYeSNLcagvKVBXKTwws3CKFQ52`.
```
INFO p2p: Node started p2p server multiAddr=/ip4/192.168.1.3/tcp/13000/p2p/16Uiu2HAm7uTzeVgFAB3M3gdKHcRYeSNLcagvKVBXKTwws3CKFQ52
```
Finally, the two `beacon_attestation` topics to which the beacon node subscribes are deterministically derived from the peer ID.
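To make this concrete, here is a toy sketch of the idea: hash the peer ID into a number and use it to pick `2` of the `64` attestation subnets. This is **not** the exact algorithm used by the consensus specification or by Prysm (the real derivation also rotates subscriptions over time); it only illustrates the deterministic mapping.
```python
import hashlib

ATTESTATION_SUBNET_COUNT = 64  # total number of beacon_attestation_{n} topics
SUBNETS_PER_NODE = 2           # default number of subscribed subnets

def subscribed_subnets(peer_id: str) -> list[int]:
    # Hash the peer ID to obtain a uniformly distributed integer,
    # then select SUBNETS_PER_NODE consecutive subnets from it.
    digest = hashlib.sha256(peer_id.encode()).digest()
    first = int.from_bytes(digest[:8], "big") % ATTESTATION_SUBNET_COUNT
    return [(first + i) % ATTESTATION_SUBNET_COUNT for i in range(SUBNETS_PER_NODE)]

print(subscribed_subnets("16Uiu2HAm7uTzeVgFAB3M3gdKHcRYeSNLcagvKVBXKTwws3CKFQ52"))
```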
## Focus on `blob_sidecar_{n}` topics
Since EIP-4844 (Proto-Danksharding):
- Each block can be accompanied by up to `6` blobs (under Deneb).
- Each blob contains `128 KB` of data.
- Each blob must be available for at least `4,096` epochs.
Under maximum usage (with no missed blocks and every block containing `6` blobs), EIP-4844 results in a `96 GB` increase in storage requirements.
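This figure follows directly from the constants above, as a quick computation shows:
```python
SLOTS_PER_EPOCH = 32
RETENTION_EPOCHS = 4096       # minimum blob retention period
MAX_BLOBS_PER_BLOCK = 6       # Deneb maximum
BLOB_SIZE_BYTES = 128 * 1024  # 128 KB per blob

total_bytes = RETENTION_EPOCHS * SLOTS_PER_EPOCH * MAX_BLOBS_PER_BLOCK * BLOB_SIZE_BYTES
print(f"{total_bytes / 2**30:.0f} GB")  # -> 96 GB
```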
Each blob has its own topic. For the reasoning behind this decision, see [this PR](https://github.com/ethereum/consensus-specs/pull/3244).
**All** nodes are required to download **all** blobs, effectively meaning **there is no sharding** in Proto-Danksharding.

The downside of Proto-Danksharding is that the required bandwidth and storage scale linearly with the number of blobs per block.
## How are blobs committed to a block?
Each blob includes a KZG commitment and a KZG proof.
Example on a [Holesky block](https://holesky.beaconcha.in/slot/1395917#blobs):

The block corresponding to these blobs includes a `data.message.body.blob_kzg_commitments` list.
Example with `http://<url-of-a-beacon-node>/eth/v2/beacon/blocks/1395917`:
```json
...
"blob_kzg_commitments": [
"0xa14bdaf61b3e064d51e6dbfc3b6e09524df3b287305fa53b8b294208a2c43bcd55703e7227fdb9dc576e93fb0a95a83d",
"0x86f6e5cc74d57519130f5904865d51fd0bf04b5c349aeea9bf6e40c39fbf907f3b6ccc1e4c020504cb013e1dc25e7199",
"0x98200ae7d82cd9c547a9c730b013031b8b5a9b4985c4ec3e7f85387756298cbeeba7afec04f4b930b11f411505cddae1",
"0x99a52d01d4932224fad07ac59cb1835a22b9277b7ce4cebc9239ece7998b5787e9a460de97ee8a86992bb724d84273df",
"0x840287537b781520f595ab6bf743c0619c822b69cb179828e8ceebd9f4662d416b3bba65557e3c3c973916a86db76fcf"
]
...
```
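As an illustration, the commitments can be fetched from any beacon node exposing the standard Beacon API; the endpoint URL below is a placeholder for your own node:
```python
import json
import urllib.request

BEACON_NODE = "http://localhost:3500"  # placeholder: your beacon node's REST endpoint

def blob_kzg_commitments(block_id: str) -> list[str]:
    # GET /eth/v2/beacon/blocks/{block_id} is part of the standard Beacon API.
    url = f"{BEACON_NODE}/eth/v2/beacon/blocks/{block_id}"
    with urllib.request.urlopen(url) as response:
        block = json.load(response)
    return block["data"]["message"]["body"]["blob_kzg_commitments"]

for commitment in blob_kzg_commitments("1395917"):
    print(commitment)
```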
One might wonder why the KZG commitment and KZG proof apparatus are necessary to link a blob to a beacon block. In the context of EIP-4844 (Proto-Danksharding), they are not strictly required. The same task could be accomplished by linking the beacon block to the blobs using a hash of the blob instead of the `blob_kzg_commitments`. However, the full potential of KZG commitments and KZG proofs will be realized when true sharding is implemented (which will be discussed later in this book).
## How are blobs stored?
In Prysm, blobs are stored directly in the filesystem using the pattern `blobs/<blockRoot>/<blobIndex>.ssz`.
Example:
```
blobs
├── 0x00022463ed9299403b70d78cd0ab5049fc03eefdd0bf87108ee387587eadd06b
│ ├── 0.ssz
│ ├── 1.ssz
│ ├── 2.ssz
│ ├── 3.ssz
│ ├── 4.ssz
│ └── 5.ssz
├── 0x00026f77d57e29089085edf99bc79a37f67cdb844f01aa5f85435e77de3c5d2b
│ └── 0.ssz
├── 0x0002fd2893c692ac508b502ba31fbfe351738280af8e92acc397e456af888c7a
│ └── 0.ssz
├── 0x0003be159d8a3d04ccbc288dfb19b39eba35dfb9eba1788a966cd90d23426c21
│ ├── 0.ssz
│ ├── 1.ssz
│ ├── 2.ssz
│ ├── 3.ssz
│ ├── 4.ssz
│ └── 5.ssz
├── 0x0004d2a59f6ae7996a87a5b6e928b2fad0b864171dcf3bf609f74b8584f44722
│ ├── 0.ssz
│ ├── 1.ssz
│ └── 2.ssz
├── 0x000537721bdc9fb0de40e0f14220155eb9cb94dd46e9b73f0a0e6fd852fa5539
│ ├── 0.ssz
│ ├── 1.ssz
│ ├── 2.ssz
│ ├── 3.ssz
│ ├── 4.ssz
│ └── 5.ssz
...
```
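As a small illustration (assuming the layout above), the following sketch walks the `blobs` directory and reports how many blobs are stored per block root:
```python
from pathlib import Path

BLOB_DIR = Path("blobs")  # located inside the Prysm data directory

for block_dir in sorted(p for p in BLOB_DIR.iterdir() if p.is_dir()):
    indices = sorted(int(f.stem) for f in block_dir.glob("*.ssz"))
    print(f"{block_dir.name}: {len(indices)} blob(s) {indices}")
```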
## How do blobs impact the fork choice rule?
The `on_block` function is modified as follows:
```python
def on_block(store: Store, signed_block: SignedBeaconBlock) -> None:
"""
Run ``on_block`` upon receiving a new block.
"""
...
# [New in Deneb:EIP4844]
# Check if blob data is available
# If not, this block MAY be queued and subsequently considered when blob data becomes available
# *Note*: Extraneous or invalid Blobs (in addition to the expected/referenced valid blobs)
# received on the p2p network MUST NOT invalidate a block that is otherwise valid and available
assert is_data_available(hash_tree_root(block), block.body.blob_kzg_commitments)
...
```
The `is_data_available` function is described [here](https://github.com/ethereum/consensus-specs/blob/dev/specs/deneb/fork-choice.md#is_data_available).
Before declaring a block valid, the node must:
- Fetch all related blobs (and associated proofs).
- Verify the proofs are correct.
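For reference, the specification defines `is_data_available` along these lines (excerpt from the Deneb fork-choice specification; `retrieve_blobs_and_proofs` is implementation and context dependent):
```python
def is_data_available(beacon_block_root: Root,
                      blob_kzg_commitments: Sequence[KZGCommitment]) -> bool:
    # `retrieve_blobs_and_proofs` returns all the blobs and proofs for the
    # given block root, and raises an exception if they are not available.
    blobs, proofs = retrieve_blobs_and_proofs(beacon_block_root)
    return verify_blob_kzg_proof_batch(blobs, blob_kzg_commitments, proofs)
```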
## What is the issue with the current approach?
### Target vs. maximum
Earlier, we mentioned that a block can contain a maximum of `6` blobs. In practice, publishing a blob incurs a cost, determined by the blob fee.
The blob fee for block `n+1` is computed from the blob fee of block `n` and the number of blobs contained in block `n`.
Two parameters are defined: the target blob count and the maximum blob count. Under Deneb, the target count is `3` and the maximum is `6` (Electra raises these to `6` and `9`). The fee-update rule works as follows (and is sketched in code after the list below):
- If block `n` contains a number of blobs equal to the target count, the blob fee at block `n+1` will be the same as the blob fee at block `n`.
- If block `n` contains more blobs than the target count, the blob fee at block `n+1` will be the blob fee at block `n` multiplied by a value greater than `1`. The higher the blob count in block `n`, the higher the multiplicative value. This results in an exponential increase in blob fees if the blob count consistently exceeds the target.
- If block `n` contains fewer blobs than the target count, the blob fee at block `n+1` will be the blob fee at block `n` multiplied by a positive value less than `1`. The lower the blob count in block `n`, the lower the multiplicative value. This results in an exponential decrease in blob fees if the blob count consistently falls short of the target.
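Concretely, EIP-4844 implements this with an "excess blob gas" accumulator and the `fake_exponential` pricing function. The following sketch uses the constants and logic from the EIP (the demo loop at the end is purely illustrative):
```python
GAS_PER_BLOB = 2**17                  # one blob consumes 128 KB of blob gas
TARGET_BLOB_GAS_PER_BLOCK = 3 * GAS_PER_BLOB
MIN_BASE_FEE_PER_BLOB_GAS = 1
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    # Integer approximation of factor * e**(numerator / denominator), from EIP-4844.
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def next_excess_blob_gas(excess_blob_gas: int, blobs_in_block: int) -> int:
    # Excess grows when usage exceeds the target and shrinks (down to 0) otherwise.
    return max(0, excess_blob_gas + blobs_in_block * GAS_PER_BLOB - TARGET_BLOB_GAS_PER_BLOCK)

def base_fee_per_blob_gas(excess_blob_gas: int) -> int:
    return fake_exponential(MIN_BASE_FEE_PER_BLOB_GAS, excess_blob_gas, BLOB_BASE_FEE_UPDATE_FRACTION)

# 100 consecutive blocks at the maximum of 6 blobs: the fee grows exponentially.
excess = 0
for _ in range(100):
    excess = next_excess_blob_gas(excess, 6)
print(base_fee_per_blob_gas(excess))  # fee in wei per unit of blob gas
```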
Here is an example of the blob fees (in Gwei) from November 11, 2024, to November 18, 2024:

**Source:** [ultrasound.money](https://ultrasound.money)
From time to time, blob fees spike, indicating that the blob count exceeds the target. This reflects the fact that the demand for blobs—primarily from layer 2s—surpasses the available supply, resulting in fee spikes.
Additionally, since November 2024, the average blob count has consistently matched the target blob count, indicating that blob space is now being used at its target capacity.

**Source:** [Dune](https://dune.com/hildobby/blobs)
The question we aim to address is how to increase the blob count, providing more space for layer 2s.
### Increasing the target and/or the maximum blob count
A straightforward solution to this problem would be to increase the target and/or the maximum blob count.
However, doing so could penalize some solo stakers with limited bandwidth. Assuming that blocks and blobs must be received by all nodes within a maximum of `4` seconds into the slot, `6` blobs would require an average bandwidth of `192 kB/sec`.
Multiplying the maximum blob count by `5` (increasing it to `30` blobs) would result in an average bandwidth requirement of `960 kB/sec`, which is much too high (especially for uploads) for some solo stakers with limited internet bandwidth, potentially undermining network decentralization.
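These bandwidth figures follow directly from the blob size and the propagation deadline:
```python
BLOB_SIZE_KB = 128
PROPAGATION_DEADLINE_SECONDS = 4  # blobs must arrive within 4 seconds into the slot

for blob_count in (6, 30):
    bandwidth = blob_count * BLOB_SIZE_KB / PROPAGATION_DEADLINE_SECONDS
    print(f"{blob_count} blobs -> {bandwidth:.0f} kB/sec")  # -> 192 and 960
```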
### Introducing sharding
To address this issue, we introduce the concepts of sharding and data availability sampling (DAS), which enable an increase in the target and maximum blob count.