
Blob Sharing for Based Rollups

Problem

Based rollups face high data availability costs because they cannot delay and aggregate L1 data submissions the way centralized sequencers can. As a result, they often underutilize blob space. For example, in this block proposal, Taiko leaves around half of its blob unused. As more rollups appear, especially low-traffic ones, each having to post its own blobs or fall back to calldata only amplifies this inefficiency. This poses a significant barrier to the widespread adoption of based rollups.

Solution

We introduce a protocol for based rollups to share blobs with each other so they can fill the blobs more efficiently and reduce L1 gas cost.

Goals

The goal of this document is to design a maximally simple on-chain component for sharing blobs between based rollups created by the same builder. The design should be as generic as possible, enabling its use across a wide range of based rollup stacks and potentially extending to non-based rollup use cases.

Note that in this document, we assume all based-rollup L2 blocks are created and aggregated by a single entity. This simplifies the protocol, as it eliminates the need for complex off-chain coordination. It is a natural simplification for the based-rollup setup, since the L1 builder (or L1/L2 preconfirmer) would likely emerge to fulfill this role. For a discussion of how to generalize to a scenario where different entities build and aggregate L2 blocks, see the Future Work section.

The Protocol

Below is a high-level diagram of the blob-sharing protocol.

[Diagram: Blob Sharing]

It works as follows:

  • The Blob Merger Server:
    • Aggregates L2 txs from different rollups into blobs.
    • Produces, for each segment, metadata (firstBlobIndex, numBlobs, receiver address, offset, length, payload) describing how the blobs are split and shared between rollups. The payload field carries additional data to pass to the inbox contract.
    • Calls the blob splitter contract with the blob and metadata.
  • The Blob Splitter Contract routes blob segments to the appropriate rollup inboxes based on the given metadata.
  • The Rollup Inboxes (Receiver Contracts) receive the blob segments by implementing a standard interface.

Be aware that the blob splitter contract will now appear as the msg.sender to the rollup inbox contract. This can cause issues if the inbox contract relies on msg.sender for certain checks (for example, verifying the next preconfer in a lookahead). In such cases, the rollup must include the sender’s credential in the payload field rather than relying on msg.sender.
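For illustration, here is a minimal sketch of an inbox that authenticates the proposer through the payload instead of msg.sender. The contract name, the abi.encode(preconfer, proof) payload layout, and the _verifyPreconfer hook are hypothetical, and the BlobSegment struct simply mirrors the one in the splitter pseudocode below.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

// Mirrors the BlobSegment struct from the splitter pseudocode below.
struct BlobSegment {
    address receiverAddress;
    uint64 firstBlobIndex;
    uint8 numBlobs;
    uint64 offset;
    uint64 length;
    bytes payload;
}

// Hypothetical inbox that authenticates the proposer via the payload field,
// since msg.sender is now the blob splitter rather than the proposer itself.
contract ExampleRollupInbox {
    address public immutable blobSplitter;

    constructor(address _blobSplitter) {
        blobSplitter = _blobSplitter;
    }

    function receiveBlob(BlobSegment calldata segment) external {
        // Only accept segments routed through the trusted splitter.
        require(msg.sender == blobSplitter, "not splitter");

        // Assumed payload layout: the address claiming to be the current
        // preconfer, plus whatever proof the rollup's lookahead scheme needs.
        (address preconfer, bytes memory proof) =
            abi.decode(segment.payload, (address, bytes));

        // Rollup-specific eligibility check that would otherwise have been
        // performed against msg.sender.
        _verifyPreconfer(preconfer, proof);

        // ... record the segment (blob indices, offset, length) for proving ...
    }

    function _verifyPreconfer(address preconfer, bytes memory proof) internal view {
        // Rollup-specific lookahead logic goes here.
    }
}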

Here is a diagram that illustrates the relationship between blobs and segments:

[Diagram: relationship between blobs and segments]
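To make the segment layout concrete, here is a small illustrative helper (not part of the protocol) that computes how many blobs a segment spans, assuming offsets and lengths are measured in raw blob bytes and the full 131072-byte (4096 field elements * 32 bytes) EIP-4844 blob size; a rollup stack that only uses 31 usable bytes per field element would adjust the constant accordingly.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

// Illustrative helper: computes how many blobs a segment spans.
library SegmentLayout {
    uint256 internal constant BLOB_SIZE = 131072; // 4096 field elements * 32 bytes

    // A segment starting at `offset` within its first blob and spanning
    // `length` bytes covers blobs [firstBlobIndex, firstBlobIndex + numBlobs - 1].
    function numBlobsSpanned(uint256 offset, uint256 length) internal pure returns (uint256) {
        require(offset < BLOB_SIZE, "offset out of range");
        if (length == 0) return 0;
        // Last byte of the segment, measured from the start of the first blob.
        uint256 lastByte = offset + length - 1;
        return lastByte / BLOB_SIZE + 1;
    }
}

For example, a segment with offset 120000 and length 20000 ends at byte 139999 relative to the start of its first blob, so it spans two blobs and would be posted with numBlobs = 2.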

Below is pseudocode for the splitter contract:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

/**
 * @notice Represents a segment of a blob in the blob-sharing protocol.
 *         Each blob is split into segments, and each segment is intended
 *         to be sent to a specified receiver contract for processing.
 */
struct BlobSegment {
    // Address of the receiver contract for this blob segment
    address receiverAddress;
    // Index of the first blob
    uint64 firstBlobIndex;
    // Number of blobs that this segment spans
    uint8 numBlobs;
    // Offset within the blob where this segment starts
    uint64 offset;
    // Length of the segment in bytes
    uint64 length;
    // Additional payload to pass to the receiver contract
    bytes payload;
}

/**
 * @notice Interface for contracts that can act as receivers in the
 *         blob-sharing protocol. These contracts are expected to implement
 *         the `receiveBlob` function to process incoming blob segments.
 */
interface IBlobReceiver {
    /**
     * @notice Processes a blob segment sent via the blob-sharing protocol.
     * @param segment Struct containing details of the blob segment.
     */
    function receiveBlob(BlobSegment calldata segment) external;
}

contract BlobSplitter {
    /**
     * @notice Posts an array of blob segments to their designated receiver
     *         contracts as part of the blob-sharing protocol.
     * @param segments Array of blob segments to be dispatched.
     */
    function postBlob(BlobSegment[] calldata segments) external {
        for (uint256 i = 0; i < segments.length; ++i) {
            BlobSegment calldata segment = segments[i];
            IBlobReceiver receiver = IBlobReceiver(segment.receiverAddress);

            // Dispatch the blob segment to the receiver contract
            receiver.receiveBlob(segment);
        }
    }
}
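As a usage sketch, a builder could dispatch one shared blob to two rollups as shown below. This reuses the BlobSegment and BlobSplitter definitions above; the inbox addresses, segment sizes, and payload contents are made up, and in practice the blob itself would ride along in the builder's blob-carrying (EIP-4844) transaction while only this metadata goes into calldata.

// Illustrative only: reuses the BlobSegment / BlobSplitter definitions above.
// Two rollups share one blob; addresses and sizes are hypothetical.
contract ExampleDispatch {
    function dispatch(BlobSplitter splitter, address inboxA, address inboxB) external {
        BlobSegment[] memory segments = new BlobSegment[](2);

        // Rollup A: first 60,000 bytes of blob 0, no extra payload.
        segments[0] = BlobSegment({
            receiverAddress: inboxA,
            firstBlobIndex: 0,
            numBlobs: 1,
            offset: 0,
            length: 60_000,
            payload: ""
        });

        // Rollup B: next 70,000 bytes of blob 0, with a sender credential
        // in the payload (see the msg.sender caveat above).
        segments[1] = BlobSegment({
            receiverAddress: inboxB,
            firstBlobIndex: 0,
            numBlobs: 1,
            offset: 60_000,
            length: 70_000,
            payload: abi.encode(msg.sender)
        });

        splitter.postBlob(segments);
    }
}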

EIP-7702 Integration

TBF

Future Work

Off-chain Coordination

In the current protocol, we assume all based-rollup L2 blocks are created and aggregated by a single entity. To generalize to a scenario where different entities build and aggregate L2 blocks, we must introduce an off-chain coordination protocol between these entities. One potential approach is to use the L1 builder as the aggregator:

  1. Introduce a P2P network for propagating segments.
  2. An L1 builder then aggregates these segments and packs them into a single blob.
  3. Each segment submitter pays a fee proportional to the data size they consume.

This needs further consideration, such as:

  • Per-segment fee payment: How do segment submitters pay fees to the builder in a trustless manner? One option is to have the L2 inbox contract pay the blob aggregator a fee specified in its payload (see the sketch after this list).
  • Blob-packing problem: Optimally packing multiple (segment size, fee) pairs into a blob is essentially the Knapsack problem, which is NP-complete. This isn’t a major concern when the number of segments is small, but as it grows, heuristic or approximate methods may be necessary.
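As a very rough sketch of the per-segment fee idea, the inbox below pays a fee, committed to in the segment's payload, to block.coinbase, on the assumption that the L1 builder of the current block is also the blob aggregator. The contract name, payload layout, and funding model (a pre-funded inbox) are all assumptions, not part of the protocol.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

// Hypothetical receiveBlob excerpt: the inbox pays the L1 builder (assumed to
// be the blob aggregator) a fee that the segment submitter committed to in the
// payload. The inbox is assumed to be pre-funded by the rollup or submitter.
contract FeePayingInboxSketch {
    struct BlobSegment {
        address receiverAddress;
        uint64 firstBlobIndex;
        uint8 numBlobs;
        uint64 offset;
        uint64 length;
        bytes payload;
    }

    // Allow the inbox to be pre-funded for fee payments.
    receive() external payable {}

    function receiveBlob(BlobSegment calldata segment) external {
        // Assumed payload layout: (fee in wei, rollup-specific data).
        (uint256 fee, bytes memory rollupData) =
            abi.decode(segment.payload, (uint256, bytes));

        // Pay the block builder, who aggregated the segments into the blob.
        (bool ok, ) = block.coinbase.call{value: fee}("");
        require(ok, "fee payment failed");

        // ... continue with normal segment processing using rollupData ...
    }
}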

Cross-segment Compression

To maximize compression, we may want to introduce cross-segment compression, where the whole blob is compressed after the segments are concatenated. One challenge is that segment offsets change once the data is further compressed, which invalidates the submitter's original signature, since that signature likely covers the segment hash computed before cross-segment compression.

Reference