# Sparse mempool with Blobpool tickets

In this note we propose a variant of blobpool tickets as a mechanism that can be integrated into the sparse blobpool design in order to enhance the latter's resistance against DoS attacks.

# Sparse mempool

The core idea here is to randomly distribute the burden of downloading blob data. Concretely, once a blob transaction and associated payload are submitted to the blobpool, each node will with probability $p$ take on the role of "provider" and download the entire payload. With probability $1-p$, it will take on the role of "sampler" and sample individual cells of the payload, which it receives from provider nodes. Choosing $p = 0.15$ appears to give a sufficiently high probability that a sampler node can fetch its custody cells from its peers.

Hence, at any given time each node acts as a sampler for a certain set of blobs and as a provider for a different set of blobs. This provides a mechanism for distributing the load of blob propagation, since provider nodes are effectively responsible for propagation and bear its burden in terms of bandwidth consumption, while sampler nodes contribute only indirectly. Figure 1 illustrates this structure schematically.

The fact that sampler nodes only have partial information about a blob introduces a fundamental difficulty, namely that of determining availability from local samples. In the absence of any coordination strategy whereby availability is determined collectively across a number of nodes, this can lead to a DoS attack as described extensively in [this note](https://hackmd.io/@gMH0wL_0Sr69C7I7inO8MQ/rk3TSjgJbe).

![Sparse_mempool.001](https://hackmd.io/_uploads/HkZVyW46lg.jpg)
>**Figure 1**: Illustration of a single blob's cells downloaded by individual nodes in the sparse mempool. Nodes $1, 5$ and $8$ are provider nodes while the rest are sampler nodes. Assume that nodes $1$-$4$, $5$-$7$ and $8$-$11$ form three peer groups. Then nodes $2$-$4$ fetch their cells from node $1$, nodes $6, 7$ fetch their cells from node $5$, and nodes $9$-$11$ fetch their cells from node $8$. Thus, provider nodes $1, 5, 8$ bear the main burden (in terms of bandwidth) for propagation of this blob.

# Blobpool tickets

[Blobpool tickets](https://ethresear.ch/t/variants-of-mempool-tickets/23338) and similar approaches such as [unconditional payments](https://hackmd.io/5cby1hKFSPuIt8GPPKnGcQ) can provide resistance to DoS attacks by ensuring that fees are charged independently of whether a blob is included in a block. Put differently, such mechanisms can be seen as a way of differentiating between the service of data pre-propagation through the blobpool and ordinary data propagation. By differentiating between these two resources and pricing them independently (e.g. the former via blobpool tickets and the latter via the blob base-fee mechanism), we ensure that the data of any blob propagated in the blobpool, available or unavailable, is paid for at a certain price.

## Concrete proposal

We propose a two-step mechanism for blobpool ticket allocation, in which a blob submitter obtains the right to propagate a blob within a certain time window $\Delta$ in the future. Concretely, the blobpool tickets can be allocated via a smart contract `TicketAllocator`, and the acquisition is established via an ordinary transaction with fields `to = TicketAllocator`, `data = l, beneficiary address` and the usual fee fields.
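To make the acquisition step concrete, here is a minimal sketch (in Go, using go-ethereum) of how such a ticket transaction could be constructed. The `TicketAllocator` address, its ABI and the method name `buyTickets` are illustrative assumptions; the note only fixes the fields `to = TicketAllocator` and `data = l, beneficiary address`.

```go
package main

import (
	"fmt"
	"math/big"
	"strings"

	"github.com/ethereum/go-ethereum/accounts/abi"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
)

// Hypothetical ABI for TicketAllocator; only the calldata contents
// (l, beneficiary address) are taken from the note.
const ticketAllocatorABI = `[{"type":"function","name":"buyTickets",
  "inputs":[{"name":"l","type":"uint64"},
            {"name":"beneficiary","type":"address"}],"outputs":[]}]`

func main() {
	parsed, err := abi.JSON(strings.NewReader(ticketAllocatorABI))
	if err != nil {
		panic(err)
	}

	// Bind the tickets to the L2 operator's submission address, which may
	// differ from the EOA that sends (and pays for) this transaction.
	beneficiary := common.HexToAddress("0x1111111111111111111111111111111111111111") // placeholder
	l := uint64(4)                                                                   // number of tickets to acquire

	calldata, err := parsed.Pack("buyTickets", l, beneficiary)
	if err != nil {
		panic(err)
	}

	ticketAllocator := common.HexToAddress("0x2222222222222222222222222222222222222222") // placeholder

	// Ordinary EIP-1559 transaction: to = TicketAllocator, data = (l, beneficiary),
	// plus the usual fee fields. All values below are arbitrary examples.
	tx := types.NewTx(&types.DynamicFeeTx{
		ChainID:   big.NewInt(1),
		Nonce:     0,
		To:        &ticketAllocator,
		Value:     big.NewInt(0),               // ticket price could also be paid via msg.value (assumption)
		Gas:       100_000,
		GasFeeCap: big.NewInt(30_000_000_000),  // 30 gwei
		GasTipCap: big.NewInt(2_000_000_000),   // 2 gwei; the builder's local rule may rank ticket buyers by this
		Data:      calldata,
	})
	fmt.Printf("unsigned ticket transaction to %s, %d bytes of calldata\n", tx.To(), len(tx.Data()))
}
```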
For each slot the number of blobpool tickets is set to some constant $k = c \times \mathrm{MaxBlobs}$, and the builder for each slot determines how to allocate them by a local rule (e.g. by order of priority fees). The smart contract ensures that at most $k$ tickets are issued per slot and then establishes a correspondence between the sender's (or beneficiary's) address, a certain block and a constant $l$ (the number of acquired tickets) by emitting a log with these three parameters.

Every node computes and holds a record of all addresses with active tickets at any time, which is used as a condition for blob propagation in the blobpool. Tickets are deemed active for a certain amount of time $\Delta$, measured in slots (e.g. 3 slots). One concrete way to achieve this is to make ticket validity slot-dependent by linking it to the block at which it was minted: as long as the current block is a descendant at distance $\leq \Delta$, the ticket is deemed active. Thus, when a node receives a new blob transaction through the blobpool, it checks its local record and will fetch/sample and forward the blob according to the sparse blobpool mechanism if and only if the blob submitter's active ticket count is $>0$.

As mentioned, the ticket purchase includes the address to associate the tickets with, which can differ from the transaction sender. An L2 operator that submits blob transactions from a single address (e.g. the sequencer address) can then use any EOA, and any number of EOAs, to purchase tickets, while still binding all tickets to that single address. This lets the L2 operator submit as many blobs as the blobpool allows (the maximum number of concurrently active tickets), without running into constraints such as client limits on how many transactions, and in particular blob transactions, can be queued from a single EOA. As this is already a point about which L2s complain, and one likely to get worse as blob throughput increases and more parallel submission of blobs becomes necessary, we think this is another very meaningful benefit that the proposal brings to the existing sparse blobpool mechanism.

![blobpool_with_tickets](https://hackmd.io/_uploads/HkgL1D-7-e.jpg)
>**Figure 2**: Schematic representation of the workflow. The lower part of the figure illustrates how propagation in the blobpool is made contingent on the ticket mechanism. A blob submitter starts by sending a ticket transaction to the transaction mempool (or directly to a builder). Once the ticket transaction lands on chain, the blob submitter has a time window $\Delta$ to propagate a blob in the blob mempool using the allocated ticket. Nodes participating in the blobpool check their local record of ticket holders before fetching and propagating newly announced blobs. In the figure, blobpool nodes $1$ and $2$ have seen Alice's ticket transaction on chain and therefore fetch her announced blob 1, while blobpool node $3$ hasn't seen her ticket transaction yet and therefore refuses to fetch and propagate blob 1. The blob finally gets included by a builder in a subsequent block.

In summary, the mechanism functions as follows:

1. Prior to slot $N$: A ticket transaction is sent to the mempool or communicated to a builder privately. The transaction includes the number of tickets to be acquired, $l$.
2. In slot $N$: The transaction is included in the block. It calls the smart contract, which generates a ticket referencing the sender's (or beneficiary's) address, $l$ and the current block.
3. Once the blob submitter receives the block: The blob transaction and sidecar are submitted to the blobpool. The blob gets propagated according to the sparse blobpool mechanism as long as the blob submitter has active tickets according to the nodes' local view of the records. In particular, if a node receives an announcement for new blob transactions, it fetches/samples the blobs if the sender has sufficient active tickets, reducing the sender's ticket counter accordingly (see the sketch after this list).
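For concreteness, the following is a minimal sketch of the node-side bookkeeping from step 3, under simplifying assumptions: ticket expiry is checked by a flat block-number window rather than a true descendant-distance check, each address holds at most one active ticket batch, and all names (`Ledger`, `OnTicketLog`, `ShouldFetch`) are hypothetical.

```go
package tickets

import "github.com/ethereum/go-ethereum/common"

const delta = 3 // ticket validity window Delta, in slots (example value from the note)

type record struct {
	count     uint64 // remaining tickets l
	mintBlock uint64 // block in which the ticket transaction landed
}

// Ledger is a node's local record of addresses with active tickets.
type Ledger struct {
	records map[common.Address]record
	head    uint64 // current block number in the node's local view
}

func NewLedger() *Ledger {
	return &Ledger{records: make(map[common.Address]record)}
}

// OnTicketLog ingests a TicketAllocator log (address, l, block). The
// per-slot cap k is assumed to be enforced by the contract itself; for
// simplicity a new purchase replaces any earlier batch for the address.
func (led *Ledger) OnTicketLog(addr common.Address, l, block uint64) {
	led.records[addr] = record{count: l, mintBlock: block}
}

// OnNewHead advances the local view and drops records older than Delta.
// A full implementation would instead check that the head is a descendant
// of the mint block at distance <= Delta.
func (led *Ledger) OnNewHead(block uint64) {
	led.head = block
	for addr, r := range led.records {
		if block > r.mintBlock+delta {
			delete(led.records, addr)
		}
	}
}

// ShouldFetch implements step 3: fetch/sample a newly announced blob iff
// the submitter's active ticket count is > 0, consuming one ticket if so.
func (led *Ledger) ShouldFetch(submitter common.Address) bool {
	r, ok := led.records[submitter]
	if !ok || r.count == 0 {
		return false
	}
	r.count--
	led.records[submitter] = r
	return true
}
```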