# Asynchronous blobs
## Abstract
We propose *asynchronous blobs* as a means of disentangling blob data propagation from the blob transaction interacting with the EL state. This mechanism can be understood as a way of providing in-protocol pre-propagation of blob data, rather than leaving pre-propagation out-of-protocol as is the case today through the blobpool. Asynchronous blobs thus spread bandwidth consumption more evenly, increasing the achievable data throughput, improve censorship-resistance guarantees, and provide a mechanism for shrinking the propagation window in the critical path, alleviating the free option problem.
## Introduction
We say that a blob is *synchronous* if its availability is enforced by the protocol in the same slot in which the blob is propagated. In contrast, we say that a blob is *asynchronous* if its availability is enforced in a slot subsequent to the one in which the blob is propagated.
Today, blob txs are explicitly constructed to only carry *synchronous blobs*: if a block contains a blob tx, its inclusion in the canonical chain is conditional on blob availability. This enables the blob tx to *execute while having an availability guarantee on its blobs*, e.g. to immediately record the blob's availability in a rollup contract. With asynchronous blobs, any execution that depends on availability must also wait at least one slot.
However, there is currently no real usage of blob synchronicity beyond such minor bookkeeping, which does not actually require it (and indeed some rollups do otherwise, e.g. OP Stack rollups send blob txs without any calldata).

***Figure 1.** Functional difference between synchronous and (a version of) asynchronous blobs.
**Top:** A synchronous blob in Ethereum. When a blob tx is included in a block, three things happen: the blob data is propagated alongside the block, availability of the blob data is determined, and the blob tx affects the EL state during the block's execution. The execution has the guarantee that the blob data is available, because the block would not become canonical otherwise.
**Bottom:** An asynchronous blob. Blob data propagation, availability determination and the blob tx affecting the EL are done in order, though the last two can (but don't have to) still happen in the same block.*
These three stages — **propagation**, **availability determination**, and **execution** — form the conceptual backbone for understanding async blobs. The key design question is: how should the protocol allow these stages to be decoupled?
- **Today's system**: Propagation is out-of-protocol (via the blobpool), but availability determination and execution are tightly coupled — both occur when the blob tx is included in a block. There is no protocol mechanism to decouple them.
- **Simplified system (tickets without DA contract)**: Propagation moves in-protocol via CL tickets, but availability determination remains coupled with blob tx execution — versioned hashes (cryptographic commitments that uniquely identify each blob) only enter the payload through blob txs.
- **Full system (with DA contract)**: All three stages can occur independently. The payload can assert availability (via versioned hashes) without a blob tx, and the DA contract records this for later execution.
Before looking at the use cases for async and sync blobs, let's look at them purely from the perspective of what the protocol can provide.
### The protocol's perspective (supply side)
The asynchronous blob throughput that the system can provide is strictly higher than synchronous blob throughput. In itself, this is not a particularly interesting statement, because sync blobs are essentially just a subset of async blobs: a user that wants to use an async blob can always just use a sync blob instead, just like they do today. However, we argue that the difference in the sync vs async throughput that the protocol can provide is going to be quite meaningful.
The key insight is that synchronous blobs must propagate within a narrow time window, constrained by:
- The next proposer's and builder's need to determine availability before acting
- The [free option problem](https://collective.flashbots.net/t/the-free-option-problem-in-epbs/5115), which becomes worse the longer we make this window
This creates bandwidth surges during the propagation window while leaving bandwidth effectively unutilized during the remainder of the slot. With pre-propagation from asynchronous blobs, propagation spreads over a larger time window (see Figure 2), smoothing bandwidth consumption and avoiding bottlenecks. Effective pre-propagation can achieve steady bandwidth usage at capacity, translating to increased throughput. Today we rely on out-of-protocol pre-propagation in the blobpool, for which we have less developed mechanisms and inherently cannot provide guarantees of successful completion.
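To make this concrete, here is a rough illustration under assumed numbers (the 4-second critical-path window is an assumption for illustration, not a protocol parameter): if nodes sustain a bandwidth of $B$ bytes per second and synchronous blobs must propagate within a window of $w$ seconds of a slot lasting $T$ seconds, per-slot data is bounded by $B \cdot w$, whereas pre-propagation spread over the whole slot allows up to $B \cdot T$, a gain of
$$\frac{B \cdot T}{B \cdot w} = \frac{T}{w} = \frac{12\ \mathrm{s}}{4\ \mathrm{s}} = 3$$
at the same peak bandwidth.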

***Figure 2.** Illustration of bandwidth variation throughout the slot with synchronous vs asynchronous blobs. Synchronous blobs require bandwidth over a shorter time period, leading to spikier consumption, while asynchronous blobs' propagation is spread out, leading to smoother bandwidth consumption.*
### The user's perspective (demand side)
The main use case for synchronous blobs for rollups is synchronous composability with the L1, which requires [based sequencing](https://ethresear.ch/t/based-rollups-superpowers-from-l1-sequencing/15016) as well as real time proving. In contrast, externally sequenced rollups, as well as [based rollups with preconfirmations](https://ethresear.ch/t/based-preconfirmations/17353) (see for example [Taiko](https://docs.taiko.xyz/taiko-alethia-protocol/protocol-design/based-preconfirmation/)), could make use of asynchronous blobs.
There might also be a [hybrid design](https://ethresear.ch/t/combining-preconfirmations-with-based-rollups-for-synchronous-composability/23863) of based rollups that combine both preconfirmations and synchronous composability, essentially by using preconfs during most of the L1 slot but switching to based mode during the period of L1 block production, such that synchronous interactions are possible *in this period*. Such rollups might consume a mix of sync and async blobs.
Crucially, synchronous blobs become significant for the L1 as well, once we project ourselves into the future we're rapidly moving towards, where zkEVM + DAS are used to scale the L1 itself by placing its payload into blobs, essentially turning the EL into a validity rollup.
Even with these use cases in mind, it is hard to argue exactly how the sync vs async blob demand will shape up in the future. We know synchronous composability is beneficial, but how beneficial exactly? We know it is costly (even ignoring the data availability portion of this cost, which is precisely the extra cost the protocol bears to provide sync blobs), but how costly exactly?
We argue that we do not need to be able to *precisely* predict the future demand of these two blob types in order to determine whether it is worth it to introduce a separate pipeline for async blobs in the protocol. *As long as there is meaningful demand for asynchronous blobs (which seems very likely, as not all rollups will want to be fully based)*, *all blob users benefit from moving asynchronous blob throughput into its own lane.* As argued above, asynchronous blobs can make use of bandwidth outside the critical path that is currently left unused. Letting them use those resources frees up the critical path for synchronous blobs. This means that introducing asynchronous blobs benefits not only users who consume them, but also users of synchronous blobs who gain access to more of the constrained critical-path resources.
## Asynchronous Blobs
In the following section we will discuss the explicit mechanism behind async blobs, in stages:
1. **Blob tickets**: Moving blob propagation to the CL with a ticket system — what this gives us as a standalone feature, and its limitations.
2. **Inclusion guarantees**: How a DA contract addresses the limitations of blob tickets alone.
3. **Blob txs with the DA contract**: How blob tx validity and mempool behavior adapt to the new system.
### Pre-propagation today
As mentioned, pre-propagation of blob data is crucial in order to reduce bottlenecks through surges in bandwidth consumption and thus is an essential tool for increasing throughput. Today pre-propagation is done out-of-protocol through the blobpool, see Figure 3. In this light, a key part of the mechanism we propose can be understood as a way to move pre-propagation closer to an in-protocol mechanism, with clear guarantees and a clear role in the data availability pipeline.

***Figure 3.** Compare with the async blob in figure 1. We still have pre-propagation, through the EL blobpool, but no in-protocol mechanism to decide which blobs are allowed to propagate, i.e., to allocate propagation bandwidth.*
### Blobpool tickets
Much has been written about blob ticketing mechanisms (see [here](https://hackmd.io/PnHbLie4Tm6vz-3-fVD2MA#Blob-mempool-tickets), [here](https://ethresear.ch/t/on-the-future-of-the-blob-mempool/22613), [here](https://ethresear.ch/t/variants-of-mempool-tickets/23338)) that propose to auction the right to propagate a blob. Recently, a [blobpool ticket mechanism](https://hackmd.io/@gMH0wL_0Sr69C7I7inO8MQ/HJKi-RXpge) was proposed as a step in this direction, augmenting the blobpool to ensure DoS-resistant pre-propagation of blobs, a guarantee that is retained even if implemented alongside a blobpool sampling mechanism (see [vertically sharded mempool](https://eips.ethereum.org/EIPS/eip-8070) and [EIP-8070: Sparse Blobpool](https://eips.ethereum.org/EIPS/eip-8070)). Concretely, in order to submit and propagate a blob in the blobpool, a submitter would be required to hold a valid ticket, acquired by interacting with a designated ticket contract (for example implementing a first-price auction). This would bound the number of blobs propagated through the blobpool and fairly allocate the limited space. In contrast, the current blobpool necessarily fragments as blob throughput exceeds what it can handle: it has no global way to curb inflow, and so can only do so locally, at the node level.

***Figure 4.** Blobpool tickets can be seen as an intermediate step. Tickets control access to blobpool propagation, but the blob and blob tx still travel together through the EL, `getBlobs` still stitches EL pre-propagation to CL sampling, and users pay twice: once for the ticket, once at inclusion (blob basefee).*
### Blob tickets
Blob tickets take the ticketing concept further, moving pre-propagation to the CL and more deeply integrating it into the data availability pipeline. This differs from blobpool tickets in two key ways:
1. **Propagation moves to the CL**, reusing its already developed infrastructure for DA sampling, which would otherwise need to be unnecessarily duplicated on the EL. Moreover, DA sampling is fundamentally part of the consensus mechanism, because DA is a precondition to importing a block into the fork-choice. Today, we instead stitch together EL pre-propagation and CL availability enforcement with [the `getBlobs` engine API call](https://github.com/ethereum/execution-apis/blob/main/src/engine/osaka.md#engine_getblobsv3).
2. **A ticket is all that users need to get a blob included on chain.** In particular, blob txs *do not pay a blob basefee when included*, they only pay for regular gas fees. Hence the name *blob* tickets instead of *blobpool* tickets — what you're buying with a ticket is actually a blob, not just blobpool space. From the protocol's perspective, this is because propagation is where we actually consume the scarce bandwidth resources, whose usage is currently regulated by the blob basefee.
The workflow with tickets is:
1. **Buying a ticket**: By sending a transaction to the ticket contract, the user can acquire the right to propagate a blob at some point in the future.
2. **Propagation**:
1. **Blob**: Using the ticket (at the specified time), the user pushes blob data through the CL sampling infrastructure.
2. **Blob tx**: Also using the ticket, the blob tx (*without blob data*) goes *to the regular mempool*.
3. **Inclusion and availability enforcement**: When a blob tx is included, attesters enforce availability of the associated blobs (identified by `blob_tx.versioned_hashes`), exactly like today.

***Figure 5.** User workflow with blob tickets. The user acquires a ticket, propagates the blob on the CL, and submits a blob tx to the EL mempool. Once included in a block, a blob tx functions exactly like today, conditioning block validity on availability of the related blobs and thus giving the user strict availability guarantees.*
#### Ticket contract
The ticket contract is used to implement a first-price auction among ticket transactions in a given block, for a fixed number of tickets $K$. Each ticket transaction specifies three parameters:
- The `sender_address`: Corresponding to the address which will be granted a ticket. Note that this need not be the same address as the one sending the ticket transaction itself.
- The `bid` value: Corresponding to the auction bid.
- The integer $N$: Corresponding to the total number of tickets the bid is for.
The contract then orders the bids by value per ticket, i.e., by decreasing `bid`$/N$, and grants tickets until the total number reaches $K$. Concretely, this means that the `sender_address` corresponding to the $i^{th}$ ticket transaction (in this ordering) is granted its tickets if and only if $\sum_{j\leq i} N_j\leq K$.
For all ticket transactions that did not win tickets, the `bid` value is returned to the sender. For those ticket transactions that won tickets, a proportion $\alpha$ of the `bid` value is burned immediately and the remainder is either returned to the sender upon successful inclusion of a blob within a time window $\delta$, or burned otherwise. This incentivizes users to acquire tickets only if they actually intend to use them.
The contract should provide a register of all addresses holding tickets, updated according to the rule that each ticket expires either after one use or one slot after its acquisition.
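As a minimal sketch of the allocation and settlement rules above (the constants and the `TicketBid` structure are illustrative, not a concrete contract interface; $K$ and $\alpha$ are the parameters just described):

```python
from dataclasses import dataclass

K = 16        # tickets auctioned per block (illustrative value)
ALPHA = 0.5   # fraction of a winning bid burned immediately (illustrative value)

@dataclass
class TicketBid:
    sender_address: bytes  # address to be granted the tickets
    bid: int               # total bid, in wei
    n: int                 # number of tickets requested

def allocate_tickets(bids: list[TicketBid]) -> tuple[list[TicketBid], list[TicketBid]]:
    """First-price allocation: order by value per ticket and grant while the running total stays within K."""
    winners, losers = [], []
    total = 0
    for b in sorted(bids, key=lambda b: b.bid / b.n, reverse=True):
        total += b.n
        if total <= K:
            winners.append(b)  # ALPHA * bid burned now; rest refunded only if a blob lands within delta slots
        else:
            losers.append(b)   # bid returned in full
    return winners, losers
```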
Each ticket grants the right to propagate:
- One blob on the CL (propagation right)
- Multiple blob txs on the EL (e.g., up to 4)
These two rights are independent — the CL and EL each track ticket usage separately. This means a ticket holder can propagate their blob data on the CL and their blob tx on the EL in parallel, without coordination between the layers.
Allowing multiple blob txs to propagate in the EL mempool with a single ticket lets users do resubmissions without purchasing another ticket, for example in case of base fee changes that invalidate the transaction.
Finally, note that tying blob tx propagation to tickets, rather than to their validity, lets the mempool do without the strict rules that it currently applies to them. In particular, the same address can be allowed to queue up many blob txs in parallel, because one tx invalidating the others isn't a concern — again, the right to propagate is about the ticket, not about validity! This is a concrete pain point for blob submitters, which if unaddressed is only going to get worse as throughput of individual L2s increases, since the cap on the number of blobs per tx will lead to an increase in the rate of txs from each L2.
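A sketch of how a node might track the two rights independently (the limit of 4 blob txs is the illustrative number above; the structure and names are hypothetical, not a specified interface):

```python
from dataclasses import dataclass

MAX_BLOB_TXS_PER_TICKET = 4  # illustrative allowance from the example above

@dataclass
class TicketUsage:
    """Local, per-ticket accounting. CL and EL usage are tracked separately:
    spending the blob-propagation right never consumes the blob-tx right, and vice versa."""
    cl_blob_propagated: bool = False
    el_blob_txs_seen: int = 0

    def allow_cl_blob(self) -> bool:
        if self.cl_blob_propagated:
            return False
        self.cl_blob_propagated = True
        return True

    def allow_el_blob_tx(self) -> bool:
        if self.el_blob_txs_seen >= MAX_BLOB_TXS_PER_TICKET:
            return False
        self.el_blob_txs_seen += 1
        return True
```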
#### Limitations
Blob tickets already achieve a lot but have limitations that motivate further additions to the protocol:
- **Mempool limitations**: A ticket only grants a few blob tx submissions. If all are used or evicted, the user must buy another ticket even though the blob data is unchanged and already propagated. Moreover, the mempool should limit the size of blob txs sent with a ticket, which is at this point the only way to send them.
- **No FOCIL for blob txs**: [FOCIL (Fork-Choice enforced Inclusion Lists)](https://eips.ethereum.org/EIPS/eip-7805) is a censorship resistance mechanism that allows a committee to force the inclusion of certain transactions. However, blob tx validity depends on availability. Since availability hasn't been *recorded* anywhere, inclusion of blob txs cannot be enforced.
Both limitations stem from the same root cause: **availability can only be determined at the moment of blob tx inclusion**. Even after a blob propagates and everyone has sampled it, there is no independent record of which blobs are available, and the only way to assert availability is then through a blob tx.
### DA contract
To go beyond these limitations, we introduce two changes:
1. **Payloads can contain versioned hashes independent of blob txs.** A builder can include a list of versioned hashes for blobs whose availability it wants to assert, even without corresponding blob txs in the same block.
2. **Availability is recorded in a DA contract**. At the start of each block, a system call records the versioned hashes from the payload into a DA contract. This creates a record of which blobs are available, queryable by nodes (as part of mempool and FOCIL participation) as well as within the EVM.
Enforcing availability works exactly like today: attesters only vote for blocks whose blobs are available. If a builder includes a versioned hash for unavailable data, the block won't gain attestations. The only change is that versioned hashes can now come from the payload directly, not just from blob txs.
In addition, we adjust the onchain behavior of blob txs to work with the contract. Blob tx validity remains conditional on availability of the corresponding blobs, but to ensure this we now check the DA contract for availability of `blob_tx.versioned_hashes`, as part of validating `blob_tx`. In particular, we do not require `blob_tx.versioned_hashes` to be included in `payload.versioned_hashes` if they had already been recorded as available in a previous block. Availability only has to be established once.

***Figure 6.** Availability recording and enforcement, and blob tx validity. The builder includes a list of versioned hashes referring to available blobs, whose availability is enforced through attestations. The versioned hashes are recorded in a dedicated DA contract, and blob tx validation checks availability in the contract.*
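A minimal sketch of the two changes combined, assuming a `payload.versioned_hashes` field and a `da_contract` exposing the `record_availability`/`is_available` interface sketched in the appendix (helper names like `is_blob_tx` and `execute` are placeholders):

```python
def process_block(payload, da_contract):
    # System call at block start: record the versioned hashes whose availability
    # the payload asserts (enforced by attesters, exactly like today).
    da_contract.record_availability(payload.versioned_hashes)

    for tx in payload.transactions:
        if is_blob_tx(tx):
            # A blob tx is valid only if every referenced blob has been recorded as
            # available, whether in this block or in any earlier one.
            assert all(da_contract.is_available(vh) for vh in tx.versioned_hashes)
        execute(tx)
```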
Moreover, we adjust how we handle blob txs in the mempool, so that it can benefit from the availability information in the contract. A blob tx can now propagate in the mempool if either:
1. **Availability is recorded**: The referenced blobs' availability is recorded in the DA contract, OR
2. **Sender holds an unused ticket**: The sender has a valid ticket that hasn't been used yet *on the EL* (as seen locally by the node).
In other words, a ticket is only necessary if the blob tx is propagated prior to availability being recorded, e.g. in parallel with the blob itself. Once availability is recorded, a blob tx can propagate according to normal mempool rules, exactly like a regular tx. For example, resubmission does not need to be handled in any special way.
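A sketch of the resulting admission rule (the `ticket_registry` helper and its `has_unused_ticket` lookup are hypothetical local bookkeeping, not part of the contract):

```python
def admit_blob_tx(blob_tx, da_contract, ticket_registry) -> bool:
    """Admit a blob tx into the mempool if its blobs are already recorded as available,
    or if the sender still holds a ticket that is unused on the EL (as seen locally)."""
    if all(da_contract.is_available(vh) for vh in blob_tx.versioned_hashes):
        return True  # behaves like a regular tx: normal mempool rules, free resubmission
    return ticket_registry.has_unused_ticket(blob_tx.sender)
```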

***Figure 7.** The full picture of blob and blob tx propagation with the DA contract. A ticket grants two independent propagation rights: one blob on the CL and a few blob txs on the EL. After availability is recorded in the DA contract, blob txs can propagate freely without tickets, enabling unlimited resubmission.*
### Censorship resistance
With availability recorded independently of blob txs, we can now provide censorship resistance to blob txs by ensuring inclusion (availability determination) of blobs. We do so with a mechanism that builds on blob tickets and adapts FOCIL-like fork-choice enforcement to blobs:
1. Each PTC member observes which blobs have been propagated by a deadline prior to block production. They sample these blobs and form a local view of availability.
2. Members send lists of kzg commitments they observed as available.
3. A majority vote determines which versioned hashes the proposer *must* include in the payload (and thus record in the DA contract).
4. The proposer may include additional blobs but cannot exclude those the PTC requires.
5. Attesters enforce this: they only vote for blocks that include PTC-required versioned hashes, *unless* the attester locally doesn't see those blobs as available (safety always takes precedence).
Note that the proposer is now constrained from both directions when it comes to blob inclusions: it must include what the PTC requires (liveness) and cannot include what isn't available (safety).
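A sketch of the vote aggregation in steps 2-3 (the simple-majority threshold is a placeholder; `kzg_to_versioned_hash` is as defined in EIP-4844):

```python
from collections import Counter
from hashlib import sha256

VERSIONED_HASH_VERSION_KZG = b"\x01"

def kzg_to_versioned_hash(commitment: bytes) -> bytes:
    # As defined in EIP-4844.
    return VERSIONED_HASH_VERSION_KZG + sha256(commitment).digest()[1:]

def required_versioned_hashes(ptc_votes: list[list[bytes]], committee_size: int) -> set[bytes]:
    """Each PTC member submits the KZG commitments it sampled as available; commitments
    seen by a majority must appear (as versioned hashes) in the payload."""
    counts = Counter(c for vote in ptc_votes for c in set(vote))
    threshold = committee_size // 2 + 1  # simple majority; the actual threshold is a design choice
    return {kzg_to_versioned_hash(c) for c, n in counts.items() if n >= threshold}
```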
Crucially, once availability of a blob has been recorded, a blob tx referencing it becomes equivalent to a regular transaction, because its additional validity condition is guaranteed to be satisfied:
- As already mentioned, it can propagate without tickets, according to normal mempool rules
- It can be included through normal FOCIL
Moreover, ticket txs are themselves regular txs, benefitting from the same mempool and FOCIL infrastructure. Therefore, we get an end-to-end censorship resistance story for blob txs.

***Figure 8.** End-to-end inclusion guarantees for blob txs. The PTC enforces inclusion of propagated blobs (as versioned hashes recorded in the DA contract), while FOCIL provides inclusion guarantees for the ticket-buying and blob transactions.*
## Synchronous blobs
As we said at the beginning, a blob is considered synchronous if its availability is enforced by the protocol in the same slot in which it is propagated. By enforcing availability we mean the process of attesting to a block whose versioned hashes are available, which should be contrasted with the process of determining availability by the PTC. As such, a blob is synchronous if its versioned hash is included in a block before it has (fully) propagated, and thus also before the PTC has determined its availability. Further, it is possible for the blob tx to also be included in the same block, and as a result have propagation, availability determination and execution happen *synchronously* (i.e., in the same slot) as illustrated in Figure 4.
Since blob propagation always requires a valid ticket, there is structurally no differentiation between asynchronous and synchronous blobs. A blob becomes synchronous or asynchronous depending on whether the blob transaction is included in the same slot as the one in which data availability is enforced, or in a subsequent one. Thus, builders can prioritize as synchronous those blobs that need it, and that presumably pay accordingly.
This works fine as long as only L2s use blobs. Rollup operators can buy tickets in advance for their async needs, and L2s wanting synchronous execution can either also buy tickets or rely on builders being able to access a ticket market that lets them fulfill their ticket needs just-in-time. However, in anticipation of the fact that ultimately the L1 itself will be using blobs ([blocks-in-blobs](https://hackmd.io/@kevaundray/HkMAlRCrbl) with zkEVM + DAS), this model breaks down. L1 transactions come from many independent users with no canonical bundler.
Thus, the builder needs to be able to propagate enough blobs to fully cover the needs of the L1. A way of ensuring this is by allowing the builder to propagate blobs, with priority, without the need to explicitly hold tickets. A natural way of achieving this is by introducing two ways of registering blobs, one for the builder's blobs and one for the rest. Concretely we can make this explicit in the payload with two lists of versioned hashes:
- `sync_versioned_hashes`: blobs that the builder commits to just-in-time, exactly as with `blob_kzg_commitments` today. These are propagated (sampled) alongside the payload, again exactly like today's blobs.
- `async_versioned_hashes`: blobs that were pre-propagated with tickets, now being asserted as available.
Effectively, the list of `sync_versioned_hashes` acts as a list of content-bound tickets to the builder. **Both lists condition the payload's validity on availability**: all blobs corresponding to `sync_versioned_hashes` and `async_versioned_hashes` must be available in order for the payload to become canonical. The difference is *who propagates and when*:
- Sync: builder propagates *during the critical path*
- Async: ticket holder pre-propagated; builder now simply asserts availability.
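Sketching the attester-side consequence (the field names are the two lists just introduced; `locally_available` stands in for the node's own sampling result and is hypothetical):

```python
def payload_blobs_available(payload, locally_available) -> bool:
    """Attesters only vote for a payload once *all* referenced blobs are available,
    whether propagated just-in-time (sync) or pre-propagated via tickets (async)."""
    all_vhs = list(payload.sync_versioned_hashes) + list(payload.async_versioned_hashes)
    return all(locally_available(vh) for vh in all_vhs)
```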
We have separated the network's general propagation resource into two parts:
1. **Critical path propagation**
2. **Pre-propagation capacity**
In doing so, we have essentially introduced two markets. The market for the latter is governed by an onchain ticket auction run ahead of time, while the market for the former is just-in-time, with the builder having ultimate inclusion power.
Sync blobs can then work exactly like today. A block contains commitments to blobs (`sync_versioned_hashes`), and they propagate alongside the payload. The async mechanism is additive, providing an additional pathway with different properties: ahead of time ticket purchase, pre-propagation and therefore higher capacity, CR guarantees.

***Figure 9.** Async blobs pre-propagate outside the critical path (spread over time), while sync blobs propagate during the critical path alongside the payload (like today). The payload contains both `sync_versioned_hashes` and `async_versioned_hashes`, and attesters verify availability of all blobs before voting.*
### Censorship resistance implications
Blobs propagated after the PTC deadline cannot receive inclusion guarantees for the upcoming slot. This has clear implications:
- **Sync blobs**: Cannot benefit from PTC guarantees. They propagate during/after block construction, which is after any reasonable PTC deadline. Inclusion is at the builder's discretion, incentivized by priority fees.
- **Late async blobs**: An async blob propagated after the PTC deadline misses the guarantee for the current slot, but receives it from the next slot onward (assuming it remains available).
This mirrors regular FOCIL behavior: transactions not in the mempool by the IL deadline cannot be force-included in the next block. The tradeoff is inherent — you can optimize for state certainty (wait longer) or inclusion certainty (propagate earlier), but not both simultaneously.
Blob submitters could of course also use a combined strategy for inclusion, namely acquiring a ticket and beginning propagation while also attempting to acquire critical-path propagation resources through the builder. Even if the latter fails, the combined strategy ensures inclusion of their blob in the subsequent block through the censorship resistance mechanism, a guarantee that evidently cannot hold for sync blobs.
### Synchronous vs Asynchronous capacity
A fundamental question we need to ask ourselves is how much of the network's capacity to propagate blob data we want to devote to synchronous and how much to asynchronous blobs.
On the one hand, synchronous blobs provide users with at least as much functionality as asynchronous blobs, suggesting we should maximize synchronous blob capacity. On the other hand, we saw that synchronous blobs don't come with the same censorship resistance guarantees as their asynchronous counterparts. Further, increasing synchronous blob capacity translates into a longer synchronous payload propagation period. Without going into much detail, since there is a time window between committing to a payload (including synchronous blobs) and revealing it, the builder has the "free option" of invalidating a block by not making some of the blobs available in time.
As mentioned briefly in the introduction, the longer this period, the more severe the free option problem becomes and thus reducing the synchronous blob capacity also reduces the [severity of the free option problem](https://hackmd.io/@mikeneuder/decouple-payload-and-da).
Clearly, which way we end up leaning will also depend on the demand for synchronous blobs. While today this demand might be small, it will change dramatically once the L1 starts using blobs itself. We suggest that a detailed analysis taking all these factors into consideration be carried out prior to implementation, in order to determine a good compromise between these two opposing dynamics.
## Appendix
### DA system contract
#### Design considerations
- **Write pattern:** At the start of each block, a system call records the versioned hashes of blobs whose availability is being asserted in that block.
- **Read pattern:** Contracts query by versioned hash to check if a blob is available. This should be cheap and simple.
- **Storage management:** Entries must be pruned periodically to bound storage growth (see the estimate after this list). At 128 blobs per slot, unbounded storage would grow by roughly 10 GB per year.
- **Current-block access:** Transactions in the same block as availability recording must be able to check availability without external proofs, since they cannot produce proofs for data just recorded.
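The growth figure above follows from a simple estimate (assuming 12-second slots and 32-byte versioned hashes):
$$128\ \tfrac{\text{blobs}}{\text{slot}} \times 32\ \tfrac{\text{B}}{\text{blob}} \times \frac{365 \cdot 24 \cdot 3600}{12}\ \tfrac{\text{slots}}{\text{year}} \approx 10.8\ \tfrac{\text{GB}}{\text{year}}.$$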
#### Contract design
The contract maintains a **recent window** (~128 blocks) via a ring buffer, enabling O(1) proof-free queries. This covers current-block access and typical rollup use cases.
Beyond that, users can prove inclusion against `versioned_hashes_root` stored in each block's header. This keeps contract storage minimal while still enabling availability queries for a long period, arguably more than enough to make sure that a user can land a tx onchain after a blob's availability has been determined.
Note that checking the DA contract as part of validating a blob tx is very cheap when the `versioned_hashes` are included in the *current* payload, since it's a warm read — the versioned hashes were written to the DA contract at block start. The current usage pattern, with availability determination happening at the same time as execution, is then essentially unaffected.
```python
from typing import Optional

# Constants
BLOCK_WINDOW = 128
MAX_BLOBS_PER_BLOCK = 128
RECENT_RING_SIZE = BLOCK_WINDOW * MAX_BLOBS_PER_BLOCK

# Storage
recent_vhs_buffer: list[Optional[bytes]] = [None] * RECENT_RING_SIZE
recent_availability: dict[bytes, int] = {}  # vh => 1 if in recent window
recent_write_cursor: int = 0

def record_availability(versioned_hashes: list[bytes]):
    """Called via system call at block start.

    Each versioned hash is assumed to be recorded at most once (availability
    only has to be established once), so ring-buffer eviction is safe.
    """
    global recent_write_cursor
    for vh in versioned_hashes:
        # Evict the entry that falls out of the window at the current position
        old_vh = recent_vhs_buffer[recent_write_cursor]
        if old_vh is not None:
            del recent_availability[old_vh]
        # Record the new entry
        recent_vhs_buffer[recent_write_cursor] = vh
        recent_availability[vh] = 1
        recent_write_cursor = (recent_write_cursor + 1) % RECENT_RING_SIZE

def is_available(versioned_hash: bytes) -> bool:
    """Check availability in the recent window (no proof needed)."""
    return recent_availability.get(versioned_hash) == 1
```