# EIP-4844 L2 TX usage & blob lifetime
## Lifetime of a blob TX
![](https://i.imgur.com/gUrqHOG.png)
---
## L2 TX to L1 blob to L2 verifier
![](https://i.imgur.com/sB5JR3W.png)
---
## Blob TX contents
![](https://i.imgur.com/O9fIYJK.jpg)
---
## Blob TXs in EVM, but without blob data
The blob data is only present in the network-wrapper representation of the TX.
It is not persisted in the execution layer, and it is not accessible in the EVM.
Blob data is purely meant for data availability: in the EVM, the data can only be proven to be available (via the versioned hashes of the blobs' KZG commitments), not read directly.
![](https://i.imgur.com/mq8gyf2.png)
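For a concrete sense of what *is* visible in the EVM: each blob TX carries one 32-byte *versioned hash* per blob, derived from the blob's KZG commitment (readable in the EVM via the `BLOBHASH` opcode). A minimal Python sketch of the construction, following the EIP-4844 spec:

```python
from hashlib import sha256

VERSIONED_HASH_VERSION_KZG = b"\x01"  # version byte, leaves room for future commitment schemes

def kzg_to_versioned_hash(kzg_commitment: bytes) -> bytes:
    """Map a 48-byte KZG commitment to the 32-byte versioned hash stored in the TX."""
    return VERSIONED_HASH_VERSION_KZG + sha256(kzg_commitment).digest()[1:]
```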
---
**Non-interactive** optimistic rollup fraud proof verification (simplified)
When there is a challenge, the full blob can be loaded into L1 calldata; the blob was already made available outside the EVM. A non-interactive fraud proof then requires executing the whole rollup state transition (e.g. a full transaction bundle) at once in the L1 EVM. (This has many limitations on L1, which is why rollups have shifted away from this approach.)
![](https://i.imgur.com/rSvJ4mM.png)
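A rough sketch of the idea, reusing `kzg_to_versioned_hash` from the sketch above; `blob_to_kzg_commitment` is assumed from a KZG library (it exists under that name in the consensus specs), and `execute_rollup_transition` is a hypothetical stand-in for the rollup's state-transition function:

```python
def verify_fraud_proof(blob: bytes, versioned_hash: bytes, pre_state, claimed_post_state) -> bool:
    """Return True if the proposer's claimed output is fraudulent."""
    # 1. Prove the calldata really is the blob that was made available:
    #    recompute the KZG commitment and compare versioned hashes.
    commitment = blob_to_kzg_commitment(blob)  # assumed KZG library call
    assert kzg_to_versioned_hash(commitment) == versioned_hash
    # 2. Re-execute the whole rollup transition in one go; this is the
    #    expensive part that makes the approach impractical on L1.
    post_state = execute_rollup_transition(pre_state, blob)  # hypothetical
    return post_state != claimed_post_state
```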
---
**Interactive** optimistic rollup fraud proof verification (simplified)
During an interactive fraud proof, the disputed execution is captured as a trace of all executed instructions, along with the VM memory and other verifier state at each step.
The trace steps are then merkleized into a big tree, which can be bisected interactively on L1 in `log(N)` rounds.
Through the bisection, the challenger and proposer agree on the longest common prefix of the execution trace; the single step right after that prefix is then executed on L1 (see the sketch below).
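A toy sketch of the bisection game, with plain lists standing in for the merkleized trace roots (real implementations exchange Merkle roots on-chain; the names here are hypothetical):

```python
def first_disputed_step(challenger: list, proposer: list) -> int:
    """Binary-search for the first trace step the two parties disagree on."""
    lo, hi = 0, len(proposer) - 1  # step 0 (pre-state) is agreed, the final step is disputed
    while lo + 1 < hi:
        mid = (lo + hi) // 2       # each round halves the window: log(N) rounds total
        if challenger[mid] == proposer[mid]:
            lo = mid               # still agree up to mid; the dispute lies later
        else:
            hi = mid               # already diverged; the dispute lies at or before mid
    return hi                      # the single step to re-execute on L1
```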
To load data into the L1 VM, an "oracle" is used: the execution starts from a commitment to L1 data, and then digs into the history to retrieve previous transactions, L1 block headers, and L2 data from blobs.
Currently the "oracle" has one flavor: load pre-image data by its keccak256 hash.
With EIP-4844 we add a second flavor: load a piece of data from a KZG vector commitment, verified with a point-evaluation proof.
![](https://i.imgur.com/9tSxP2H.png)
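The new oracle flavor maps directly onto the point-evaluation precompile that EIP-4844 adds. A simplified Python sketch following the spec pseudocode; `verify_kzg_proof` is assumed to come from a KZG library (as in the consensus specs):

```python
from hashlib import sha256

FIELD_ELEMENTS_PER_BLOB = 4096
BLS_MODULUS = 52435875175126190479447740508185965837690552500527637822603658699938581184513
VERSIONED_HASH_VERSION_KZG = b"\x01"

def kzg_to_versioned_hash(commitment: bytes) -> bytes:
    return VERSIONED_HASH_VERSION_KZG + sha256(commitment).digest()[1:]

def point_evaluation_precompile(input: bytes) -> bytes:
    """Verify the claim p(z) == y for the polynomial p committed to by `commitment`."""
    assert len(input) == 192
    versioned_hash = input[:32]
    z = input[32:64]            # evaluation point (a field element)
    y = input[64:96]            # claimed evaluation p(z)
    commitment = input[96:144]  # 48-byte KZG commitment
    proof = input[144:192]      # 48-byte KZG opening proof
    # The commitment must match the versioned hash the blob TX carries on-chain.
    assert kzg_to_versioned_hash(commitment) == versioned_hash
    # A single pairing check proves p(z) == y (assumed KZG library call).
    assert verify_kzg_proof(commitment, z, y, proof)
    return FIELD_ELEMENTS_PER_BLOB.to_bytes(32, "big") + BLS_MODULUS.to_bytes(32, "big")
```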
---
**ZK validity proof** (simplified)
To verify a ZK rollup transition we need two things:
- a proof that the correct data was imported (closely related to a data-inclusion proof of the data on L1)
- a proof of the ZK state transition itself (out of scope here: highly specialized and cryptography-heavy)
We ignore the transition here and focus on the data inclusion.
Instead of running the KZG code within the ZK rollup system, we can show that the KZG commitment we already have is *equivalent* to whatever ZK-friendly commitment the rollup prefers to use over the same imported data.
To do this, we need a random evaluation point that neither the producer nor the verifier can choose, and we check that both the KZG blob commitment and the ZK rollup data commitment open to the same value at that point: if the underlying data differed, the two evaluations at a random point would differ with overwhelming probability.
After verifying that the ZK rollup commitment over the imported data is valid, the ZK rollup can continue to verify the actual transition.
![](https://i.imgur.com/pkcgSSO.png)
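A sketch of that equivalence check (all names hypothetical: `open_kzg` / `open_zk` stand in for opening each commitment at a point, and the challenge is derived Fiat-Shamir style so neither party controls it):

```python
from hashlib import sha256

BLS_MODULUS = 52435875175126190479447740508185965837690552500527637822603658699938581184513

def challenge_point(kzg_commitment: bytes, zk_commitment: bytes) -> int:
    """Fiat-Shamir: hash both commitments so neither party can pick the point."""
    return int.from_bytes(sha256(kzg_commitment + zk_commitment).digest(), "big") % BLS_MODULUS

def commitments_agree(kzg_commitment: bytes, zk_commitment: bytes) -> bool:
    z = challenge_point(kzg_commitment, zk_commitment)
    # Open both commitments at the same random point; different underlying
    # data gives different evaluations at a random z with overwhelming
    # probability (Schwartz-Zippel).
    y_kzg = open_kzg(kzg_commitment, z)  # hypothetical: e.g. via the point-evaluation precompile
    y_zk = open_zk(zk_commitment, z)     # hypothetical: proven inside the ZK system
    return y_kzg == y_zk
```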