## What is PeerDAS (high level)?

PeerDAS is the next step in ["The Surge"](https://ethroadmap.com/surge.html) branch of the Ethereum roadmap. On March 13, 2024, EIP-4844 became part of the Mainnet Deneb-Cancun upgrade; it introduced blobs and was proposed as a solution for rollup scaling. Both EIP-4844 and PeerDAS are intermediate steps on the way to full rollup scaling, but PeerDAS is a major improvement over EIP-4844: the data availability check becomes partial, whereas EIP-4844 requires downloading the blobs in full. Requiring only partial data opens a way to raise rollup scalability without increasing node network requirements, while mitigating possible adversarial attacks on blob data availability.

## What is the EIP and high-level spec?

[EIP-7594](https://eips.ethereum.org/EIPS/eip-7594) is currently in a very early draft state, though the [consensus-specs feature specification](https://github.com/ethereum/consensus-specs/tree/dev/specs/_features/eip7594) covers most of the proposed changes, which in summary are:

- Requires EIP-4844 (already on Mainnet) for the basic blob concepts
- Doesn't require EL changes or a proper hard fork to kick off; it could be activated at any agreed epoch
- Adds one-dimensional erasure coding over the blob matrix. Each blob is extended 2x and forms a row of the matrix, which is split into 128 columns; all data is distributed, stored and verified with these columns as the smallest unit. 2D erasure coding would bring more opportunities and flexibility to the protocol, but it is more challenging, so PeerDAS is a first step and full rollup scaling will come thereafter (see the toy sketch after this list).
- Each node has custody and sampling duties. Custody ensures data presence in the network; sampling checks data availability. Proposed requirements: 4 columns for custody and 8 for sampling per node (of 128), so 12.5%-18.75% of the total blob data should be downloaded, compared to 100% under EIP-4844. These numbers are definitely not final; there are already discussions on node vs validator requirements, on increasing the required numbers, on increasing them while making sampling lossy, etc. More research and testing are required to finalize the numbers and the spec. At most, a node operator could reconstruct the whole data by downloading half of the matrix (falling back to EIP-4844 data requirements).
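The sketch below illustrates the 1D extension idea, assuming plain Reed-Solomon-style polynomial erasure coding over a small prime field. Real blobs are committed to with KZG over the BLS12-381 scalar field and use FFT-friendly evaluation domains; the field, the evaluation points and the helper names here are simplified stand-ins, not spec code.

```python
P = 65537  # toy prime modulus, not the real scalar field

def lagrange_eval(points, x):
    """Evaluate the unique polynomial through `points` at `x` (mod P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

# A "blob row": k original cells, read as evaluations of a degree < k polynomial at x = 0..k-1.
original = [19, 84, 37, 42]
k = len(original)

# 2x extension: evaluate the same polynomial at k further points.
extension = [lagrange_eval(list(enumerate(original)), x) for x in range(k, 2 * k)]
extended_row = original + extension  # 2k cells; any k of them suffice

# Reconstruction from an arbitrary half of the extended row (here: the odd-indexed cells).
kept = [(x, extended_row[x]) for x in range(1, 2 * k, 2)]
recovered = [lagrange_eval(kept, x) for x in range(k)]
assert recovered == original
```

A node that only custodies and samples columns touches just a few cells of each row, but any half of the extended matrix is enough to rebuild everything, which is what the reconstruction fallback mentioned above relies on.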
## What was the problem we were trying to solve, and how did the research come about?

- Scalability. EIP-4844 requires a complete data availability check by each node, which means network usage requirements grow linearly as rollup data grows. It's not only a bandwidth hit: nodes with low bandwidth capacity would not be able to keep up with validator duties.
- Confirming Ethereum's scalability approach. Over previous years there were different concepts of how Ethereum could be scaled; EIP-4844 and PeerDAS lock in rollup data distribution over the network as the path to scalability, meaning the Ethereum core protocol solves only the L2 data availability problem, while all other aspects are handled by individual rollup providers and are either not secured by the core protocol or secured indirectly (e.g. via on-chain contracts).
- Closing off several feasible EIP-4844 attacks found to date

Research links:
- [From 4844 to Danksharding: a path to scaling Ethereum DA](https://ethresear.ch/t/from-4844-to-danksharding-a-path-to-scaling-ethereum-da/18046)
- [PeerDAS – a simpler DAS approach using battle-tested p2p components](https://ethresear.ch/t/peerdas-a-simpler-das-approach-using-battle-tested-p2p-components/16541)
- [SubnetDAS – an intermediate DAS approach](https://ethresear.ch/t/subnetdas-an-intermediate-das-approach/17169)
- [Decoupling global liveness and individual safety in DAS](https://notes.ethereum.org/@fradamt/DAS-security-notions)
- [FullDAS: towards massive scalability with 32MB blocks and beyond](https://ethresear.ch/t/fulldas-towards-massive-scalability-with-32mb-blocks-and-beyond/19529)
- [LossyDAS: Lossy, Incremental, and Diagonal Sampling for Data Availability](https://ethresear.ch/t/lossydas-lossy-incremental-and-diagonal-sampling-for-data-availability/18963)
- [Scalability limitations of Kademlia DHTs when Enabling Data Availability Sampling in Ethereum](https://ethresear.ch/t/scalability-limitations-of-kademlia-dhts-when-enabling-data-availability-sampling-in-ethereum/18732)

## How does it relate to blobs and the blob limits?

- Blobs stay the same
- Blob limits could be increased severalfold while keeping the same network requirements, though consensus leans toward doing that in a separate EIP after PeerDAS proves itself on Mainnet

## How will it impact node operators and rollups / devs using blobs?

- Smaller network and disk usage for blobs. Only operators who want to keep the whole blob data will see storage requirements increase: 2x for the extended matrix (it is too expensive to extend on the fly for each RPC request), multiplied by any increase of the blob target from the current 3 blobs (a rough estimate is sketched after this list).
- Feasibility of increasing the number of blobs per block severalfold; no blockers for rollup data growth on this side
- The following types of operators are proposed:
    - Validator and user nodes (an honest custody/serve assumption can be placed upon them, e.g. download and serve samples of X rows/columns; note that validators can be incentivized to custody but not necessarily to serve)
    - High-capacity nodes (some percentage of the data beyond the baseline honest node)
    - Super-full nodes (100% of the data), a special case of a high-capacity node
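For a feel of what "2x for the extended matrix" means in disk terms, here is a rough estimate for a node that keeps every column. Blob size and the retention window are the EIP-4844/Deneb values; keeping the current 3-blob target and persisting the data in extended form are assumptions made purely for illustration.

```python
BLOB_SIZE = 4096 * 32          # 131072 bytes (128 KiB) per blob, per EIP-4844
EXTENSION_FACTOR = 2           # the extended matrix doubles each row
TARGET_BLOBS_PER_BLOCK = 3     # current Deneb target (assumed unchanged here)
SLOTS_PER_EPOCH = 32
RETENTION_EPOCHS = 4096        # MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS, ~18 days

bytes_per_block = TARGET_BLOBS_PER_BLOCK * BLOB_SIZE * EXTENSION_FACTOR
total_bytes = bytes_per_block * SLOTS_PER_EPOCH * RETENTION_EPOCHS

print(f"{bytes_per_block / 2**20:.2f} MiB of extended blob data per block")  # 0.75 MiB
print(f"{total_bytes / 2**30:.1f} GiB over the retention window")            # 96.0 GiB
```

Any future increase of the blob target scales these figures linearly, which is one reason raising the limits is being treated as a separate decision from PeerDAS itself.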
## How far along is the implementation and what are the next steps?

- The Teku implementation covers the whole spec and has been tested against the other major clients ([see Nyota interop results](https://notes.ethereum.org/HBTXApjjTKS-IFwDlGHUww?view)). After the event, matrix reconstruction and sampling were added, covering 100% of the major spec features.
- The Teku implementation is prototype-grade and requires code refactoring and test coverage before production. Other clients are at a similar stage.
- The devnet highlighted several issues, such as:
    + limited sampling benefits; worth reviewing the spec
    + new withholding and other types of attacks possible with PeerDAS
    + the PeerDAS fork-choice rule: tight or trailing, under discussion (see [DAS fork choice](https://ethresear.ch/t/das-fork-choice/19578))
    + lossy sampling is being considered

Next steps:
- Launch of PeerDAS devnet-1 in 2-4 weeks
- [PeerDAS Breakout](https://github.com/ethereum/pm/issues/1070) meetings every 2 weeks
- Considering an independent launch of PeerDAS not aligned with Pectra, since a hard fork is not needed (see the [Ethereum All Core Developers Consensus Call #135 Writeup](https://www.galaxy.com/insights/research/ethereum-all-core-developers-consensus-call-135/))

## How does it relate to the Ethereum scaling roadmap?

PeerDAS, or Peer Data Availability Sampling, would allow both the number and the size of blobs to be increased. This will become increasingly important as the rollup space matures and begins to consume the available blob space. PeerDAS will enable nodes to verify just a portion of a blob, rather than the entire contents of each one. This method allows the amount of blob data that can be attached to a block, which is already ephemeral, to be substantially increased without imposing additional hardware burdens on node operators.

![image](https://hackmd.io/_uploads/rJWNE_hr0.png)
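As a rough intuition for why verifying only a portion of the data is enough: if an adversary withheld more than half of the 128 extended columns (so that reconstruction becomes impossible), each uniformly random column sample would hit a missing column with probability of roughly one half or more. The sketch below works that out; the column and sample counts are the draft numbers quoted earlier and are illustrative, not final.

```python
NUMBER_OF_COLUMNS = 128  # proposed column count in the draft spec

def miss_probability(withheld_columns: int, samples: int) -> float:
    """Probability that `samples` distinct, uniformly random columns all land
    on available columns, i.e. the node fails to notice the withholding."""
    available = NUMBER_OF_COLUMNS - withheld_columns
    p = 1.0
    for i in range(samples):
        p *= (available - i) / (NUMBER_OF_COLUMNS - i)
    return p

# An adversary withholds just over half of the columns (65 of 128), making
# reconstruction impossible; a node sampling 8 columns misses this with
# probability around 0.3%, and many independently sampling nodes together
# miss it essentially never.
print(miss_probability(65, 8))
```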