Suppose you have a polynomial $P$, and the sample proofs $Q_i = \lfloor \frac{P}{X^{16} - \omega^{i * 16}} \rfloor$. Goal: verify all proofs. Note that for all $i$, $Q_i * (X^{16} - \omega^{i * 16}) = P - I_i$, where $I_i$ is the $\deg < 16$ interpolant of the $i$'th subgroup. We can combine all of these equations with a random linear combination: $\sum_i Q_i * (X^{16} - \omega^{i * 16}) * r_i = \sum_i (P - I_i) * r_i$. If any single one of the equations is wrong, the combined equation fails except with negligible probability over the random choice of the $r_i$, so the verifier only needs to check this one combined equation.
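A minimal sketch of this batched check over a toy field (not from the note itself): the prime p = 257, the 64-point evaluation domain split into four size-16 cosets, and helper names like pdivmod are all illustrative assumptions. It computes the quotients Q_i and remainders I_i directly and then verifies the single random-linear-combination identity above.

```python
# Toy-field sketch of the batched check (illustrative parameters, not the real setup):
# p = 257 gives a multiplicative group of order 256, so w = 3^4 is a 64th root of unity.
import random

p = 257
w = pow(3, 4, p)        # order-64 root of unity (3 is a primitive root mod 257)
SAMPLE = 16             # points per sample
COSETS = 4              # 64 / 16 samples covering the domain

def padd(a, b):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a)); b = b + [0] * (n - len(b))
    return [(x + y) % p for x, y in zip(a, b)]

def pscale(a, s):
    return [x * s % p for x in a]

def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % p
    return out

def pdivmod(a, b):
    # Schoolbook long division; coefficients are stored low-degree-first
    a = a[:]
    q = [0] * (len(a) - len(b) + 1)
    inv_lead = pow(b[-1], p - 2, p)
    for i in range(len(q) - 1, -1, -1):
        q[i] = a[i + len(b) - 1] * inv_lead % p
        for j, y in enumerate(b):
            a[i + j] = (a[i + j] - q[i] * y) % p
    return q, a[:len(b) - 1]

# A random "data" polynomial P of degree < 64
P = [random.randrange(p) for _ in range(64)]

# Vanishing polynomials Z_i = X^16 - w^(16*i) of the 16-point cosets,
# and the sample proofs Q_i = P // Z_i with remainders I_i (the deg < 16 interpolants)
Z = [[(-pow(w, 16 * i, p)) % p] + [0] * (SAMPLE - 1) + [1] for i in range(COSETS)]
proofs = [pdivmod(P, Z[i]) for i in range(COSETS)]

# Single check: sum_i r_i * Q_i * Z_i == sum_i r_i * (P - I_i)
r = [random.randrange(1, p) for _ in range(COSETS)]
lhs, rhs = [0], [0]
for i in range(COSETS):
    Q_i, I_i = proofs[i]
    lhs = padd(lhs, pscale(pmul(Q_i, Z[i]), r[i]))
    rhs = padd(rhs, pscale(padd(P, pscale(I_i, p - 1)), r[i]))
assert lhs == rhs
```

In the actual scheme the same identity would be checked succinctly against commitments to $P$ and the $Q_i$ rather than on full polynomials; the toy version above just confirms the algebra.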
11/1/2022

Along with proof of stake, the other central feature in the eth2 design is sharding. This proposal introduces a limited form of sharding, called "data sharding", as per the rollup-centric roadmap: the shards would store data, and attest to the availability of ~250 kB sized blobs of data. This availability verification provides a secure and high-throughput data layer for layer-2 protocols such as rollups. To verify the availability of high volumes of data without requiring any single node to personally download all of the data, two techniques are stacked on top of each other: (i) attestation by randomly sampled committees, and (ii) data availability sampling (DAS).

ELI5: randomly sampled committees

Suppose you have a large amount of data (think: 16 MB, the average amount that the eth2 chain will actually process per slot, at least initially). You represent this data as 64 "blobs" of 256 kB each. You have a proof of stake system, with ~6400 validators. How do you check all of the data without (i) requiring anyone to download the whole thing, or (ii) opening the door for an attacker who controls only a few validators to sneak an invalid block through?

We can solve the first problem by splitting up the work: validators 1...100 have to download and check the first blob, validators 101...200 have to download and check the second blob, and so on. The validators in each of these subgroups (or "committees") simply make a signature attesting that they have verified the blob, and the network as a whole only accepts the blob if it has seen signatures from the majority of the corresponding committee.

But this leads to a problem: what if the attacker controls some contiguous subset of validators (eg. 1971...2070)? If this were the case, then even though the attacker controls only ~1.5% of the whole validator set, they would dominate a single committee (in this case, they would have ~70% of committee 20, containing validators 2001...2100), and so they would be able to control the committee and push even invalid/unavailable blobs into the chain.

Random sampling solves this by using a random shuffling algorithm to select the committees. We use some hash as the seed of a random number generator, which we then use to randomly shuffle the list [1..6400]. The first 100 values in the shuffled list are the first committee, the next 100 are the second committee, etc.
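As an illustration of the shuffling step (not the actual beacon chain algorithm, which derives its seed in-protocol and uses a different shuffle), here is a minimal Python sketch where random.Random stands in for the seeded generator:

```python
# Illustrative committee selection: seed -> shuffled validator list -> fixed-size committees.
# random.Random is a stand-in for the protocol's seeded shuffle; sizes match the example above.
import random

VALIDATOR_COUNT = 6400
COMMITTEE_SIZE = 100

def get_committees(seed: bytes):
    indices = list(range(1, VALIDATOR_COUNT + 1))
    random.Random(seed).shuffle(indices)          # seed would come from an in-protocol hash
    return [indices[i:i + COMMITTEE_SIZE]
            for i in range(0, VALIDATOR_COUNT, COMMITTEE_SIZE)]

committees = get_committees(b"some hash")
# committees[0] attests to blob 0, committees[1] to blob 1, ... (64 committees of 100)
```

With committees drawn this way, an attacker holding ~1.5% of the validators lands in each committee roughly in proportion to their overall share (one or two members per committee in expectation), so the chance of controlling a majority of any single committee is negligible.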
10/26/2022

WIP

The purpose of this document is to describe in more detail a proposal for how phase 1 can be structured based on a "data-availability-focused" approach.

The main addition to the beacon chain will be a Vector of ShardDataHeader objects, one for each shard. A ShardDataHeader is a small object which represents a large amount of underlying data (roughly 0-512 kB in size). A block is only valid if the underlying data that the ShardDataHeader points to is available - that is, it has been published to the network and anyone can download it. However, to preserve scalability, clients will not try to download the full underlying data of every ShardDataHeader to verify the block. Instead, they will verify that the data is available using an indirect technique called data availability sampling.

Parameters

Parameter | Value | Description
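For concreteness, here is a rough sketch of the containers described above; the field names and types are illustrative guesses, not the spec's exact definitions:

```python
# Hypothetical container layout (illustrative only, not the real ShardDataHeader fields).
# The beacon block carries one small header per shard; each header commits to up to
# ~512 kB of underlying data that clients check via sampling rather than by downloading.
from dataclasses import dataclass
from typing import List

SHARD_COUNT = 64  # illustrative value

@dataclass
class ShardDataHeader:
    shard: int          # which shard this header belongs to
    commitment: bytes   # e.g. a polynomial commitment to the underlying 0-512 kB blob
    length: int         # size in bytes of the underlying data

@dataclass
class BeaconBlockDataFields:
    shard_data_headers: List[ShardDataHeader]  # "Vector[ShardDataHeader, SHARD_COUNT]"
```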
5/20/2022

Two paths to a solution exist, and have existed for a long time: weak statelessness and state expiry:

- State expiry: remove state that has not been recently accessed from the state (think: not accessed in the last year), and require witnesses to revive expired state. This would reduce the state that everyone needs to store to a flat ~20-50 GB.
- Weak statelessness: only require block proposers to store state, and allow all other nodes to verify blocks statelessly.

The good news is that recently, there have been major improvements on both of these paths that greatly reduce the downsides of each:

- Some techniques for how a ReGenesis-like epoch-based expiry scheme can be adapted to minimize resurrection conflicts
- Piper Merriam's work on transaction gossip networks that add witnesses to be stateless-client-friendly, and his work on distributed state storage and on-demand availability
- Verkle trees, which can reduce worst-case witness sizes from ~4 MB to ~800 kB (this is definitely small enough, because existing worst-case blocks that are full of calldata are already 12.5M / 16 ~= 780 kB, as checked below, and we have to handle those anyway). See slides, doc, code.
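A quick check of the calldata arithmetic in the last point, assuming the post-EIP-2028 cost of 16 gas per nonzero calldata byte:

```python
# Worst-case calldata block size at a 12.5M gas limit and 16 gas per (nonzero) calldata byte
gas_limit = 12_500_000
calldata_gas_per_byte = 16
print(gas_limit // calldata_gas_per_byte)  # 781250 bytes, i.e. ~780 kB
```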
1/4/2022