WIP

The purpose of this document is to describe in more detail a proposal for how phase 1 can be structured based on a "data-availability-focused" approach. The main addition to the beacon chain will be a Vector of ShardDataHeader objects, one for each shard. A ShardDataHeader is a small object which represents a large amount of underlying data (roughly 0-512 kB in size).

A block is only valid if the underlying data that the ShardDataHeader points to is available - that is, it has been published to the network and anyone can download it. However, to preserve scalability, clients will not try to download the full underlying data of every ShardDataHeader to verify the block. Instead, they will verify that the data is available using an indirect technique called data availability sampling.

Parameters

Parameter | Value | Description
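For concreteness, here is a minimal Python sketch of the structures described above. The container layout, field names, and the `sample_available` helper are illustrative assumptions, not the spec's actual definitions; the only point is the shape of the idea: one small header per shard, with block validity conditional on the underlying data being retrievable.

```python
from dataclasses import dataclass
from typing import Callable, List

SHARD_COUNT = 64   # illustrative value; the real number is a protocol parameter

@dataclass
class ShardDataHeader:
    # Hypothetical fields: conceptually, a header commits to up to ~512 kB of
    # underlying shard data without containing the data itself.
    shard: int
    commitment: bytes   # e.g. a polynomial or Merkle commitment to the blob
    length: int         # size of the underlying blob in bytes

@dataclass
class BeaconBlockBodySketch:
    # The proposed addition: one header per shard.
    shard_data_headers: List[ShardDataHeader]   # Vector[ShardDataHeader, SHARD_COUNT]

def is_data_available(body: BeaconBlockBodySketch,
                      sample_available: Callable[[ShardDataHeader], bool]) -> bool:
    # Clients never download the full blobs; `sample_available` stands in for
    # the data availability sampling routine that checks a few random chunks
    # of each blob against the header's commitment.
    return all(sample_available(h) for h in body.shard_data_headers)
```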
This document lays out some possible options for alternatives to EIP 3298 (removal of refunds) that try to preserve some of the legitimate goals and positive incentives that refunds provide.

Why perhaps not total removal?

Total removal is the simplest option and it accomplishes the desired effects, but it also has some downsides that have been identified by the community. The two strongest critiques of total removal of refunds that I have seen are:

Zero -> nonzero -> zero storage use patterns

There are two major examples of this (a rough gas sketch follows below):

- Mutex locks on contracts, where before contract A calls contract B, it first flips a storage slot to some nonzero value. When contract A itself is called, it checks that that storage slot is empty; if it is not, the call fails. This prevents re-entrancy. When the call finishes, the storage slot is flipped back to zero.
- Approving an ERC20 token for exactly the desired amount and then immediately spending that amount.
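To see why refunds matter for these patterns, here is a rough Python illustration of the mutex case. The gas numbers are simplified Istanbul-era constants (SSTORE set = 20000, other SSTORE = 5000, clear-to-zero refund = 15000, refund cap = half of gas used), and the model ignores EIP-2200 net gas metering and EIP-2929 access-list surcharges, so treat it as a back-of-the-envelope comparison rather than exact accounting.

```python
# Why zero -> nonzero -> zero patterns (e.g. a mutex slot) care about refunds.
# Simplified, approximate constants; not exact EVM accounting.

SSTORE_SET = 20_000      # writing a nonzero value into a slot that was zero
SSTORE_RESET = 5_000     # writing to an already-nonzero slot (including clearing it)
CLEAR_REFUND = 15_000    # refund for setting a nonzero slot back to zero
TX_BASE = 21_000         # base cost of any transaction

def mutex_tx_gas(with_refunds: bool) -> int:
    gas = TX_BASE
    gas += SSTORE_SET    # lock: flip the mutex slot 0 -> 1 before the inner call
    gas += SSTORE_RESET  # unlock: flip it back 1 -> 0 after the call returns
    refund = CLEAR_REFUND if with_refunds else 0
    return gas - min(refund, gas // 2)   # refunds are capped at half the gas used

print("with refunds:   ", mutex_tx_gas(True))    # 31000
print("without refunds:", mutex_tx_gas(False))   # 46000
```

With refunds, clearing the slot hands most of the lock's write cost back, so each lock/unlock cycle nets roughly 10000 gas of storage cost; with refunds removed, every cycle pays the full ~25000.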
Suppose you have a polynomial $P$, and the sample proofs $Q_i = \lfloor \frac{P}{X^{16} - \omega^{i * 16}} \rfloor$. Goal: verify all proofs.

Note that for all $i$, $Q_i * (X^{16} - \omega^{i * 16}) = P - I_i$, where $I_i$ is the $\deg < 16$ interpolant of the $i$'th subgroup. We can combine all of these equations with a random linear combination:

$$\sum_i Q_i * (X^{16} - \omega^{i * 16}) * r_i = \sum_i (P - I_i) * r_i$$
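As a sanity check of the identity being batched, here is a toy Python verification over a small prime field. It works directly with coefficient lists rather than commitments (so it is not how a real verifier would operate), and the field (257), domain size (64), coset size (16) and generator (3) are arbitrary illustrative choices.

```python
# Toy numerical check of the batched identity
#     sum_i r_i * Q_i * (X^16 - w^(16*i))  ==  sum_i r_i * (P - I_i)
# over a small prime field. This only demonstrates the polynomial identity
# behind the random linear combination; a real verifier would check the
# combined equation against commitments, not raw coefficients.

import random

MODULUS = 257                              # 2^8 + 1; 3 generates its multiplicative group
N = 64                                     # evaluation-domain size
K = 16                                     # coset ("sample") size, matching X^16 above
W = pow(3, (MODULUS - 1) // N, MODULUS)    # primitive N-th root of unity

# --- minimal dense-polynomial arithmetic, coefficients ordered low -> high ---
def padd(a, b):
    out = [0] * max(len(a), len(b))
    for i, c in enumerate(a): out[i] = (out[i] + c) % MODULUS
    for i, c in enumerate(b): out[i] = (out[i] + c) % MODULUS
    return out

def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, c in enumerate(a):
        for j, d in enumerate(b):
            out[i + j] = (out[i + j] + c * d) % MODULUS
    return out

def pscale(a, s):
    return [c * s % MODULUS for c in a]

def pdiv(a, b):
    """Quotient of a / b for monic b; this is the 'floor' division in the note."""
    a, q = a[:], [0] * (len(a) - len(b) + 1)
    for i in reversed(range(len(q))):
        q[i] = a[i + len(b) - 1]
        for j, d in enumerate(b):
            a[i + j] = (a[i + j] - q[i] * d) % MODULUS
    return q

def peval(a, x):
    acc = 0
    for c in reversed(a): acc = (acc * x + c) % MODULUS
    return acc

def interpolate(xs, ys):
    """Lagrange interpolation: the deg < len(xs) polynomial through the points."""
    out = [0]
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = [yi]
        for j, xj in enumerate(xs):
            if j == i: continue
            inv = pow((xi - xj) % MODULUS, MODULUS - 2, MODULUS)
            term = pmul(term, [-xj * inv % MODULUS, inv])   # (X - xj) / (xi - xj)
        out = padd(out, term)
    return out

P = [random.randrange(MODULUS) for _ in range(N)]   # the polynomial, deg < N

lhs, rhs = [0], [0]
for i in range(N // K):
    z_i = [-pow(W, K * i, MODULUS) % MODULUS] + [0] * (K - 1) + [1]  # X^K - w^(K*i)
    Q_i = pdiv(P, z_i)                                               # the sample proof
    xs = [pow(W, i + j * (N // K), MODULUS) for j in range(K)]       # the i-th coset
    I_i = interpolate(xs, [peval(P, x) for x in xs])                 # deg < K interpolant
    r_i = random.randrange(1, MODULUS)
    lhs = padd(lhs, pscale(pmul(Q_i, z_i), r_i))                     # r_i * Q_i * z_i
    rhs = padd(rhs, pscale(padd(P, pscale(I_i, -1)), r_i))           # r_i * (P - I_i)

width = max(len(lhs), len(rhs))
assert lhs + [0] * (width - len(lhs)) == rhs + [0] * (width - len(rhs))
print("batched identity holds")
```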
Along with proof of stake, the other central feature in the eth2 design is sharding. This proposal introduces a limited form of sharding, called "data sharding", as per the rollup-centric roadmap: the shards would store data, and attest to the availability of ~250 kB sized blobs of data. This availability verification provides a secure and high-throughput data layer for layer-2 protocols such as rollups. To verify the availability of high volumes of data without requiring any single node to personally download all of the data, two techniques are stacked on top of each other: (i) attestation by randomly sampled committees, and (ii) data availability sampling (DAS).

ELI5: randomly sampled committees

Suppose you have a large amount of data (think: 16 MB, the average amount that the eth2 chain will actually process per slot, at least initially). You represent this data as 64 "blobs" of 256 kB each. You have a proof of stake system, with ~6400 validators. How do you check all of the data without (i) requiring anyone to download the whole thing, or (ii) opening the door for an attacker who controls only a few validators to sneak an invalid block through?

We can solve the first problem by splitting up the work: validators 1...100 have to download and check the first blob, validators 101...200 have to download and check the second blob, and so on. The validators in each of these subgroups (or "committees") simply make a signature attesting that they have verified the blob, and the network as a whole only accepts the blob if it has seen signatures from the majority of the corresponding committee.

But this leads to a problem: what if the attacker controls some contiguous subset of validators (e.g. 1971...2070)? If this were the case, then even though the attacker controls only ~1.5% of the whole validator set, they would dominate a single committee (here, they would have ~70% of the committee containing validators 2001...2100), and so they would be able to control that committee and push even invalid/unavailable blobs into the chain.

Random sampling solves this by using a random shuffling algorithm to select the committees. We use some hash as the seed of a random number generator, which we then use to randomly shuffle the list [1..6400]. The first 100 values in the shuffled list are the first committee, the next 100 are the second committee, etc.
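Here is a minimal Python sketch of that selection procedure. The beacon chain's actual shuffle is a swap-or-not construction seeded by RANDAO; the hash-seeded Fisher-Yates shuffle below is only a stand-in, used to show why a contiguous block of attacker validators ends up spread thinly across all committees.

```python
# Seed-based committee selection, sketched with a hash-seeded shuffle.
# Stand-in for the spec's swap-or-not shuffle; parameters are from the prose above.

import hashlib
import random

VALIDATOR_COUNT = 6400
COMMITTEE_SIZE = 100

def get_committees(seed: bytes):
    indices = list(range(VALIDATOR_COUNT))
    rng = random.Random(hashlib.sha256(seed).digest())
    rng.shuffle(indices)                       # randomly permute all validator indices
    return [indices[i:i + COMMITTEE_SIZE]      # slice the shuffled list into committees
            for i in range(0, VALIDATOR_COUNT, COMMITTEE_SIZE)]

# An attacker controlling a contiguous range of indices, ~1.5% of the validator set:
attacker = set(range(1971, 2071))

committees = get_committees(b"some seed hash")
worst = max(len(attacker & set(c)) for c in committees)
print(f"largest attacker share of any committee: {worst}/{COMMITTEE_SIZE}")
# Typically only a handful of attacker validators land in any single committee,
# instead of ~70 of them dominating one committee as in the unshuffled case.
```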