Hello internet
Over the past 3-4 weeks, I have been deeply engaged in testing and prototyping my project idea, Inclusion List with Plausible Deniability. Through several iterations, I explored the feasibility of this idea and worked to identify potential challenges and solutions. However, as I delved deeper into the implementation, it became clear that the concept faced significant hurdles that would prevent its adoption on the Ethereum mainnet. These challenges included network overhead and other limitations, which I will discuss in more detail in the following sections.
In the conclusion of this document, I will also outline what’s next for me as I continue my journey in the Ethereum Protocol Fellowship.
While working on the Inclusion List with Plausible Deniability project, I encountered several challenges that ultimately led to the decision to pivot. I will walk through each of them below.
The Ethereum consensus layer currently has approximately 1.2 million validators. In the one-bit-per-attester approach to implementing the Inclusion List, each transaction requires a committee of validators to attest to it. Specifically, if a transaction is T bits long, the binary Reed-Solomon encoding requires a committee of 2T validators, each attesting to a single bit.
For instance, if the transaction size is 500 bits, we would need a committee of 1,000 validators to attest to that transaction using binary Reed-Solomon encoding. Each validator must send an attestation object indicating whether they are attesting to a value of 0 or 1. Now, considering that we want to limit our Inclusion List based on gas usage or transaction size, the average Inclusion List could contain 6 to 10 transactions per slot.
This means that for each slot, we would need multiple attestation committees—one per transaction. If we estimate an average of 6 transactions per slot, this would result in 6,000 new packets being added to the network per slot. While Reed-Solomon encoding can mitigate issues related to packet loss, this still introduces a noticeable overhead for every node in the network.
Moreover, each attestation object contains the validator’s signature. This requires every node to validate approximately 6,000 signatures per slot. Although BLS signature aggregation can help alleviate the computational load, the sheer volume of attestations introduces a significant burden on the network, potentially leading to performance degradation and increased latency.
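To make the overhead concrete, here is a rough back-of-the-envelope sketch. The numbers (500-bit transactions, 6 transactions per slot) are the illustrative values from above, not protocol constants:

```python
# Back-of-the-envelope estimate of per-slot attestation load for the
# one-bit-per-attester inclusion list. All parameters are illustrative.

TX_SIZE_BITS = 500                   # assumed average transaction size in bits
COMMITTEE_PER_TX = 2 * TX_SIZE_BITS  # binary Reed-Solomon needs 2T attesters
TXS_PER_SLOT = 6                     # assumed average inclusion-list size

attestations_per_slot = COMMITTEE_PER_TX * TXS_PER_SLOT
# One BLS signature per attestation object, before any aggregation:
signatures_per_slot = attestations_per_slot

print(f"committee size per tx: {COMMITTEE_PER_TX}")    # 1000
print(f"attestations per slot: {attestations_per_slot}")  # 6000
print(f"signatures per slot:   {signatures_per_slot}")    # 6000
```

Even with aggressive BLS aggregation, every node still has to receive and process on the order of thousands of extra attestation objects per 12-second slot, which is where the gossip and verification overhead comes from.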
Reed-Solomon encoding necessitates a predefined committee size for each transaction. For example, if we define a committee size of 1,000 validators per transaction, this inherently limits the size of the transaction that can be included in the Inclusion List. Specifically, transactions exceeding 500 bits in size would not fit within this model.
Furthermore, the current design does not support dynamic committee sizing, meaning that the committee size is fixed and does not adapt to varying transaction sizes. This rigidity imposes a significant constraint on the system, requiring further research to develop a more flexible solution that can accommodate transactions of different sizes while maintaining the integrity and efficiency of the Inclusion List.
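A minimal sketch of the sizing constraint, assuming the fixed committee of 1,000 validators used above (the helper names are mine, for illustration only):

```python
# With a fixed committee of N validators and binary Reed-Solomon encoding
# (2 attesters per transaction bit), the largest transaction that fits is
# N // 2 bits. The committee size here is illustrative, not a protocol value.

COMMITTEE_SIZE = 1000

def max_tx_size_bits(committee_size: int) -> int:
    """Largest transaction (in bits) a fixed-size committee can encode."""
    return committee_size // 2

def fits(tx_size_bits: int, committee_size: int = COMMITTEE_SIZE) -> bool:
    """Whether a transaction of the given size fits in the inclusion list."""
    return tx_size_bits <= max_tx_size_bits(committee_size)

print(max_tx_size_bits(COMMITTEE_SIZE))  # 500
print(fits(400))                         # True
print(fits(501))                         # False
```

Any transaction over the `N // 2` bound is simply unrepresentable, which is why dynamic committee sizing (or some chunking scheme) would be needed before this design could handle real-world transaction sizes.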
With an average of 6 committees per slot, involving a total of 6,000 validators, each validator sends either a 0 or 1 as part of their attestation. A critical vulnerability arises if nodes are configured to consistently attest to a fixed value, such as always sending a 0 or always sending a 1. If a significant portion of the network adopts this behavior, the protocol’s efficacy would be compromised, leading to the failure of the Inclusion List mechanism.
Moreover, implementing a new incentive mechanism to encourage proper attestation behavior could spark extensive economic debates, particularly given the large validator set. The challenge lies in designing incentives that prevent strategic behavior while ensuring that the inclusion mechanism remains secure and robust.
Another concern related to overhead is the risk of data corruption when a committee commits to a transaction. If the proposer attempts to extract the transaction from the Reed-Solomon encoded data and finds it corrupted—perhaps due to committee members having divergent views of the mempool—this would result in a failed block. This scenario is plausible given the nature of Reed-Solomon encoding, where different committee members might include conflicting data.
Addressing this issue requires additional research to devise a robust solution. Currently, the design does not account for such cases, meaning that in the event of data corruption, the proposer would miss the block entirely. This represents a significant risk, particularly in a live network environment where reliability and accuracy are paramount.
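As a rough illustration of why divergent mempool views are dangerous: an [n, k] Reed-Solomon code can correct at most (n − k) / 2 symbol errors. A sketch under the illustrative parameters used above (n = 2T codeword bits, k = T transaction bits):

```python
# For an [n, k] Reed-Solomon code, at most (n - k) // 2 erroneous symbols
# can be corrected. In the one-bit-per-attester scheme (n = 2T, k = T),
# that bound works out to T // 2. Numbers are illustrative, not protocol values.

def max_correctable_errors(n: int, k: int) -> int:
    """Error-correction capacity of an [n, k] Reed-Solomon code."""
    return (n - k) // 2

TX_SIZE_BITS = 500                   # k: data symbols (transaction bits)
COMMITTEE_SIZE = 2 * TX_SIZE_BITS    # n: codeword symbols (one attester each)

budget = max_correctable_errors(COMMITTEE_SIZE, TX_SIZE_BITS)
print(budget)  # 250

# If committee members with divergent mempool views contribute more wrong
# bits than the budget allows, the proposer cannot recover the transaction:
divergent_attesters = 300
print(divergent_attesters <= budget)  # False -> decode fails, block is missed
```

So with a 1,000-validator committee, as few as ~250 attesters with an inconsistent view of the mempool are enough to push the codeword past its correction budget and corrupt the extracted transaction.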
These were the key challenges I identified during the prototyping phase of my project proposal. Each of these challenges contributed to the decision to pivot away from the original idea.
Throughout the implementation and prototyping of the One Bit Per Attestation Inclusion List, I have gained valuable insights into the Lighthouse codebase, its architecture, and its functionality. This experience has significantly deepened my understanding of how Lighthouse operates, which I plan to leverage in my next steps.
My immediate focus will be on contributing more seriously to the Lighthouse project. I will start by tackling some of the Good First Issues (GFI) in their codebase. After successfully contributing to a few of these, I plan to reach out to the Lighthouse team to identify areas where they need more focused contributions.
Update: I will most likely join this project (Reth + Lighthouse integration):
https://github.com/ThreeHrSleep/cohort-five/blob/main/projects/direct-integration-of-lighthouse-reth-and-tracing-integration-in-lighthouse.md
In addition to this, I remain deeply interested in the Inclusion List concept and the broader challenge of addressing censorship in Ethereum. However, my experience over the past month and a half has shown me that many of the ideas in this space are still in the research phase. I plan to stay engaged with these ongoing research efforts by tracking the latest developments, contributing through brainstorming sessions, writing technical analyses, and offering my insights where possible.
For example, I am currently working on a write-up about the BRAID protocol (link to Dan Robinson's tweet: https://x.com/danrobinson/status/1820506643739615624), which explores new approaches for parallel block proposing systems. This kind of engagement will allow me to contribute to the community's efforts in meaningful ways.