This project sought to increase block proposer rewards for Lighthouse users. The project proposal lives here: Lighthouse Attestation Packing.
The project is practically complete. We went through several phases of optimization to arrive at something ready to be merged into Lighthouse, though the code is not yet merged.
We changed the project roadmap after realizing that optimal aggregation combined with a greedy solution to the weighted maximum coverage problem produced the optimal solution on all test cases. My partner also stopped working on the project, so the roadmap outlined in the proposal was pushed back while I completed both sides of the work.
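For context, the greedy approach to weighted maximum coverage repeatedly picks the candidate aggregate whose not-yet-covered validators add the most reward. Below is a minimal sketch of the idea under a simplified reward model; the `Candidate` type, `greedy_max_coverage`, and the per-validator reward map are illustrative, not Lighthouse's actual code.

```rust
use std::collections::{HashMap, HashSet};

/// A candidate aggregate attestation: the validators it covers
/// (illustrative model, not Lighthouse's types).
struct Candidate {
    validators: HashSet<u64>,
}

/// Greedy weighted maximum coverage: pick up to `limit` candidates,
/// at each step taking the one whose *uncovered* validators add the
/// most reward. This is the classic greedy approximation; per the
/// report it matched the optimum on all test cases.
fn greedy_max_coverage(
    candidates: &[Candidate],
    rewards: &HashMap<u64, u64>,
    limit: usize,
) -> Vec<usize> {
    let mut covered: HashSet<u64> = HashSet::new();
    let mut chosen = Vec::new();

    for _ in 0..limit {
        // Find the candidate with the largest marginal reward.
        let best = candidates
            .iter()
            .enumerate()
            .filter(|(i, _)| !chosen.contains(i))
            .map(|(i, c)| {
                let gain: u64 = c
                    .validators
                    .difference(&covered)
                    .map(|v| rewards.get(v).copied().unwrap_or(0))
                    .sum();
                (i, gain)
            })
            .max_by_key(|&(_, gain)| gain);

        match best {
            Some((i, gain)) if gain > 0 => {
                covered.extend(candidates[i].validators.iter().copied());
                chosen.push(i);
            }
            _ => break, // nothing left adds reward
        }
    }
    chosen
}
```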
I implemented the so-called 'decomposition strategy' outlined by Satalia. This took longer than I initially expected due to my lack of familiarity with the techniques. I also implemented the 'enumeration approach' discussed in the paper linked above. That work can be found in this testing repository.
Given the strict pressure on consensus clients to publish blocks quickly, I do not believe these other approaches are tenable. With the addition of optimal aggregation, attestation packing takes ~30ms on average. The exact approaches to weighted maximum coverage use considerably more compute in the test repository, with the decomposition approach taking upwards of 200ms. Implementing these alternative approaches did lead to a small optimization of the max coverage algorithm, though.
Additionally, I implemented lazy attestation signature decompression on receipt of a new attestation. This was an adjacent issue, but it led to the realization that we can skip attestation signature decompression entirely until max coverage has finished.
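The pattern is roughly the following: keep the signature in its compressed wire form on receipt, and only pay for decompression once the attestation is actually needed, e.g. after max coverage has selected it. A rough sketch, assuming placeholder types; `LazySignature`, `DecompressedSig`, and `decompress` are illustrative, not Lighthouse's BLS API.

```rust
/// Holds a signature either in its compressed wire form or fully
/// decompressed. Decompression is deferred until the signature is
/// actually needed (e.g. after max coverage has selected it).
enum LazySignature {
    Compressed(Vec<u8>),
    Decompressed(DecompressedSig),
}

/// Placeholder for a decompressed BLS signature (curve point elided).
struct DecompressedSig();

/// Placeholder for the real (expensive) point decompression routine.
fn decompress(bytes: &[u8]) -> Result<DecompressedSig, ()> {
    let _ = bytes;
    Ok(DecompressedSig())
}

impl LazySignature {
    /// Decompress in place if we haven't already, then return the result.
    fn get_or_decompress(&mut self) -> Result<&DecompressedSig, ()> {
        if let LazySignature::Compressed(bytes) = self {
            let sig = decompress(bytes)?;
            *self = LazySignature::Decompressed(sig);
        }
        match self {
            LazySignature::Decompressed(sig) => Ok(&*sig),
            LazySignature::Compressed(_) => unreachable!(),
        }
    }
}
```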
If the number of attestations per data increases for whatever reason, some decisions will need to be re-evaluated. The first thing to test would be whether attestation rewards are degraded significantly by the limit set on attestations per data in the Bron-Kerbosch algorithm.
If they are, then we should increase the limit. If that results in attestation packing taking too much time, we should consider an incremental approach to maintaining the set of maximal cliques.
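For reference, Bron-Kerbosch enumerates the maximal cliques of the compatibility graph of unaggregated attestations for a given attestation data, and the per-data limit simply caps how many attestations are fed into it. A minimal, unpivoted sketch over a generic adjacency-list graph; the `limit` handling and representation are illustrative, not Lighthouse's implementation.

```rust
use std::collections::HashSet;

/// Basic Bron-Kerbosch over a graph where vertex `i`'s neighbours are
/// `adj[i]`. In the attestation case, vertices are the unaggregated
/// attestations for one piece of attestation data and an edge means
/// the two attestations can be aggregated together.
fn bron_kerbosch(
    adj: &[HashSet<usize>],
    r: &mut HashSet<usize>,
    p: &mut HashSet<usize>,
    x: &mut HashSet<usize>,
    cliques: &mut Vec<HashSet<usize>>,
) {
    if p.is_empty() && x.is_empty() {
        cliques.push(r.clone()); // R is a maximal clique
        return;
    }
    for v in p.clone() {
        r.insert(v);
        let mut p_next: HashSet<usize> = p.intersection(&adj[v]).copied().collect();
        let mut x_next: HashSet<usize> = x.intersection(&adj[v]).copied().collect();
        bron_kerbosch(adj, r, &mut p_next, &mut x_next, cliques);
        r.remove(&v);
        p.remove(&v);
        x.insert(v);
    }
}

/// Enumerate maximal cliques among the first `limit` vertices only,
/// mirroring the per-data cap discussed above (illustrative).
fn maximal_cliques(adj: &[HashSet<usize>], limit: usize) -> Vec<HashSet<usize>> {
    let n = adj.len().min(limit);
    let mut cliques = Vec::new();
    let mut p: HashSet<usize> = (0..n).collect();
    bron_kerbosch(
        adj,
        &mut HashSet::new(),
        &mut p,
        &mut HashSet::new(),
        &mut cliques,
    );
    cliques
}
```

An incremental approach, as mentioned above, would instead maintain this set of maximal cliques as attestations arrive rather than recomputing it when building a block.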
I learned a lot during the program, but there are also many areas where I still need to improve. In particular, I learned about implementing dynamic programming techniques and branch-and-bound algorithms while working on the alternative approaches. I also learned more about the rigorous work needed to implement a new algorithm in a production environment. That is also where I need more work, because Michael Sproul was able to significantly reduce allocations from what I had written.
The program has been extremely helpful, and I am thankful to the EPF for giving me this opportunity to work on this with some scaffolding. Without the program I don't think I would have started contributing. I plan to continue contributing; whether that takes the form of one-off issues or formal employment is yet to be known.