Hello again! This update will be a short one. As I described in my previous update, this week was devoted to implementing maximal clique enumeration aggregation for the Lighthouse consensus client.

Here is an overview of the changes that I have made:

  • Deleted the importing of aggregates from the naive_aggregation_pool, since the unaggregated attestations should already be in the op_pool.
  • Added unaggregated attestations to the op_pool during unaggregated attestation processing.
  • Changed the fields of AttestationDataMap to a HashMap for aggregated_attestation and a HashMap for unaggregated_attestations, and updated the methods on AttestationDataMap accordingly.
  • Removed greedy aggregation on insertion into the op_pool.
  • Ported the Bron-Kerbosch implementation from Satalia to Lighthouse.
  • Added a get_clique_aggregate_attestations_for_epoch method returning Vec<(&CompactAttestationData, CompactIndexedAttestation<T>)>. (This is where most of the changes are; I'm curious whether it would be better to return an Iterator.)
  • Used the AttMaxCover of the output of the above as input to max_cover.
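For readers unfamiliar with the algorithm named above: Bron-Kerbosch enumerates all maximal cliques of an undirected graph. Here is a minimal standalone sketch in Rust of the basic (non-pivoting) variant; the graph, vertex ids, and function signature are illustrative only and do not reflect the PR's actual types or the Satalia implementation.

```rust
use std::collections::HashSet;

/// Basic Bron-Kerbosch maximal clique enumeration (no pivoting).
/// `adj` is an adjacency list indexed by vertex id; `r` is the clique
/// under construction, `p` the candidate set, `x` the excluded set.
fn bron_kerbosch(
    adj: &[HashSet<usize>],
    r: &mut Vec<usize>,
    p: &mut HashSet<usize>,
    x: &mut HashSet<usize>,
    cliques: &mut Vec<Vec<usize>>,
) {
    if p.is_empty() && x.is_empty() {
        // R can no longer be extended: it is a maximal clique.
        cliques.push(r.clone());
        return;
    }
    // Iterate over a snapshot of P, since P is mutated inside the loop.
    for v in p.clone() {
        r.push(v);
        // Restrict candidates and exclusions to neighbors of v.
        let mut p_next: HashSet<usize> = p.intersection(&adj[v]).copied().collect();
        let mut x_next: HashSet<usize> = x.intersection(&adj[v]).copied().collect();
        bron_kerbosch(adj, r, &mut p_next, &mut x_next, cliques);
        r.pop();
        p.remove(&v);
        x.insert(v);
    }
}

fn main() {
    // Toy graph: a triangle {0, 1, 2} plus the edge 2-3.
    let edges = [(0usize, 1usize), (0, 2), (1, 2), (2, 3)];
    let mut adj = vec![HashSet::new(); 4];
    for &(a, b) in &edges {
        adj[a].insert(b);
        adj[b].insert(a);
    }
    let mut cliques = Vec::new();
    let mut p: HashSet<usize> = (0..4).collect();
    bron_kerbosch(&adj, &mut Vec::new(), &mut p, &mut HashSet::new(), &mut cliques);
    // Sort for deterministic output (HashSet iteration order is arbitrary).
    for c in cliques.iter_mut() {
        c.sort();
    }
    cliques.sort();
    println!("{:?}", cliques); // the maximal cliques [0, 1, 2] and [2, 3]
}
```

In the attestation setting, the vertices would be attestations for the same AttestationData and the edges would connect compatible ones, so each maximal clique is a candidate aggregate.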

The above changes should be sufficient to achieve the stated goal, although I have not yet had the chance to test them. I contacted Paul Hauner of Sigma Prime to let him know the code is ready for a first review, and I'm looking forward to testing it soon.

I don't think it would be useful to anyone for me to explain the changes in more detail here, but if you would like to know more, feel free to message me on Discord or check the PR: https://github.com/sigp/lighthouse/pull/4507. I will hopefully have feedback on the code sometime next week. In the meantime I will be studying the packing part of the problem, and if time permits I will make a PR for an issue adjacent to the project, since it consumes a lot of CPU time.