# Week 12

After revisiting the [Consensus Specs Altair](https://ethereum.github.io/consensus-specs/specs/altair/), I felt that many pieces were still missing before I could implement [/eth/v1/validator/sync_committee_contribution](https://github.com/ReamLabs/ream/issues/238). There is an [OperationPool](https://github.com/ReamLabs/ream/blob/master/crates/common/operation_pool/src/lib.rs#L19) in Ream, but no sync committee pool exists. To find the best implementation approach, I reviewed how this endpoint was built in the Lighthouse and Grandine clients. By referencing their work, I started implementing a sync committee pool for Ream.

## Lighthouse

- **Sync Committee Pool**: Lighthouse maintains a `naive_sync_aggregation_pool` to collect incoming sync committee messages and contributions. Individual signatures and verified contributions received from gossip are inserted so the node can aggregate them quickly without re-verifying the same data.
- **Aggregation model**: The pool indexes entries by `SyncContributionData`, a combination of slot, beacon block root, and subcommittee index. Each key holds the best aggregated `SyncCommitteeContribution`, merging bitfields and signatures as more single-signer messages arrive while filtering out duplicates or invalid data. Before returning an item, the pool checks that the referenced block is fully execution-verified to avoid serving optimistic data.
- **API**: `GET /eth/v1/validator/sync_committee_contribution` queries this pool using the request's `SyncContributionData`. If an aggregate exists and passes the execution-status guard, the endpoint wraps it in a JSON response. Missing entries result in a 404, and optimistic or invalid roots return client errors. The endpoint is effectively a read-only view over the pooled contributions, letting validator clients fetch ready-to-broadcast aggregates instead of assembling them locally.

## Grandine

- **Sync Committee Pool**: The pool holds three keyed stores: aggregated contributions (`aggregates`), seen aggregator contributions (`aggregator_contributions`), and raw messages (`sync_committee_messages`). Entries are keyed by `ContributionData { slot, beacon_block_root, subcommittee_index }`, letting the pool merge votes across sources while deduplicating aggregators.
- **Aggregation model**: The pool prunes data older than the previous slot, keeping only the current and previous slots (Lighthouse does something similar). It folds messages into aggregates by matching validator pubkeys against the subcommittee, toggling aggregation bits, and aggregating signatures.
- **API**: `GET /eth/v1/validator/sync_committee_contribution` extracts `slot`, `beacon_block_root`, and `subcommittee_index`, then retrieves the pool's strongest aggregate and returns it as JSON. As in Lighthouse, this lets validators assigned as sync committee aggregators fetch the best available aggregate to sign and publish as a `ContributionAndProof`. Because the pool already merges gossip and local votes, the endpoint supplies the highest-participation aggregate currently known without duplicating validation logic.

## Resources

[Lighthouse: Ethereum consensus client](https://github.com/sigp/lighthouse)

[Grandine: A fast and lightweight Ethereum consensus client](https://github.com/grandinetech/grandine)
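## A minimal pool sketch

To make the aggregation model both clients share a bit more concrete before writing the Ream version, here is a rough sketch of a pool keyed by slot, beacon block root, and subcommittee index that folds single-signer messages into a best-known contribution and prunes anything older than the previous slot. All names here (`ContributionKey`, `PooledContribution`, `SyncCommitteePool`, `StubSignature`) and the values in `main` are placeholders of my own for illustration, not types from Ream, Lighthouse, or Grandine, and real code would BLS-aggregate signatures rather than storing a stub.

```rust
use std::collections::HashMap;

// Placeholder primitives for the sketch. A real pool would use the client's
// own SSZ bit vectors and BLS aggregate signatures.
type Slot = u64;
type Root = [u8; 32];
type StubSignature = Vec<u8>; // stands in for a BLS aggregate signature

// SYNC_COMMITTEE_SIZE / SYNC_COMMITTEE_SUBNET_COUNT = 512 / 4 on mainnet.
const SYNC_SUBCOMMITTEE_SIZE: usize = 128;

/// The key both clients use (modulo naming): slot + block root + subcommittee.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct ContributionKey {
    slot: Slot,
    beacon_block_root: Root,
    subcommittee_index: u64,
}

/// Best-known aggregate for one key.
struct PooledContribution {
    aggregation_bits: [bool; SYNC_SUBCOMMITTEE_SIZE],
    signature: StubSignature,
}

impl PooledContribution {
    /// Number of subcommittee members whose vote is included.
    fn participation(&self) -> usize {
        self.aggregation_bits.iter().filter(|bit| **bit).count()
    }
}

#[derive(Default)]
struct SyncCommitteePool {
    aggregates: HashMap<ContributionKey, PooledContribution>,
}

impl SyncCommitteePool {
    /// Fold one verified single-signer message into the aggregate for its key.
    fn insert_message(&mut self, key: ContributionKey, position: usize, signature: StubSignature) {
        let entry = self.aggregates.entry(key).or_insert_with(|| PooledContribution {
            aggregation_bits: [false; SYNC_SUBCOMMITTEE_SIZE],
            signature: StubSignature::new(),
        });

        // Ignore duplicates so the same validator is never double-counted.
        if entry.aggregation_bits[position] {
            return;
        }
        entry.aggregation_bits[position] = true;

        // Stub: real code would BLS-aggregate `signature` into `entry.signature`.
        entry.signature = signature;
    }

    /// What /eth/v1/validator/sync_committee_contribution would serve:
    /// the best-known aggregate for the requested key, or None (-> 404).
    fn best_contribution(&self, key: &ContributionKey) -> Option<&PooledContribution> {
        self.aggregates.get(key)
    }

    /// Keep only the current and previous slots, mirroring both clients' pruning.
    fn prune(&mut self, current_slot: Slot) {
        self.aggregates.retain(|key, _| key.slot + 1 >= current_slot);
    }
}

fn main() {
    let mut pool = SyncCommitteePool::default();
    let key = ContributionKey {
        slot: 100,
        beacon_block_root: [0u8; 32],
        subcommittee_index: 2,
    };

    // Two distinct validators in the same subcommittee vote for the same root.
    pool.insert_message(key, 7, vec![0xaa]);
    pool.insert_message(key, 42, vec![0xbb]);
    assert_eq!(pool.best_contribution(&key).unwrap().participation(), 2);

    // Once the pool has moved two slots past the entry, pruning drops it.
    pool.prune(102);
    assert!(pool.best_contribution(&key).is_none());
}
```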
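The takeaway from both clients is that the HTTP handler stays thin: merging, deduplication, and pruning all live in the pool, and `GET /eth/v1/validator/sync_committee_contribution` just looks up the key built from the query parameters and serves the stored aggregate (or a 404), which is the read-only view described above.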