# Changes in the aggregation scheme in Prysm
In the upcoming release of Prysm (v4.0.5, due early the week starting 05/22/23) there are some changes to Prysm's aggregation scheme. In short, there is nothing you need to change: you will automatically get better performance, lower CPU usage during critical times in the slot, and overall better aggregations when you perform that duty. However, the defaults are set quite conservatively; they are controlled by hidden feature flags and monitored by new metrics. In this short post I'll describe the changes and how to adjust your setup.
## Aggregated attestations and slot timing
A validator receives two kinds of attestations on gossip (there are other ways of obtaining attestations, like RPC or blocks, but let's focus on gossip for simplicity): unaggregated and aggregated. The first kind are the attestations that each validator submits during its assigned slot. Each has a single bit set: that of the validator signing it. These attestations are typically sent 4 seconds into the slot, although some clients may send them as soon as a block arrives if that is earlier than 4 seconds. The other kind, aggregated attestations, are sent by aggregators that collect many of these unaggregated attestations voting for the same blocks and join their signatures, so that the remaining validators can verify them quickly. These duties are carried out 8 seconds into the slot.
Attestations matter most at a couple of points during the slot. If you are an aggregator, you had better have a lot of unaggregated attestations at 8 seconds, otherwise you will not perform your duties well. You also want to have many attestations at the start of the next slot (that is, second 12 counting from the current slot) to find out what the right head of the blockchain is; otherwise you will produce a bad block if you are a proposer, or have a harder time importing the next block when it comes.
With the feature allowing honest validators to reorg late blocks, these timings become a little subtler. In the case of Prysm, we check 10 seconds into the slot whether the attestations we have already seen are good enough to reorg the next block. So we had better have good information at 10 seconds.
As you can guess, most unaggregated attestations are received between seconds 4 and 6 of the slot. Most aggregated attestations are received between seconds 8 and 10, and essentially no new attestations are received after the 10-second mark.
## The way we currently do this on Prysm
Aggregating attestations and verifying signatures is CPU intensive: it requires operations on elliptic curves, and we rely on an external assembly library to do this. I mentioned above what aggregating unaggregated attestations means: you have a bunch of attestations from different validators, all voting for the same blocks, and the aggregate of these attestations is simply another attestation, with a bunch of bits set (one for each validator included) and a signature that is the sum of all the included validators' signatures. The magic of BLS signature aggregation is that by verifying this sum, we verify at the same time the signatures of all the validators included.
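To make the bookkeeping concrete, here is a toy sketch of the idea. This is emphatically not real BLS: the "signatures" are plain integers and "adding" them is ordinary modular addition, whereas real signatures are elliptic-curve points. The aggregation-bits mechanics, however, work the same way.

```python
# Toy stand-in for BLS aggregation (NOT real cryptography): signatures
# are integers and aggregation is modular addition. In real BLS the
# signatures are curve points, but the bit bookkeeping is identical.

P = 2**31 - 1  # toy modulus standing in for the curve's group order

def make_unaggregated(validator_index, toy_signature):
    """An unaggregated attestation: exactly one bit set, one signature."""
    return {"bits": 1 << validator_index, "sig": toy_signature % P}

def aggregate(a, b):
    """Aggregate two attestations for the same vote.
    Only valid if no validator appears in both (bits are disjoint)."""
    assert a["bits"] & b["bits"] == 0, "overlapping validators"
    return {"bits": a["bits"] | b["bits"], "sig": (a["sig"] + b["sig"]) % P}

atts = [make_unaggregated(i, sig) for i, sig in [(1, 11), (3, 33), (4, 44)]]
agg = atts[0]
for att in atts[1:]:
    agg = aggregate(agg, att)

# Bits 1, 3 and 4 are set; the signature is the sum of the three.
print(bin(agg["bits"]))  # 0b11010
print(agg["sig"])        # 88
```

Verifying the single summed signature stands in for verifying all three individual ones, which is where the CPU savings come from.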
However, nothing says we can't further aggregate these aggregated attestations. In fact, if we have an attestation with bits set for validators `1,3,4` and another for validators `2,5`, we can aggregate them into a single attestation for validators `1,2,3,4,5`. Validators do this when proposing blocks, for example, as they get rewarded for each attestation they include and a block can hold at most 128 aggregate attestations. Fuller aggregates are therefore more valuable. There is a whole lot of literature on the most effective mechanisms to aggregate attestations (it is particularly hard when aggregating partially aggregated attestations). In the case of Prysm we use the [max coverage algorithm](https://ethresear.ch/t/attestation-aggregation-heuristics/7265) designed by our colleague Viktor Farazdagi.
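The selection problem can be sketched with a simple greedy heuristic: repeatedly pick the aggregate that covers the most not-yet-covered validators, skipping any that overlaps what we already merged (a validator's signature must not be counted twice). This is only an illustration of the flavor of the problem; Prysm's actual implementation is the max-coverage algorithm linked above, not this greedy loop.

```python
# Greedy sketch of the coverage problem (illustration only; Prysm uses
# the max-coverage algorithm from the ethresear.ch post linked above).

def greedy_cover(aggregates):
    """aggregates: list of frozensets of validator indices.
    Returns the chosen disjoint aggregates and the covered set."""
    covered = set()
    chosen = []
    remaining = list(aggregates)
    while True:
        # only aggregates disjoint from what we merged can be combined
        candidates = [a for a in remaining if not (a & covered)]
        if not candidates:
            break
        best = max(candidates, key=len)  # biggest gain first
        chosen.append(best)
        covered |= best
        remaining.remove(best)
    return chosen, covered

aggs = [frozenset({1, 3, 4}), frozenset({2, 5}), frozenset({1, 2})]
chosen, covered = greedy_cover(aggs)
print(sorted(covered))  # [1, 2, 3, 4, 5]
```

Here `{1,3,4}` and `{2,5}` merge into the full set, while the overlapping `{1,2}` is skipped.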
Currently, Prysm aggregates whatever attestations are in its pool every 4 seconds. Crucially, the 4-second timer starts with the node, so the timing is not deterministic within a slot. Let $\Delta$ be the first instant at which we aggregate, counting from the start of the slot; this is a random number $\Delta < 4''$. We aggregate at $\Delta$, $\Delta + 4''$ and $\Delta + 8''$. The first time, we mostly aggregate whatever was left over from the previous slot; most of these attestations are already aggregated. The second time, we have $4'' \leq \Delta + 4'' < 8''$, so these are mostly unaggregated; however, if $\Delta$ is small enough, very few attestations have arrived by this time. The third time, we have $8'' \leq \Delta + 8'' < 12''$, which means that if $\Delta$ is small enough, Prysm is a pretty bad aggregator: by its 8-second duty it has not aggregated enough unaggregated attestations to send. Moreover, in this case we also have very few aggregated attestations to pack into a block if we are proposing in the next slot, and we rely mostly on the unaggregated attestations that we ourselves aggregated during the slot, so the max coverage algorithm has not even been used.
There are problems no matter what value of $\Delta$ we get. If $\Delta$ is close to 2 seconds, aggregation interferes with the processing of most blocks, which arrive around that time; if it is close to 4 seconds, it interferes with attesting duties, and moreover it will not finish aggregating in time for the 8-second mark anyway, so it becomes even worse for aggregation duties.
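A small sketch makes the bad cases above concrete, plugging sample values of $\Delta$ into the three pass times:

```python
# The old scheme: passes at delta, delta + 4 and delta + 8 seconds into
# the slot, for some random offset 0 <= delta < 4 fixed at node startup.

def aggregation_times(delta):
    assert 0 <= delta < 4
    return [delta, delta + 4, delta + 8]

# Small delta: the second pass at ~4s runs before most unaggregated
# attestations (sent between seconds 4 and 6) have arrived, so little
# fresh work is ready for the 8-second aggregation duty.
print(aggregation_times(0.5))  # [0.5, 4.5, 8.5]

# Delta near 2: the first pass collides with block processing, since
# most blocks arrive around that time in the slot.
print(aggregation_times(2.0))  # [2.0, 6.0, 10.0]
```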
## The new way
In the next release there are several fixes to our aggregation scheme. One of them should be clear from the description above: we still aggregate three times per slot, but now at three fixed times. By default these are 6.5'', 9.5'' and 11.8''. These numbers were obtained by benchmarking a server connected to all subnets and with lower-than-normal specs, so that we know for sure that by 8 seconds, all attestations that arrived before 6.5 seconds are already aggregated; by 10 seconds, all attestations that arrived by 9.5 seconds are aggregated; and likewise for 12 seconds. The latencies decrease because the vast majority of the work is in aggregating unaggregated attestations.
If you are a home staker running just a couple of validators and not subscribing to all subnets, you will most likely have a lot of slack to change these numbers. As an example, on my personal node, running on a NUC i5, I am setting them to 7.5'', 9.8'' and 11.9''. These numbers are set with the three hidden feature flags
`aggregate-first-interval`, `aggregate-second-interval` and `aggregate-third-interval` respectively. They take a time duration as a parameter, for example `--aggregate-first-interval 7500ms`.
If you want to check how long your node takes to perform these operations, you can look at the exposed metrics `aggregate_attestations_t1` and `aggregate_attestations_t2`, which expose the latency buckets of the two most costly operations.
## Better performance
Even though it sounds like the timing change is the most impactful part of this release, it was mostly done because of the honest reorg feature. In fact, the most impactful change is unrelated to timings. Whenever we obtain an attestation, we receive over the wire a slice of bytes representing the signature. This needs to be converted into a signature, and we perform a check that these bytes do indeed correspond to a valid point on the corresponding elliptic curve. This turns out to be an unnecessary and costly check on attestations whose signature we have already verified. Just removing this superfluous check has reduced aggregation time by 60%. The other change is that there is no need at all, in fact it is detrimental, to use the max coverage algorithm when aggregating unaggregated attestations: if we know each attestation has a single bit set, aggregating them is simply adding the signatures, as explained above. So we no longer use that algorithm on the unaggregated pools.