# Hardfork Transition Changes
This document builds on and follows previous documents that have been written on this topic. The nature of the data structure changes is not covered here as it has already been described previously.
Major Changes Added In:
- Sync committees
- Changes in incentive accounting
- Parameter changes
Block:
- Added a field for the sync aggregate

State:
- Replace previous epoch attestations with previous epoch participation
- Replace current epoch attestations with current epoch participation
- Add a uint64 list of inactivity scores
- Add a current sync committee field
- Add a next sync committee field
Participation Flags is a new data structure being added in.
It's an 8-bit object, with each bit used to represent the relevant flag via its index. A flag is set as `2**flag_index`, which is then OR-ed into the participation flags byte.
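A minimal sketch of how a flag is set and checked, assuming `ParticipationFlags` is a plain uint8 and the flag indices are small integers as in the spec draft (helper names paraphrased from the spec):
```py
def add_flag(flags: int, flag_index: int) -> int:
    # set bit `flag_index` by OR-ing in 2**flag_index
    return flags | (2**flag_index)

def has_flag(flags: int, flag_index: int) -> bool:
    # check whether bit `flag_index` is set
    flag = 2**flag_index
    return flags & flag == flag
```
For example, a validator that attested timely to both source and target (flag indices 0 and 1) ends up with a participation byte of `0b00000011`.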
Sync Committee Shuffling:
During the first 2 sync committee periods, the base epoch will be the same (0). Not really relevant for mainnet as we are already on epoch ~36000, but relevant to remember for other testnets.
```py
base_epoch = Epoch((max(epoch // EPOCHS_PER_SYNC_COMMITTEE_PERIOD, 1) - 1) * EPOCHS_PER_SYNC_COMMITTEE_PERIOD)
```
Instead of the shuffling happening every epoch, it happens every sync committee period. With this value being 512, the shuffling will be accessed much less frequently and can be cached similarly to our current committee cache. This might only be an issue while syncing; however, even then it is only liable to show up every 512 batches (1 epoch per batch), which is a very acceptable rate.
The shuffling is a copy of the swap-or-not shuffle currently utilised on mainnet for committees. It uses a pseudo-random byte together with the shuffled index to determine which validators are viable for being part of the sync committee based on their effective balance.
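A sketch of the selection loop, paraphrasing the spec draft's `get_sync_committee_indices` (helper and constant names are assumed from the spec and may differ slightly between draft versions):
```py
def get_sync_committee_indices(state: BeaconState, epoch: Epoch) -> Sequence[ValidatorIndex]:
    MAX_RANDOM_BYTE = 2**8 - 1
    base_epoch = Epoch((max(epoch // EPOCHS_PER_SYNC_COMMITTEE_PERIOD, 1) - 1) * EPOCHS_PER_SYNC_COMMITTEE_PERIOD)
    active_validator_indices = get_active_validator_indices(state, base_epoch)
    active_validator_count = uint64(len(active_validator_indices))
    seed = get_seed(state, base_epoch, DOMAIN_SYNC_COMMITTEE)
    i = 0
    sync_committee_indices: List[ValidatorIndex] = []
    while len(sync_committee_indices) < SYNC_COMMITTEE_SIZE:
        # the swap-or-not shuffle picks the candidate...
        shuffled_index = compute_shuffled_index(uint64(i % active_validator_count), active_validator_count, seed)
        candidate_index = active_validator_indices[shuffled_index]
        # ...and a pseudo-random byte accepts it with probability proportional to its effective balance
        random_byte = hash(seed + uint_to_bytes(uint64(i // 32)))[i % 32]
        effective_balance = state.validators[candidate_index].effective_balance
        if effective_balance * MAX_RANDOM_BYTE >= MAX_EFFECTIVE_BALANCE * random_byte:
            sync_committee_indices.append(candidate_index)
        i += 1
    return sync_committee_indices
```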
Definition of Get_Sync_Committee:
```py
def get_sync_committee(state: BeaconState, epoch: Epoch) -> SyncCommittee:
    """
    Return the sync committee for a given ``state`` and ``epoch``.
    ``SyncCommittee`` contains an aggregate pubkey that enables
    resource-constrained clients to save some computation when verifying
    the sync committee's signature.
    ``SyncCommittee`` can also contain duplicate pubkeys, when ``get_sync_committee_indices``
    returns duplicate indices. Implementations must take care when handling
    optimizations relating to aggregation and verification in the presence of duplicates.
    """
    indices = get_sync_committee_indices(state, epoch)
    pubkeys = [state.validators[index].pubkey for index in indices]
    aggregate_pubkey = bls.AggregatePKs(pubkeys)
    return SyncCommittee(pubkeys=pubkeys, aggregate_pubkey=aggregate_pubkey)
```
Some important caveats: duplicate pubkeys must be handled properly during aggregation, and verification must also be able to handle this. Another issue is the committee storing an aggregate pubkey. All pubkeys are aggregated, apparently for ease of light client verification; this claim seems a bit iffy and needs more investigation.
Base_reward:
Instead of the base_reward of a validator relying on a constant factor, it is now replaced with an accounting-based system, where each effective balance increment is used to represent a constant unit of reward. Validators with a lower effective balance will end up earning less of a reward compared to validators at max_effective_balance.
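A sketch of the increment-based accounting, paraphrasing the spec draft (constant and helper names assumed from the spec):
```py
def get_base_reward_per_increment(state: BeaconState) -> Gwei:
    # one constant unit of reward per EFFECTIVE_BALANCE_INCREMENT of effective balance
    return Gwei(EFFECTIVE_BALANCE_INCREMENT * BASE_REWARD_FACTOR // integer_squareroot(get_total_active_balance(state)))

def get_base_reward(state: BeaconState, index: ValidatorIndex) -> Gwei:
    # a validator's base reward scales with how many increments its effective balance contains
    increments = state.validators[index].effective_balance // EFFECTIVE_BALANCE_INCREMENT
    return Gwei(increments * get_base_reward_per_increment(state))
```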
Get_unslashed_participating_indices:
A function which filters for active, unslashed validators that have the provided flag index set. It is bounded to the current and previous epochs.
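A rough sketch of the filter, assuming the participation lists added to the state above (paraphrased from the spec draft):
```py
def get_unslashed_participating_indices(state: BeaconState, flag_index: int, epoch: Epoch) -> Set[ValidatorIndex]:
    # only the current and previous epochs keep participation flags in the state
    assert epoch in (get_previous_epoch(state), get_current_epoch(state))
    if epoch == get_current_epoch(state):
        epoch_participation = state.current_epoch_participation
    else:
        epoch_participation = state.previous_epoch_participation
    # active validators with the requested flag set, minus slashed validators
    active_validator_indices = get_active_validator_indices(state, epoch)
    participating_indices = [i for i in active_validator_indices if has_flag(epoch_participation[i], flag_index)]
    return set(filter(lambda index: not state.validators[index].slashed, participating_indices))
```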
Get_flag_index_deltas:
This function tallies all the rewards and penalties for the provided flag index. All active validators, plus slashed (but not yet withdrawable) validators, are considered eligible for tallying rewards and penalties. Each flag index comes with its corresponding weight.
```py
Gwei(base_reward * weight // WEIGHT_DENOMINATOR)
```
The above can be denoted as the reward factor per validator (depending on their effective balance). How much reward they actually get is dependent on this ratio:
```py
unslashed_participating_increments // active_increments
```
This creates an interesting dynamic where, in the event of reduced participation for an epoch, everyone's rewards are consequently lower. (Need to double check if this is the intended side effect.)
```py
for index in get_eligible_validator_indices(state):
    base_reward = get_base_reward(state, index)
    if index in unslashed_participating_indices:
        if is_in_inactivity_leak(state):
            # This flag reward cancels the inactivity penalty corresponding to the flag index
            rewards[index] += Gwei(base_reward * weight // WEIGHT_DENOMINATOR)
        else:
            reward_numerator = base_reward * weight * unslashed_participating_increments
            rewards[index] += Gwei(reward_numerator // (active_increments * WEIGHT_DENOMINATOR))
    else:
        penalties[index] += Gwei(base_reward * weight // WEIGHT_DENOMINATOR)
```
Get_inactivity_penalty_deltas:
This function is modified from its phase 0 counterpart in how the matching target indices are selected and by the removal of BASE_REWARDS_PER_EPOCH.
The function now checks the timely target flag index on each validator when calculating inactivity penalties; validators without the flag set are penalised.
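A sketch of the penalty applied to validators missing the timely target flag, assuming the new `inactivity_scores` list and constants from the spec draft (the exact leak condition may differ between draft versions):
```py
matching_target_indices = get_unslashed_participating_indices(state, TIMELY_TARGET_FLAG_INDEX, get_previous_epoch(state))
for index in get_eligible_validator_indices(state):
    if index not in matching_target_indices:
        # the penalty scales with both effective balance and the validator's accumulated inactivity score
        penalty_numerator = state.validators[index].effective_balance * state.inactivity_scores[index]
        penalty_denominator = INACTIVITY_SCORE_BIAS * INACTIVITY_PENALTY_QUOTIENT_ALTAIR
        penalties[index] += Gwei(penalty_numerator // penalty_denominator)
```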
Slash_validator:
This function is modified to use MIN_SLASHING_PENALTY_QUOTIENT_ALTAIR and to use PROPOSER_WEIGHT when calculating the proposer reward. This gets rid of the proposer reward quotient here.
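A sketch of how the proposer's cut is derived from the whistleblower reward with the new weights, paraphrasing the spec draft:
```py
whistleblower_reward = Gwei(validator.effective_balance // WHISTLEBLOWER_REWARD_QUOTIENT)
# PROPOSER_WEIGHT / WEIGHT_DENOMINATOR replaces the old 1 / PROPOSER_REWARD_QUOTIENT factor
proposer_reward = Gwei(whistleblower_reward * PROPOSER_WEIGHT // WEIGHT_DENOMINATOR)
increase_balance(state, proposer_index, proposer_reward)
increase_balance(state, whistleblower_index, Gwei(whistleblower_reward - proposer_reward))
```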
Process_attestation:
This function is now modified to account for all participation flags.
```py
# Participation flag indices
participation_flag_indices = []
if is_matching_source and state.slot <= data.slot + integer_squareroot(SLOTS_PER_EPOCH):
    participation_flag_indices.append(TIMELY_SOURCE_FLAG_INDEX)
if is_matching_target and state.slot <= data.slot + SLOTS_PER_EPOCH:
    participation_flag_indices.append(TIMELY_TARGET_FLAG_INDEX)
if is_matching_head and state.slot == data.slot + MIN_ATTESTATION_INCLUSION_DELAY:
    participation_flag_indices.append(TIMELY_HEAD_FLAG_INDEX)
```
The conditions are interesting for their potential side effects on which participation flags get applied. For timely_source, the flag is only included if the attestation comes in within 5 slots (integer_squareroot(32)) of the slot it attests to. However, for timely_target that requirement is much more lax (32 slots); will need to verify why the source participation flag is much stricter than the target one.
Also the proposer denominator:
```py
proposer_reward_denominator = (WEIGHT_DENOMINATOR - PROPOSER_WEIGHT) * WEIGHT_DENOMINATOR // PROPOSER_WEIGHT
```
Each attestation is weighted by this factor before it is added towards the proposer reward. This is bounded by the maximum attestations per block (128) and `active validators / slots_per_epoch`. As the validator set grows, the reward gets proportionally bigger.
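A sketch of how this denominator is applied inside `process_attestation`, paraphrasing the spec draft (variable names assumed):
```py
# each newly-set flag contributes base_reward * weight to the numerator,
# and the proposer receives the accumulated numerator scaled down by the denominator above
proposer_reward_numerator = 0
for index in get_attesting_indices(state, data, attestation.aggregation_bits):
    for flag_index, weight in zip(
        (TIMELY_SOURCE_FLAG_INDEX, TIMELY_TARGET_FLAG_INDEX, TIMELY_HEAD_FLAG_INDEX),
        (TIMELY_SOURCE_WEIGHT, TIMELY_TARGET_WEIGHT, TIMELY_HEAD_WEIGHT),
    ):
        if flag_index in participation_flag_indices and not has_flag(epoch_participation[index], flag_index):
            epoch_participation[index] = add_flag(epoch_participation[index], flag_index)
            proposer_reward_numerator += get_base_reward(state, index) * weight

proposer_reward_denominator = (WEIGHT_DENOMINATOR - PROPOSER_WEIGHT) * WEIGHT_DENOMINATOR // PROPOSER_WEIGHT
proposer_reward = Gwei(proposer_reward_numerator // proposer_reward_denominator)
increase_balance(state, get_beacon_proposer_index(state), proposer_reward)
```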
Process_sync_committee:
A newly added block processing function:
This validates the sync committee signing of the previous block header. It performs a fast aggregate verify on all signatures per block. If the sync committee is fully participating, this is a constant load of 512 signatures per block. Given that we have a validator set of ~160k, with attestations for ~5000 validators per block, raising this by ~10% isn't too large a load in the grand scheme of things. However, in terms of how many signatures we have to verify, adding this special committee in is equivalent to adding 16384 validators to the beacon chain.
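A sketch of the per-block signature check, paraphrasing the spec draft's `process_sync_committee` (helper names assumed from the spec):
```py
# collect the pubkeys of participating committee members and verify a single
# aggregate signature over the previous slot's block root
committee_pubkeys = state.current_sync_committee.pubkeys
participant_pubkeys = [pubkey for pubkey, bit in zip(committee_pubkeys, aggregate.sync_committee_bits) if bit]
previous_slot = Slot(max(int(state.slot), 1) - 1)
domain = get_domain(state, DOMAIN_SYNC_COMMITTEE, compute_epoch_at_slot(previous_slot))
signing_root = compute_signing_root(get_block_root_at_slot(state, previous_slot), domain)
assert bls.FastAggregateVerify(participant_pubkeys, signing_root, aggregate.sync_committee_signature)
```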
The proposer gets a proportional reward for including the aggregate, scaled by the following factor:
```py
proposer_reward = Gwei(participant_reward * PROPOSER_WEIGHT // (WEIGHT_DENOMINATOR - PROPOSER_WEIGHT))
```
Some interesting questions come up for sync committees given how invasive the changes are. Should equivocation be punished here for committee signatures, i.e. signing for 2 blocks in the same slot? If that isn't an issue, then why are we actually signing it? That would mean light clients take the risk of validators being potentially mischievous and giving them incorrect data.
Process_inactivity_updates:
A newly added function which tallies inactivity per validator. For each epoch in which a validator has not attested timely to the target, its inactivity score is incremented by `INACTIVITY_SCORE_BIAS`.
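A minimal sketch of the tallying, assuming the spec-draft helpers (the exact decrement behaviour may differ between draft versions):
```py
def process_inactivity_updates(state: BeaconState) -> None:
    for index in get_eligible_validator_indices(state):
        if index in get_unslashed_participating_indices(state, TIMELY_TARGET_FLAG_INDEX, get_previous_epoch(state)):
            # participating validators slowly work their score back down
            if state.inactivity_scores[index] > 0:
                state.inactivity_scores[index] -= 1
        else:
            # non-participating validators accrue score at INACTIVITY_SCORE_BIAS per epoch
            state.inactivity_scores[index] += INACTIVITY_SCORE_BIAS
```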
Other helper methods have not been covered in detail, as nothing of interest was found in them.
## Fork Transition
The current plan for Altair is to transition at a specified fork epoch. During this particular epoch transition, the new Altair fields are added in and the underlying state is modified.
Pre-State -> Phase0State
Post-State -> AltairState
There is currently a PR by Michael from LH to perform a participation migration here:
https://github.com/ethereum/eth2.0-specs/pull/2373
https://github.com/ethereum/eth2.0-specs/issues/2314
The same principle applies to inactivity scores, but given the complexity of applying it, it's likely any migration for it will be ignored.
Sync committees are also added in for the first time at the fork epoch. Some interesting observations have been brought up: unless the fork epoch is appropriately set, the first sync committee after the fork will last less than 512 epochs. As far as has been observed, this doesn't look like it would cause too many issues, except for maybe making our lookahead trickier.
Finally, the fork is set in the state with the Altair fork version. Once this is done we can proceed onwards to processing Altair-compatible blocks.
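A compressed sketch of the upgrade at the fork epoch, paraphrasing the spec's `upgrade_to_altair` (only the new or changed fields are shown; the sync committee initialisation in particular may differ between draft versions):
```py
def upgrade_to_altair(pre: phase0.BeaconState) -> BeaconState:
    epoch = phase0.get_current_epoch(pre)
    post = BeaconState(
        # ... all unchanged phase 0 fields copied across ...
        fork=Fork(
            previous_version=pre.fork.current_version,
            current_version=ALTAIR_FORK_VERSION,
            epoch=epoch,
        ),
        # the attestation lists are replaced by empty participation flags (see PR #2373 for a proposed migration)
        previous_epoch_participation=[ParticipationFlags(0) for _ in range(len(pre.validators))],
        current_epoch_participation=[ParticipationFlags(0) for _ in range(len(pre.validators))],
        # inactivity scores start at zero for every validator
        inactivity_scores=[uint64(0) for _ in range(len(pre.validators))],
    )
    # the first sync committees are computed from the post-state at the fork epoch
    post.current_sync_committee = get_sync_committee(post, get_current_epoch(post))
    post.next_sync_committee = get_sync_committee(post, get_current_epoch(post) + EPOCHS_PER_SYNC_COMMITTEE_PERIOD)
    return post
```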
## Networking
A new field is added to our metadata; it contains our syncnets bitvector, so as to broadcast which sync subnets we are subscribed to.
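A sketch of the extended metadata container, assuming the bitvector types from the p2p spec:
```py
class MetaData(Container):
    seq_number: uint64
    attnets: Bitvector[ATTESTATION_SUBNET_COUNT]
    syncnets: Bitvector[SYNC_COMMITTEE_SUBNET_COUNT]  # new in Altair
```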
Gossipsub topics will be namespaced by their relevant fork digest, so that messages meant for pre- or post-fork will not clash with each other, as their validation pipelines can now be separate. The 3 gossip topics affected by the fork are:
- Beacon block
- Sync committee contribution and proof (new)
- Sync committee subnets (new)
The first one will require a bit of separation for the relevant gossip handler to version between forks. The key issue is handling the different block data structures pre- and post-fork.
The latter two are newly added gossip topics; they are used to propagate the sync committee messages being introduced in Altair. The mechanics pretty much mirror how attestations are gossiped and packaged into aggregates. Selected validators are expected to stay in their particular subnets for an extended period of time. Given that this isn't consensus critical and is basically meant to improve light client support, DOS protection mechanisms are minimal.
For the transition from phase 0 to Altair, the beacon node will have to keep both gossip pipelines 'open' for a while (2 epochs). Penalties need to be appropriately applied to peers spamming data on the wrong pipelines.
Req/Resp
In Altair, Req/Resp methods are now versioned by fork digests. A new context-bytes object is added to the response payload. For our v1 methods this can be represented as 'empty'; however, for all v2 block/metadata RPC methods it is used to distinguish between forks, giving us the ability to utilize new data structures for our common types.
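A rough sketch of how the context bytes could be chosen for a v2 blocks-by-range response (the helper name here is illustrative, not from the spec):
```py
def context_bytes_for_block(block_epoch: Epoch, genesis_validators_root: Root) -> ForkDigest:
    # v2 methods prefix each response chunk with the fork digest of the epoch the payload belongs to,
    # which tells the reader whether to decode a phase 0 or an Altair block
    fork_version = ALTAIR_FORK_VERSION if block_epoch >= ALTAIR_FORK_EPOCH else GENESIS_FORK_VERSION
    return compute_fork_digest(fork_version, genesis_validators_root)
```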
## Honest Validator Changes
Important Constants:
- Sync Subnets -> 4
- Aggregators Per Subnet -> 4
New Data Structures Introduced (sketched after the list):
- SyncCommitteeSignature
- SyncCommitteeContribution
- ContributionAndProof
- SignedContributionAndProof
- SyncAggregatorSelectionData
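A sketch of these containers, paraphrasing the spec draft (field names and types may still shift before the spec is finalised):
```py
class SyncCommitteeSignature(Container):
    slot: Slot
    beacon_block_root: Root
    validator_index: ValidatorIndex
    signature: BLSSignature

class SyncCommitteeContribution(Container):
    slot: Slot
    beacon_block_root: Root
    subcommittee_index: uint64
    # one bit per member of the subcommittee this contribution covers
    aggregation_bits: Bitvector[SYNC_COMMITTEE_SIZE // SYNC_COMMITTEE_SUBNET_COUNT]
    signature: BLSSignature

class ContributionAndProof(Container):
    aggregator_index: ValidatorIndex
    contribution: SyncCommitteeContribution
    selection_proof: BLSSignature

class SignedContributionAndProof(Container):
    message: ContributionAndProof
    signature: BLSSignature

class SyncAggregatorSelectionData(Container):
    slot: Slot
    subcommittee_index: uint64
```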