# Cross-session availability
**GOAL: We no longer clear the availability cores when a new session begins. We maintain the availability timeout, taking into account the blocks elapsed since the height of the original block, regardless of whether or not they are part of the same session.**
## Context
### Session changes
Generally the entire protocol configuration can change on a new session. More importantly for our purposes:
- `parasAvailabilityPeriod` (currently 10 on polkadot): the number of blocks that availability can take before the candidate is dropped.
- validators may change (including their number and identities)
- number of cores may change, as well as their assignments
- paras may be offboarded
- validators are shuffled (canonical shuffling) and split up into backing groups (contiguous sequences per core). These backing groups are periodically rotated across cores (every 10 blocks on polkadot), counting from the first block of the session. The rotation does not matter for our purposes because it only concerns backing groups: regardless of the backing group assigned to a core, all validators participate in availability. The only thing that changes on rotation is which validators to fetch availability chunks from during distribution.
### Bitfield structure
A bitfield is signed by a validator and tied to a relay parent and session index where the candidate is pending availability. It has one bit for each core; a 1 means the validator possesses the chunk associated with the candidate occupying that core at that relay parent. Important: bitfields are per leaf, so if a candidate takes multiple blocks to reach inclusion, we will distribute multiple bitfields for it.
The bitfield distributed over the network and included in the parachain inherent also contains the validator index (so that the signature can be checked).
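A minimal sketch of the shapes involved (simplified stand-ins for the real polkadot-primitives types; the field names and concrete types here are illustrative only, e.g. the real payload is a compact bitvec and the signature a validator signature over the payload concatenated with the `SigningContext` introduced later in this document):
```
// Illustrative stand-ins, not the actual primitives.
struct AvailabilityBitfield(Vec<bool>); // one bit per availability core

struct SignedAvailabilityBitfield {
    // bit i == true means "I hold my chunk for the candidate occupying core i
    // at the relay parent this bitfield refers to".
    payload: AvailabilityBitfield,
    // Included so the signature can be checked against the session's validator set.
    validator_index: u32,
    // Signature over the payload concatenated with the signing context.
    signature: Vec<u8>,
}
```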
### Going from backing to inclusion
A new leaf comes in where a candidate is backed on-chain. Inclusion process kicks off:
1. Availability distribution starts. Every validator requests its chunk from the backing validators.
2. After a while, bitfield-signing inspects which chunks it retrieved and signs a bitfield attesting its data.
3. Bitfield-distribution tries to ensure that all validators have all of the bitfields (via gossip over the 2D topology). These are then fed into the provisioner (one bitfield per validator per active leaf).
4. The provisioner bundles them up (choosing the best ones if there are multiple from a validator) and sends them to the block authoring logic.
5. When building the block, the runtime processes the inherent data and checks whether the 2/3 threshold is reached. If it is, it frees the core, which can immediately be occupied again by a new candidate that is part of the inherent data.
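The 2/3 check in step 5 is a supermajority threshold over the active validator set. A rough sketch of that check (treat this as illustrative rather than the actual runtime code):
```
/// Smallest number of validators that constitutes a 2/3 supermajority.
fn availability_threshold(n_validators: usize) -> usize {
    n_validators - (n_validators - 1) / 3
}

/// A core is freed once at least the threshold number of validators signalled
/// (via their bitfields) that they hold their chunk for the occupying candidate.
fn core_is_available(chunk_holders: usize, n_validators: usize) -> bool {
    chunk_holders >= availability_threshold(n_validators)
}

fn main() {
    // With 10 validators, 7 "I have my chunk" bits free the core; 6 do not.
    assert!(core_is_available(7, 10));
    assert!(!core_is_available(6, 10));
}
```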
# Proposed Solution
## Possibilities
Simply not dropping the candidates on a new session in the runtime is not enough.
It would only help if availability distribution had already taken longer than one block; for candidates that are backed in the last block of the session, it hasn't.
At the moment, availability-distribution and bitfield-distribution start when seeing an occupied core on a leaf. So if we back a candidate in the last block of the session, it will immediately get freed (therefore nobody will fetch any chunks or sign and distribute any bitfields).
Since we require validator participation, we need to determine which validators will participate in the availability process of the candidates that remain in the inclusion pipeline across a session change.
Options are:
1. old validators are the ones participating in availability distr and bitfield distr.
- Pro: During disputes, unupgraded validators will be able to recover candidates.
- Pro: Collators will also be able to recover candidates.
- Pro: Chunk and bitfield distribution effort of the previous validators is not lost.
- Con: Unupgraded validators will no-show during approval voting (but will not dispute). They will wrongly assume that the new session's validators hold the chunks, while the chunks are actually distributed according to the old session's canonical shuffle, resulting in an inability to recover.
- Con: The validator set may change, and if old validators no longer participate, availability could stall anyway (it is, however, a reasonable assumption that the validators will still be online for one more block).
2. new validators are the ones participating in availability distr and bitfield distr.
- Pro: We don't care whether the old validators are still around (besides the backers which have the original data).
- Pro: Unupgraded validators will still be able to participate in approval-voting (even if the validator count decreases), because approval voting assumes the inclusion session index for recovery.
- Pro: This would enable availability in the new session to take up as many blocks as we want (parasAvailabilityPeriod), without complicating the implementation.
- Con: This breaks collator pov recovery of unupgraded nodes.
- Con: This breaks av-recovery during disputes on unupgraded nodes (they will not be able to participate).
- Con: the session_index in the descriptor loses its meaning
- Con: Already performed distribution effort of the previous validators is lost (in the case of a candidate being pending availability for multiple blocks).
3. mix, combining bitfields from old and new validators (would be total hell)
- Con: the recovery logic would be needlessly complicated if using chunk recovery - need a separate mechanism for determining the index of the validator to fetch the chunk from.
- Pro: Distribution effort of the previous validators is not lost.
- Pro: We don't care whether the old validators are still around (besides the backers which have the original data).
**Option 1** is the most sane, preserving most of the current behaviour and enabling unupgraded validators to participate in disputes and unupgraded collators to still recover blocks. Therefore, this is the assumption the rest of the solution makes.
## Coordination Requirement
This feature requires coordinated enablement via a node feature flag. The reason is that approval voting must use the **backing** session to recover PoV data
(which is currently not the case). Approval voting uses the inclusion session and assumes that it's the same as the backing one. If the runtime enables cross-session availability before a supermajority of nodes is upgraded, finality will stall (due to no-shows).
Therefore:
1. Approval-voting fix must be deployed first and reach validator supermajority (switching to using the correct session index during approval-voting).
2. Once a node feature flag signals readiness, the runtime can begin allowing cross-session availability.
## Assumptions
- we assume the validators from the previous session are still online and participate in the availability and bitfield distribution processes. To achieve full availability, at least one backer and 2/3 of the validators need to be reachable (if they are not, they will delay the core getting freed, causing subsequent blocks to be lost)
- for cross-session availability, we don't take into account the `parasAvailabilityPeriod`. Candidates will only have one more block to reach availability. This enables a key simplification: the bitfields on the first block of the next session are all signed in the context of the previous session.
- we assume the core that is still pending availability still exists (we don't care whether it's Idle, Task or Pool), as this is just one block where backing would not be possible. If it no longer exists, the core will inevitably be freed. Same if the paraid is no longer registered.
- we assume that the `HostConfiguration` does not change in order for cross-session availability to continue. Otherwise, we risk trying to enact candidates which are no longer valid in the new session's configuration. Instead of trying to perform a resolution between the new config and the candidates, it is much easier to just evict the core if there was a configuration change on the new session. `HostConfiguration` changes are very rare so impact will be negligible.
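To make the eviction conditions from these assumptions concrete, here is a hypothetical sketch of the keep-or-evict decision the runtime could make at a session boundary (all names and types are illustrative, not the actual pallet code):
```
// Illustrative only: what changed at the session boundary, as far as a
// pending-availability candidate is concerned.
struct SessionDelta {
    core_still_exists: bool,     // the occupied core index still exists in the new session
    para_still_registered: bool, // the para was not offboarded
    host_config_changed: bool,   // any `HostConfiguration` change at the boundary
}

/// A candidate pending availability survives the session change only if
/// nothing relevant changed; otherwise the core is evicted (cascading to
/// chained descendants when using elastic scaling).
fn keep_pending_availability(delta: &SessionDelta) -> bool {
    delta.core_still_exists && delta.para_still_registered && !delta.host_config_changed
}

fn main() {
    let unchanged = SessionDelta {
        core_still_exists: true,
        para_still_registered: true,
        host_config_changed: false,
    };
    assert!(keep_pending_availability(&unchanged));
}
```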
### Runtime Changes
1. Currently, backed candidates are dropped at the end of the last block in a session if they still occupy cores. If using elastic scaling, all subsequent descendant candidates would also be dropped.
This logic needs to be relaxed. Only drop candidates if the cores will no longer exist in the next session, the para is offboarded, or there is any `HostConfiguration` change (cascading to the other chained candidates).
2. Currently, no new candidates are backed in the last block of the session (precisely to avoid dropped candidates).
This logic needs to be removed.
3. Validator rewards currently cover two behaviors: backing votes and dispute votes. For dispute votes, the correct session index is already used for the calculation (the one that is part of the statement). For backing votes, however, the inclusion session index is assumed, so we would end up rewarding the wrong validators. The `RewardValidators` trait definition and its default implementation need to also accept the session index of the relay parent of the enacted candidate and take it into account when deciding which validators to reward (just as we do for dispute statements).
4. **Disputes `Included` storage fix**: The `Included` storage item of the disputes runtime module stores by the (session, candidate_hash) key, the block height at which we would have to revert if this dispute concluded invalid.
When processing the dispute, we use the session index as passed in the statement set in the inherent (which is the backing session). However, when the candidate is included, `note_included` is currently called with the inclusion session (which may be different). The session passed into `note_included` must be the backing session where the candidate was authored. Without this fix, disputes would fail to find the candidate and the revert mechanism would not work.
5. Change bitfield processing in the exceptional case where cores are occupied by v2 candidates from the past session which have not yet reached the availability threshold, but only in the very first block of the new session. (Note: this is not all cores that are still occupied, because with elastic scaling a candidate may have reached availability while its parent has not):
A signed bitfield is only valid within the `SigningContext` with which it was signed, because the signature is performed over a payload which concatenates the `SigningContext` to the actual bitfield.
```
/// A type returned by runtime with current session index and a parent hash.
#[derive(Clone, Eq, PartialEq, Default, Decode, Encode, Debug)]
pub struct SigningContext<H = Hash> {
    /// Current session index.
    pub session_index: sp_staking::SessionIndex,
    /// Hash of the parent.
    pub parent_hash: H,
}
```
The `parent_hash` is the currently active leaf that we're authoring on top of (it cannot be an older one).
At the moment, if we were to pass along bitfields from the old session, they would be dropped. However, the runtime does allow bitfields to be supplied in the very first block of the session, even though they are completely useless there (and the node does not prevent this either).
Because we will only allow availability to continue for one more block in the new session, we can safely assume that all of the bitfields sent in the first block are signed in the context of the previous session.
This will preserve **backwards compatibility** because:
- old validators authoring blocks with a new runtime: the new runtime will assume the old session index. The useless bitfields will be dropped as invalid.
- new validators authoring blocks with an old runtime: the old runtime will assume the new session index. The useful bitfields will be dropped as invalid and the cores cleared.
The information of the previous session is readily available in the `session_info` pallet (for the past 6 sessions, for dispute purposes). Reuse that.
In addition, the `availability_threshold` needs to be computed according to the old session's validator count.
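A rough sketch of this selection, under the assumption that cross-session bitfields are only accepted in the very first block of a session (names, shapes and the `session_info` lookup are simplified stand-ins, not the actual pallet code):
```
// Illustrative only.
struct SessionData {
    index: u32,
    n_validators: usize,
}

fn availability_threshold(n_validators: usize) -> usize {
    n_validators - (n_validators - 1) / 3
}

/// Which session index and availability threshold to verify bitfields against.
/// Only in the very first block of a session, and only if cores are still
/// occupied by candidates from the previous session, are bitfields interpreted
/// in the previous session's `SigningContext`.
fn bitfield_verification_params(
    is_first_block_of_session: bool,
    cores_pending_from_prev_session: bool,
    current: &SessionData,
    previous: &SessionData, // taken from the `session_info` pallet
) -> (u32, usize) {
    if is_first_block_of_session && cores_pending_from_prev_session {
        (previous.index, availability_threshold(previous.n_validators))
    } else {
        (current.index, availability_threshold(current.n_validators))
    }
}
```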
### Node Changes
**Availability-distribution**
On every new leaf, we request our chunks for this leaf plus 3 more ancestors (within the same session).
This needs to be relaxed to also allow retrieving chunks for ancestors outside the session.
When requesting a chunk, we currently assume that the session of the relay parent is the same as the leaf's session (this will no longer hold):
```
// We use leaf here, the relay_parent must be in the same session as
// the leaf. This is guaranteed by runtime which ensures that cores are
// cleared at session boundaries. At the same time, only leaves are
// guaranteed to be fetchable by the state trie.
```
This needs to be relaxed: try to get the session of the relay parent itself and bail out if that fails.
On leaf deactivation, we don't cancel availability-distribution unless the candidate is no longer pending availability under any leaf (this is fine, since it will still be pending availability).
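A hypothetical sketch of the relaxed lookup (`session_index_for_block` stands in for the real runtime API / session cache query; everything here is illustrative):
```
type Hash = [u8; 32];

// Stand-in for the runtime API / session cache query, which may fail for
// blocks whose state is no longer accessible.
fn session_index_for_block(_hash: Hash) -> Option<u32> {
    Some(42)
}

/// Which session to use when fetching our chunk for a candidate whose relay
/// parent may lie in a previous session. Returns None (bail out) if the
/// session cannot be determined.
fn chunk_fetch_session(leaf: Hash, relay_parent: Hash) -> Option<u32> {
    let leaf_session = session_index_for_block(leaf)?;
    // Previously the leaf's session was assumed for the relay parent; now look
    // it up directly instead.
    let parent_session = session_index_for_block(relay_parent)?;
    // Across a session boundary these can differ; chunk indices then map to
    // the old session's validator set.
    debug_assert!(parent_session <= leaf_session);
    Some(parent_session)
}
```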
**Bitfield-signing**
Bitfields are only accepted if they are on the relay parent that is being built upon (latest leaf).
At present, bitfield signing does not do any work if the validator is no longer in the active set on a particular leaf.
Add a special case for when we are no longer active: if there is still a candidate pending availability at this leaf that is from a previous session where we used to be active, sign a bitfield with our old validator index and the `SigningContext` of the old session (but with the current leaf as parent_hash).
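A hypothetical sketch of that decision (types are simplified; `SigningContext` mirrors the struct shown earlier, and all other names are illustrative):
```
type Hash = [u8; 32];

struct SigningContext {
    session_index: u32,
    parent_hash: Hash,
}

struct SigningDecision {
    validator_index: u32,
    context: SigningContext,
}

/// Decide whether (and in which context) to sign a bitfield for `leaf`.
fn bitfield_signing_decision(
    leaf: Hash,
    current_session: u32,
    our_index_in_current: Option<u32>,
    our_index_in_previous: Option<u32>,
    pending_from_previous_session: bool,
) -> Option<SigningDecision> {
    if let Some(validator_index) = our_index_in_current {
        // Normal case: we are active, sign in the current session's context.
        return Some(SigningDecision {
            validator_index,
            context: SigningContext { session_index: current_session, parent_hash: leaf },
        });
    }
    // Special case: not active anymore, but a candidate from the previous
    // session (where we were active) is still pending availability at this leaf.
    if pending_from_previous_session {
        if let Some(validator_index) = our_index_in_previous {
            return Some(SigningDecision {
                validator_index, // our old validator index
                context: SigningContext {
                    session_index: current_session - 1, // old session index
                    parent_hash: leaf,                  // but the current leaf
                },
            });
        }
    }
    None
}
```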
**Bitfield-distribution**
We will maintain the distribution based on the old topology and validators and assume that the new author will receive it (by being part of the old topology).
A validator that is no longer part of the new session will still participate in distribution if the bitfield has a bit set for a core that is pending availability across the session boundary (and if it used to be active in that session).
Note: The current bitfield-distribution code filters bitfields by session at line 817 of lib.rs:
`.filter(|job_data| job_data.signing_context.session_index == current_session_index)`
This filter must be changed to only allow bitfields from the previous session on the very first block of a session (see the sketch after this list).
Bitfield distribution will only happen based on one topology at a time (without increasing the number of messages):
- for the first block of the new session, the topology is that of the old session
- for the rest of the session, the new topology is in effect
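A hypothetical sketch of the relaxed acceptance predicate (illustrative only; in the real code this would replace the strict equality filter quoted above):
```
/// A bitfield is accepted if it was signed in the current session, or, only on
/// the very first block of a session, in the immediately previous session.
fn accept_bitfield_session(
    bitfield_session: u32,
    current_session: u32,
    is_first_block_of_session: bool,
) -> bool {
    bitfield_session == current_session
        || (is_first_block_of_session && bitfield_session + 1 == current_session)
}
```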
**Provisioner**
Needs to be modified so that it:
- does not send bitfields signed in the new session's context for the first block of the session
- only stores and appends bitfields of the old session to the proposed inherent data for that block.
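A small illustrative sketch of the provisioner-side selection (simplified types; not the actual subsystem code):
```
// Illustrative only.
struct Bitfield {
    session_index: u32,
    // payload, signature and validator index elided
}

/// For the first block of a session, only bitfields signed in the previous
/// session are provisioned; for all other blocks, only current-session ones.
fn select_bitfields(all: Vec<Bitfield>, current_session: u32, first_block: bool) -> Vec<Bitfield> {
    let wanted = if first_block { current_session - 1 } else { current_session };
    all.into_iter().filter(|b| b.session_index == wanted).collect()
}
```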
**Approval-voting and approval-distribution**
There are numerous places in these subsystems' code that assume a candidate's `SessionIndex` is the same as the `SessionIndex` of the relay chain block in whose context approvals are performed (the inclusion block).
One of the most important ones is availability-recovery: it would use the wrong session index, fail to recover, and finality would stall.
The candidate session index is stored in the approval-voting DB under `CandidateEntry`. This needs to be changed to store the session index of the candidate's relay parent. Given that the node feature will only be enabled once 2/3 of validators have upgraded, it's safe to assume that there is no need for a DB migration for the already-stored candidate entries. Sufficient time should have passed between the node upgrade and the node feature enablement.
Moreover, the subsystem must not use the session index of the inclusion block for things like validator indices. Data that is coming from the `HostConfiguration` is however fine to use, as configuration changes will cause the runtime to forbid backing/including candidates built on the old session. Check all subsystem usage of session information.
Same applies for the `approval-distribution`.
Here as well, great simplicity comes from the fact that you will never have pending/included candidates from two different sessions at the same time.
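A hypothetical sketch of the fix in terms of data and lookup (the field names here are illustrative, not the actual `CandidateEntry` layout):
```
// Illustrative only.
struct CandidateEntry {
    backing_session: u32,   // session of the candidate's relay parent (what must be stored)
    inclusion_session: u32, // session of the block that included the candidate
}

/// Session index to use when issuing an availability-recovery request.
/// With cross-session availability these can differ by one; the chunks are
/// held and indexed by the *backing* session's validator set.
fn recovery_session(entry: &CandidateEntry) -> u32 {
    entry.backing_session
}
```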
**Prospective-parachains**
Prospective-parachains should handle the possibility of candidates still pending availability from the old session just fine. There shouldn't be a need for any changes, because the runtime will already evict the candidates pending availability if there were any session change, so it should not break the backing constraints in the new session.
**Dispute-coordinator**
The usage of the session index in this subsystem is quite complex. It generally looks like the session index used for recovery is the one where the candidate was backed.
Could use a double-check from disputes expert @eskimor.
## Testing plan
Besides the usual unit tests for all changes, develop a malus variant which delays availability bitfield distribution near session changes, so that we can artificially have blocks which become included in the next session.
### Testing various availability delays
- block being delayed for multiple blocks in the previous session:
- becoming available in the first block of the new session
- not becoming available in the first block, check it's evicted
- block being backed in the last block of the previous session:
- becoming available in the first block of the new session
- not becoming available in the first block, check it's evicted
Also check that the correct validators are receiving the backing rewards (which are delivered at inclusion time).
### Backwards compatibility, approval voting and disputes
Having availability being delayed for the last candidates in a session, check that unupgraded nodes:
- will "no show" during availability but finality will not stall if they are in superminority (and disputes will not be raised). Even if the number of validator changes.
- can still participate in disputes (check that, in fact, all nodes, old or new, participate). We need the malus node to also raise a dispute on this particular candidate.
## Implementation and rollout plan
### Phase 1 - Prototype
The potential blast radius of these changes is enormous and security-sensitive.
In addition, the cognitive complexity of the approvals and disputes subsystems is enormous, making it difficult to achieve confidence in the required changes based only on code analysis.
Therefore, a prototype of all the changes is needed first, so that we can properly test the approval-voting and dispute subsystems.
This is to avoid underestimating the amount of changes that need to be gated by the node feature enablement. Otherwise, we may deploy a half-baked approval-voting fix and would later have to reset our node upgrade efforts (by increasing the minimum required version for enabling the node feature).
### Phase 2
1. Release the approval-voting fix (and any other fixes discovered through the prototype).
2. Implement the runtime changes (gated by the node feature).
3. Continue with the implementation. In the meantime, nodes are upgraded with the approval-voting fix and the fellowship runtime changes are enacted.
4. Once sufficient nodes are upgraded with the approval-voting fix and sufficient testing has been performed, enable the node feature.
5. Once sufficient nodes are upgraded with the full code (not just the approval-voting fix), cross-session availability will naturally start working.