# `pallet-stake-tracker` weight overhead
Related to https://github.com/paritytech/polkadot-sdk/pull/1933
We want to keep a bags-list of voters (nominators) sorted by stake and a bags-list of targets (validators) sorted by approval stake (i.e. `self-stake + ∑ nominated_stake`) on-chain. This incurs costs associated with updating the score of the voter/target when new nominations, rewards, slashes, etc. happen, and with ensuring that the voter/target node ends up in the correct bag (i.e. keeping the lists sorted).
**TL;DR:**
- Keeping the voters list sorted is negligible compared to the overhead of keeping the targets list sorted by approval stake (when the worst case scenario is considered).
- Keeping the target list sorted at reward payout adds considerable overhead compared to the status quo; however, with paginated rewards the costs can be spread across multiple blocks.
- Mass slashing events will overload the block.
- In the context of the Staking parachain, slashing needs to be multi-block, and the exposure page size needs to account for the costs of updating the target and voter lists in the worst case scenario.
- For the worst case scenario (a full page of 512 exposures, 16 nominations for every exposed nominator, and *all* stake/approval updates requiring a rebag), keeping the target list sorted at (paged) payout requires a `ref_time` of ~250% of Polkadot's `max_block.ref_time`, while the `proof_size` impact is negligible. (*Note, however, that the worst case scenario is very pessimistic: most updates should not require a rebag in real-world scenarios, AND the average exposure in Polkadot is currently 75 nominators per validator.*)
## Complexity by operations
### Single events
**add nominator**: `O(1) + O(MaxNominations) ~ O(MaxNominations)`
- add voter node from bags list
- updates up to `MaxNominations` target scores (may rebag)
**remove/chill nominator**: `O(1) + O(MaxNominations) ~ O(MaxNominations)`
- remove voter node from bags list
- updates up to `MaxNominations` target scores (may rebag)
**nominate**: `O(MaxNominations)`
- updates up to `MaxNominations` target scores (may rebag)
**add/remove validator**: `O(1)`
- add/remove target node from bags list
**single stake-update**
- if nominator: `O(1) + O(MaxNominations) ~ O(MaxNominations)`
- updates the voter score (may rebag)
- updates up to `MaxNominations` target scores (may rebag)
- if validator: `O(1)`
- updates the target score (may rebag)
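The single-event pattern above can be sketched with a toy model. A real bags-list groups accounts into bags and may rebag a node whose score crosses a bag threshold; a `BTreeMap` stands in here, but the update pattern (1 voter score + up to `MaxNominations` target scores) is the same. All names are illustrative, not the pallet's actual API:

```rust
use std::collections::BTreeMap;

// Toy model of a single stake-update for a nominator (names are hypothetical).
fn on_nominator_stake_update(
    voter_scores: &mut BTreeMap<u32, u64>,
    target_scores: &mut BTreeMap<u32, u64>,
    who: u32,
    old_stake: u64,
    new_stake: u64,
    nominations: &[u32],
) {
    // O(1): update the voter's own score (on-chain this may trigger a rebag).
    voter_scores.insert(who, new_stake);
    // O(MaxNominations): adjust the approval stake of every nominated target
    // (each adjustment may trigger a rebag in the target list).
    for target in nominations {
        let approval = target_scores.entry(*target).or_insert(0);
        *approval = approval.saturating_sub(old_stake).saturating_add(new_stake);
    }
}

fn main() {
    let mut voters = BTreeMap::from([(1, 100)]);
    let mut targets = BTreeMap::from([(10, 100), (11, 100)]);
    // nominator 1 bumps its stake from 100 to 150, with 2 nominations
    on_nominator_stake_update(&mut voters, &mut targets, 1, 100, 150, &[10, 11]);
    println!("{voters:?} {targets:?}");
}
```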
### Combined events
**(paged) payout**
`O(1) + O(exposures_per_page) * (O(1) + O(MaxNominations))`
- validator: updates the score of the target (may rebag)
- nominators in exposure page:
- updates `O(exposures_per_page)` voter scores (may rebag)
- updates up to `O(exposures_per_page * MaxNominations)` target scores (may rebag)
**slashes**
`O(1) + O(nominators_exposed_in_era) * (O(1) + O(MaxNominations)) ~ O(nominators_exposed_in_era) * O(MaxNominations)`
- updates the score of the target (may rebag)
- for each nominator exposed to the slashed validator: `O(1) + O(MaxNominations)`
- updates the voter score (may rebag)
- updates up to `MaxNominations` target scores (may rebag)
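To see why mass slashing events overload the block, a back-of-the-envelope count of the rebag operations implied by the formula above (the exposure sizes used below are hypothetical):

```rust
// Worst-case rebag count when slashing a validator: 1 target rebag for the
// slashed validator itself, plus (1 voter rebag + MaxNominations target
// rebags) for every nominator exposed to it.
fn slash_rebags(nominators_exposed: u64, max_nominations: u64) -> u64 {
    1 + nominators_exposed * (1 + max_nominations)
}

fn main() {
    // one slashed validator with 512 exposed nominators, 16 nominations each
    println!("{}", slash_rebags(512, 16)); // 8705 rebags
    // a hypothetical mass slash of 100 such validators in one era
    println!("{}", 100 * slash_rebags(512, 16)); // 870500 rebags
}
```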
## Weight overhead at `payout_stakers_by_page`
Considering the worst case scenario:
- full page payout (`max_exposure_page = 512`, i.e. 512 nominators to pay per page)
- max 16 nominations per nominator
- each update to the bags-list requires a `rebag_non_terminal` (i.e. the most expensive operation in a bags-list)
❓ What is the block weight % of a full paged payout in the worst case, considering only the weight overhead of keeping the bags-lists up to date?
Given
- `max_block_weight = Weight { ref_time: 2000000000000, proof_size: 18446744073709551615 }`
- `update_lists_weight = Weight { ref_time: 5046034432000, proof_size: 4594561073152 }`
This means that the `ref_time` required to update the lists in the worst case is ~250% of Polkadot's `max_block.ref_time`, while the `proof_size` impact is negligible.
Calculated with https://gist.github.com/gpestana/ac1269c64dae0ee3d136bb19c8984217
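The shape of that calculation can be reproduced with a short sketch. The DB weight constants below are rounded assumptions, not Polkadot's measured values, so the exact percentage differs from the ~250% the gist arrives at; the structure of the computation is the point:

```rust
// Sketch of the worst-case ref_time for one full payout page. DB_READ and
// DB_WRITE are assumed (rounded) per-operation costs in picoseconds;
// Polkadot's measured values differ, and with them the total lands at ~250%
// of max_block.ref_time rather than the figure printed here.
const REBAG_BASE: u64 = 55_123_000; // from `rebag_non_terminal` below
const DB_READ: u64 = 25_000_000; // assumption
const DB_WRITE: u64 = 100_000_000; // assumption
const MAX_BLOCK_REF_TIME: u64 = 2_000_000_000_000; // Polkadot max_block

// ref_time of one `rebag_non_terminal`, including its 7 reads and 5 writes.
fn rebag_ref_time() -> u64 {
    REBAG_BASE + 7 * DB_READ + 5 * DB_WRITE
}

// Worst-case rebags per page: 1 for the validator's target node, then
// (1 voter rebag + MaxNominations target rebags) per exposed nominator.
fn worst_case_rebags(exposures_per_page: u64, max_nominations: u64) -> u64 {
    1 + exposures_per_page * (1 + max_nominations)
}

fn main() {
    let rebags = worst_case_rebags(512, 16); // 8705
    let total = rebags * rebag_ref_time();
    println!(
        "{rebags} rebags, {total} ps = {}% of max_block.ref_time",
        100 * total / MAX_BLOCK_REF_TIME
    );
}
```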
---
```rust
// from Polkadot weights:
fn rebag_non_terminal() -> Weight {
    // Proof Size summary in bytes:
    //  Measured:  `1690`
    //  Estimated: `11506`
    // Minimum execution time: 52_919_000 picoseconds.
    Weight::from_parts(55_123_000, 0)
        .saturating_add(Weight::from_parts(0, 11506))
        .saturating_add(T::DbWeight::get().reads(7))
        .saturating_add(T::DbWeight::get().writes(5))
}
```
In Polkadot:
```
const system.blockWeights: FrameSystemLimitsBlockWeights
{
  // snip
  maxBlock: {
    refTime: 2,000,000,000,000
    proofSize: 18,446,744,073,709,551,615
  }
  // snip
}
```