## Slashing Invariant and Rule Definitions

### Purpose

This document outlines key invariants and properties of the core contracts as part of the Slashing upgrade in EigenLayer. It is primarily meant to serve as a guide to what could potentially be proved with the Certora Prover as part of the audit.

**Slashing Release**:
Release tag: [v1.3.0](https://github.com/Layr-Labs/eigenlayer-contracts/releases/tag/v1.3.0)
Commit: [722f3cb](https://github.com/Layr-Labs/eigenlayer-contracts/tree/722f3cbeb7721431f1a2a4a73582f7981212e23d)

---

### Helpful Definitions

1. WAD: `uint64 WAD = 1e18`
2. Magnitudes: Synonymous with proportions; this refers to both magnitude allocations and maxMagnitudes.
3. There are 3 types of shares:
   1. depositShares
      - These can be converted to an amount of tokens for a given strategy by calling `sharesToUnderlying` on the strategy address (they are already tokens in the case of EigenPods).
      - These live in the storage of the EigenPodManager and StrategyManager strategies.
   2. withdrawableShares
      - For a staker, this is the amount of shares that they can withdraw.
      - This does not live in storage but is read through the view function `DelegationManager.getWithdrawableShares`.
   3. operatorShares/delegatedShares
      - This is the sum of withdrawable shares of all stakers that have delegated to an operator.
      - This lives in the storage of `DelegationManager.operatorShares`.

---

### Invariant Properties

#### AllocationManager

1. An AVS can only slash an Operator for a Strategy given:
   - The Strategy is added to the OperatorSet
   - `_isOperatorSlashable(operator, operatorSet)` returns true
   - There is a non-zero magnitude allocation for the OperatorSet
2. `maxMagnitude` is monotonically decreasing.

   For a given (Operator, Strategy), their maxMagnitude should be monotonically decreasing, with the default starting at WAD. This is stored in the storage variable below in `AllocationManagerStorage.sol` and should only decrease as a result of `AllocationManager.slashOperator`.

   ```solidity
   /// @dev Contains a history of the operator's maximum magnitude for a given strategy
   mapping(address operator => mapping(IStrategy strategy => Snapshots.DefaultWadHistory)) internal _maxMagnitudeHistory;
   ```

3. `getEncumberedMagnitude()` + `getAllocatableMagnitude()` = `getMaxMagnitude()`

   For a given (Operator, Strategy), their total encumbered amount plus whatever is freely allocatable should equal their current maxMagnitude. If the Operator has allocated everything, then encumberedMagnitude = maxMagnitude. encumberedMagnitude can also be defined as being bounded: `encumberedMagnitude` $\in [0, \text{maxMagnitude}]$. (See the sketch after this list.)
4. Snapshot keys are sorted in increasing order. That is, all snapshotted values are in ascending chronological order. Maybe overkill? We are using OpenZeppelin's Checkpoints library with minor modifications, unrelated to the keys, which are used for block numbers.
5. At most 1 pending allocation/deallocation can be queued for a given (Operator, OperatorSet, Strategy) tuple.
6. Deallocations and Operator deregistration still enable slashability for a `DEALLOCATION_DELAY` window of time (in blocks).
7. The mapping `deallocationQueue` is unbounded in size.

   While pending allocations/deallocations are limited to at most 1 pending for a given (Operator, OperatorSet, Strategy), an Operator can have an unbounded number of allocations/deallocations for a Strategy across different OperatorSets. This implies the mapping `deallocationQueue` can theoretically be unbounded in size, which is why the interface `clearDeallocationQueue()` exists.
8. A pending deallocation is not slashable on its `effectBlock` (no possible race condition) but is slashable on `effectBlock - 1`.
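As an illustration of invariant 3, below is a minimal Solidity sketch of how the magnitude accounting could be asserted against the view functions named above. The `IAllocationManagerView` interface, its parameter lists, and the contract/function names are assumptions made for readability and may not match the actual signatures in the release under audit.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.27;

// NOTE: hypothetical minimal interface; parameter lists are assumed here and
// may differ from the actual IAllocationManager in the release under audit.
interface IAllocationManagerView {
    function getMaxMagnitude(address operator, address strategy) external view returns (uint64);
    function getEncumberedMagnitude(address operator, address strategy) external view returns (uint64);
    function getAllocatableMagnitude(address operator, address strategy) external view returns (uint64);
}

/// @notice Sketch of a check for invariant 3: encumbered + allocatable == max,
/// which also implies encumberedMagnitude is bounded by [0, maxMagnitude].
contract MagnitudeAccountingCheck {
    IAllocationManagerView public immutable allocationManager;

    constructor(IAllocationManagerView _allocationManager) {
        allocationManager = _allocationManager;
    }

    function checkMagnitudeAccounting(address operator, address strategy) external view {
        uint64 maxMag = allocationManager.getMaxMagnitude(operator, strategy);
        uint64 encumbered = allocationManager.getEncumberedMagnitude(operator, strategy);
        uint64 allocatable = allocationManager.getAllocatableMagnitude(operator, strategy);

        // Total accounting: what is encumbered plus what is still allocatable
        // must equal the operator's current maxMagnitude for the strategy.
        require(uint256(encumbered) + uint256(allocatable) == uint256(maxMag), "accounting mismatch");

        // Boundedness follows directly from the equality above.
        require(encumbered <= maxMag, "encumbered exceeds maxMagnitude");
    }
}
```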
#### DelegationManager

1. depositScalingFactor is initialized as WAD and *only* increases on further deposits or delegation. It gets reset back to WAD on a full withdrawal where deposit shares are set to 0.

   Update: this is true conceptually but has been found to not always hold due to arithmetic rounding edge cases.
2. For a given (Staker, Strategy), withdrawableShares <= depositShares. These values can be read from the view function `getWithdrawableShares(staker, IStrategy[] memory strategies)`.
3. A queued withdrawal is slashable on the `slashableUntil` block but is no longer slashable on the block after, at which point it can be completed. See `DelegationManager._completeQueuedWithdrawal` for implementation details.
4. `_cumulativeScaledSharesHistory` is monotonically increasing. A proof is not required for this; it can be observed from the updates, which take the previous value and add to it.
5. For a given staker, all roots existing in `_stakerQueuedWithdrawalRoots` also exist in storage:
   - `queuedWithdrawals[withdrawalRoot]` returns the Withdrawal struct
   - `pendingWithdrawals[withdrawalRoot]` returns true

   Note: `pendingWithdrawals[withdrawalRoot]` being true does not necessarily imply the other properties on mainnet! Legacy queued withdrawals will not have pushed values to the mappings `_stakerQueuedWithdrawalRoots` and `queuedWithdrawals`.
6. depositScalingFactor gets updated for a staker on every added deposit, even if the staker is not delegated. It also gets reset to WAD when depositShares are set to 0. Added share deposits occur through:
   1. `increaseDelegatedShares`, called by deposits through the StrategyManager/EigenPodManager
   2. `delegateTo`
   3. `completeQueuedWithdrawals` (when completed as shares)

#### EigenPodManager

1. `BeaconChainSlashingFactor` is monotonically decreasing.

   Similar to maxMagnitudes for a given (Operator, Strategy), the BeaconChainSlashingFactor is initialized to WAD for a given staker and *only* decreases based on the proportional decreases in balance after EigenPods are Checkpointed. Note that the `BeaconChainSlashingFactor` is left unchanged if the EigenPod balance increases as a result of a completed Checkpoint. (A sketch of this update rule follows this list.)
2. `podOwnerDepositShares` can't go negative.

   Assuming the current balance is a positive int256 value in `podOwnerDepositShares`, there is no state transition resulting in this storage value turning negative.

   Note: On mainnet currently, it is possible for balances here to be negative, but this will be properly handled in a migration as part of the upgrade.
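To make the monotonic-decrease property of `BeaconChainSlashingFactor` concrete, below is a minimal sketch, not the EigenPod implementation, of a WAD-scaled factor that is left unchanged on balance increases and decreased proportionally on balance decreases. The library name, function name, parameter names, and the exact rounding of the update rule are assumptions for illustration only.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.27;

/// @notice Illustrative-only sketch of a beacon chain slashing factor update.
/// The factor starts at WAD (1e18) for a staker and can only ever decrease.
library BeaconChainSlashingFactorSketch {
    /// @param currentFactor   the staker's current factor, initialized to 1e18 (WAD)
    /// @param prevBalanceGwei pod balance at the previously completed checkpoint
    /// @param newBalanceGwei  pod balance at the newly completed checkpoint
    function nextFactor(
        uint64 currentFactor,
        uint64 prevBalanceGwei,
        uint64 newBalanceGwei
    ) internal pure returns (uint64) {
        // Balance increases (or an empty previous balance) leave the factor untouched.
        if (newBalanceGwei >= prevBalanceGwei || prevBalanceGwei == 0) {
            return currentFactor;
        }
        // Proportional decrease: newFactor = currentFactor * newBalance / prevBalance,
        // which is <= currentFactor, so the factor is monotonically decreasing.
        return uint64((uint256(currentFactor) * newBalanceGwei) / prevBalanceGwei);
    }
}
```

For example, under this rule a checkpointed drop from 32 ETH to 31 ETH would scale a WAD factor down to 0.96875e18, and a later balance increase back to 32 ETH would leave that reduced factor in place.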