# DSM 2.0
<img src="https://hackmd.io/_uploads/SJVniDCKp.png" style="border-radius: 8px">
## TL;DR
The proposal suggests implementing optimistic vetting of deposit data submitted by operators, migrating this function from governance to DSM; standardizing the vetting process by decoupling it from the architecture of modules with a strict `operator -> keys` hierarchy; and transitioning to a more decentralized Data Bus. The document addresses issues with the current solution, such as [vulnerability to key substitution during the vetting process](https://github.com/lidofinance/lido-dao/issues/141), and proposes ways to resolve them.
The document assumes the reader is already familiar with the current design of the [Deposit Security Module](https://docs.lido.fi/guides/deposit-security-manual), [the key submitting process](https://docs.lido.fi/guides/node-operators/validator-keys), and [key vetting](https://docs.lido.fi/guides/easy-track-guide/#node-operators-guide-to-easy-track).
## Motivation
### Vetting of Deposit data
With the imminent introduction of new permissionless modules, the question arises whether the current vetting of keys through the DAO is compatible with the permissionless model of these modules, as it requires explicit approval from governance for each operator.
Originally, the Lido DAO implemented the key vetting process through a staking limit, serving two purposes: establishing a stake volume limit on the operator and validating deposit data uploaded by operators. After the Lido V2 upgrade, the Staking Router received separate functionality for limiting stake on the operator – [the target limit](https://docs.lido.fi/contracts/node-operators-registry#node-operator-parameters), while the validation of uploaded deposit data remained under the staking limit parameter, which was renamed to vetted keys. Historically, this function remained under governance control, although it serves exclusively a security purpose and can be automated.
### Vetting Issue
The current deposit data vetting process has a vulnerability described in [the issue](https://github.com/lidofinance/lido-dao/issues/141). By exploiting it, a malicious node operator can replace keys that are about to be marked as vetted with invalid keys.
The problem becomes more acute with the appearance of permissionless modules. In the curated set, operators risk their reputation and offboarding from the set, while no such levers exist for permissionless operators. Thus, the current approach is not applicable to new modules and should be reconsidered.
### Modules with Various Hierarchies
In the current implementation, key validation in the Curated module is done using a cursor over the number of operator keys that have passed checks. With the upcoming introduction of new DVT-based modules, which may have a different hierarchy where each key can be associated with multiple operators, there is a need for a new, more universal key vetting solution not tied to the operator.
### Data Bus Decentralization
The current setup of offchain tools supports RabbitMQ and Kafka as data buses, both requiring a permissioned and likely centralized server. Transitioning to a decentralized data bus solution can enhance resilience and allow anyone to make deposits using signed messages from a publicly accessible data bus.
## First Principles
Based on the motivation, let's form principles on which the design changes in DSM will be built:
- The interface implemented by modules for making deposits through DSM should not be tied to the internal architecture of the module and should describe only the minimally necessary set of methods.
- Vetting, as a security function, should be automated to increase its robustness.
- It should not be possible to use key vetting to censor stake allocation to an operator.
- The proposed solution should not compromise security and should maintain the existing security assumptions.
## Proposed Design
### Vetting
In the current implementation of the Curated module, key vetting occurs through an EasyTrack motion, which the operator initiates after submitting keys. LDO holders can check the validity of keys or the presence of duplicates and object to the motion. Lido DAO contributors can check the keys using the Operators Widget Backend, which sends an alert to the dev team in case of any problems with the keys.
As part of the preparation for the release of the Simple DVT module, the latest version of the Council Daemon (part of DSM) added checks for the validity of key signatures and for the absence of duplicates among the keys. Thus, additional responsibility has fallen on DSM beyond protection from front-run attacks.
For the current iteration, it is proposed to completely transfer the key vetting function to DSM and eliminate the intermediate vetting procedure by governance. The Operators Widget Backend responsible for key validation and the Operators UI for key submission are proposed to remain unchanged. They will ensure that keys are checked before they are sent to the contract and act as an additional source of checks for keys already submitted to the contract.
The process of key vetting could be made completely trustless and performed onchain during deposit data submission by the operator. This requires signature validation, as well as making sure that the submitted public key does not have a duplicate in the same module or in other modules.
Signature validation requires support for [EIP-2537](https://eips.ethereum.org/EIPS/eip-2537) in the chain, and checking for duplicates requires a single registry of keys for all modules. The correctness of the uploaded deposit data could also be verified with zk proofs. It is suggested to postpone the implementation of trustless checks to a future iteration.
### Deposit Queue
Based on the first principles outlined above, the following minimal necessary solution is proposed; at its core, it utilizes a queue, with no requirements on the internal structure of the connected modules.
To be implemented on the module's side:
- A view method over the queue, accepting the maximum number of deposits and returning a list of deposit data.
- A method for notifying the module about deposited deposit data, to update the internal state within the module.
- A method for notifying the module about deposit data that did not pass the checks, to remove invalid keys from the queue.
Requirements for the queue:
- Idempotence. The queue must be idempotent for the same set of keys and their state. This requirement arises from the key checking process: after the keys are checked offchain and a deposit transaction is sent, the contract calls the same view method and verifies that the result is identical to what was signed on the offchain side (see the sketch after this list).
- The queue should accept the requested number of deposits and return deposit data in priority-sorted order. Thus, a repeated request for a smaller amount should return the first keys from the previous request.
- Deposit data should contain: pubkey, signature, amount, and may contain auxiliary information necessary for the module to identify the key (for example, the ID of the operator or cluster).
- It should be possible to view the entire queue, not necessarily in sorted order, to check all the keys.
- When the queue changes, the module should emit a unified event, to track changes from the offchain tooling side.
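To illustrate the idempotence requirement, below is a minimal sketch of the onchain check during a deposit transaction, assuming the `IStakingModule` interface proposed in the [Interfaces](#Interfaces) section (the helper name is illustrative):
```solidity
// Minimal sketch: the module's view must return exactly the deposit data
// that the guardians signed offchain, otherwise the deposit reverts.
function _checkDepositDataUnchanged(
    IStakingModule stakingModule,
    uint256 depositsCount,
    bytes calldata signedDepositCalldata
) internal view {
    bytes memory onchainData = stakingModule.getNextDepositData(
        depositsCount,
        signedDepositCalldata
    );
    require(
        keccak256(onchainData) == keccak256(signedDepositCalldata),
        "DEPOSIT_DATA_CHANGED"
    );
}
```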
#### Positive Scenario

- Each Council Daemon collects common data:
  - Previously deposited Lido keys
  - Lido Withdrawal Credentials
  - All previous deposit events from the Deposit contract; deposits with invalid signatures are dropped
- Each Council Daemon collects data for each of the Staking Modules:
  - Next keys to deposit
- Each Council Daemon checks:
  - The signature of the deposit message
  - The absence of duplicate public keys:
    - Within the list of the next keys to deposit for each module (may be deprecated in the future, after the appearance of MEB)
    - Between the list of the next keys to deposit and the previously deposited public keys of Lido
  - The absence of a front-run attack. For this, the Daemon verifies that the next public keys to deposit do not appear in deposit events or, if found, have Lido Withdrawal Credentials
- If all checks pass, the Council Daemon signs the hash of the deposit message (a hash sketch follows this list), containing:
  - Deposit root from the Deposit contract, representing the verified state of the Deposit contract
  - blockNumber and blockHash of the block on which the check was performed
  - Staking Module ID
  - List of deposit data to deposit
- Each Council Daemon sends messages to the Data Bus
- The Depositor Bot monitors the Data Bus and collects fresh deposit messages from it
- The Depositor Bot performs additional checks of the messages and network status, aggregates the messages, and sends a deposit transaction to the DSM contract
- The DSM contract performs onchain checks:
  - Guardian signatures and the presence of a quorum
  - Deposit root from the Deposit contract
  - The block of the previous deposit to the module, ensuring that a minimum distance between deposits is maintained
  - That the signed state belongs to the current chain (the signed blockHash matches the onchain blockHash for the signed blockNumber)
  - That the module's deposit data queue has not changed and matches the signed one
- The DSM contract calls the deposit method on the Lido contract
- The Lido contract calculates how much ETH can be directed to the module from the buffer in accordance with the stake distribution received from the Staking Router and calls the deposit method on the Staking Router, attaching the buffered ETH
- The Staking Router performs a deposit to the Deposit contract using the deposit data from calldata
- The Staking Router notifies the module of the list of deposited keys
- The module mutates its internal state, marking the keys as deposited
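A sketch of how the signed deposit message hash could be composed, assuming the same keccak256-based commitment scheme as the current DSM contract (the exact field encoding is an assumption):
```solidity
// Sketch: hash of the deposit (attest) message signed by the guardians.
// ATTEST_MESSAGE_PREFIX is the domain separator from the DSM contract.
function _calcAttestMessageHash(
    uint256 blockNumber,
    bytes32 blockHash,
    bytes32 depositRoot,
    uint256 stakingModuleId,
    bytes calldata depositCalldata
) internal view returns (bytes32) {
    return keccak256(abi.encodePacked(
        ATTEST_MESSAGE_PREFIX,
        blockNumber,
        blockHash,
        depositRoot,
        stakingModuleId,
        keccak256(depositCalldata) // commitment to the exact next deposit data
    ));
}
```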
Note ☝️: The proposed design revises the conditions for early exit, as `nonce` was previously used for this. This change does not appear to be a big problem for the following reasons:
- The `deposit_root` check remains ahead of the next deposit data check, ensuring an early exit when other deposits have landed in the Deposit contract first. This is the most common cause of reverts: at the time of writing, for the latest DSM contract (updated during the Lido V2 deploy, May 2, 2023), there were only 2 reverts due to `nonce` changes, versus 133 due to changes in `deposit_root` (https://dune.com/queries/3407237).
- Changes in the next keys to deposit are quite rare and require submitting or deleting keys within a certain range of the queue.
- The Depositor Bot checks the contract state before sending the deposit, so the keys would have to change while the transaction is already in the mempool, which greatly reduces the probability of this happening.
- The Depositor Bot sends deposit transactions through private mempools, which do not include reverted transactions in blocks, with a fallback to the public mempool.
#### Negative Scenarios
It is proposed that the Council Daemon has several processes that perform the following work:
- Checking the next keys for deposit and signing deposit messages
- Checking all keys in the queue and notifying the module about incorrect keys
- Checking previously deposited keys and pausing the module in case of an executed front-run attack (starting from Lido V2, the deposit pause is per-module, not global)
Keys in the deposit queue undergo the following checks:
**Signature Validation.** Deposit data consists of a public key and a signature over the deposit message. This message includes the withdrawal credentials, deposit amount, and domain. The Council Daemon reconstructs the expected message and checks that the signed message matches it. If one of the keys in the queue has an incorrect signature, the Council Daemon notifies the module of the problematic key and stops signing deposits for the module until resolved.
**Duplicate Check.** The Council Daemon checks for the absence of duplicate public keys within the entire deposit queue, as well as against previously deposited Lido keys from all modules. If duplicates are found, the Council Daemon notifies the module of the problematic keys and stops signing deposits for the module until resolved. The original key is considered to be the one with the earlier addition time and the smaller log index. To get the key addition time, it is suggested to use a unified addition event across all modules.
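The tie-break rule can be expressed as follows (the check itself runs offchain in the Council Daemon; the struct and function names are illustrative):
```solidity
// Sketch: the key whose unified addition event has the earlier block and
// the smaller log index is treated as the original; the rest are duplicates.
struct KeyAddedEvent {
    uint256 blockNumber;
    uint256 logIndex;
}

function _isOriginal(
    KeyAddedEvent memory a,
    KeyAddedEvent memory b
) internal pure returns (bool) {
    return a.blockNumber < b.blockNumber
        || (a.blockNumber == b.blockNumber && a.logIndex < b.logIndex);
}
```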

**Front-run Deposits.** The Council Daemon checks that the public keys in the queue have not been previously deposited directly through the Deposit Contract with different withdrawal credentials from Lido. In case previously deposited keys are found, the Council Daemon notifies the module of the problematic keys and stops signing deposits for the module until resolved.

Signatures from deposit events are validated, and invalid ones are rejected. Such deposits are ignored on the Consensus Layer side. Filtering out such deposits prevents censoring of the deposit queue by depositing 1 ETH (the minimum deposit size in the deposit contract) to a key of the attacked operator.
Deposit events containing deposits to Lido Withdrawal Credentials are ignored and do not block deposits to keys in the queue unless they have been previously deposited through Lido. Such a key can still be deposited to by Lido without any consequences. Once the validator is activated, the donated ETH will be skimmed to the withdrawal credentials contract.
#### Invalid Keys Reporting
In the event that one of the checks fails, the Council Daemon notifies the module about the invalid keys by sending a transaction to the DSM contract. The module must implement logic to exclude such keys from the deposit queue. In addition to the report transaction to the DSM, the Council Daemon sends a signed message to the general data bus, where anyone can use it to perform the transaction.
Unlike deposits, a quorum is not required to perform the transaction; any Council Daemon can execute it. This design is aimed at protecting against collusion among guardians. Thus, one honest actor is sufficient to exclude the invalid keys from the queue.
The Council Daemon should be able to aggregate problematic keys into batches by module and split large batches into several transactions to fit within block gas limits.
When invalidating keys, the module must account for the specifics of reporting keys in batches. For example, when using arrays, as done in the Curated module, deleting a key moves the last key into its place, which can lead to incorrect processing of subsequent batches. A good solution would be temporary invalidation of keys with subsequent manual cleanup by the operator.
In the event of invalid keys, each of the guardians will send a transaction to report them. To avoid unnecessary gas expenses, it is proposed to record the hash and block number of the report, allowing an early exit for subsequent identical transactions. Since invalid keys can be re-uploaded by the operator, it is proposed to invalidate reported entries after a certain number of blocks have elapsed.
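A minimal sketch of such deduplication in the DSM contract (names and the TTL value are assumptions):
```solidity
// Sketch: the first report is processed, identical follow-ups exit early,
// and records expire after REPORT_TTL blocks so that re-uploaded keys can
// be reported again.
mapping(bytes32 => uint256) internal _reportedAtBlock;
uint256 internal constant REPORT_TTL = 7200; // ~1 day, assumed value

function _isFreshReport(bytes32 reportHash) internal returns (bool) {
    uint256 reportedAt = _reportedAtBlock[reportHash];
    if (reportedAt != 0 && block.number - reportedAt < REPORT_TTL) {
        return false; // identical report already recorded recently
    }
    _reportedAtBlock[reportHash] = block.number;
    return true;
}
```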
#### Module pause
The emergence of new modules with permissionless entry, where the number of node operators is unlimited and the node operators themselves are unknown, imposes new constraints on the design of the module pause. Malicious behavior of one of the operators in such a module should not negatively affect the rest of the participants.
It is proposed to pause the module only in the case of a front-run that has already occurred, while an attempted theft of user ETH is mitigated by removing the keys from the deposit queue (see the previous section). Thus, protection against attempted theft becomes more targeted, directed at a specific operator rather than the entire module. The module pause remains as the next layer of defense and should trigger in the event of an executed front-run, which would mean a collusion of guardians or unforeseen circumstances. More details are outlined in the [Guardian Collusion section](#Guardian-Collusion).
#### Withdrawal Credentials Change
A separate scenario to consider is the change of Lido protocol withdrawal credentials. Although this would be a very rare operation, it is suggested that modules be prepared for this event and invalidate the entire queue of keys when the `onWithdrawalCredentialsChanged` method is called.
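One possible sketch on the module side, assuming a module-wide queue nonce that every queued batch is checked against (all names here are illustrative):
```solidity
// Sketch: bumping a module-wide nonce marks every queued batch as stale,
// since the queued deposit signatures commit to the old withdrawal credentials.
function onWithdrawalCredentialsChanged() external {
    require(msg.sender == STAKING_ROUTER, "AUTH");
    _queueNonce++;
    emit DepositQueueChanged(_queueNonce);
}
```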
#### Interfaces
Major changes to the interface of the DSM contract:
```solidity
contract DepositSecurityModule {
    // introduce a new type of message
    bytes32 public immutable REPORT_MESSAGE_PREFIX;
    bytes32 public immutable ATTEST_MESSAGE_PREFIX;
    bytes32 public immutable PAUSE_MESSAGE_PREFIX;

    // introduce types of unpassed deposit data checks
    enum UNPASSED_CHECK {
        signature,
        duplicate,
        frontrun
    }

    function depositBufferedEther(
        uint256 blockNumber,
        bytes32 blockHash,
        bytes32 depositRoot,
        uint256 stakingModuleId,
        // 1. remove nonce
        // 2. use the existing depositCalldata to pass the next deposit data
        bytes calldata depositCalldata,
        Signature[] calldata sortedGuardianSignatures
    ) external;

    function reportInvalidDepositData(
        uint256 blockNumber,
        bytes32 blockHash,
        bytes32 depositRoot,
        uint256 stakingModuleId,
        UNPASSED_CHECK unpassedCheck,
        bytes calldata depositCalldata,
        Signature memory sig
    ) external;
}
```
Major changes to the IStakingModule interface:
```solidity
interface IStakingModule {
    // deprecate nonce
    function getNonce() external view returns (uint256);

    // split obtainDepositData into two methods: view and report
    function getNextDepositData(
        uint256 maxDepositsCount,
        bytes calldata _depositCalldata
    ) external view returns (
        bytes memory depositCalldata
    );

    function reportDeposited(
        bytes calldata depositCalldata
    ) external;

    enum UNPASSED_CHECK {
        signature,
        duplicate,
        frontrun
    }

    // introduce a new method to report invalid keys to a module
    function reportInvalidDepositData(
        bytes calldata depositCalldata,
        UNPASSED_CHECK unpassedCheck
    ) external;

    // introduce a new method to get all deposit data in the queue
    // may return an unsorted list for gas optimization
    function getQueuedDepositData(
        uint256 maxDepositsCount,
        bytes32 offsetPointer // 0x00 for the start of the queue
    ) external view returns (
        bytes memory depositCalldata,
        bytes32 nextOffsetPointer // 0x00 if the end of the queue is reached
    );

    // TODO: events:
    // - Key added
    // - Queue updated
}
```
`depositCalldata` structure (a decoding sketch follows the lists):
- `32 bytes` - keys count
- Remaining bytes - a list of deposit data entries, 184 bytes each

Deposit data structure:
- `48 bytes` - pubkey
- `96 bytes` - signature
- `8 bytes` - deposit size in gwei (`uint64`)
- `32 bytes` - key metadata
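A decoding sketch for this layout (the function name is illustrative):
```solidity
// Sketch: decode the i-th fixed-size (48 + 96 + 8 + 32 = 184 bytes) entry.
function _decodeDepositData(bytes calldata depositCalldata, uint256 i)
    internal
    pure
    returns (
        bytes calldata pubkey,
        bytes calldata signature,
        uint64 amountGwei,
        bytes32 meta
    )
{
    uint256 offset = 32 + i * 184; // skip the 32-byte keys count
    pubkey = depositCalldata[offset:offset + 48];
    signature = depositCalldata[offset + 48:offset + 144];
    amountGwei = uint64(bytes8(depositCalldata[offset + 144:offset + 152]));
    meta = bytes32(depositCalldata[offset + 152:offset + 184]);
}
```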
#### Important Changes
Significant changes proposed in the new DSM design:
- Key vetting is now optimistic, eliminating the need for additional transactions for Curated, Simple DVT and Community Staking modules.
- DSM now takes on the role of checking and invalidating keys in the deposit queue. Upon detection of invalid keys, one of the Council Daemons notifies the module about them, and the module takes measures to exclude the key from the queue.
- Key vetting no longer relies on the presence of an operator entity in the module, which is the first step towards connecting modules with a different structure. It is important to note that other parts of the protocol, such as the Validator Exit Bus Oracle, still require the presence of operators in the module interface.
- Instead of `nonce`, which represented the state of the keys, the nearest keys for deposit are now signed. Thus, operations on keys beyond the nearest keys for deposit do not invalidate the signed deposit messages. `nonce` still remains in the connected modules, but is deprecated.
- The Staking Router uses the public key and signature signed by the guardians (passed through calldata) for the deposit, rather than obtaining them from the module. This way, the module cannot substitute the deposit signature or public key during the transaction. This scenario was partially described by Statemind during the Lido V2 audit ([Critical-02](https://github.com/lidofinance/audits/blob/main/Statemind%20Lido%20V2%20Audit%20Report%2004-23.pdf)).
- The module pause is now initiated by the guardians only upon detection of theft of user ETH, i.e., a front-run of keys in the past.
### Attack Mitigation
#### Incorrect Signatures
A node operator may submit a key with an incorrect signature (e.g., over a message with withdrawal credentials different from the Lido protocol's), which, if deposited, would lead to the loss of user ETH. This is mitigated by checking message signatures and, if an issue is detected, stopping the signing of deposit messages and reporting the invalid keys to the module.
#### Duplicates
Re-depositing to a key, even a correct one, disrupts the accounting within the Lido contract. In the current design of the Lido protocol, all repeat deposits to the same key are treated as transient balance (ETH sent to the deposit contract for which validators have not yet appeared on the CL). In this case, the funds will be skimmed and accounted for by the protocol as Consensus Layer rewards. This is mitigated by checking for duplicates in the deposit queue and, if found, stopping the signing of deposit messages and reporting the duplicate keys to the module. Duplicates are checked both across modules and against previously deposited keys.
#### Front-run
An operator might attempt to front-run the deposit transaction with their own deposit transaction but with different withdrawal credentials. In this case, the deposit transaction through DSM will be reverted due to a mismatch of `deposit_root` on the Deposit Contract, after which the Council Daemon will report the front-run key to the module, and it will be excluded from the queue. The more complex scenario of a guardian collusion is described below.
More about the vulnerability can be read here: [https://docs.lido.fi/guides/deposit-security-manual#the-vulnerability](https://docs.lido.fi/guides/deposit-security-manual#the-vulnerability)
#### Provocation of invalid keys report
A malicious operator may upload knowingly invalid keys to deplete the balance of the council daemons that report invalid keys and thereby stop deposits to the module.
For curated modules, the attack can be mitigated manually by deactivating such operators.
For permissionless modules, it is proposed to disincentivize this behavior by charging operators a fee for invalid key reports that exceeds the cost of the reporting transaction. It is assumed that the fee is charged from the operator's bond.
#### Censoring the stake allocation
With the emergence of permissionless modules, there is room for an attack aimed at censoring stake allocation to specific operators by submitting copies of their keys under a different identity.
Consider the scenario:
- Operator `A` from a curated module submits a large volume of valid keys.
- A malicious actor, aiming to prevent stake allocation to operator `A`, takes the first uploaded key of operator `A` and submits it under their own name, expecting DSM to report operator `A`'s keys as duplicates.
This attack is proposed to be mitigated by checking the submission time of the keys and reporting only the duplicate keys that were uploaded later. To avoid attempts to front-run key submission transactions, private mempools can be used.
A stricter mitigation could involve checking a message signed by the validator's private key, but its misuse involves additional security risks, so this option is not considered.
// TODO:
- The current design of the CSM assumes complete exclusion of an operator's keys from the queue in case of an invalid key report. We can envision a scenario where operator A loads 1000 keys into the CSM and occupies the queue. In this case, operator B may try to front-run the next transaction of operator A with the same key, expecting the DSM to report the duplicate to operator A. This attack is aimed at promoting the keys of operator B (or any other operator under the same malicious actor) by excluding a large number of operator A's keys. It looks more correct to exclude only the keys to the right of the reported key. Requires further discussion, as it affects the current design of the CSM.
- The front-run can be mitigated by freezing both operators and unlocking upon providing a signature through governance.
- Describe how we will mitigate in the event that front-run attempts begin to appear.
#### Guardian Collusion
With the emergence of permissionless modules, it is necessary to reconsider the collusion scenario, as permissionless modules and modules with FIFO queues can alter the attack patterns.
Consider a potential scenario:
- Guardians conspire in sufficient numbers to constitute a quorum to launch an attack
- Malicious guardians generate valid deposit data with Lido Withdrawal Credentials and upload it to a module with permissionless entry, locking a bond for each validator on the module's contract. In the worst-case scenario, when the module implements a FIFO queue, all attack keys will be deposited in order
- Malicious guardians withhold deposits to other modules by not sending signatures, thereby accumulating a buffer and moving the queue toward their keys (assuming the target limit per module allows holding this amount of stake)
- After waiting for the queue to reach the attack keys and for a large buffer volume, malicious guardians send prepared deposits of 1 ETH each with their own withdrawal credentials to the deposit contract
- After that, they sign the deposit messages and execute the deposit through the DSM contract
At least one honest guardian mitigates the potential attack by:
1. Sending a key invalidation transaction upon detecting an attempted front-run. This may not help if the attack is executed correctly and the transactions are bundled together
2. Pausing the module upon detecting a completed front-run
DSM has limits on the frequency of deposits and the number of deposits at a time. Current values: 25 blocks (~5 minutes) between transactions and 150 keys at a time, i.e., at most 150 × 32 = 4800 ETH per transaction at the current 32 ETH deposit size. Thus, at least one honest guardian has a sufficient time window to react, and the amount of funds that can be stolen in the case of guardian collusion is limited.
In summary: the attack through permissionless modules with a FIFO queue becomes easier than the attack through a curated module. At the same time, the attack becomes more expensive because a bond is required for each validator. The reaction window and the amount of funds under threat do not change.
// TODO:
- Consider reducing the number of keys deposited at a time. It is possible to have different limits for different modules
- Max will prepare a document that reviews parameters such as `maxDepositsPerBlock` and `minDepositBlockDistance`, as well as the quorum size and the size of the guardian set, taking into account the new inputs: permissionless entry for operators and modules with a FIFO queue.
- Doc from Max: https://hackmd.io/@5wamg-wlRCCzGh0aoCqR0w/HJfjMgW56
### Data Bus
It is proposed to use Gnosis Chain or another EVM-compatible blockchain with low gas costs as the Data Bus.
// TODO:
- Research: what needs to be done, how much it will cost.
- Can be done in a separate track and not block the main changes in implementing optimistic vetting.
- Switching between different chains.
- Interface of the Data Bus contract.
- Review re-signing of deposit messages (they don't need to be re-signed as often as now because of transaction costs)
### Modules
#### Curated Module
For implementing the `getNextDepositData` and `reportDeposited` methods, it is proposed to split the current `obtainDepositData` method into two: a view method returning keys in the updated format, and a method that moves the pointers of the number of deposited validators for node operators.
Each module can record metadata for precise identification of keys when calling the `reportDeposited` and `reportInvalidDepositData` methods. It is proposed that in the Curated module, the metadata consist of the operator index and the key index (a packing sketch follows the list):
- `16 bytes` - operator index
- `16 bytes` - key index
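A packing sketch for this metadata layout (the function name is illustrative):
```solidity
// Sketch: operator index in the high 128 bits, key index in the low 128 bits.
function _packKeyMeta(uint128 operatorIndex, uint128 keyIndex)
    internal
    pure
    returns (bytes32)
{
    return bytes32((uint256(operatorIndex) << 128) | uint256(keyIndex));
}
```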
To implement optimistic vetting, it is proposed to use the existing `vettedKeys` parameter. By default, this parameter should be equal to the number of submitted keys and updated when new keys are added, provided that these values are synchronized before the keys are submitted.
In the case of reporting invalid keys, the number of `vettedKeys` is reset to the minimum reported key index, which is taken from the metadata of the reported keys (a reset sketch follows the diagram):
```
 Deposited keys count = 3 (indexes: 0, 1, 2)
       v
[0][1][2][3][4][5][6][7][8][9]
                ^  ^  ^  ^  ^
                 Invalid keys
                ^
 Vetted keys count = 5 (indexes: 0, 1, 2, 3, 4)
```
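A sketch of the reset logic, assuming the reported metadata yields the minimum invalid key index per operator (the struct and storage names are illustrative):
```solidity
// Sketch: reset vettedKeysCount to the first invalid index, so keys before
// it stay vetted and depositable while the rest await cleanup.
struct NodeOperatorState {
    uint64 totalKeysCount;
    uint64 vettedKeysCount;
}

mapping(uint256 => NodeOperatorState) internal _operators;

function _resetVettedKeys(uint256 operatorId, uint64 minInvalidKeyIndex) internal {
    NodeOperatorState storage op = _operators[operatorId];
    if (minInvalidKeyIndex < op.vettedKeysCount) {
        op.vettedKeysCount = minInvalidKeyIndex;
    }
}
```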
When deleting keys, the staking limit (`vettedKeys`) is set back to the total number of keys.
Deletion can be done in batches of [500 at a time](https://etherscan.io/tx/0xdc0c685d96ca38917e5c75df0b4f0c83e3c34bce2653abde38b4d7fbcb09e218), which is sufficient to cover most scenarios. Scenarios where more keys need to be invalidated at once are assessed as unlikely, and in these cases, a repeat report of invalid keys (already in smaller numbers) is expected, after which the next batch of keys can be deleted.
#### Simple DVT Module
The proposed changes are similar to those for the Curated Module.
#### CSM
It is assumed that the module has an internal FIFO queue of keys, which is typically filled in the order of new keys added by operators. The queue consists of batches of keys added at once.
Each operator in the module has pointers to the total number of added keys and the number of vetted keys. By default, these two parameters are synchronized and increase as new keys are added.
In the `getQueuedDepositData` and `getNextDepositData` methods, the module iterates through the internal queue of keys, skipping batches whose nonce does not match the operator's nonce (a sketch follows the diagram).
```
[0-1] - a key in the queue, where 0 - operator index, 1 - key index

Inner queue:
[0-0, 0-1, 0-2] [1-0, 1-1] [3-0, 3-1] [1-2, 1-3, 1-4] [2-0, 2-1]
                            ^^^  ^^^
          Batch's nonce does not match the operator's nonce

Public view method returns the cleaned queue:
[0-0, 0-1, 0-2] [1-0, 1-1] [1-2, 1-3, 1-4] [2-0, 2-1]
```
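A sketch of the nonce-based skip, assuming a `Batch` struct and per-operator nonces (names are illustrative, not the actual CSM code):
```solidity
// Sketch: a batch whose nonce lags the operator's current nonce was
// invalidated (e.g. by an invalid-key report) and is skipped by the views.
struct Batch {
    uint64 operatorIndex;
    uint64 startKeyIndex;
    uint64 keysCount;
    uint64 nonce;
}

mapping(uint64 => uint64) internal _operatorNonce;

function _isStale(Batch memory batch) internal view returns (bool) {
    return batch.nonce != _operatorNonce[batch.operatorIndex];
}
```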
There may be a situation where the number of invalid key batches is so large that it makes it impossible to execute a transaction within the block's gas limit. To solve this problem, it is assumed that the module has a permissionless method for clearing the internal queue.
Each module can record metadata for precise identification of keys when calling the `reportDeposited` and `reportInvalidDepositData` methods. It is proposed that in CSM, the metadata represent a pointer to a batch in the queue (a packing sketch follows the list), consisting of:
- `8 bytes` - operator index (`uint64`)
- `8 bytes` - start key index (`uint64`)
- `8 bytes` - keys count in the batch (`uint64`)
- `8 bytes` - the batch nonce (`uint64`)
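A packing sketch for this pointer layout (the function name is illustrative):
```solidity
// Sketch: pack the four uint64 fields into bytes32, most significant to
// least: operator index, start key index, keys count, batch nonce.
function _packBatchPointer(
    uint64 operatorIndex,
    uint64 startKeyIndex,
    uint64 keysCount,
    uint64 batchNonce
) internal pure returns (bytes32) {
    return bytes32(
        (uint256(operatorIndex) << 192) |
        (uint256(startKeyIndex) << 128) |
        (uint256(keysCount) << 64) |
        uint256(batchNonce)
    );
}
```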
Via the `reportDeposited` method, after a successful deposit, the module receives the deposited keys and their metadata, which it uses to advance the pointer of the current batch in the queue and the number of deposited validators for the operator.
When invalid keys are reported through the `reportInvalidDepositData` method, the operator's nonce is incremented, invalidating all of the operator's keys in the queue, and the `vettedKeys` value is reset to the index of the last deposited key.
```
[0-1] - a key in the queue, where 0 - operator index, 1 - key index

Deposited         Reported invalid key
 vvv              vvv
[0-0] [1-0, 1-1] [0-1, 0-2] [1-2] [0-3, 0-4]
                  ^^^  ^^^         ^^^  ^^^
      New nonce value invalidates all operator keys in the queue
```
Reporting invalid keys should charge a fee from the operator. This fee should be enough to cover the expenses of the reporting transaction.
When an operator deletes keys, several things happen:
- The keys are actually deleted
- The batch of keys remaining after deletion is placed at the end of the queue
- The operator's `nonce` is incremented
- Synchronization of `vettedKeys` with the total number of uploaded keys is restored
- A fee is charged for deleting the keys
```
Deposited keys = vetted keys
       v
[0,1,2,3,4,5,6,7,8,9]
             ^
       Key to delete

When key 6 is deleted, the key 9 will be moved to its place:
[0,1,2,3,4,5,6,7,8,9]
[0,1,2,3,4,5,9,7,8]
             ^
New batch placed at the end of the deposit queue
         v v v v v
[0,1,2,3,4,5,9,7,8]
                 ^
New vetted keys count = 9 (indexes 0-8)
```
Deleting keys should charge a fee from the operator. This fee should be enough to cover the expenses of a full queue cleanup.
Uploading new keys can occur in two scenarios: when the number of vetted keys equals the total number of keys and when it does not. In the case of equality, the batch of added keys is placed at the end of the queue, and the vetted keys count is recorded as equal to the total number of keys. The scenario where vetted keys are less than the total number of uploaded keys means that the operator has reported invalid keys and has not deleted them. In this case, uploading new keys should not raise the number of vetted keys and should not add the batch to the queue.
#### Easytrack factories related to modules
It is proposed to revoke the roles of the Easytrack factories that raise operators' vetted keys in the Curated and Simple DVT modules, as they become unnecessary.
### Maintenance costs
// TODO:
- Modules should compensate the gas costs of invalid key reports
- Council Daemons now need to be funded on Gnosis Chain as well; put this on the gas committee
## Scope of changes
### Onchain part
- Expand functionality of the DSM contract
- New DSM address in Locator
- IStakingModule interface
- Curated and Simple DVT contract implementation
- Staking Router (deposit method)
- Revoke roles for deprecated Easytrack factories
### Offchain part
- Develop a new Secure bot in addition to Depositor and Pause bots
- Expand the functionality of the Council Daemon
// TODO:
- Discuss with the Valset team what affects the offchain tooling
- Think about penalties in the Curated module
- Explain why it is important to iterate over all keys (because of the duplicate problem): checking only a part of the keys does not guarantee that we will not deposit duplicates instead of originals (given that duplicates can be loaded into different modules)
- Think about permissionless reports of invalid keys
- Invalid key reports: invalidate signatures after some time
- Describe possible improvements to onchain checks in future releases: BLS precompiles, deposited keys registry