
ADR: Staking Router

Lido, as part of the community, thinks of Ethereum as a credibly neutral home for applications and their users. The mission of Lido on Ethereum is to provide a secure and accessible staking platform and contribute to Ethereum's decentralization. Lido is firmly committed to diversifying the validator set, which should reduce the risk of downtime or censorship while retaining network performance.

With this goal, we, the dev team, present Staking Router, a major protocol update, moving the operator registry to a modular architecture. This design would allow for experiments with different approaches and principles for building validator sets, including solo and DVT validators, without significant changes to the underlying protocol.

The Staking Router brings to life Lido's vision of a platform where stakers, developers, and node operators collaborate to advance the future of decentralized staking. We invite teams and individual contributors to build with us.

Glossary

  • Aragon is a DAO framework used by Lido on Ethereum;
  • AragonApp is a base utility contract from Aragon;
  • Aragon Voting is a DAO voting application;
  • A bond is ether put up by a validator as collateral against bad behavior or poor performance;
  • The deposit buffer is the user-submitted ether stored temporarily on Lido before being deposited to DepositContract;
  • DepositContract is the official contract for validator deposits;
  • DepositSecurityModule is a utility contract for verifying deposit parameters;
  • DVT, or Distributed Validator Technology, shares the duties of a single validator between multiple participants using a multi-signature scheme;
  • EasyTrack is a suite of contracts and an alternative veto-based voting model that streamlines routine DAO operations;
  • Lido is a core Lido contract that stores the protocol state, accepts user submissions, and includes the stETH token;
  • The Lido DAO is an organization headquartered on Ethereum that operates liquid-staking services across various Proof-of-Stake blockchains, including Ethereum, Solana, Polygon, Polkadot, and Kusama;
  • A node operator is a person or entity that runs validators. In NodeOperatorsRegistry, each node operator has an index, a human-readable name, an associated address on Ethereum, and a list of signing keys.
  • NodeOperatorsRegistry is a Lido contract that manages a curated node operators' list, stores their signing keys, and keeps track of the number of active/stopped validators for each node operator;
  • A signing key refers to a structure within NodeOperatorsRegistry consisting of the validator's public key and signature, which are submitted to DepositContract. Lido does not store validators' private keys;
  • stETH is an ERC-20 token minted by Lido and representing the user's share in pooled ether under the protocol control;
  • After the Capella/Shanghai hard fork, withdrawals will enable the validator to exit and unstake its underlying balance using the specified withdrawal credentials.

Problem statement

Currently, at the smart-contract level, Lido on Ethereum supports only the curated validator set. All node operators apply through the DAO vetting process and, if approved, are added to NodeOperatorsRegistry, a smart contract that manages node operators, stores signing keys, and distributes the stake.

At this time, Lido seeks to diversify the validator set, is already actively experimenting with DVT partners such as SSV and Obol, and has plans for onboarding community validators. To support this, the Lido protocol needs to be more flexible and able to support new validator subsets, which is challenging under the constraints of the existing monolithic architecture. It is, therefore, proposed to move to a modular structure such that each validator subset is encapsulated into a separate smart contract and integrated into the protocol.

It makes sense to impose certain limits on newly joined modules and increase them as the relevant infrastructure matures. For example, the DVT module may start with some safe and low limit (e.g., 1%) of the total stake in Lido, and if it performs well, this limit might be increased.

Moreover, the modular design will allow the DAO to specify fee settings for each validator subset independently because different ways of distributing stake can call for different fee structures.

This approach will allow the Lido on Ethereum protocol to incorporate an aggregator strategy for its validator set. Anyone is welcome to build on this platform, diversifying both the contributor community and the underlying validator technologies. The role of Lido DAO is to choose the optimal proportion between subsets to maximize the protocol's decentralization, profitability, and sustainability.

Existing architecture

Before diving into the StakingRouter design, it is worthwhile to establish an understanding of the existing architecture and processes. This knowledge will serve as a jumping-off point for the StakingRouter design exploration.

The diagram below illustrates the main processes in Lido:


Journey of ether

A user submits ether to the Lido contract in return for a share of the total Lido stETH pool. The pool grows by earning consensus and execution layer rewards and shrinks due to slashing penalties. As such, the same user share will respectively appreciate or depreciate in terms of stETH amount. Before being deposited, user-submitted funds accumulate on the Lido contract's balance. In technical terms, we refer to this ether as buffered or in the deposit buffer.

Lido, however, does not perform the deposit as soon as the buffer reaches 32 ether. This would be prohibitively expensive. Instead, a special off-chain bot monitors the network gas price and performs a batch deposit in a single transaction paid for by the protocol whenever the buffer reaches the size that justifies the cost of the transaction.
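
As a rough illustration, a heuristic of this kind could look like the Python sketch below; the gas figures and the cost threshold are made-up assumptions, not protocol parameters.

```python
DEPOSIT_SIZE = 32  # ether per validator deposit

def should_deposit(buffer_ether: float, gas_price_gwei: float,
                   gas_per_deposit_tx: int = 400_000,
                   max_cost_per_validator_eth: float = 0.002) -> bool:
    """Deposit only when the per-validator share of the transaction cost is acceptable."""
    validators = int(buffer_ether // DEPOSIT_SIZE)
    if validators == 0:
        return False
    tx_cost_eth = gas_price_gwei * 1e-9 * gas_per_deposit_tx
    return tx_cost_eth / validators <= max_cost_per_validator_eth

print(should_deposit(buffer_ether=320, gas_price_gwei=30))  # True: cost is well amortized
print(should_deposit(buffer_ether=32, gas_price_gwei=400))  # False: too expensive per validator
```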

Rewards

Among the core mechanics of the Lido protocol is synchronizing the total validator balances with the total supply of stETH. However, because the execution and consensus layers are separate, the protocol employs a set of oracles to reflect balance updates onto the Lido contract. Typically, the oracles report balance changes daily, and Lido accordingly updates the total supply of stETH.

Users receive 90% of the total rewards earned by validators because Lido takes a 10% cut as the protocol fee, which is then split fifty-fifty between the treasury and node operators. In technical terms, Lido mints shares in the amount that reduces holder rewards by 10% and distributes the shares between the treasury and node operators.
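
The fee mechanic can be illustrated with a small numeric sketch (the figures are arbitrary, and the formula is a simplification of the on-chain accounting): minting new shares dilutes existing holders by exactly the fee amount.

```python
total_pooled_ether = 1_000_000.0  # ether backing stETH before the oracle report
total_shares = 950_000.0          # total stETH shares before the report
rewards = 1_000.0                 # reported consensus + execution layer rewards
fee_rate = 0.10                   # 10% protocol fee (treasury + node operators)

new_total_ether = total_pooled_ether + rewards
fee_ether = rewards * fee_rate

# Mint s shares so that, after the rebase, the new shares are worth exactly fee_ether:
# s / (total_shares + s) * new_total_ether == fee_ether.
shares_to_mint = total_shares * fee_ether / (new_total_ether - fee_ether)

# Existing holders collectively end up with 90% of the rewards.
holders_after = total_shares / (total_shares + shares_to_mint) * new_total_ether
assert abs(holders_after - (total_pooled_ether + rewards * (1 - fee_rate))) < 1e-6
print(f"shares minted as protocol fee: {shares_to_mint:.2f}")
```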

Node operator workflow

On the protocol level, Lido interacts with node operators via NodeOperatorsRegistry. This contract manages the list of node operators, stores their signing keys, and decides which keys will be used for the next batch deposit based on which node operators have the fewest active validators. Once voted in, the node operator is added to NodeOperatorsRegistry. Then, the operator can add signing keys to the same contract. When the keys are submitted to DepositContract, the respective validators are placed in the activation queue. Once activated, they start participating in the consensus.

Although there are no restrictions on how many keys a node operator can upload, Lido imposes staking limits on each node operator. Thus, a node operator has only a certain number of approved keys and cannot start more validators than what the current limit allows. These limits are not static and may be raised via Aragon Voting or EasyTrack.

Deposit security

Each deposit to DepositContract must come with withdrawal credentials (WC), allowing the sender to withdraw the funds and potential rewards in the future, once withdrawals are enabled. All Lido deposits are submitted with the Lido-approved WC to ensure that only Lido can withdraw user funds.

In October 2021, a critical vulnerability was reported to the Lido bug bounty program. Because DepositContract associates the validator's key with the first valid deposit, this exploit allowed a node operator to pre-submit a minimal deposit of 1 ether with their own WC, thus, making any subsequent deposits with the same key withdrawable only with the initial WC.

As per LIP-5: Mitigations for deposit front-running vulnerability, a deposit security scheme was introduced. Before submitting a batch deposit, the guardian committee ensures there were no malicious pre-deposits and signs a message containing the deposit parameters:

  • depositRoot from DepositContract, the Merkle root of all deposits;
  • keysOpIndex from NodeOperatorsRegistry, a nonce of key operations;
  • the block number; and
  • the block hash.

The depositor bot submits these messages to DepositSecurityModule, which verifies the guardian signatures and confirms that the block with the specified number has the specified hash.
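
In rough pseudocode, the checks amount to something like the sketch below; the field and method names are illustrative, not the actual on-chain ABI.

```python
from dataclasses import dataclass

@dataclass
class DepositMessage:
    deposit_root: str   # Merkle root of all deposits in DepositContract
    keys_op_index: int  # nonce of key operations in NodeOperatorsRegistry
    block_number: int
    block_hash: str

def can_deposit(msg: DepositMessage, chain, registry, guardian_signatures, quorum) -> bool:
    # The chain must not have reorganized past the block the guardians attested to.
    if chain.block_hash(msg.block_number) != msg.block_hash:
        return False
    # No deposits (and thus no front-running pre-deposits) since the attestation.
    if chain.current_deposit_root() != msg.deposit_root:
        return False
    # The key set must be exactly the one the guardians reviewed.
    if registry.keys_op_index() != msg.keys_op_index:
        return False
    # Enough valid guardian signatures over the message.
    return len(guardian_signatures) >= quorum
```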

Introducing Staking Router

The main idea of StakingRouter is to modularize the validator set at the smart-contract level. This design encapsulates different validator subsets into separate pluggable contracts called modules. These modules manage node operators, store their signing keys (or their hashes in case the keys are stored elsewhere, e.g., an off-chain database), and distribute the stake and rewards between them. StakingRouter is a contract that acts as a top-level controller that oversees the operation across the modules.

The implementation may drastically differ from one module to another depending on the underlying validator technology; however, in order to communicate with them, StakingRouter requires all modules to implement a specific interface. For example, each module must provide a way to retrieve validators' key information, regardless of how the module stores it.
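
For illustration only, such an interface could look roughly like the sketch below; the method names and signatures are our assumptions rather than the finalized specification.

```python
from abc import ABC, abstractmethod

class StakingModule(ABC):
    """Hypothetical minimal interface StakingRouter could require from every module."""

    @abstractmethod
    def get_keys_stats(self) -> tuple[int, int, int, int]:
        """Return the (total, active, stopped, exited) key counts."""

    @abstractmethod
    def get_ready_to_deposit_keys(self, max_count: int) -> list[tuple[bytes, bytes]]:
        """Return up to max_count (public key, signature) pairs ready for deposit."""

    @abstractmethod
    def keys_op_index(self) -> int:
        """Nonce incremented on every key operation (see Problem 4)."""
```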

When designing the specification, the development team had in mind several possible validator subsets that could be connected to StakingRouter:

  • Curated, the DAO-curated node operators, equivalent to the existing NodeOperatorsRegistry;
  • Community, permissionless node operators on a bond basis with an optional mechanic of effectively lowering bond requirements based on reputation;
  • DVT, DVT-enabled validators (with optional bonds) such as Obol's Distributed Validator Clusters or SSV nodes; and
  • Offchain or L2, an upgrade to the Curated subset that reduces gas costs by moving validator key storage off-chain or to layer 2.

Staking Router's purpose is to orchestrate deposits and, eventually, withdrawals through different modules in a way that satisfies the stake distribution desired by the DAO.

Initially, the Lido DAO will set a target share, expressed as a percentage, for each module's maximum fraction of active validators. Over time, as modules mature and develop higher capacity, the DAO will increase their target shares to compose a validator set that maximizes capital efficiency while remaining safe enough to be delegated more user funds and diverse and censorship-resistant enough to take part in blockchain validation.

Along with the allocation algorithm discussed below, another way to control the distribution of active validators between modules is selecting from which module to eject validators once withdrawals are enabled. Next year, Lido plans to launch protocol-level withdrawals, but this topic is out of the scope of this proposal.

Much like today, the rewards will be distributed across all modules without regard for performance, meaning the modules will receive rewards in proportion to the number of validators they have running and their fee settings. The modular design of StakingRouter allows the DAO to set the treasury fee independently for each module, which enables different fee structures for different modules.

The main difference in the deposit process is the introduction of the balancing strategy, which we call the stake allocation algorithm. Whereas currently, NodeOperatorsRegistry is the only consumer of the buffered ether, StakingRouter will have to split incoming ether between several modules. The main goal here is reserving enough ether for all modules and, at the same time, reducing the amount of idle ether to maximize capital efficiency.

Another part of this upgrade is migrating NodeOperatorsRegistry to meet the StakingRouter module requirements. The existing registry contains a lot of information about node operators, and moving this state may be complicated and expensive in terms of both gas consumption and development.

Now we want to talk about some of the problems of this update and propose solutions.

Problem 1. Allocating buffered ether between modules

In the existing architecture, the Lido contract serves as a temporary storage, or a buffer, for user-submitted ether. Whenever the buffer becomes large enough to justify the cost of the transaction, the depositor bot performs a batch deposit bundled with the keys provided by NodeOperatorsRegistry. The key selection is based on whichever node operators have the fewest active validators.

With this upgrade, the deposit buffer still remains on Lido; however, during the deposit procedure, some of the ether will be transferred to StakingRouter, which calculates the amount of ether allocated to each module and performs the deposits. We call this process stake allocation.

Target shares

One of the main jobs of StakingRouter is to manage the relative module sizing. To achieve this, the DAO imposes a target share on each newly joined module, expressed as the percentage of its active validators relative to the total across all modules. At the same time, this target acts as the module's quota, skewing stake allocation in favor of underdeposited modules.

To illustrate this mechanism, we will consider the following example. The table below presents information about the modules at the start of an allocation round.

Module    | Target share (%) | Current share (%) | Active validators | Ready-to-deposit keys
Community | 10%              | 7%                | 7,000             | 3,000
DVT       | 20%              | 18%               | 18,000            | 5,000
Curated   | 70%              | 75%               | 75,000            | 20,000

Now, let's say that there is 320,000 ether in the deposit buffer, enough to onboard 10,000 new validators. Starting from the smallest module, the allocation algorithm reserves ether for each module until whichever of the conditions below is met first:

  1. the module catches up to the next module,
  2. all ready validators are covered, or
  3. the target share is reached.

If two or more modules are equal in size and still have unused ready keys and room until their target shares, deposit ether is allocated evenly between them until they reach capacity or there is no more ether to allocate.

The procedure of allocation is as follows:

  1. At the start of the allocation, StakingRouter calculates the maximum number of active validators for each module based on the target shares. Including the buffered ether, there is 110,000 validators' worth of stake in the protocol. Therefore, the flat caps of the Community, DVT, and Curated modules, with their respective target shares of 10%, 20%, and 70%, are 11,000, 22,000, and 77,000 validators.
  2. StakingRouter starts with the Community module, as it has the least stake. With its 7,000 already active validators plus the 3,000 ready validators, the module will total 10,000 validators, which is well within the module limit. As such, 96,000 ether (3,000 validators' worth of ether) is reserved for the module.
  3. Next, StakingRouter reserves only 128,000 ether (4,000 validators' worth of ether) for the DVT module, pushing it to its maximum of 22,000 active validators and leaving 1,000 ready keys idle until the next allocation round.
  4. With 224,000 ether allocated, there is still 96,000 ether left in the buffer, of which 64,000 ether (2,000 validators' worth) will be accommodated by the Curated module before it reaches its cap of 77,000 active validators. The remaining 32,000 ether will spill over to the next deposit.

The table below demonstrates the state after the allocation; a code sketch of this allocation round follows the table.

Module    | Target share (%) | Current share (%) | Active validators | Ready-to-deposit keys
Community | 10%              | 9%                | 10,000            | 0
DVT       | 20%              | 20%               | 22,000            | 1,000
Curated   | 70%              | 70%               | 77,000            | 18,000
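
To make the procedure above concrete, here is a rough Python sketch that reproduces this allocation round. It is a simplification: modules are processed from smallest to largest, and the catch-up and even-split refinements described earlier are omitted; the data structures and names are illustrative, not part of the specification.

```python
DEPOSIT_SIZE = 32  # ether per validator

# Module state at the start of the round (target shares in percent).
modules = {
    "Community": {"target_share": 10, "active": 7_000,  "ready_keys": 3_000},
    "DVT":       {"target_share": 20, "active": 18_000, "ready_keys": 5_000},
    "Curated":   {"target_share": 70, "active": 75_000, "ready_keys": 20_000},
}
buffer_ether = 320_000

# Flat caps are computed against all stake in the protocol, including the buffer.
total_validators = sum(m["active"] for m in modules.values()) + buffer_ether // DEPOSIT_SIZE

allocation = {}
for name, m in sorted(modules.items(), key=lambda kv: kv[1]["active"]):
    cap = total_validators * m["target_share"] // 100        # flat cap for this module
    depositable = min(
        m["ready_keys"],                                      # ready-to-deposit keys
        max(cap - m["active"], 0),                            # room until the target share
        buffer_ether // DEPOSIT_SIZE,                         # remaining buffered ether
    )
    allocation[name] = depositable * DEPOSIT_SIZE
    buffer_ether -= allocation[name]

print(allocation)    # {'Community': 96000, 'DVT': 128000, 'Curated': 64000}
print(buffer_ether)  # 32000 ether spills over to the next deposit round
```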

With this mechanic in mind, we have come up with the following allocation algorithms.

Decision Drivers

The allocation algorithm should:

  • maximize capital efficiency by reducing the amount of idle ether awaiting deposit;
  • provide a sufficient flow of ether for all modules to reach target share;
  • in case of a deviation from the target shares, implement a mechanic that improves the situation in each round according to the two principles above.

Considered options

Listed below are the general assumptions for all algorithms:

  • a module communicates its capacity to receive stake, equal to the number of its ready-to-deposit keys, to the staking router;
  • a stopped, suspended or otherwise inactive module does not receive ether;
  • every signing key provided by the module to StakingRouter requires exactly 32 ether to deposit.

Option A. Static-buckets balancing

The static bucket algorithm implements the allocation logic precisely as the target-share mechanic prescribes. This means that the buffered ether is split according to the target shares into separate reserves, so-called buckets. These buckets signify a module's priority access to the buffer up to the assigned allocation.

Each module is entitled only to its respective bucket. This guarantees a reserve of ether for each module to cover validators up to the module's maximum capacity or up to the target share. The downside to this approach is that if the module, for some reason, cannot make use of its reserve, the ether stays idle, thus decreasing capital efficiency.

Since each staking module is autonomous and responsible for sending a transaction that would result in a deposit to DepositContract, idle ether may accumulate in the event of a module failure or due to an overly aggressive gas-saving strategy on the module's side.

Pros
  • a guaranteed reserve;
  • easy to implement.
Cons
  • possible decreased capital efficiency;
  • no way to automatically reallocate the reserved ether in case the module stays inactive.

Option B. Expirable-priority balancing

This approach builds on top of the first one, as it uses the same allocation logic. However, the difference is that the module's guaranteed reserve expires over a set period, allowing other modules to take any unused ether. Note that the module does not lose access to its reserve but no longer has priority and will have to race with other modules.

In each round, the module is given some time to make use of the entire reserved allocation. However, starting from a certain point in the round, the guaranteed reserve will gradually expire, for example, in four increments of 25%.

The diagram below illustrates an example of a module's guaranteed reserve expiring in four increments of 25% starting from the middle point of the round.

Thus, rounds are delineated by time. As such, there may be multiple allocations during a single round, and the deallocation mechanic works in terms of the current, not initial, reserve.

We plan to limit one round to 24 hours, which is why it may make sense to tie the round start to the oracle report time. However, as oracles do not guarantee a report every day, it may be safer to use real-world time, e.g., start each round at 00:00 UTC.
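
To make the expiry schedule concrete, here is a small sketch under the assumptions above: a 24-hour round with the reserve expiring in four 25% increments starting from the middle of the round. The exact step timing is illustrative.

```python
ROUND_DURATION_H = 24
EXPIRY_START_H = 12  # the reserve starts expiring at the middle of the round
EXPIRY_STEPS = 4     # four increments of 25%

def guaranteed_fraction(hours_into_round: float) -> float:
    """Fraction of the initial reserve still exclusively held for the module."""
    if hours_into_round < EXPIRY_START_H:
        return 1.0
    step_length = (ROUND_DURATION_H - EXPIRY_START_H) / EXPIRY_STEPS
    steps_elapsed = int((hours_into_round - EXPIRY_START_H) // step_length) + 1
    return max(1.0 - 0.25 * min(steps_elapsed, EXPIRY_STEPS), 0.0)

for hour in (0, 11, 12, 15, 18, 21, 23.9):
    print(hour, guaranteed_fraction(hour))  # 1.0, 1.0, 0.75, 0.5, 0.25, 0.0, 0.0
```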

This approach provides a good balance between providing a sufficient supply for all modules and minimizing idle ether.

Pros
  • guaranteed ether reserve for a module at the start of the round;
  • stake is split into buckets according to the quotas;
  • ether does not stay idle in a bucket if other modules have capacity.
Cons
  • relatively complex implementation;
  • race conditions between modules competing for deallocated ether.

Decision outcome

We are leaning towards Option A because it offers a straightforward solution, which is enough for the initial iteration of StakingRouter. In the future, we intend to make the algorithm more sophisticated and able to allocate ether dynamically based on module demand.

Problem 2. Distributing rewards to treasury and node operators

Context and Problem Statement

Currently, the Lido contract is responsible for the distribution of rewards between the treasury and node operators. We propose to shift this responsibility to StakingRouter for the following reasons:

  • the Lido contract approaching the maximum contract bytecode size, and
  • StakingRouter having a direct line of communication with modules.

This means that the oracle will still report the rebase delta to Lido, but StakingRouter will provide the number of shares to mint. While it does make sense from the design perspective to move minting to StakingRouter, we are strongly inclined to leave the mintShares function on Lido to avoid the security risks associated with having minting on a separate contract.

As we've mentioned earlier, the treasury cut will be set individually for each module in StakingRouter, which gives the flexibility to integrate modules without imposing Lido's current 5% treasury fee.

Decision Drivers

  • leave mint function on the Lido contract to avoid security risks of having the mint function on a separate contract;
  • move the distribution logic out of Lido due to the contract bytecode size limits;
  • have a distinct separation of concerns between Lido and StakingRouter; and
  • give the DAO the ability to set the treasury fee independently for each module.

Considered Options

We have identified three possible variations of the rewards distribution process. All of them are quite similar, and no single option is significantly better than the others. Nevertheless, by presenting these options, we document the thinking process behind our decision.

Option A

This option presents the following steps of rewards distribution:

  1. Lido requests the number of shares to mint from StakingRouter;
  2. Lido mints new shares for StakingRouter;
  3. Lido retrieves the module-shares table from StakingRouter;
  4. Lido retrieves the Lido treasury shares table from StakingRouter;
  5. Lido transfers the minted shares from StakingRouter to treasury.
  6. Lido transfers the minted shares from StakingRouter to each module.
[Sequence diagram, Option A: the oracle reports the delta to Lido; Lido mints shares to StakingRouter, retrieves the module-to-shares table, and transfers the shares from StakingRouter to the treasury and to each module.]
Pros
  • minting remains on Lido;
  • independent treasury fee settings.
Cons
  • the distribution logic partly resides on Lido;
  • does not abstract away the knowledge of modules from Lido;
  • some redundancy in that the shares are first minted to StakingRouter and are then transferred to node operators, instead of minting the shares to node operators at once.

Option B

This option presents the following steps of rewards distribution:

  1. Lido requests the number of shares to mint from StakingRouter;
  2. Lido mints new shares for StakingRouter;
  3. Lido calls the distributeFee function on StakingRouter; and
  4. StakingRouter transfers the minted shares to treasury and to each module.
[Sequence diagram, Option B: the oracle reports the delta to Lido; Lido mints shares to StakingRouter and calls `distributeFee()`; StakingRouter transfers the shares to the treasury and to each module.]
Pros
  • abstracts away the knowledge of modules from Lido;
  • minting remains on Lido;
  • the distribution logic is hidden from Lido;
  • independent treasury fee settings.
Cons
  • some redundancy in that the shares are first minted to StakingRouter and are then transferred to modules, instead of minting the shares to modules at once.

Option C

This option presents the following steps of rewards distribution:

  1. Lido requests the module-to-shares table from StakingRouter;
  2. Lido requests the treasury shares from StakingRouter;
  3. Lido mints the shares to treasury.
  4. Lido mints the shares to each module.
[Sequence diagram, Option C: the oracle reports the delta to Lido; Lido retrieves the module-to-shares table from StakingRouter and mints the shares directly to the treasury and to each module.]
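
A rough sketch of this flow is shown below; the function and method names are hypothetical, not the actual contract API.

```python
def distribute_rewards(lido, staking_router):
    """Option C sketch: Lido mints fee shares directly to the treasury and to each module."""
    module_shares = staking_router.get_module_shares()    # {module_address: shares}
    treasury_shares = staking_router.get_treasury_shares()

    lido.mint_shares(lido.treasury_address, treasury_shares)
    for module_address, shares in module_shares.items():
        lido.mint_shares(module_address, shares)
```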
Pros
  • minting remains on Lido;
  • slightly lower gas cost due to fewer actions;
  • straightforward logic without any intermediate steps;
  • independent treasury fee settings.
Cons
  • does not abstract away the knowledge of modules from Lido.

Decision Outcome

We propose to move forward with Option C because it presents a straightforward distribution logic with minimal steps.

Problem 3. Migrating operators registry

Context and Problem Statement

With this upgrade, the existing NodeOperatorsRegistry becomes the Curated module. This implies that NodeOperatorsRegistry must implement the module interface. Nevertheless, StakingRouter does not dictate the underlying design of the module; thus, the architecture of the registry remains as it is and will only be expanded according to the module specification.

Decision Drivers

  • reduce development costs;
  • minimize gas consumption in state migration;
  • minimize gas consumption in operation;
  • preserve backwards compatibility with EasyTrack, which allows node operators to change their staking limits.

Considered Options

Option A. Expand NodeOperatorsRegistry interface to comply with module specifications

Because NodeOperatorsRegistry is a proxy contract, we have the option of replacing the implementation with one that complies with the module specification while preserving the state on the proxy. The upgrade implies only modifications to the existing NodeOperatorsRegistry codebase to meet the module interface requirements. However, the Solidity version is quite outdated (v0.4.24), and as the contract inherits from AragonApp, it results in slightly inefficient gas consumption for daily transactions.

Pros
  • reduced development cost because, instead of a full rewrite, we'll only introduce modifications to the codebase;
  • no state migration cost as it is preserved on the proxy.
Cons
  • slightly increased gas consumption in operation due to the less gas-optimized Solidity version.

Option B. Create new contract but leave state on NodeOperatorsRegistry

This option implies moving the registry to a new codebase using an up-to-date Solidity version but leaving the state on NodeOperatorsRegistry. Any state reading and writing calls will be communicated through the new contract to the old NodeOperatorsRegistry. Even though the newer Solidity version provides better gas optimization, the operation gas expenses will likely still be increased due to call delegation.

Pros
  • more secure and optimized Solidity.
Cons
  • significant development cost due to a complete rewrite of the contract;
  • increased operation cost due to call delegation.

Option C. Complete migration

In this approach, we migrate to a new registry codebase using an up-to-date Solidity version and move the state in a single expensive transaction.

Pros
  • optimized gas consumption due to a newer Solidity version.
Cons
  • requires an expensive transaction to migrate the state;
  • significant development cost due to a complete rewrite of the contract.

Decision Outcome

We propose to move forward with Option A, with plans for a complete migration in the future.

Problem 4. Ensuring deposit security across multiple modules

Context and Problem Statement

Each deposit to DepositContract must come with withdrawal credentials (WC), allowing the sender to withdraw the funds and potential rewards in the future, once withdrawals are enabled. All Lido deposits are submitted with the Lido-approved WC to ensure that only Lido can withdraw user funds.

In October 2021, a critical vulnerability was reported to the Lido bug bounty program. Because DepositContract associates the validator's key with the first valid deposit, this exploit allowed a node operator to pre-submit a minimal deposit of 1 ether with their own WC, thus, making any subsequent deposits with the same key withdrawable only with the initial WC.

As per LIP-5: Mitigations for deposit front-running vulnerability, a deposit security scheme was introduced. Before submitting a batch deposit, the guardian committee ensures there were no malicious pre-deposits and signs a message containing the deposit parameters:

  • depositRoot from DepositContract, the global Merkle root;
  • keysOpIndex from NodeOperatorsRegistry, a nonce of key operations;
  • the block number; and
  • the block hash.

The depositor bot submits these messages to DepositSecurityModule, which verifies the guardian signatures and confirms the specified block has the specified hash. After that, the contract submits the deposits by calling the deposit function on Lido.

Index of key operations

keysOpIndex is a nonce incremented on state transitions in NodeOperatorsRegistry. This nonce allows the guardian committee to refer to a specific state of NodeOperatorsRegistry when performing a deposit security check. Thus, DepositSecurityModule knows that the guardian signatures are valid only for that specific state: if the keysOpIndex in the committee-signed message does not match the one in the registry, the deposit will not occur.

In the existing design, keysOpIndex in NodeOperatorsRegistry is incremented when:

  • a node operator's keys are added;
  • a node operator's keys are removed;
  • a node operator's approved keys limit is changed;
  • a node operator is activated or deactivated; and
  • Lido retrieves keys for deposits by calling the assignNextSigningKeys function (see the sketch below).
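
The sketch below (illustrative names, not the contract code) captures the essence of this mechanic: every such operation bumps a single nonce, so guardian approvals signed against an older index become stale and the deposit does not go through.

```python
class RegistrySketch:
    """Toy model of the keysOpIndex nonce in NodeOperatorsRegistry."""

    def __init__(self):
        self.keys_op_index = 0

    def _bump(self):
        self.keys_op_index += 1

    def add_signing_keys(self, operator_id, keys):
        self._bump()  # keys added

    def remove_signing_keys(self, operator_id, keys):
        self._bump()  # keys removed

    def set_staking_limit(self, operator_id, limit):
        self._bump()  # approved keys limit changed

    def set_operator_active(self, operator_id, active):
        self._bump()  # operator activated or deactivated

    def assign_next_signing_keys(self, count):
        self._bump()  # keys retrieved for a deposit
```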

With the StakingRouter update, DepositSecurityModule will need to verify deposits from multiple modules, each with its own keysOpIndex. In this regard, we have identified two approaches.

Decision Drivers

  • ability to pause malicious deposits for specific modules.

Considered Options

Option A. Design a system with a single index

This approach implies designing a system with only one index required for the security check. For example, StakingRouter stores a counter that increments on any key operation in any module. If the check fails, deposits are stopped across all modules. The obvious flaw with this approach is that DepositSecurityModule cannot identify the source of the malicious deposit and has to halt deposits for all modules.

The sequence diagram below illustrates this process.

[Sequence diagram, Option A: the depositor bot calls deposit(depositRoot, keysOpIndex, blockNumber, blockHash); StakingRouter aggregates the modules' key operation indexes (e.g., 10 + 20 + 12 = 42) into a common index; if the check against the signed index fails, DepositSecurityModule pauses deposits for all modules.]
Pros
  • only one check is required to verify deposit security, which reduces the burden on off-chain tooling support.
Cons
  • deposits are paused globally.

Option B. Include index in module specification

In this approach, each module tracks its own index and DepositSecurityModule performs multiple checks, one for each module. This allows the protocol to halt deposits independently for one module in the case of a failed check. This approach excludes any race conditions because once the module deposit is performed, the deposit Merkle root updates, and the subsequent check will be carried out against the new root.

This approach is illustrated below.

[Sequence diagram, Option B: the depositor bot calls deposit(depositRoot, keysOpIndex, blockNumber, blockHash) for the Curated module; DepositSecurityModule checks the module's own index (e.g., 42); if the check fails, deposits are paused for that module only.]
Pros
  • pause deposits for the malicious module only.
Cons
  • multiple checks increase the load on off-chain support.

Decision Outcome

We propose to move forward with Option B due to the flexibility it provides.

Next iteration

Up to this point, we have discussed design decisions that required our immediate attention. We want to dedicate this section to some of the features we intend to focus on in the next iteration.

We have identified three directions that we will be working on in the next iteration of Staking Router:

  • support for bonds,
  • integrating withdrawals to the stake-balancing algorithm, and
  • support for L2 and off-chain modules.

We also expect other feature requests from contributors and stakeholders.

Implementing bonds

Community and DVT modules would likely require operators to submit bonds as collateral. A bond makes up a part of the DepositContract deposit, with the rest taken from the Lido pool. For example, to start participating, a node operator may submit only 10 ether, with the remaining 22 ether provided from the Lido pool.

Before full-featured bonds are implemented, modules that require this feature could convert bonds to stETH. The downside of this approach is that converted bonds could shrink under mass-slashing events. However, the original bond semantics can be achieved by handling such cases off-chain with the slashing insurance fund, assuming that the volume of bonds is relatively small compared to the protocol TVL. We have plans to rework the accounting model to support a true bond implementation, which will enable the protocol to preserve the bond and to translate rewards and penalties based on individual validator performance.

Before that time comes, we can take two possible directions with stETH-based bonds:

  1. the validator's bond is converted to stETH; the validator receives stETH rewards plus the validator fee; their slashing penalties are socialized across the protocol; or
  2. the validator's bond is converted to stETH; the validator receives stETH rewards plus the validator fee; the insurance fund covers slashing penalties.

Counterbalancing deposits with withdrawals

Each module will have a target share of stake to work towards. The balancing algorithms explained in Problem 1 account for these shares when allocating buffered ether between modules. Another way to balance the stake distribution between modules is through withdrawals. As such, the StakingRouter can employ a strategy for ejecting active validators to bring modules closer to the target shares desired by the DAO.
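
For instance, one possible ejection strategy, assumed here purely for illustration, is to withdraw validators from whichever module is furthest above its target share:

```python
def pick_module_to_eject_from(modules: dict[str, dict]) -> str:
    """Return the module whose current share exceeds its target share the most."""
    total_active = sum(m["active"] for m in modules.values())
    def excess(name: str) -> float:
        m = modules[name]
        return m["active"] / total_active - m["target_share"]
    return max(modules, key=excess)

modules = {
    "Community": {"target_share": 0.10, "active": 10_000},
    "DVT":       {"target_share": 0.20, "active": 22_000},
    "Curated":   {"target_share": 0.70, "active": 77_000},
}
print(pick_module_to_eject_from(modules))  # Curated: slightly above its 70% target
```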

Interacting with L2 and off-chain modules

Staking Router will need to provide support for L2 modules. Although we anticipate some complications with interoperability, it will be possible to support these modules on the platform due to the minimal interface requirements. In our design process, we tried to keep module requirements as loose as possible. As such, the only critical information each module must provide is the number of total, active, stopped, and exited (once withdrawals are enabled) keys.