# Test scenarios for Exit Bus

## Requirements

### stETH

- `0xa5F1d7D49F581136Cf6e58B32cBE9a2039C48bA1` has 67k stETH

### Required actions

- [x] Empty the buffered ether. Prepare an initial withdrawal request from the testnet account for 21k stETH
- [x] Prepare a script to set target validators to exit. Grant the required role to an EOA
- [x] Prepare simple scripts to monitor validator exit times, delayed validators, and withdrawal requests demand (a sketch follows this list)
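For the monitoring part, a minimal sketch of what such a script could look like, assuming web3.py v6 and the Lido V2 view functions `getBufferedEther()` (on Lido) and `unfinalizedStETH()` / `isBunkerModeActive()` (on the WithdrawalQueue); the RPC URL and contract addresses are placeholders for the testnet deployment under test:

```python
# monitor_demand.py — a minimal monitoring sketch, not production tooling.
# Assumes web3.py v6 and Lido V2-style views; fill in the placeholders.
import time
from web3 import Web3

RPC_URL = "http://localhost:8545"                                # placeholder RPC
LIDO = "0x0000000000000000000000000000000000000000"              # placeholder address
WITHDRAWAL_QUEUE = "0x0000000000000000000000000000000000000000"  # placeholder address

def view(name, out_type="uint256"):
    # Minimal ABI entry for a no-argument view function
    return {"name": name, "type": "function", "stateMutability": "view",
            "inputs": [], "outputs": [{"type": out_type}]}

w3 = Web3(Web3.HTTPProvider(RPC_URL))
lido = w3.eth.contract(address=LIDO, abi=[view("getBufferedEther")])
wq = w3.eth.contract(address=WITHDRAWAL_QUEUE,
                     abi=[view("unfinalizedStETH"), view("isBunkerModeActive", "bool")])

while True:
    buffered = lido.functions.getBufferedEther().call()
    demand = wq.functions.unfinalizedStETH().call()
    bunker = wq.functions.isBunkerModeActive().call()
    # Demand not covered by the buffer is what the exit bus must
    # eventually cover with validator exits.
    unmet = max(demand - buffered, 0)
    print(f"buffer={Web3.from_wei(buffered, 'ether')} ETH "
          f"demand={Web3.from_wei(demand, 'ether')} stETH "
          f"unmet={Web3.from_wei(unmet, 'ether')} bunker={bunker}")
    time.sleep(60)
```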
## Test scenarios

### Exit at least 1 validator for each node operator

We want each node operator to test the Ejector on their side. We can use the target validators functionality to make the exit order predictable.

For each node operator, the process may look like this:

- Wait for confirmation from the NO that they are ready
- Check that the withdrawal requests queue is empty or contains only requests from previous tests
- Make sure the buffer is empty
- Set the target validators limit for this NO to the current number of active validators minus 1 (this prioritizes the exit of 1 validator)
- Create a request for 32 stETH in the withdrawal queue
- Wait for the exit bus oracle to report a validator to exit
- Notify the NO that a validator has been reported to exit
- Wait for the validator to exit
- Check that the next Accounting oracle report contains the updated numbers
- Disable the target validators limit for this NO

We'll probably want to make a board in Notion to track NO statuses.

### Exit 150 validators

### Delay validators

- [x] Ask at least one NO to delay the exit of validators for 1 day
- [x] Check that the oracle skips this NO and selects validators from other NOs

### Stuck validators and penalty

- [ ] Ask at least one NO to delay the exit of validators for 3 days
- [ ] Check that rewards are halved
- [ ] Exit the validators and check that rewards return to normal
- [ ] Check the extra data in an oracle report

### Slashings

#### Zhejiang

- [x] Slash at least 1 validator on Zhejiang
- [x] Check that the oracle counts it as exited when the exit epoch is reached
- [x] Check that the oracle doesn't ask it to exit
- [ ] Check that the oracle doesn't count its balance until the calculated exit epoch > the validator's exit epoch. Otherwise, check that the balance is calculated
  - [ ] debugger
- [ ] Check that the balance isn't counted in the exit bus prediction until the withdrawable epoch reaches the predicted withdrawable epoch
  - [ ] debugger

#### Goerli

Maybe we don't want to test it on Goerli, or maybe we should communicate it publicly first. The scenario is the same as on Zhejiang.

### Bunker mode

#### Negative rebase

- [x] Coordinate a testing date with ChainSafe and cryptomanufaktur.io
- [x] Make sure the deposit buffer is empty
- [x] Create a batch of withdrawal requests with some delay between them over several days just before the oracle report
- [x] Ask ChainSafe and cryptomanufaktur.io to shut down their validators
- [x] Stake some ETH; make sure it's enough to finalize all the requests in the queue
- [x] Check that an oracle report enables bunker mode
- [x] Check the safe border: which withdrawal requests were finalized
- [x] Ask ChainSafe and cryptomanufaktur.io to run their validators again
- [x] Check the buffer and the withdrawal requests queue
- [x] Check that an oracle report disables bunker mode after the rebase is normalized

#### Negative rebase in the future due to associated slashings

Can't be tested with a small number of validators. Let's test on Zhejiang with the debugger.

- [ ] Create a batch of withdrawal requests with some delay between them over several days just before the oracle report
- [ ] Slash one validator on Zhejiang
- [ ] Step with the debugger to the place where the future negative rebase is checked
- [ ] Change the state of a large number of validators to emulate large slashings: `slashed: true`, `exit_epoch`, `withdrawable_epoch`. The goal is to do it in a way that is as close as possible to the real state of the chain
- [ ] Check the calculated values against values calculated on paper
- [ ] Check that the predicted cl_rebase is less than the current cl_rebase
- [ ] Check that it enables bunker mode
- [ ] Check that the slashed validator affects the safe finalization border
- [ ] Wait for a few frames, do all the checks again, paying special attention to the finalization border
- [ ] Wait for a few days, do all the checks again, paying special attention to the finalization border

#### Abnormal rebase

The goal is to shut down all validators close to the report and get a negative cl_rebase in a short period right before the report, while the whole cl_rebase stays positive.

Tests on Zhejiang:

- [ ] Negative rebase in the `REBASE_CHECK_NEAREST_EPOCH_DISTANCE` period
  - [ ] Shut down all validators during the period [`ref_epoch - REBASE_CHECK_NEAREST_EPOCH_DISTANCE`, `ref_epoch`]
  - [ ] Check that it triggers bunker mode
- [ ] Negative rebase in the `REBASE_CHECK_DISTANT_EPOCH_DISTANCE` period
  - [ ] Shut down all validators during the period [`ref_epoch - REBASE_CHECK_DISTANT_EPOCH_DISTANCE`, `ref_epoch`]
  - [ ] Check that it triggers bunker mode
- [ ] Negative cl_rebase in the first half of a frame
  - [ ] Shut down validators in the period [`ref_epoch`, `frame_duration / 2`]
  - [ ] Make sure the whole rebase is positive
  - [ ] Make sure it doesn't trigger bunker mode

### Safe borders testing

#### Default safe border

Test with lots of small withdrawal requests right before the report. Tweak the border parameter in the contract.

- [ ] Create a batch of withdrawal requests with a small delay between them just before the oracle report
- [ ] Make sure there is enough ETH in the buffer
- [ ] Wait for the oracle report
- [ ] Check that requests were finalized right up to the safe border (`requestTimestampMargin` in the sanity checker contract, rounded to epochs)

#### Associated slashing border

We could slash one validator on Zhejiang and trigger bunker mode. Should be checked in the [negative rebase in the future due to associated slashings](#Negative-rebase-in-the-future-due-to-associated-slashings) scenario.

#### Negative rebase border

Trigger a negative rebase. Check that withdrawal requests are not finalized within the safe border period. Should be checked in the [negative rebase](#Negative-rebase) scenario.

## UI

- bunker mode
- associated slashings
- large queue
- large request from user
- large requests count from user
- pause
- large claim
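Most of these UI states can be staged from a script. Below is a minimal sketch for the "large queue" / "large requests count from user" states, assuming web3.py v6, the Lido V2 WithdrawalQueue entry point `requestWithdrawals(_amounts, _owner)`, and a standard ERC-20 `approve` on stETH; the addresses, private key, and amounts are placeholders, not the actual test setup.

```python
# batch_requests.py — a sketch for staging queue-heavy UI states.
# Assumes web3.py v6 and Lido V2-style interfaces; fill in the placeholders.
from web3 import Web3

RPC_URL = "http://localhost:8545"                                # placeholder RPC
STETH = "0x0000000000000000000000000000000000000000"             # placeholder address
WITHDRAWAL_QUEUE = "0x0000000000000000000000000000000000000000"  # placeholder address
PRIVATE_KEY = "0x" + "11" * 32                                   # placeholder test key

STETH_ABI = [{"name": "approve", "type": "function", "stateMutability": "nonpayable",
              "inputs": [{"name": "spender", "type": "address"},
                         {"name": "amount", "type": "uint256"}],
              "outputs": [{"type": "bool"}]}]
WQ_ABI = [{"name": "requestWithdrawals", "type": "function", "stateMutability": "nonpayable",
           "inputs": [{"name": "_amounts", "type": "uint256[]"},
                      {"name": "_owner", "type": "address"}],
           "outputs": [{"type": "uint256[]"}]}]

w3 = Web3(Web3.HTTPProvider(RPC_URL))
account = w3.eth.account.from_key(PRIVATE_KEY)
steth = w3.eth.contract(address=STETH, abi=STETH_ABI)
wq = w3.eth.contract(address=WITHDRAWAL_QUEUE, abi=WQ_ABI)

def send(fn):
    # Build, sign, and send a transaction from the test account
    tx = fn.build_transaction({"from": account.address,
                               "nonce": w3.eth.get_transaction_count(account.address)})
    signed = account.sign_transaction(tx)
    tx_hash = w3.eth.send_raw_transaction(signed.rawTransaction)
    return w3.eth.wait_for_transaction_receipt(tx_hash)

# 50 requests of 1 stETH each: enough to make the queue visibly "large"
# in the UI without draining the test account (sizes are arbitrary).
amounts = [Web3.to_wei(1, "ether")] * 50
send(steth.functions.approve(wq.address, sum(amounts)))
send(wq.functions.requestWithdrawals(amounts, account.address))
```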