# Slashing Protection Mechanism

1) Problem Description
2) Pertinent Issues to Solve
3) Proposed Solution
4) Open Questions

## Problem Description

Slashing is utilized in Casper to punish equivocation of block proposals and attestations. While this is a core part of the protocol, it has the downside of punishing both malicious and non-malicious users. Users optimizing for redundancy end up running 2 instances of their validator client and get slashed for making conflicting attestations or proposals.

The below is an example of a user in our channel getting slashed:

![](https://i.imgur.com/dc9aquH.png)

This was a very unfortunate incident, and we should minimize situations where something like this is possible. The majority of slashings on mainnet (excluding staking providers) have been similar cases.

## Pertinent Issues to Solve

In order to better safeguard potential stakers, there are 2 major issues to resolve:

1) Detection of unrecorded proposals/attestations.
2) Exiting the validator in the event of a positive detection.

Each validator saves a record of all its attestations and block proposals. We can leverage this to perform a quick sanity check: taking all attestations observed in the last 10 epochs that have been packed into blocks, we can perform a differential against our recorded history. If we find an attestation from a validator that has not been recorded, we immediately log a fatal message and exit the process. This ensures that users are better protected from themselves.

## Proposed Solution

In order to give users a better ability to ascertain whether they are running a slashable validator, we will implement an RPC method which provides a level of safety to a validator that is newly starting up (a sketch of the check follows the Open Questions section below):

1) The validator starts up and loads its relevant keys.
2) The validator performs an RPC call to the beacon node, checking whether it has attested to the network in the last 10 epochs.
3) Any attestations in the last 10 epochs are differentiated against the recorded validator history.
4) In the event the history differs from the on-chain record, we log a fatal message and exit.

This check provides an easy way for validators to make sure that a second, duplicate instance of a key isn't running. The majority of slashings have been due to users being careless and running 2 validators at once. For the average staker running a redundant setup, a rough estimate is that the chances of being slashed drop by about 90%.

After the validator checks that a node is synced, we perform the historical lookup over the last 10 epochs. Once we have confirmed that our saved attestation/proposal history matches, we can continue onwards.

## Open Questions

1) What is the correct number of epochs to look up (5/10/15/etc.)?
2) Should something like this be the default or behind a flag? It can be argued that a validator's safety is paramount and this should be the default.
3) Is there any mechanism to retrieve from fork choice vs. on-chain history? cc terence
4) What are the performance implications of having this in the hot validator path? A few benchmarks should be enough to alleviate this concern for 2000+ keys.
5) Is there anything more we can check to build a more secure mechanism for validators to ensure their safety?
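
Below is a minimal Go sketch of the startup differential described in the Proposed Solution. All names in it (`attestationRecord`, `beaconClient.AttestationsByValidator`, `historyStore.Has`, and the in-memory stubs) are hypothetical placeholders, not existing Prysm APIs; the intent is only to show the shape of the check: fetch the validator's on-chain attestations for the last N epochs, diff them against local history, and exit fatally on any unrecorded entry.

```go
// Minimal sketch of the startup safety check described in the Proposed Solution.
// All names here (attestationRecord, beaconClient, historyStore, the stubs) are
// hypothetical placeholders for illustration only, not existing Prysm APIs.
package main

import (
	"context"
	"log"
)

// attestationRecord is a minimal stand-in for a validator's signed attestation data.
type attestationRecord struct {
	ValidatorIndex uint64
	SourceEpoch    uint64
	TargetEpoch    uint64
}

// beaconClient abstracts the RPC call that returns every attestation by this
// validator that was packed into a block over the last N epochs.
type beaconClient interface {
	AttestationsByValidator(ctx context.Context, validatorIndex, lookbackEpochs uint64) ([]attestationRecord, error)
}

// historyStore abstracts the validator's locally recorded signing history.
type historyStore interface {
	Has(rec attestationRecord) bool
}

// verifyNoUnrecordedAttestations performs the differential: any on-chain
// attestation missing from local history implies another instance signed with
// our keys, so we log fatally and exit before signing anything ourselves.
func verifyNoUnrecordedAttestations(ctx context.Context, bc beaconClient, hs historyStore, validatorIndex, lookbackEpochs uint64) error {
	onChain, err := bc.AttestationsByValidator(ctx, validatorIndex, lookbackEpochs)
	if err != nil {
		return err
	}
	for _, rec := range onChain {
		if !hs.Has(rec) {
			// A duplicate validator instance is likely running with our keys.
			log.Fatalf("attestation targeting epoch %d observed on-chain but missing from local history; "+
				"exiting to avoid a slashable duplicate", rec.TargetEpoch)
		}
	}
	return nil
}

// --- In-memory stubs so the sketch runs end to end ---

type fakeBeacon struct{ records []attestationRecord }

func (f fakeBeacon) AttestationsByValidator(_ context.Context, _, _ uint64) ([]attestationRecord, error) {
	return f.records, nil
}

type fakeHistory struct{ seen map[attestationRecord]bool }

func (f fakeHistory) Has(rec attestationRecord) bool { return f.seen[rec] }

func main() {
	rec := attestationRecord{ValidatorIndex: 42, SourceEpoch: 9, TargetEpoch: 10}
	bc := fakeBeacon{records: []attestationRecord{rec}}
	hs := fakeHistory{seen: map[attestationRecord]bool{rec: true}}

	// With matching histories the check passes and startup would continue.
	if err := verifyNoUnrecordedAttestations(context.Background(), bc, hs, 42, 10); err != nil {
		log.Fatalf("history check failed: %v", err)
	}
	log.Println("recorded history matches on-chain attestations; safe to start")
}
```

Exiting via a fatal log rather than returning an error mirrors step 4 of the Proposed Solution: once a duplicate instance is suspected, the process should not sign anything further.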