# VC Slashing Protection

## Description

The Validator Client should ensure that it never signs a slashable attestation or a slashable block proposal. To do so, a validator only needs to follow these three simple rules:

1. Never sign two different blocks for the same epoch
2. Never sign a surround vote
3. Never sign a double vote

The first rule protects against proposer slashings; the other two protect against attester slashings.

## A naive implementation

A straightforward approach would be to use a simple DB in which we store every block we proposed and every attestation we signed. Every time we need to check a new block or attestation, we scan the DB, check whether a similar block/attestation already exists, and verify that the new block/attestation is valid against every entry in the DB. Needless to say, this is extremely inefficient, both in terms of lookup time and DB size.

## A conservative implementation

See [this issue](https://github.com/sigp/lighthouse/issues/254).

## A suggested implementation

### Stored data

It turns out the minimal information we need to store is pretty light. Given that a validator should create an attestation for every epoch but not necessarily propose a block for each epoch, I would suggest having two separate files.

```rust
struct ValidatorHistoricalBlock {
    epoch: Epoch,
    preimage_hash: Hash256,
}
```

```rust
struct ValidatorHistoricalAttestation {
    source_epoch: Epoch,
    target_epoch: Epoch,
    preimage_hash: Hash256,
}
```

> [name=pscott] "Preimage" here refers to the data we would send to the signing function. Not sure how to make this clear.

> [name=michaelsproul] The spec refers to this as the "signing root" for blocks, and I think we could use the same term without confusion for attestations (where the signed data is the `AttestationDataAndCustodyBit`'s hash tree root).

These structs could be SSZ-encoded and stored in two separate files (both files being sorted in increasing order; one by the block's `epoch` and the other by the attestation's `target_epoch`).

### Size

Storing this information would require `3 * 8B + 2 * 32B`, i.e. `88B` of data per epoch.

> [name=pscott] Maybe a bit more with SSZ encoding?

Given that there are 82_125 epochs in a year (`3_600 * 24 * 365 / (64 * 6)`), the total information stored by each validator client would grow by `82_125 * 88B`, i.e. `6.89 MiB` per year **at most**.

### (cross-client) Portability

Using a simple SSZ-encoded container could not only facilitate Lighthouse's validator portability but also cross-client portability. Having a standard format could help us test, and could reduce the amount of work needed if a user wishes to switch from one client to another while keeping the same validator keys.

### Algorithm to follow

#### For blocks:

- On validator initialization:
    1. Open and deserialize the `proposed_blocks_history` file and load it into memory.
- When a `new_block` arrives:
    1. Starting from the most recent block, find the first block in the history such that `historical_block.epoch <= new_block.epoch`.
        > [name=pscott] If no such block exists: if the history is empty -> sign the block. Else -> there was an error while pruning the DB.
        - If `new_block.epoch > historical_block.epoch`, go to step 4
        - else if `historical_block.preimage_hash == Hash256(new_block_preimage)`, continue
        - else reject the block
    2. Append the new SSZ-encoded `ValidatorHistoricalBlock` to the `proposed_blocks_history` file (make sure it works).
    3. Append the new `ValidatorHistoricalBlock` to the history kept in memory (make sure it works).
    4. Generate the block and sign it
    5. Broadcast the block

#### For attestations:

- On validator initialization:
    1. Open and deserialize the `signed_attestations_history` file and load it into memory.
        > [name=pscott] Storing it as a vector is sufficient because most new attestations should be appended to the history, not inserted into it. If insertion does happen, it should be near the end of the vector. If this proves to be a bottleneck, maybe consider a beap?
- When a `new_attest_data` arrives:
    1. Check the new attestation against the history (please have a look at this [simple PoC repo](https://github.com/pscott/attestation_slashing_protection) and my [attempt at formalizing the correctness of this algorithm](https://hackmd.io/@6Uku5jlsSVewmiY5aPVYIA/S1eoIqSKr/edit)):
        - For every target epoch higher than the `new_attestation` target epoch, check that the corresponding source epoch is higher than the `new_attestation` source epoch (checking that the `new_attestation` is not surrounded by any previous vote).
        - If the `new_attestation` target epoch is already in the historical attestation list, check that the two attestations have the same hash (checking for double votes).
        - For every target epoch between the `new_attestation` source epoch and the `new_attestation` target epoch, check that the corresponding source epoch is smaller than the `new_attestation` source epoch (checking that we are not surrounding any previous vote).
        > [name=pscott] Small optimization: rather than starting at the source epoch, we can start at the target epoch following the source epoch.
    2. Append the new SSZ-encoded `ValidatorHistoricalAttestation` to the `signed_attestations_history` file (make sure it works).
    3. Append the new `ValidatorHistoricalAttestation` to the history kept in memory (make sure it works).
    4. Generate the attestation and sign it
    5. Broadcast the attestation
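To make the two checks above more concrete, here is a rough, untested Rust sketch. The `Epoch` and `Hash256` aliases, the `Safety` enum and the `check_*` function names are made up for illustration and are not the actual Lighthouse types or API; the attestation check is written as a plain linear scan over the whole in-memory history rather than the range-limited lookups described above.

```rust
// Rough, untested sketch of the checks described above. `Epoch` and `Hash256` are
// stand-ins for the real Lighthouse types, and the function names are illustrative.
type Epoch = u64;
type Hash256 = [u8; 32];

// The two structs from the "Stored data" section above.
struct ValidatorHistoricalBlock {
    epoch: Epoch,
    preimage_hash: Hash256,
}

struct ValidatorHistoricalAttestation {
    source_epoch: Epoch,
    target_epoch: Epoch,
    preimage_hash: Hash256,
}

enum Safety {
    Safe,
    Slashable,
}

/// Block check: walking the epoch-sorted history from most recent to oldest, find the
/// first entry at or below the new block's epoch and compare.
fn check_block(
    history: &[ValidatorHistoricalBlock], // sorted by increasing epoch
    new_epoch: Epoch,
    new_preimage_hash: Hash256,
) -> Safety {
    match history.iter().rev().find(|b| b.epoch <= new_epoch) {
        // Nothing at or below this epoch: with an empty history, signing is safe. (A
        // non-empty history with no such entry would indicate a pruning error.)
        None => Safety::Safe,
        // The new block is for a strictly higher epoch than anything found: safe to sign.
        Some(prev) if prev.epoch < new_epoch => Safety::Safe,
        // Same epoch: only safe if it is the exact same block (same preimage hash).
        Some(prev) if prev.preimage_hash == new_preimage_hash => Safety::Safe,
        // Same epoch, different block: signing would be slashable.
        Some(_) => Safety::Slashable,
    }
}

/// Attestation check: a plain linear scan applying the three checks above.
fn check_attestation(
    history: &[ValidatorHistoricalAttestation], // sorted by increasing target_epoch
    source: Epoch,
    target: Epoch,
    preimage_hash: Hash256,
) -> Safety {
    for prev in history {
        if prev.target_epoch > target {
            // A previous vote with a higher target must also have a higher source,
            // otherwise the new vote would be surrounded by it.
            if prev.source_epoch <= source {
                return Safety::Slashable;
            }
        } else if prev.target_epoch == target {
            // Double vote: the same target epoch is only allowed for the same data.
            if prev.preimage_hash != preimage_hash {
                return Safety::Slashable;
            }
        } else if prev.target_epoch > source {
            // A previous target strictly between the new source and the new target must
            // have a source lower than the new source, otherwise the new vote would
            // surround it.
            if prev.source_epoch >= source {
                return Safety::Slashable;
            }
        }
        // Previous votes with a target at or below the new source cannot conflict.
    }
    Safety::Safe
}
```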
### Pruning suggestions

#### By finalized block

- One simple idea is to remove from the history all attestations that have a target epoch smaller than the current finalized epoch. However, this opens up an attack vector in which the beacon node "lies" about finalized attestations and prunes the whole DB, cancelling out any protection.

#### By epoch distance

- Another idea would be to prune by epochs, keeping only the attestations whose source epoch is at most `X` epochs away from the latest signed attestation (a rough sketch of this is given at the end of this document). `X` could be any number high enough to ensure that no new attestation could have a target epoch smaller than the first target epoch in our history. Finding such an `X` sounds arbitrary, but if big enough, it might be an acceptable solution.

## Attack vectors ?

1. The beacon node could simply ask the validator client to sign an attestation that has a really small source epoch and a really high target epoch. This would result in the validator no longer signing any attestations for a long period of time, as any subsequent vote would be "surrounded" by it and therefore refused.

## Proof ?

See [my attempt at proving the correctness of the attestation checking algorithm](https://hackmd.io/@6Uku5jlsSVewmiY5aPVYIA/S1eoIqSKr/edit).
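For concreteness, here is a rough sketch of the "by epoch distance" pruning idea above, reusing the `Epoch` alias and the `ValidatorHistoricalAttestation` struct from the previous sketch. The constant name, the value of `X`, and the exact cutoff rule (measuring the distance from the latest target epoch) are illustrative assumptions, not a finalized design.

```rust
// Illustrative value for `X`; it only needs to be "big enough", as discussed above.
const X_EPOCHS: Epoch = 4096;

/// Rough sketch of pruning by epoch distance, reusing `Epoch` and
/// `ValidatorHistoricalAttestation` from the sketch above.
fn prune_by_epoch_distance(history: &mut Vec<ValidatorHistoricalAttestation>) {
    // `history` is sorted by increasing target epoch, so the last entry is the most
    // recently signed attestation.
    let latest_target = match history.last() {
        Some(attestation) => attestation.target_epoch,
        None => return,
    };
    let cutoff = latest_target.saturating_sub(X_EPOCHS);
    // Keep only attestations whose source epoch is within `X_EPOCHS` of the latest
    // signed attestation; everything older gets dropped.
    history.retain(|attestation| attestation.source_epoch >= cutoff);
}
```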