# Ops, the hot DB burned my node
## The problem
Lighthouse currently stores one full `BeaconState` for every non-finalized epoch of any (valid?) head fork. During non-finality this can exhaust the node's disk space, and it leaves the node vulnerable to attacks that create useless forks.
Assume a single fork and 150 MB per state: at 225 epochs per day that's `225 * 150 MB = ~33 GB / day / fork`. Note that if a large portion of the validator set goes permanently offline (i.e. +2/3) it will take the inactivity leak ~30 days to restore finality (source: this [simulation](https://hackmd.io/Uq00EA8-TumhnSFKBB3OMw#Whats-the-effect-of-EJECTION_BALANCE)). So over those 30 days we would accumulate `225 * 150 MB * 30 = ~1 TB` of disk just for unfinalized states, assuming no forking.
At the same time we want to process network objects from forks without having to recompute expensive states every time. Caching a state every epoch puts an upper bound on that time.
## Potential solution
- Store fewer states (always): Reduce the frequency of unfinalized state storage and replay more blocks.
- Store fewer states (variable): Prune unfinalized states as the head progresses, for example for a given epoch only keep the state if `epoch % (max(1, head_epoch - epoch)) == 0` (see the sketch after this list).
  - Note: pruning is complicated. You need to deal with things not being there anymore and all of its implications.
- **Store all states, but as diffs**: Same number of states, but their disk footprint is dramatically lower. Note that we incur a penalty to compute and apply the diffs compared to just storing full states.
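As an illustration of the variable-frequency option, a minimal sketch of the keep-predicate (the function name is hypothetical, not existing Lighthouse code):

```rust
/// Sketch of the "store fewer states (variable)" predicate: keep a state only
/// if its epoch is divisible by its distance from the head, so recent states
/// stay dense and older states thin out as the head advances.
fn should_keep_state(epoch: u64, head_epoch: u64) -> bool {
    let distance = head_epoch.saturating_sub(epoch).max(1);
    epoch % distance == 0
}
```

For example, with `head_epoch = 100`, epoch 90 is kept (`90 % 10 == 0`) but epoch 91 is not (`91 % 9 == 1`).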
## HDiffs on the hot DB
Why not adopt the same idea from Lighthouse's freezer / cold DB to reduce disk space? The caveats of the hot DB are:
- Non-aligned starting anchor point
- Constantly moving finalized state
Note that computing a diff is expensive (~1.3 sec per diff), so we can't afford to recompute diffs every time finality advances. We must find a way to let the finalized state move forward while re-using the existing set of state diffs.
The key to unlocking this is to compute "reverse" diffs into the new finalized state.

### Modified HDiff grid
#### Un-aligned starting point
The storage strategy for any slot can be made deterministic given (see the sketch below):
- Anchor slot (initial state)
- Current finalized state
- Target slot
- Hierarchy
Note: we must handle a transition period after upgrading where there are no diffs yet, only full states every epoch.
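A rough sketch of how the strategy for a given slot could be derived deterministically from those inputs. All names, the slot-based `Strategy` enum, and the layer layout are assumptions for illustration, not Lighthouse's actual implementation:

```rust
/// Slot-based stand-in for the storage strategy (the real summary stores state roots).
#[derive(Debug, PartialEq)]
enum Strategy {
    Snapshot,
    DiffFrom(u64),   // slot of the diff base
    ReplayFrom(u64), // slot to replay blocks from
}

/// Decide how the state at `slot` should be stored. `moduli` is ordered from
/// finest to coarsest layer, e.g. `[32, 256, 2048]`.
fn strategy_for_slot(
    slot: u64,
    anchor_slot: u64,
    finalized_slot: u64,
    moduli: &[u64],
) -> Strategy {
    // The anchor (initial state) and the current finalized state are full snapshots.
    if slot == anchor_slot || slot == finalized_slot {
        return Strategy::Snapshot;
    }
    // Walk layers from coarsest to finest: a slot on a layer boundary is stored
    // as a diff against the previous point of the next coarser layer, clamped so
    // the base never precedes the finalized snapshot.
    for (i, &modulus) in moduli.iter().enumerate().rev() {
        if slot % modulus == 0 {
            let base = match moduli.get(i + 1) {
                Some(&coarser) => (slot / coarser) * coarser,
                None => finalized_slot,
            };
            return Strategy::DiffFrom(base.max(finalized_slot));
        }
    }
    // Not on any layer boundary: replay blocks from the previous finest-layer point.
    Strategy::ReplayFrom((slot - slot % moduli[0]).max(finalized_slot))
}
```

For example, `strategy_for_slot(288, 0, 160, &[32, 256, 2048])` yields `DiffFrom(256)`: slot 288 sits on the finest layer, so it diffs against the previous point of the next coarser layer.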
#### Un-aligned moving snapshot
### Migration
When upgrading to the new version we must convert all existing states into the expected hierarchy of diffs, rooted at the current finalized state.
### Put state
Now, instead of storing a snapshot on each epoch boundary, we check against the hierarchy whether we should replay, diff, or snapshot. We persist the storage strategy in the hot state summary.
```rust
pub enum StorageStrategyHot {
    /// Regenerate this state by replaying blocks on top of the state at this root.
    ReplayFrom(Hash256),
    /// Regenerate this state by applying a stored diff to the state at this root.
    DiffFrom(Hash256),
    /// The full state is stored on disk.
    Snapshot,
}
```
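A hedged sketch of what the put path could look like, reusing `StorageStrategyHot` from above. The store type, the helper methods (`strategy_for`, `put_full_state`, `put_diff`, `put_summary`), and the stand-in `Hash256` / `BeaconState` aliases are hypothetical, not Lighthouse's actual APIs:

```rust
// Hypothetical stand-ins; the real code would use Lighthouse's store and types.
type Hash256 = [u8; 32];
type BeaconState = Vec<u8>;

struct HotStore;

impl HotStore {
    fn strategy_for(&self, _slot: u64) -> StorageStrategyHot { StorageStrategyHot::Snapshot }
    fn put_full_state(&mut self, _root: Hash256, _state: &BeaconState) {}
    fn put_diff(&mut self, _root: Hash256, _base: Hash256, _state: &BeaconState) {}
    fn put_summary(&mut self, _root: Hash256, _slot: u64, _strategy: StorageStrategyHot) {}

    /// On each epoch boundary: consult the hierarchy, then store a full state,
    /// a diff, or nothing (replay), and always record the strategy in the hot
    /// state summary so reads know how to regenerate this state.
    fn put_hot_state(&mut self, root: Hash256, state: &BeaconState, slot: u64) {
        let strategy = self.strategy_for(slot);
        match &strategy {
            StorageStrategyHot::Snapshot => self.put_full_state(root, state),
            // Expensive step (~1.3 s per diff): compute the diff against the base.
            StorageStrategyHot::DiffFrom(base) => self.put_diff(root, *base, state),
            // Replay: only the summary is written; the state is rebuilt from blocks.
            StorageStrategyHot::ReplayFrom(_) => {}
        }
        self.put_summary(root, slot, strategy);
    }
}
```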
### Read state
Read the hot state summary for that state root, then (see the sketch after this list):
- If `ReplayFrom(from)`: recursively load the state at `from` and replay blocks on top.
- If `DiffFrom(from)`: recursively load the state at `from` and apply the stored diff.
- If `Snapshot`: load the full state directly.
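A sketch of the recursive load, again reusing `StorageStrategyHot` from above with placeholder types and stub helpers for diff application and block replay; this is not the actual Lighthouse code:

```rust
use std::collections::HashMap;

// Placeholders for the real Lighthouse types.
type Hash256 = [u8; 32];
type State = Vec<u8>;

struct HotStore {
    summaries: HashMap<Hash256, StorageStrategyHot>, // hot state summaries by state root
    snapshots: HashMap<Hash256, State>,              // full states by state root
    diffs: HashMap<Hash256, Vec<u8>>,                // diffs keyed by their *target* state root
}

impl HotStore {
    /// Load a hot state by following its summary chain back to a snapshot,
    /// then applying diffs / replaying blocks on the way back up.
    fn load_hot_state(&self, root: &Hash256) -> Option<State> {
        match self.summaries.get(root)? {
            StorageStrategyHot::Snapshot => self.snapshots.get(root).cloned(),
            StorageStrategyHot::DiffFrom(base) => {
                let base_state = self.load_hot_state(base)?;
                Some(apply_diff(&base_state, self.diffs.get(root)?))
            }
            StorageStrategyHot::ReplayFrom(base) => {
                let base_state = self.load_hot_state(base)?;
                Some(replay_blocks(base_state, root))
            }
        }
    }
}

// Stubs standing in for HDiff application and block replay.
fn apply_diff(base: &State, _diff: &[u8]) -> State { base.clone() }
fn replay_blocks(base: State, _target_root: &Hash256) -> State { base }
```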
## Prune / finality migration
This is probably the trickiest part. We want to:
- Keep the HDiff grid "working",
- while moving the snapshot,
- and deleting as many diffs as possible.
We want to prune all diffs that are not "reachable", i.e. that will never be read to regenerate any state that is a descendant of the new finalized checkpoint. There are two groups of diffs to prune:
- All diffs part of abandoned forks: These can easily be pruned with our existing `prune_abandoned_forks` routine.
- Some diffs part of the finalized canonical chain: These are trickier and require new logic.
Given a new `new_finalized_slot` and the previous `prev_finalized_slot`, consider the following pruning routine (see the sketch after this list):
- For each layer `i`:
  - Find the nearest layer-aligned slot at or below the new finalized slot, `x := new_finalized_slot / moduli(i) * moduli(i)` (integer division)
  - Replace the diff at `x` with a diff from `x` to `new_finalized_slot`
  - Delete all diffs in layer `i` between the previous finalized slot and `x`
- Replace state at `prev_finalized_slot` with a diff from `prev_finalized_slot` to `new_finalized_slot`
- Prune all diffs with slot < `prev_finalized_slot`
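A rough sketch of that routine, operating on a flat index of `(layer, slot)` diff entries. All names here are hypothetical, and the re-diffing steps are left as comments since they require loading the states involved:

```rust
/// Sketch of the finality pruning pass. `diffs` is a stand-in index of stored
/// diffs as (layer index, target slot) pairs; the real store keys by state root.
fn prune_on_finalization(
    moduli: &[u64],
    prev_finalized_slot: u64,
    new_finalized_slot: u64,
    diffs: &mut Vec<(usize, u64)>,
) {
    for (i, &modulus) in moduli.iter().enumerate() {
        // Nearest layer-aligned slot at or below the new finalized slot.
        let x = (new_finalized_slot / modulus) * modulus;
        // Here we would replace the diff at `x` with a diff from `x` to
        // `new_finalized_slot` (omitted: requires loading both states).
        // Delete every layer-i diff strictly between the old finalized slot and `x`.
        diffs.retain(|&(layer, slot)| {
            layer != i || slot <= prev_finalized_slot || slot >= x
        });
    }
    // Here we would replace the state at `prev_finalized_slot` with a diff from
    // `prev_finalized_slot` to `new_finalized_slot` (omitted).
    // Finally, prune all diffs strictly before the previous finalized slot.
    diffs.retain(|&(_, slot)| slot >= prev_finalized_slot);
}
```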
_The diagram below helps me explain it, but I will try to come up with a digital version that is more rigorous._

Consider a sequence of increasing moduli values `moduli = [2^5, 2^8, 2^11]`. Given a finalized slot `finalized_slot`, compute the set of values `S` where `S(i) = finalized_slot / moduli[i] * moduli[i]` (integer division). All diffs whose target state is after `finalized_slot` will be based either on a state in `S` or on some other state after `finalized_slot`. So for each layer `i`, we can prune all diffs prior to `S(i)`.
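As a worked example: with `moduli = [32, 256, 2048]` and `finalized_slot = 5000`, we get `S = [4992, 4864, 4096]`, so layer-0 diffs before slot 4992, layer-1 diffs before slot 4864, and layer-2 diffs before slot 4096 can all be pruned.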
## Issues
- If computing a diff is this slow (~1.3 sec), it will impact:
  - The time to import a block that requires persisting a diff. The attester cache should mitigate it, but it's still a concern.
  - More time computing a diff, but less data written overall. What's the time to write a full state to disk? Plus, less IO means fewer memory spikes.
  - The migration time. On Holesky that's approx 0.5 sec, looking at Elastic logs of `event.original: "Freezer migration"`.
- If we use a fixed grid and don't follow the finalized snapshot, then there's no need to recompute anything on the finalization migration.
## Misc
- When we finalize, we move the finality checkpoint to some epoch at most 4 epochs behind the head. This ensures that the set of non-finalized states / blocks after finality is small and bounded.
- We could iterate all hot blocks via state summaries and get rid of the head tracker
## Latest Design Considerations
### To clamp or not to clamp?
Two options for dealing with checkpoint sync:
1. Use the checkpoint as an irregular snapshot for starting the diff hierarchy, or
2. Shift the whole diff hierarchy to descend from the checkpoint.
The advantage of (2) is that it simplifies the optimisation where we copy diffs between hot and cold without recomputing them.
The advantage of (1) is that it's a temporary state of affairs.
### Store the slot in the storage strategy/summary?