# Validator custody design w/o backfilling

When the CGC changes, wait 18 days (the retention window) before advertising it. Because nodes are required to finalize within the retention window, we don't need to deal with CGC changes in the unfinalized section of blocks. Backfill is not needed, as the DB will be populated by forward sync or gossip. If the CGC increases again, wait another 18 days before advertising the new change.

Record in the DB the latest set of columns we are writing (derived from the NodeID and CGC). On start-up, if the column set of the current runtime differs from the persisted one, exit. This still allows big operators to clone DBs, but only if they run both the source and target nodes with `--supernode`, because then the column set is the same, namely the total set.

When a node checkpoint syncs with a fresh DB, under the current reqresp and ENR spec there MUST exist a time period where the node is not spec compliant. That's because a node that just started from checkpoint sync has no data, but its peers will assume a default CGC value. Given this reality, we accept the fact that freshly checkpoint-synced nodes will violate the spec for a bit.

To shorten the time it takes for a checkpoint-synced supernode to advertise the max CGC, we can restart the existing backfill with the higher CGC if it is still close to the head. Here "existing backfill" means the Lighthouse backfill routine that does a single pass from the starting anchor, downloading blocks plus data.

At the PeerDAS fork boundary we must handle the case explicitly, so that nodes with a lot of validators attached advertise their CGC ahead of time. As long as the backfill process started before the PeerDAS fork, it doesn't matter. If the backfill process includes the fork epoch, the node needs to wait the 18 days before advertising the CGC. With this we get instant supernodes at the fork start with validator custody.
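The persist-and-check rule above can be sketched as follows. This is a minimal illustration, not Lighthouse's actual API: `derive_columns` is a placeholder for the spec's custody-column derivation, and the names are hypothetical. It also shows why DB cloning works for supernodes: when the CGC equals the total column count, every NodeID derives the same (full) set.

```rust
use std::collections::BTreeSet;

/// Placeholder derivation of the custody column set from NodeID and CGC.
/// The real function is the spec's custody-group computation; this stand-in
/// only preserves the relevant property: CGC == num_columns yields the full set.
fn derive_columns(node_id: u64, cgc: u64, num_columns: u64) -> BTreeSet<u64> {
    (0..cgc).map(|i| node_id.wrapping_add(i) % num_columns).collect()
}

/// On start-up, compare the runtime column set against the set persisted in
/// the DB. If they differ, return an error so the node exits.
fn check_column_consistency(
    persisted: &BTreeSet<u64>,
    runtime: &BTreeSet<u64>,
) -> Result<(), String> {
    if persisted == runtime {
        Ok(())
    } else {
        Err("custody column set changed since last run; refusing to start".into())
    }
}
```

With 128 columns, two supernodes (CGC = 128) derive identical full sets, so a cloned DB passes the start-up check; any smaller CGC makes the check fail across nodes.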
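The advertise-after-retention-window rule can be sketched as below. This is a hypothetical state machine, assuming time is tracked in seconds and the retention window is exactly 18 days; the field and method names are illustrative.

```rust
/// Retention window after which a CGC change may be advertised (18 days).
const RETENTION_WINDOW_SECS: u64 = 18 * 24 * 60 * 60;

struct CgcState {
    /// CGC currently advertised in the ENR / status.
    advertised: u64,
    /// A pending CGC change: (new value, time it was recorded).
    pending: Option<(u64, u64)>,
}

impl CgcState {
    /// Record an internal CGC change. If the CGC changes again, the clock
    /// restarts: another full retention window must pass before advertising.
    fn set_internal_cgc(&mut self, cgc: u64, now: u64) {
        if cgc != self.advertised {
            self.pending = Some((cgc, now));
        }
    }

    /// The CGC to advertise at time `now`. Promotes a pending change only
    /// once the retention window has elapsed since it was recorded.
    fn advertised_cgc(&mut self, now: u64) -> u64 {
        if let Some((cgc, updated_at)) = self.pending {
            if now.saturating_sub(updated_at) >= RETENTION_WINDOW_SECS {
                self.advertised = cgc;
                self.pending = None;
            }
        }
        self.advertised
    }
}
```

Callers would read the advertised value whenever the ENR or status message is rebuilt; before day 18 they keep seeing the old (default) CGC even though the internal CGC already increased.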
### Cases

**Increasing CGC post PeerDAS fork**

| Time | Connected vals | Internal CGC | Advertised CGC | Action |
| - | - | - | - | - |
| day 0 | 0 | default | default | Node is synced to head |
| day 0 + Δ | 1000 | 128 | default | Node records CGC update time |
| day 18 | 1000 | 128 | 128 | Once the retention window has elapsed since the CGC update time, the new value is advertised |

**Node starting from checkpoint sync with empty DB**

| Time | Connected vals | Internal CGC | Advertised CGC | Action |
| - | - | - | - | - |
| day 0 | 0 | default | default | Backfill starts for CGC of default |
| day 0 + Δ | 1000 | 128 | default | Backfill restarts with CGC = 128 |
| After a few days | 1000 | 128 | 128 | Backfill completed |

**Node at the fork boundary**

| Time | Connected vals | Internal CGC | Advertised CGC | Action |
| - | - | - | - | - |
| before fork | 1000 | n/a | n/a | Node is synced to head |
| before fork + Δ | 1000 | 128 | 128 | Node immediately advertises the higher CGC |
| At fork epoch | 1000 | 128 | 128 | Can instantly participate as a supernode |

### Roadmap

With these assumptions we don't need any new backfill mechanism. We need to:

- carefully persist the new consistency data to disk,
- make all codepaths read the CGC from a Mutex,
- restart the backfill under certain conditions,
- cache the to-advertise CGC somewhere and update the ENR + status at that moment.

### Drawbacks

Supernodes that checkpoint sync will take ~18 days (maybe less) to become useful supernodes to the network. If a large portion of nodes in the network need to wipe their DBs and checkpoint sync, we will lose all those supernodes. Because most of the network is long running, I don't believe this is a big concern.
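The "restart the backfill under certain conditions" item from the roadmap can be sketched as a simple heuristic: restart with the higher CGC only while the in-flight pass is still close to the head, so little work is lost. The structure and threshold below are assumptions for illustration, not Lighthouse's actual logic.

```rust
/// State of the single-pass backfill routine, which walks backwards
/// from an anchor near the head.
struct Backfill {
    anchor_slot: u64,  // slot the backfill started from (near head)
    current_slot: u64, // slot it has reached so far, walking backwards
    cgc: u64,          // CGC the pass is downloading columns for
}

/// Decide whether an in-flight backfill should be restarted with a higher
/// CGC. Hypothetical threshold: restart only while less than half of the
/// pass (down to slot 0 here, for simplicity) has been completed.
fn should_restart(backfill: &Backfill, new_cgc: u64) -> bool {
    if new_cgc <= backfill.cgc {
        return false; // never restart for an equal or lower CGC
    }
    let done = backfill.anchor_slot - backfill.current_slot;
    let total = backfill.anchor_slot;
    done * 2 < total
}
```

A pass that has only covered a small fraction of its range restarts with the new CGC, matching the checkpoint-sync table above; a nearly finished pass completes first and a follow-up pass handles the difference.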