# Sept 8th updates
## Selective pivot point geth
### Goal
Adapt the geth client to fast sync to a specified block height and pivot to full sync w/ statediffing from there on
Reminder: snap sync is not considered because it would require snapshots at the specified block heights, which is improbable at any height and impossible at heights below where snap sync was introduced. An archive node, by contrast, has the data to fast sync to any point in its history.
### Issues
* Fast sync normally only operates at 64 blocks back from the head, where all full nodes on the p2p network will have the data
* Practically, we will have to peer directly with our archive
* Fast sync updates pivot point every time the chain advances 64 blocks
* Overhead of performing this accumulating, iterative subtrie sync every 64 blocks until a full trie is "topped off" within such a window
* Fast sync was removed in 1.10.14
* Overhead for maintaining the reintroduced fast sync capabilities on-top of upstream geth
* Conflicts across many core packages and breaks some interfaces
* core/blockchain.go
* eth/downloader/*
* eth/handler.go
* eth/sync.go
* trie/sync.go
* trie/sync_bloom.go
Alternative:
Export the full sync initial state from an offline levelDB to another levelDB
Pros:
* Remove p2p overhead
* Less complex
* Simpler to implement optimally
* Simpler to maintain
* Closer to what we need, e.g. fast sync's constantly adjusting pivot every 64 blocks is counterproductive to our goal and adds a lot of additional complexity which has to be cut out and/or refactored around.
* Doesn't burden a live archive node with tons of requests
Cons:
* Requires a copy of an archive
* Requires running two binaries instead of one
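The core of the alternative is a walk of the state trie at the target height, copying every reachable node from the source database to the destination. A minimal sketch of that copy loop, using plain maps as stand-ins for the two LevelDB instances and an explicit child index in place of RLP node decoding (all names hypothetical):

```go
package main

import "fmt"

// Store is a minimal key-value interface standing in for LevelDB
// (a real exporter would read geth's chaindata via a LevelDB binding).
type Store map[string][]byte

// children maps a node hash to its child hashes; in the real MPT the
// children would be parsed out of the RLP-encoded node body.
type children map[string][]string

// copyState walks the trie from root, copying every reachable node
// from src to dst -- the offline analogue of fast sync's state download.
func copyState(root string, src, dst Store, kids children) int {
	copied := 0
	stack := []string{root}
	for len(stack) > 0 {
		h := stack[len(stack)-1]
		stack = stack[:len(stack)-1]
		if _, done := dst[h]; done {
			continue // already copied (shared subtrie)
		}
		dst[h] = src[h]
		copied++
		stack = append(stack, kids[h]...)
	}
	return copied
}

func main() {
	src := Store{"root": []byte("r"), "a": []byte("A"), "b": []byte("B")}
	kids := children{"root": {"a", "b"}}
	dst := Store{}
	fmt.Println(copyState("root", src, dst, kids)) // 3 nodes copied
}
```

Because the source is an offline copy, the walk can proceed as fast as disk allows, with no request scheduling, retries, or peer management.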
### Sync modes recap
Fast sync:
* n = head of chain - 64
* Download all headers, transactions, uncles, receipts, and total difficulties for block 0 to n
* Verify the PoWs in the headers
* Attempt to download the entire state trie at block n
* Every time the head of chain advances 64 blocks, n is incremented by 64
* Each state sync attempt accumulates a bunch of subtries which won't get updated within the next 64 block window
* Uses a bloom filter to limit reads to check if subtries have already been downloaded
* Eventually, after a bunch of these partial syncs, most of the current state is accumulated and a complete trie can be "topped off" before the 64 block window moves again
* At this point, the client switches to full sync mode
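The pivot rule above reduces to a simple function of the head height; a sketch (illustrative only, not geth's actual downloader code, where the distance is the `fsMinFullBlocks` constant):

```go
package main

import "fmt"

const pivotDistance = 64 // fast sync targets state this far behind the head

// pivot returns the block height fast sync downloads state for.
// As the head advances, re-evaluating this moves the pivot forward,
// which is why each 64-block window restarts the subtrie accumulation.
func pivot(head uint64) uint64 {
	if head < pivotDistance {
		return 0 // chain too short: pivot stays at genesis
	}
	return head - pivotDistance
}

func main() {
	fmt.Println(pivot(1_000_000)) // 999936
	fmt.Println(pivot(32))        // 0
}
```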
Snap sync:
* Similar to fast sync except
* Instead of downloading entire state trie (and linked storage tries), it downloads only the set of key:value pairs stored in those tries (a "snapshot").
* It doesn't use a bloom filter for subtries since it doesn't download subtries; it materializes the trie locally from the snapshot
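The key difference is that the state commitment is regenerated locally from flat key:value data rather than fetched node by node. A toy sketch of that idea, using a flat hash over sorted pairs where real snap sync rebuilds the Merkle-Patricia trie and checks its root against the header (all names hypothetical):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// materialize derives a state commitment locally from a flat snapshot of
// key:value pairs -- no per-node network round trips, unlike fast sync.
// Sorting first makes the result independent of download order.
func materialize(snapshot map[string]string) [32]byte {
	keys := make([]string, 0, len(snapshot))
	for k := range snapshot {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	h := sha256.New()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte(snapshot[k]))
	}
	var root [32]byte
	copy(root[:], h.Sum(nil))
	return root
}

func main() {
	root := materialize(map[string]string{"acct1": "10", "acct2": "20"})
	fmt.Printf("%x\n", root[:4])
}
```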
Full sync:
* Start at the genesis state or the state reached by fast or snap sync (n)
* Download block (header + uncles + transactions) at height n+1
* Validate PoW
* Apply the transactions on top of the state at n to produce the state and receipts at n+1
* Repeat
* Full sync prunes away (dereferences) historic states below 128 blocks behind head
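The full sync loop above can be sketched as a fold of blocks over state. This is a deliberately toy model (a running counter in place of a real EVM state trie, and no PoW check), just to show the shape of the loop:

```go
package main

import "fmt"

// Block stands in for header + uncles + transactions.
type Block struct{ Number, TxCount uint64 }

// fullSync replays blocks n+1.. on top of the state at n. Real geth
// verifies PoW, executes the transactions, and compares the resulting
// state and receipt roots against the header; here state is a counter.
func fullSync(stateAtN uint64, blocks []Block) uint64 {
	state := stateAtN
	for _, b := range blocks {
		state += b.TxCount // stand-in for applying b's transactions
	}
	return state
}

func main() {
	blocks := []Block{{Number: 101, TxCount: 2}, {Number: 102, TxCount: 3}}
	fmt.Println(fullSync(10, blocks)) // 15
}
```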
Archive node:
* Not a sync mode
* Full sync mode but does not dereference historic states
Light clients:
* Download headerchain only
* Never transitions to full sync; cannot perform state transitions or serve any state/tx/receipt/log data
* But the state trie, tx trie, and rct trie roots in the header can be used to verify Merkle proofs for those respective tries
* Relies on CHTs (Canonical Hash Tries)
* LES: verify PoWs
* ULC: doesn't verify PoWs