# FFG on top of the Lukso setup
Introduction to FFG:
https://www.adiasg.me/2020/04/09/casper-ffg-in-eth2-0.html
## Problem backstory
At this very moment (29.07.2021) we are facing an issue with block synchronization. We have a mechanism (the pending queue) that confirms blocks only if they match on both sides, and then inserts them into the canonical chain.
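To make the failure mode concrete, here is a minimal sketch in Go of the matching rule the pending queue enforces. The type and field names are hypothetical, not the real Lukso/Orchestrator code:

```go
package main

import "fmt"

// Hypothetical shapes for this sketch; the real Lukso types differ.
type VanguardBlock struct {
	Slot        uint64
	PandoraHash string // execution-block hash the consensus block commits to
}

type PandoraHeader struct {
	Slot uint64
	Hash string
}

// pendingQueue holds blocks from one side until the matching block from the
// other side arrives; only matched pairs reach the canonical chain.
type pendingQueue struct {
	vanguard map[uint64]VanguardBlock
	pandora  map[uint64]PandoraHeader
}

// tryConfirm succeeds only when both sides are present for a slot and agree
// on the execution hash; otherwise the slot stays pending indefinitely.
func (q *pendingQueue) tryConfirm(slot uint64) bool {
	v, okV := q.vanguard[slot]
	p, okP := q.pandora[slot]
	if !okV || !okP {
		return false // one side missing: the stall we observed at slot 3
	}
	return v.PandoraHash == p.Hash
}

func main() {
	q := &pendingQueue{
		vanguard: map[uint64]VanguardBlock{}, // the Vanguard block never arrived
		pandora: map[uint64]PandoraHeader{
			3: {Slot: 3, Hash: "0x01"}, // illustrative hash
		},
	}
	fmt.Println(q.tryConfirm(3)) // false: slot 3 never confirms, peers' blocks get rejected
}
```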
## Problem
Blocks from peers were rejected and the chain split. Only solo mining happened, on the machines where validators were progressing.
The first rejected block is the Vanguard block at slot 3, with hash `0x43aeeffde05ea18f5bb74670ac253349baf08e6f9aa4672fd43032c4ccb1d89b`.
It is present in the logs of multiple machines, but all of them (except the producer?) rejected this block due to the pending queue.
The producer of this block is known.
**The Pandora block for slot 3 was propagated to boonode-d, but the corresponding Vanguard block never arrived.**
## Cause of the stall/split
The pending queue.
## Second thoughts
- we already run FFG, which can be verified on-chain via a supermajority link of length 2, and full finality can be established given the whole history (a sketch of the supermajority check follows after this list)
- a validator should take blocks from its peers and not check pending validity; FFG with attestations is enough to reach sync
- FFG allows finding the chain head based on attestations and block production; it should be the source of truth
- we should support reorg events for consensus info: when Vanguard downloads a new block from the past, it affects the proposal and attestation schedule both in geth and in Vanguard. I suggest the sync modes `Warp, Light, Full, Execution`, where:
a) Warp -> find finality based on the supermajority link
b) Light -> get headers only, do not compare sharding info against Pandora
c) Full -> get full blocks on both sides, still without comparing sharding info against Pandora
d) Execution -> compare sharding and data availability on the Pandora chain against the Vanguard chain
Worth mentioning:
The syncing phases stack on top of one another, so if a Full sync must happen, you download the history as Light first anyway; after finality you fetch the complete block data. Light walks from the highest block number down to the highest known ancestor, and Full builds up from that point (see the sketch below).
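A minimal sketch of these modes as an ordered enum. The `SyncMode` type and `phasesFor` helper are names I made up for illustration, not from the codebase:

```go
package main

import "fmt"

// SyncMode names come from the proposal above; the layering logic below is a
// sketch of "phases stack on top of one another", not a real implementation.
type SyncMode int

const (
	Warp      SyncMode = iota // find finality via the supermajority link
	Light                     // headers only, no sharding comparison against Pandora
	Full                      // full blocks on both sides, still no sharding comparison
	Execution                 // compare sharding/data availability, Pandora vs Vanguard
)

func (m SyncMode) String() string {
	return [...]string{"Warp", "Light", "Full", "Execution"}[m]
}

// phasesFor returns the phases a node runs through to reach the target mode:
// asking for Full implies running Warp and Light first, and so on.
func phasesFor(target SyncMode) []SyncMode {
	var phases []SyncMode
	for m := Warp; m <= target; m++ {
		phases = append(phases, m)
	}
	return phases
}

func main() {
	fmt.Println(phasesFor(Full)) // [Warp Light Full]
}
```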
- when a validator signs a block, it takes all the previous attestations and tries to fit them in based on fork choice
- validators are selected for attestations and assigned to committees; there must be at least one attestation from each validator per epoch within its selected committee
- signature validation based on minimal consensus info is enough; the Orchestrator pending queue is not a good idea
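As promised above, a minimal sketch of the 2/3 supermajority-link check at the heart of FFG justification. `Checkpoint`, `Link`, and the flat stake accounting are simplifications; real clients weigh per-validator effective balances:

```go
package main

import "fmt"

// Checkpoint is an (epoch, block-root) pair, as in Casper FFG.
type Checkpoint struct {
	Epoch uint64
	Root  string
}

// Link is a source->target vote carried by attestations.
type Link struct {
	Source, Target Checkpoint
}

// isSupermajorityLink reports whether the attesting stake for a link reaches
// the FFG threshold of 2/3 of total active stake. Integer arithmetic avoids
// rounding: attested/total >= 2/3 iff 3*attested >= 2*total.
func isSupermajorityLink(attestedStake, totalStake uint64) bool {
	return 3*attestedStake >= 2*totalStake
}

func main() {
	link := Link{
		Source: Checkpoint{Epoch: 10, Root: "0xaa"},
		Target: Checkpoint{Epoch: 11, Root: "0xbb"},
	}
	// 43 of 64 equal-stake validators attested: 3*43 = 129 >= 2*64 = 128 -> justified.
	fmt.Println(link, isSupermajorityLink(43, 64)) // true
}
```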
## Possible solution
- `Execution` sync mode should be completed before a validator attests. Attestations are enough to prove the validity of execution: breaking the system requires 2/3 of all validators to be malicious. A malicious attester gets slashed, and a block with invalid execution that some peers accept will never be justified or finalized as long as 2/3 of the validators stay honest (e.g. with 64 equal-stake validators, justification needs attestations from at least 43 of them). If 2/3 are malicious, we are doomed anyway.
- `Networking` we can improve networking by exposing information about `pandora` availability (enode or ENR) on the Vanguard `peer` server. Anyone could then ask `showMeYourPandora`, which would return the `enr:-` record of Pandora and allow finding the execution block of a particular fork. This can be done on the fly, without any Orchestrator at all (a minimal handler sketch follows below).
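A minimal handler sketch for the `showMeYourPandora` idea, assuming it is served over HTTP next to the existing Vanguard p2p endpoint. The path, port, and payload fields are assumptions:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// pandoraInfo is the payload a Vanguard node could serve so that any peer
// asking "showMeYourPandora" learns how to reach the paired execution node.
// Field names are assumptions for this sketch.
type pandoraInfo struct {
	Enode string `json:"enode"`
	ENR   string `json:"enr"`
}

func main() {
	node := pandoraInfo{
		Enode: "enode://...", // taken from pandora's admin.nodeInfo
		ENR:   "enr:-...",    // discovery record of the paired pandora
	}
	// Served next to the existing vanguard peer endpoints; the path is illustrative.
	http.HandleFunc("/p2p/pandora", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(node)
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}
```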
## Possible concerns
- `Data availability` - if we lose Pandora data, you won't be able to restore its state. If peers on Pandora and Vanguard don't match or don't hold the same data (fork scenarios), then…
- `Rollback of execution` -> it will happen regardless. There might be dozens of blocks on Pandora that get rolled back to the latest finalized slot due to the FFG finality algorithm.
- `Orchestrator` should be responsible for setup and management, not maintenance. IMHO we should leave the MinimalConsensus flow as it is, since it is our own design. The pending queue should be limited to `Execution` verification. Peer information can be fetched on demand and cached: in Pandora via `admin.nodeInfo` over rpc/ipc, and in Vanguard via `curl 127.0.0.1:8080/p2p` (see the sketch after this list).
- `Double sign` -> if someone signs a Pandora block twice and pushes both versions to their peers, it could corrupt data. At this very moment my feeling is that the behavior will be the same as when Pandora is not present: the nodes will just split, and the double-signer will get slashed.
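A sketch of fetching that peer information on demand from both clients, as suggested in the `Orchestrator` point above. It uses geth's standard `admin_nodeInfo` RPC over IPC (the IPC path here is made up) and the Vanguard endpoint quoted above:

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	// Pandora side: admin.nodeInfo over IPC (path is an assumption).
	client, err := rpc.Dial("/data/pandora/geth.ipc")
	if err != nil {
		log.Fatal(err)
	}
	var info map[string]interface{}
	if err := client.Call(&info, "admin_nodeInfo"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("pandora enode:", info["enode"])

	// Vanguard side: the /p2p endpoint mentioned above.
	resp, err := http.Get("http://127.0.0.1:8080/p2p")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println("vanguard p2p info:", string(body))
}
```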