# Optimistic recursive ZK

Use a smart contract to create an optimistic ZK application. Staked relays submit proofs on behalf of users. The proofs are not verified immediately; instead they are emitted as events and their hashes stored onchain. A batch of proofs is processed by taking the public signals and providing them as inputs to a **state transition proof**. The application logic must be implemented in this single proof. The application state should be stored onchain as a single root value containing all other values.

Approximate gas costs (not including intrinsic cost):

- `queue()`: 36k gas
- `processProofs()`: 300-500k gas

So if each batch can have a maximum of 100 proofs we get an average of ~41,000 gas per proof to operate the system at peak capacity (36k to queue plus ~5k of amortized processing). At 1000 proofs per batch the amortized processing cost drops to ~500 gas, for roughly 36,500 gas per proof.

If the state transition proof changes a Merkle tree with depth 32 for each queued proof we have ~13M constraints for a batch size of 1000. This is < 2 minutes of proving time for Groth16 using rapidsnark. The gas cost of operating such a system is 10-100x lower than a system that verifies each proof and updates each data structure onchain.

```solidity
contract Sequencer {
  // isStaked() and verify() are assumed to be implemented elsewhere
  // (e.g. a staking module and a generated Groth16 verifier)

  mapping (uint => address) proofCreator;
  mapping (uint => uint[100]) proofBatch;
  uint batchIndex = 1; // start at index 1
  uint batchLength;
  uint constant maxHashchainLength = 100;

  // Set when the batch is processed
  mapping (uint => uint) batchHashchain;

  // The state root should include _all_ of the data for the
  // application
  mapping (uint => uint) stateRoots;
  uint latestProcessedBatch;

  event ProofQueued(uint[] publicSignals, uint[8] proof);

  constructor() {
    // mark the batch with index 0 as processed to simplify later logic
    batchHashchain[0] = 1;
  }

  // optimistically add a ZK proof to the queue
  function queue(uint[] memory publicSignals, uint[8] calldata proof) public {
    // publicSignals should include a uint indicating type of proof
    require(isStaked(msg.sender));
    if (batchLength == maxHashchainLength) {
      batchIndex++;
      batchLength = 0;
    }
    uint proofHash = uint(keccak256(abi.encodePacked(publicSignals, proof)));
    proofBatch[batchIndex][batchLength++] = proofHash;
    proofCreator[proofHash] = msg.sender;
    emit ProofQueued(publicSignals, proof);
  }

  // This is used to process all the batched proofs and update
  // the state root
  function processProofs(uint _batchIndex, uint[] calldata publicSignals, uint[8] calldata proof) public {
    require(isStaked(msg.sender));
    require(verify(publicSignals, proof));
    require(batchHashchain[_batchIndex] == 0, 'batch is already processed');
    require(_batchIndex != 0);
    require(batchHashchain[_batchIndex - 1] != 0, 'previous batch is not processed');
    // publicSignals[0] should be a hashchain of the public signals
    // this must include all public signals for _valid_ proofs
    batchHashchain[_batchIndex] = publicSignals[0];
    // publicSignals[1] should be the old state root, i.e. the root
    // stored for the previous batch
    require(stateRoots[_batchIndex - 1] == publicSignals[1]);
    // publicSignals[2] should be the new state root
    stateRoots[_batchIndex] = publicSignals[2];
    latestProcessedBatch = _batchIndex;
  }

  // Prove that proofs were processed incorrectly
  // This can be either
  // 1. Processing publicSignals for an invalid proof
  // 2. Including publicSignals that do not exist
  // 3. Not including publicSignals that do exist
  // 4. Including publicSignals in the wrong order
  //
  // In any of these cases the state root should be rolled back
  // and the proposer slashed
  //
  // For 1 and 4 we can use a simple ZK proof to show that some
  // public signals exist at an index
  //
  // For 2 and 3 we need to build the hashchain for the batch
  // onchain to do the fraud proof
  //
  // Because we store the keccak hash of the proof onchain
  // we need to include the full publicSignals/proof data
  // as calldata because we can't efficiently do keccak inside a proof.
  function fraudProof(
    uint _batchIndex,
    uint[] calldata publicSignals,
    uint[8] calldata proof
  ) public {
    // logic
  }
}
```
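The heaviest part of this fraud proof is cases 2 and 3, where the batch hashchain has to be rebuilt onchain from full calldata and compared against the committed value. Below is a minimal sketch of what that rebuild could look like. It relies on several assumptions that the contract above does not pin down: each queued proof's `publicSignals[0]` is a hash of its remaining signals (the convention used in the pseudo-recursive variant later in this document), the hashchain inside the state transition proof starts at zero and is built with Poseidon, every queued proof in the batch is valid (case 1 needs separate handling), and `_rollbackTo`/`_slash` are hypothetical helpers.

```solidity
import { PoseidonT3 } from 'poseidon-solidity/PoseidonT3.sol';

// Sketch only: the storage here mirrors the Sequencer contract above.
contract FraudProofSketch {
  struct QueuedProof {
    uint[] publicSignals;
    uint[8] proof;
  }

  mapping (uint => uint[100]) proofBatch;
  mapping (uint => uint) batchHashchain;

  // Rebuild the hashchain for a batch from the full queued data and
  // compare it to the value the proposer committed to (cases 2 and 3).
  function fraudProofHashchain(uint _batchIndex, QueuedProof[] calldata proofs) public {
    require(batchHashchain[_batchIndex] != 0, 'batch is not processed');
    uint chain = 0; // assumed initial value of the hashchain in the circuit
    for (uint i = 0; i < proofs.length; i++) {
      // the supplied data must match the keccak hash stored when the
      // proof was queued
      uint proofHash = uint(keccak256(abi.encodePacked(
        proofs[i].publicSignals, proofs[i].proof
      )));
      require(proofBatch[_batchIndex][i] == proofHash, 'proof was not queued');
      // chain the per-proof public signal hashes with a snark-friendly
      // hash so the result is comparable to the commitment the state
      // transition proof was supposed to produce
      chain = PoseidonT3.hash([chain, proofs[i].publicSignals[0]]);
    }
    // the caller must supply the whole batch (100 = maxHashchainLength),
    // not just a prefix of it
    require(proofs.length == 100 || proofBatch[_batchIndex][proofs.length] == 0, 'incomplete batch data');
    // if the rebuilt chain differs from the committed one, the proposer
    // included signals that were never queued or omitted queued ones
    require(chain != batchHashchain[_batchIndex], 'batch was processed correctly');
    _rollbackTo(_batchIndex - 1); // hypothetical: mark later roots as unprocessed
    _slash(_batchIndex);          // hypothetical: slash the proposer's stake
  }

  function _rollbackTo(uint batch) internal { /* ... */ }
  function _slash(uint batch) internal { /* ... */ }
}
```

Storing only keccak hashes keeps `queue()` cheap, at the cost of this heavier calldata-based path; cases 1 and 4 can instead be handled with a targeted ZK proof that some public signals exist at a given index in the committed hashchain.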
## Finality

We can avoid waiting a long time for finality by having staked entities attest to the validity of a batch. A staked account can logically say "this state root is valid or I'll be slashed". As more entities do this we can calculate the amount of ether staked on the finality of a state root.

Any batch can be proven invalid at any time. Say that we're on batch 1000. If batch 950 is proven to be invalid, all future state roots should be marked as unprocessed and all entities that attested to batches 950-1000 slashed. Batches 950-1000 will need to be recalculated and re-submitted. We end up with a rolling balance of ether securing the state.
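A sketch of what this attestation mechanism could look like onchain is below. It is illustrative only and makes assumptions not stated above: stake is deposited directly into this contract, `attest` records who has put their stake behind a processed batch, and `_slashFrom` is the hypothetical hook the fraud proof logic would call to slash every attestation from the invalidated batch onward.

```solidity
// Sketch of attestation-based finality; names and storage layout are
// illustrative and not part of the Sequencer contract above.
contract FinalityAttestation {
  mapping (address => uint) public stake;
  // who attested to each batch, and how much ether backs it
  mapping (uint => address[]) attesters;
  mapping (uint => uint) public attestedStake;
  // mirrors Sequencer.latestProcessedBatch
  uint public latestProcessedBatch;

  function deposit() public payable {
    stake[msg.sender] += msg.value;
  }

  // "this state root is valid or I'll be slashed"
  function attest(uint _batchIndex) public {
    require(stake[msg.sender] > 0, 'not staked');
    require(_batchIndex != 0 && _batchIndex <= latestProcessedBatch, 'batch is not processed');
    attesters[_batchIndex].push(msg.sender);
    attestedStake[_batchIndex] += stake[msg.sender];
  }

  // Called when batch `invalidBatch` is proven invalid: every
  // attestation to that batch or a later one is slashed and the later
  // roots must be recalculated and re-submitted. A production version
  // would need to bound or batch these loops to keep gas manageable.
  function _slashFrom(uint invalidBatch) internal {
    for (uint b = invalidBatch; b <= latestProcessedBatch; b++) {
      address[] storage accounts = attesters[b];
      for (uint i = 0; i < accounts.length; i++) {
        stake[accounts[i]] = 0;
      }
      attestedStake[b] = 0;
    }
    latestProcessedBatch = invalidBatch - 1;
  }
}
```

The `attestedStake` value for a batch gives the rolling balance of ether securing that state root.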
# Pseudo recursive ZK

We can reduce the cost of operating the system by defining a **state transition proof** and allowing users to queue proofs, but take out the optimistic part by verifying each proof as it is queued. With this system we only incur the cost of verifying the proofs; we don't have to manage data structures onchain. Users can make proofs against the `stateRoot`.

```solidity
import { PoseidonT3 } from 'poseidon-solidity/PoseidonT3.sol';

contract Sequencer {
  // verify() is assumed to be implemented elsewhere (a generated
  // Groth16 verifier)

  // a running hash of the public signals queued in each batch
  mapping (uint => uint) batchHashchain;
  uint batchIndex = 1; // start at index 1
  uint batchLength;
  uint constant maxHashchainLength = 100;
  mapping (uint => bool) processed;

  // Set when the batch is processed
  uint stateRoot;
  uint latestProcessedBatch;

  event ProofQueued(uint[] publicSignals);

  constructor() {
    // mark the batch with index 0 as processed to simplify later logic
    processed[0] = true;
  }

  // add a ZK proof to the queue
  // there should be multiple functions for different types
  // of proofs (see the sketch at the end of this document)
  function queue(uint[] calldata publicSignals, uint[8] calldata proof) public {
    // publicSignals should include a uint indicating type of proof
    require(verify(publicSignals, proof));
    // publicSignals[0] _must_ be the hash of all other public signals
    uint pubSignalsHash = publicSignals[0];
    if (batchLength == maxHashchainLength) {
      batchIndex++;
      batchLength = 0;
    }
    batchHashchain[batchIndex] = PoseidonT3.hash([batchHashchain[batchIndex], pubSignalsHash]);
    batchLength++;
    emit ProofQueued(publicSignals);
  }

  // This is used to process all the batched proofs and update
  // the state root
  function processProofs(uint _batchIndex, uint[] calldata publicSignals, uint[8] calldata proof) public {
    require(verify(publicSignals, proof));
    require(!processed[_batchIndex], 'batch is already processed');
    require(_batchIndex != 0);
    require(processed[_batchIndex - 1], 'previous batch is not processed');
    // publicSignals[0] should be a hashchain of the public signals
    // of each included proof
    require(batchHashchain[_batchIndex] == publicSignals[0]);
    // publicSignals[1] should be the old state root
    require(stateRoot == publicSignals[1]);
    // publicSignals[2] should be the new state root
    stateRoot = publicSignals[2];
    processed[_batchIndex] = true;
    latestProcessedBatch = _batchIndex;
  }
}
```

## Finality

This system is instantly final and doesn't require staking.
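As noted in the `queue` comment in the contract above, a real deployment would have one queue function per proof type, each bound to its own verifier, and would check that a user's proof was generated against the current `stateRoot` before accepting it into a batch. Here is a sketch under those assumptions; the verifier interfaces and the convention that `publicSignals[1]` carries the state root the user proved against are hypothetical, not something the contracts above define.

```solidity
// Hypothetical verifier interfaces for two proof types; a real
// deployment would point these at verifier contracts generated from
// each circuit.
interface ISpendVerifier {
  function verifyProof(uint[] calldata publicSignals, uint[8] calldata proof) external view returns (bool);
}

interface ISignupVerifier {
  function verifyProof(uint[] calldata publicSignals, uint[8] calldata proof) external view returns (bool);
}

contract TypedQueue {
  ISpendVerifier spendVerifier;
  ISignupVerifier signupVerifier;
  uint stateRoot;

  // one queue function per proof type, each with its own verifying key
  function queueSpend(uint[] calldata publicSignals, uint[8] calldata proof) public {
    // assumed convention: publicSignals[1] is the state root the user
    // generated the proof against
    require(publicSignals[1] == stateRoot, 'stale state root');
    require(spendVerifier.verifyProof(publicSignals, proof), 'invalid proof');
    _enqueue(publicSignals);
  }

  function queueSignup(uint[] calldata publicSignals, uint[8] calldata proof) public {
    require(publicSignals[1] == stateRoot, 'stale state root');
    require(signupVerifier.verifyProof(publicSignals, proof), 'invalid proof');
    _enqueue(publicSignals);
  }

  // hashchain accumulation as in the Sequencer contract above
  function _enqueue(uint[] calldata publicSignals) internal { /* ... */ }
}
```

Because `stateRoot` changes whenever a batch is processed, an exact-match check like this rejects proofs generated just before a batch lands; accepting any of the last few roots (e.g. a small mapping of recent roots) is one way around that.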