# Recursion interface

## Problem

We want to be able to support recursion in Noir, covering both IVC and PCD use cases. Without this, we cannot have:

- Noir protocol circuits (which use recursion to verify previous kernel circuit proofs)
- Rollups on Aztec (to verify a tree of proofs)

We currently have a recursion interface, but the user-facing side needs to be revamped, and it does not actually allow rollups on Aztec, even though the underlying barretenberg library has the constructs to do so.

### Proposed changes - TLDR

- Remove the input aggregation object from the Noir code interface
- Allow nested aggregation objects to be used, so that we can verify a proof which itself verifies a proof
- Remove the need for the user to convert proofs from bytes to field elements
- Replace the `is_recursive` flag with a `#[recursion_friendly]` macro

**How does folding fit into all of this?**

It's not clear that the current interface is usable for folding instances, so that's future Aztec's problem.

## Preliminary

### Dual proving system

Many proof systems that support recursion, and eventually verify their final proof on Ethereum, use a dual proving system setup. They have two proving systems: one (the inner) creates proofs which can be efficiently verified inside a SNARK, and the other (the outer) creates proofs which can be efficiently verified in a Solidity contract.

We have the same duality. A small detail: for us, the main difference between the two proving systems is the hash function used in the transcript. In other stacks, the difference can be much more drastic, e.g. one could be using STARKs while the other uses Groth16.

**Inner proving system:** let's call the proving system whose proofs are easy to verify efficiently in another SNARK the inner proving system.

**Outer proving system:** let's call the proving system whose proofs are easy to verify efficiently in Solidity the outer proving system.

#### Why is this important in general?
Let's say that you have a massive circuit/program/function, so massive that you actually cannot prove it:

![image](https://hackmd.io/_uploads/S1Nvpeprp.png)

We can incrementally prove parts of the program, or **I**ncrementally **V**erify parts of a **C**omputation. This allows you to create proofs for computations that you otherwise could not, by proving small parts of the program at a time.

## Theoretical API -- Take 1

Okay, let's first define a theoretical function for verifying a proof in a circuit:

```
fn verify_proof(
    current_state : State,
    circuit_id : CircuitId,
    proof : Proof,
    proof_verifies_another_proof : bool
) -> State
```

> This is not the actual function, but it's close enough to what we want to explain the main concepts.

**Discussion**

- There is always a current state; think of it as an object that keeps track of all of the proofs we have accumulated so far. If no proof has been verified yet, the current state is "zero".
- The proof is used to update the current state, producing a new state.
- The circuit id is a way to identify which circuit we are verifying a proof for.

> Note: Every time a proof is verified, a new state is returned. The boolean value, which indicates whether the proof being verified also verifies a proof, is a variable that we will need when we instantiate barretenberg.

### Verifying two proofs in one circuit

Using our API to verify two proofs in one circuit, the Noir program would look like:

```
fn main(circuit_id : CircuitId, proof_1 : Proof, proof_2 : Proof) -> State {
    // We have not verified anything, so the current state is zero.
    let current_state = State::zero();
    let new_state = verify_proof(current_state, circuit_id, proof_1, false);
    let new_state_2 = verify_proof(new_state, circuit_id, proof_2, false);
    new_state_2
}
```

The above is roughly what we refer to as the `double_verify_proof`.
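To make the state-threading concrete, here is a minimal Rust sketch of the pseudocode above. The types are stand-ins invented for illustration (`State` as a list of proof ids, `u32` circuit and proof ids); no cryptography happens here, the sketch only shows how each call consumes the previous state and produces a new one.

```rust
// Stand-in for the aggregation state: it only records which proofs
// have been folded in so far. Purely illustrative, no cryptography.
#[derive(Debug, Clone, PartialEq)]
pub struct State {
    pub verified_proofs: Vec<u32>,
}

impl State {
    // The "zero" state: nothing has been verified yet.
    pub fn zero() -> Self {
        State { verified_proofs: Vec::new() }
    }
}

// Model of the Take 1 signature: consume the current state, fold in
// the proof, and return the new state.
pub fn verify_proof(
    current_state: State,
    _circuit_id: u32,
    proof_id: u32,
    _proof_verifies_another_proof: bool,
) -> State {
    let mut next = current_state;
    next.verified_proofs.push(proof_id);
    next
}

// Mirrors `double_verify_proof`: two calls threaded through the state.
pub fn double_verify(circuit_id: u32, proof_1: u32, proof_2: u32) -> State {
    let current_state = State::zero();
    let new_state = verify_proof(current_state, circuit_id, proof_1, false);
    verify_proof(new_state, circuit_id, proof_2, false)
}
```

Because each call takes the state by value and returns a new one, the ordering of verifications is explicit in the data flow, which is the property the real interface relies on for stitching.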
> Note: The above example makes it very explicit that if the `current_state` always starts off as zero, then we don't need to set it explicitly. This is one of the things proposed to be removed from the Noir interface.

Let's make the API more complicated, to bring it a bit closer to the barretenberg recursion strategy.

## Theoretical API -- Take 2

```
fn verify_proof(
    current_state : State,
    circuit_id : CircuitId,
    proof : Proof,
    public_inputs : PubInputs,
    proof_state : Option<State>,
) -> State
```

**Discussion**

- The first change is that the function now accepts a vector of public inputs. This is common; let's ignore it if it is a new concept.
- The second change requires some preamble:
    - Whenever we verify a proof, a new state is returned.
    - Suppose the proof we are verifying was itself verifying a proof. This means it must have produced a new state, just by virtue of calling `verify_proof`.
    - This new state produced by the proof needs to be explicitly handled. We mention it because it has a particular name in the codebase.
    - Not every proof will be for a circuit that verifies another proof, which is why it is an optional value.

We now have enough information to instantiate barretenberg's recursion strategy.

## Output aggregation object

The output aggregation object is the data that gets produced whenever a proof is verified in a circuit. In our theoretical API, this is the state that gets returned from `verify_proof`.

## Input aggregation object

This is the data that gets passed to `verify_proof` attesting to the current state of proof accumulation in this circuit. This is the `current_state` in our theoretical `verify_proof` method.

## Nested aggregation object

This is the optional data that gets passed to `verify_proof` in the case that the proof being verified also verified another proof, and subsequently produced an output aggregation object. In the theoretical API, this was called `proof_state`.
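The three aggregation objects map directly onto the Take 2 signature. A minimal Rust sketch, with hypothetical stand-in types invented for illustration (no actual proof verification happens here):

```rust
// Stand-in for an aggregation object: it records which proofs it has
// accumulated. Purely illustrative, not the real barretenberg construct.
#[derive(Debug, Clone, PartialEq)]
pub struct State {
    pub verified_proofs: Vec<u32>,
}

impl State {
    pub fn zero() -> Self {
        State { verified_proofs: Vec::new() }
    }
}

// Model of the Take 2 API:
//   `current_state` -> the input aggregation object
//   `proof_state`   -> the nested aggregation object (optional)
//   return value    -> the output aggregation object
pub fn verify_proof(
    current_state: State,
    _circuit_id: u32,
    proof_id: u32,
    _public_inputs: &[u64],
    proof_state: Option<State>,
) -> State {
    let mut next = current_state;
    // If the verified proof had itself verified a proof, fold in the
    // state it produced (the nested aggregation object).
    if let Some(nested) = proof_state {
        next.verified_proofs.extend(nested.verified_proofs);
    }
    next.verified_proofs.push(proof_id);
    next
}
```

The point of the sketch is the `Option`: when the proof being verified is itself recursive, its own output aggregation object must be folded into the running state, otherwise nothing extra is needed.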
## Verification key

This is the data used in verification to verify a proof, and also to identify a circuit in a succinct manner. In the theoretical API, this was the `circuit_id`.

## Current Noir API

Below is Noir code from `double_verify_proof`:

```
fn main(
    verification_key: [Field; 114],
    proof: [Field; 94],
    public_inputs: [Field; 1],
    key_hash: Field,
    input_aggregation_object: [Field; 16],
    proof_b: [Field; 94]
) -> pub [Field; 16] {
    let output_aggregation_object_a = verify_proof(
        verification_key,
        proof,
        public_inputs,
        key_hash,
        input_aggregation_object
    );
    let output_aggregation_object = verify_proof(
        verification_key,
        proof_b,
        public_inputs,
        key_hash,
        output_aggregation_object_a
    );
    output_aggregation_object
}
```

> Note: The `key_hash` and `verification_key` are, for all intents and purposes, the same thing.

Again, note the similarities between the above code and the pseudocode in `Verifying two proofs in one circuit`.

### Proposed change -- Remove input aggregation

In the above code, we are verifying two proofs. The first `input_aggregation_object` that is passed in as a parameter will always be zeroes. This is because the sole purpose of the input aggregation object is to stitch together calls to `verify_proof` within the same circuit; or, put differently, to keep track of all of the proofs that we have verified in a single circuit. This parameter is hence redundant in the Noir interface, since the backend can see all of the calls to `verify_proof` and do the stitching itself.

The new API would look like this:

```
fn main(
    verification_key: [Field; 114],
    proof: [Field; 94],
    public_inputs: [Field; 1],
    key_hash: Field,
    proof_b: [Field; 94]
) {
    verify_proof(
        verification_key,
        proof,
        public_inputs,
        key_hash,
    );
    verify_proof(
        verification_key,
        proof_b,
        public_inputs,
        key_hash,
    );
}
```

> Note: The above change also removed the return parameter, the output aggregation object, from `verify_proof`.
The idea is that we can say the output aggregation object is now a part of the proof.

### Proposed change -- Remove public inputs from proof

Although it is not obvious from the code, the proof also includes the public inputs alongside the separate public inputs array, which introduces redundancy and some confusion. To elaborate, in the above we have:

```
proof: [Field; 94],
public_inputs: [Field; 1],
```

If we had a proof with one more public input, the parameters would become:

```
proof: [Field; 95],
public_inputs: [Field; 2],
```

This is because the proof size is actually 93 elements, and the public inputs are being concatenated onto the proof. With the proposed change, the proof size will be constant and only the `public_inputs` variable will change.

### Proposed change -- Allow nested aggregation object

Currently the Noir code does not have a way to tell the DSL code that we want to verify a proof that itself verified a proof. The functionality is available in the DSL; the nested aggregation object is just never set.

I propose that we also append the nested aggregation object to the proof, and set its values to all zeroes when there is no nested aggregation object. We would go from:

```
fn main(
    verification_key: [Field; 114],
    proof: [Field; 93],
    public_inputs: [Field; 1],
    key_hash: Field,
    proof_b: [Field; 93]
)
```

to

```
fn main(
    verification_key: [Field; 114],
    proof: [Field; 109],
    public_inputs: [Field; 1],
    key_hash: Field,
    proof_b: [Field; 109]
)
```

with the extra 16 elements denoting the nested aggregation object.

### Proposed change -- Use bytes instead of field elements for proof and verification key

**Assumptions**

- The only reason we have the proof and verification key as field elements is efficiency.
- When we implement Honk, recursion will be cheap enough that we do not need to worry about this efficiency.

**Gate Budget**

With our current `double_verify_proof`, we have about 30K gates before we reach the next power of two.
This is with one public input, so perhaps with 50 public inputs this becomes an issue.

**Advantage**

Currently, when a user creates a proof, they receive the proof and verification key in bytes. They then need to convert the proof and verification key to field elements before passing them to `verify_proof`. If we did this conversion inside of the circuit, we would avoid that extra step.

**Thoughts:** If we only need this when creating recursive proofs, meaning we tell barretenberg that this is a recursive proof, then it can always output the field element format in that case. This does mean that if we simply want to verify a recursion-friendly proof natively, then it will need to be converted to bytes before being passed to the native verify function.

### Proposed change -- Move `is_recursive` to the circuit layer

We would go from:

```
fn main(
    verification_key: [Field; 114],
    proof: [Field; 109],
    public_inputs: [Field; 1],
    key_hash: Field,
    proof_b: [Field; 109]
)
```

to

```
#[recursion_friendly]
fn main(
    verification_key: [Field; 114],
    proof: [Field; 109],
    public_inputs: [Field; 1],
    key_hash: Field,
    proof_b: [Field; 109]
)
```

This would add a field to the ACIR format indicating that this circuit should be compiled with the `is_recursive` flag. This avoids having two separate codepaths for recursion and non-recursion at the tooling layer.
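Putting the layout changes proposed above together, the length of the `proof` array passed to `verify_proof` becomes predictable. A small Rust sketch of the bookkeeping, using the constants from the examples above (93 core proof elements, a 16-element aggregation object); the helper functions are hypothetical and not part of any real API:

```rust
// Hypothetical bookkeeping for the proof array sizes used in the
// examples above. Not part of any real API.

const CORE_PROOF_LEN: usize = 93; // proof elements without public inputs
const AGGREGATION_OBJECT_LEN: usize = 16; // nested aggregation object

// Length of the `proof` parameter under the current interface, where
// the public inputs are concatenated onto the proof.
fn current_proof_len(num_public_inputs: usize) -> usize {
    CORE_PROOF_LEN + num_public_inputs
}

// Length of the `proof` parameter under the proposal: public inputs are
// carried separately, and the 16-element nested aggregation object is
// appended (zeroed when absent), so the length is constant.
fn proposed_proof_len() -> usize {
    CORE_PROOF_LEN + AGGREGATION_OBJECT_LEN
}
```

This reproduces the figures in the text: 94 elements with one public input today, 95 with two, and a constant 109 under the proposal.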