# Prover Diagram + Terminology https://hackmd-prod-images.s3-ap-northeast-1.amazonaws.com/uploads/upload_6cca721567bed7f08f35e18bbe70c735.png?AWSAccessKeyId=AKIA3XSAAW6AWSKNINWO&Expires=1695371482&Signature=klKtCiO5nh4Fe%2B7ZFHtKXMTQGP8%3D ![](https://hackmd.io/_uploads/rJzWeRcka.png) # Pre-Committed columns (not in Bobbin's diagram) Pre-commited columns are a set of fixed columns ROM/Look-up tables. In Risc0 they have selector tables for each practical (potential) length. Alternative names proposed: Stage 0/Round 0 polys/ Pre-processed tables/Bobbin columns :) # Stage 1 starts which culminates in the LDE of the trace ## Main execution trace m columns and n rows ## Run iNTT to get main trace polys or base trace polys ## Run NTT to get extended trace ### What about randomization for zk property? Fully generic double interpolation domain as proposed by Alan or part of AIR like Risc0 or using vanishing polynomial and exclude the last few rows? ### Note that one can define three domains: trace + constraints evaluation + lde domains. Constraint evaluation is mainly for improved performance when the maximum constraint degree is much smaller than blowup factor # Stage 2 starts and culminates in commitment to LDE of aux traces ## Merkalize main trace to get randomness ## Build aux trace using extension randomness ## Build aux trace polys using same methodology as done in stage 1 as above. ### up to here, everything sums up to what is called trace commitment phase where trace stands for both main and aux traces. # Stage 3 starts and culminates in constraint evaluations polynomials and their commitment ## Take LDE extended traces with the random values and do constraint evaluations + random linear combination to get constraint evaluations (p columns and n*b rows where p is number of distinct quotients). ## Do divisions and compute compostiion polynomial evaluations i.e. n*b elements -> change name maybe to more discriptive name e.g. validity poly or quotiend composition poly or proof poly or something else ## Compute composition trace segment polynomials. Note that one can do everything in evaluation domain. Also should investigate which splitting strategy is better in evaluation form. ## Compute composition trace polynomial evaluationss i.e. n*b rows for b columns and then Merkelize it to reseed and get OOD challenge z. # Stage 4: DEEP stuff -> FRI ## Start DEEP stuff: Evaluate polys on z i.e. main + aux + constraint composition polynomials ## Build DEEP polynomial (consider renaming to evaluation quotients polynomial or FRI polynomial or overall low degree polynomial) vector in coefficient form of length same as original domain. ## Do NTT to evaluate over LDE domain to get n*b elements. Note: we can do this in evaluation domain (don't remember what we can do in eval domain ?!) ## The output of previous step is input to FRI protocol (note: investigate Alan's idea of not including the previously checked point, in co-linearity test, in FRI). ## Consider standarizing the terms from -> to: Algebraic -> parametric and linear/affine -> independent.