## What is the proof splitter for?
My understanding (inferred, not confirmed): the monolithic proof from the stone-prover is too large to verify in one piece on L1, so the splitter breaks it into smaller proofs (main proof, Merkle statements, FRI Merkle statements) that the on-chain verifier contracts can consume separately.
## How does it split the proof from the stone-prover?
There are 3 types of split proofs that originate from the stone proof:
1. Main proof. This is based on the original proof generated from the stone-prover. During the split process, [a so-called main proof is extracted from the original proof based on the byte ranges positioned by the annotations](https://github.com/zksecurity/stark_evm_adapter/blob/09f1eba28e8912b12526bea3b78ced12efc1a3ae/src/annotation_parser.rs#L629). This main proof seems to contain the annotations [for proving the parts up to the constraint consistency check](https://github.com/zksecurity/stark_evm_adapter/blob/e7a0c4853e361d5132fb09a4d675160055f05a51/tests/fixtures/stone_proof_annotation.txt#L2-L213), before the problem is reduced to a low-degree test, which the FRI protocol is responsible for.
2. Merkle statements
3. FRI merkle statements
## How to interpret the annotations?
Example:
`P->V[0:32]: /cpu air/STARK/Original/Commit on Trace: Commitment: Hash(0x92c804e76b6abb4be75fd9ead1681609d73f2769000000000000000000000000)`
These are generated from the stone-prover using the [AnnotationScope type](https://github.com/starkware-libs/stone-prover/blob/a78ff37c1402dc9c3e3050a1090cd98b7ff123b3/src/starkware/stark/stark.cc#L541).
`P->V` means a message sent from the Prover to the Verifier. Likewise, `V->P` means one from the Verifier to the Prover.
`[0:32]` means the byte range from *0 to 32* in the original proof. This is useful for extracting certain types of data from the original proof, such as [re-constructing the main proof](https://github.com/zksecurity/stark_evm_adapter/blob/09f1eba28e8912b12526bea3b78ced12efc1a3ae/src/annotation_parser.rs#L629).
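To make the format concrete, here is a toy sketch of parsing one annotation line and slicing the corresponding segment out of the proof bytes. The regex shape and helper names are my own guess based on the example line above, not the stark_evm_adapter's actual parser:

```python
import re

# Matches lines like: P->V[0:32]: /cpu air/STARK/...: Hash(0x...)
ANNOTATION_RE = re.compile(
    r"^(?P<dir>P->V|V->P)\[(?P<start>\d+):(?P<end>\d+)\]: (?P<scope>.+)$"
)

def parse_annotation(line: str) -> dict:
    m = ANNOTATION_RE.match(line)
    if m is None:
        raise ValueError(f"not an annotation line: {line!r}")
    return {
        "direction": m["dir"],
        "start": int(m["start"]),
        "end": int(m["end"]),
        # '/' separates the nested scopes
        "scopes": [s.strip() for s in m["scope"].split("/") if s.strip()],
    }

def extract_segment(proof: bytes, ann: dict) -> bytes:
    # Only P->V messages occupy bytes in the proof; V->P messages are
    # challenges the verifier derives itself (Fiat-Shamir).
    assert ann["direction"] == "P->V"
    return proof[ann["start"]:ann["end"]]

line = ("P->V[0:32]: /cpu air/STARK/Original/Commit on Trace: "
        "Commitment: Hash(0x92c804e76b6abb4be75fd9ead1681609d73f2769"
        "000000000000000000000000)")
ann = parse_annotation(line)
segment = extract_segment(bytes(64), ann)   # 32 bytes from a dummy proof
```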
The `/` in the annotations represents the level of scope. This seems to differentiate the annotations generated during the different stages of the verifying process. (In the stone prover, these annotations are generated from the verifier command, which seems to be a local dry run.)
[Original](https://github.com/zksecurity/stark_evm_adapter/blob/e7a0c4853e361d5132fb09a4d675160055f05a51/tests/fixtures/stone_proof_annotation.txt#L3) [seems](https://github.com/zksecurity/stark_evm_adapter/blob/e7a0c4853e361d5132fb09a4d675160055f05a51/tests/fixtures/stone_proof_annotation.txt#L8) to represent the steps before the problem is reduced to the [deep composition polynomial](https://github.com/starkware-libs/stone-prover/blob/a78ff37c1402dc9c3e3050a1090cd98b7ff123b3/src/starkware/stark/stark.cc#L446).
### Questions on stone-prover
- [how does the composition_polynomial_evaluation connect to the fri?](https://github.com/starkware-libs/stone-prover/blob/a78ff37c1402dc9c3e3050a1090cd98b7ff123b3/src/starkware/stark/stark.cc#L301)
- [is this the decomposite polynomial of the deep composition polynomial?](https://github.com/starkware-libs/stone-prover/blob/a78ff37c1402dc9c3e3050a1090cd98b7ff123b3/src/starkware/stark/stark.cc#L324)
- what is an [interaction](https://github.com/starkware-libs/stone-prover/blob/a78ff37c1402dc9c3e3050a1090cd98b7ff123b3/src/starkware/stark/stark.cc#L418) trace? and what is its role in [annotation](https://github.com/zksecurity/stark_evm_adapter/blob/e7a0c4853e361d5132fb09a4d675160055f05a51/tests/fixtures/stone_proof_annotation.txt#L4-L7)?
- in the annotation file, there are 3 traces in total; do they correspond to the following different traces?
- execution trace
- interaction trace (what is this?)
- composition trace (combined with constraints over 2 rows)?
- [is this the random linear combination polynomial?](https://github.com/starkware-libs/stone-prover/blob/a78ff37c1402dc9c3e3050a1090cd98b7ff123b3/src/starkware/stark/stark.cc#L439)
- interestingly, [they call the deep composition polynomial an oracle](https://github.com/starkware-libs/stone-prover/blob/a78ff37c1402dc9c3e3050a1090cd98b7ff123b3/src/starkware/stark/stark.cc#L446). Not sure how to wrap my mind around this term.
- the annotation contains another term, _virtual oracle_, which seems to represent an evaluation machine (or FRI decommitments) for the FRI queries
- [how does the query-to-evaluation-domain transformation work?](https://github.com/starkware-libs/stone-prover/blob/a78ff37c1402dc9c3e3050a1090cd98b7ff123b3/src/starkware/stark/stark.cc#L492)
- what is the connection between FRI layers on stone-prover and [the input and output layers in splitter](https://github.com/zksecurity/stark_evm_adapter/blob/09f1eba28e8912b12526bea3b78ced12efc1a3ae/src/annotation_parser.rs#L496-L505)?
## What is the input to the first FRI layer?
Based on [this code example](https://github.com/aszepieniec/stark-anatomy/blob/6d82d141995f17385fa83551a2591e249e7c2258/code/test_fri.py#L30), the input is $p_0(x)$ evaluated over a domain.
Different from the simple code above, the stone prover maps the query index to the constraint composition trace; that is, the query index is used as the trace row.
Because the verifier doesn't maintain the DEEP composition polynomial, the prover needs to provide the mask values for evaluating the composition polynomial at the row corresponding to the query index.
So during the FRI stage, in addition to checking the FRI decommitments, the verifier needs to check consistency between the decommitted first-layer evaluations and the evaluations computed from the mask values and the trace decommitments.
### How to calculate $p_0(v)$ and $p_0(-v)$ using these induced mask values?
Maybe the verifier can directly do the calculations for both evaluation points using the following facts:
- constraints are known to the verifier already
- mask values are the trace values required to fulfill the constraints
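A toy version of this idea, with names and numbers that are purely illustrative (my guess at the mechanism, not the stone-prover's actual code): the prover decommits the trace value $t(x)$ at the queried row and supplies the mask value $t(z)$ at the out-of-domain point $z$; the verifier can then compute the DEEP quotient $(t(x) - t(z))/(x - z)$ itself, which is the kind of value that feeds the first FRI layer:

```python
P = 0x800000000000011000000000000000000000000000000000000000000000001  # Stark252 prime

def inv(a):
    return pow(a, P - 2, P)

# Toy trace polynomial t(X) = 3X^2 + 2X + 5, standing in for a committed column.
def t(x):
    return (3 * x * x + 2 * x + 5) % P

z = 7                 # out-of-domain point drawn by the verifier
x = 11                # queried domain point
mask_value = t(z)     # supplied by the prover as part of the proof
decommitted = t(x)    # opened against the trace merkle commitment

# DEEP quotient q(X) = (t(X) - t(z)) / (X - z), evaluated at the query point.
q_x = (decommitted - mask_value) * inv(x - z) % P

# Sanity check: (3X^2+2X+5 - (3z^2+2z+5)) / (X - z) = 3(X+z) + 2.
assert q_x == (3 * (x + z) + 2) % P
```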
### What is the role of the constant polynomial at final layer?
Both the constant polynomial resulting at the final layer and the evaluations over the domain at the 1st layer are committed to a merkle tree.
Because the constant polynomial at the final layer is the same regardless of the evaluation points, it can be used repeatedly to check the consistency of layer transitions during the query phase.
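A toy numeric check of this, over the small field $\mathbb{F}_{17}$ (purely illustrative, not stone-prover code): repeatedly splitting and folding a degree-3 codeword halves the degree bound each time, so after two folds the last codeword is constant, and the verifier can check every decommitted final-layer value against that one constant:

```python
P = 17
OMEGA8 = 9                     # element of order 8 in F_17 (9**8 % 17 == 1)
inv = lambda a: pow(a % P, P - 2, P)

def fold(codeword, domain, alpha):
    # One split-and-fold step; the pair (x, -x) sits at indices i and i + N/2.
    n = len(codeword) // 2
    new_codeword = []
    for i in range(n):
        fx, fmx, x = codeword[i], codeword[i + n], domain[i]
        even = (fx + fmx) * inv(2) % P        # f_E(x^2)
        odd = (fx - fmx) * inv(2 * x) % P     # f_O(x^2)
        new_codeword.append((even + alpha * odd) % P)
    return new_codeword, [(x * x) % P for x in domain[:n]]

# degree-3 polynomial over the size-8 subgroup: degree bound 4 -> 2 -> 1
f = lambda x: (x**3 + 2 * x**2 + 3 * x + 4) % P
domain = [pow(OMEGA8, i, P) for i in range(8)]
codeword = [f(x) for x in domain]

codeword, domain = fold(codeword, domain, alpha=5)
codeword, domain = fold(codeword, domain, alpha=7)
assert len(set(codeword)) == 1   # the final layer collapsed to a constant
```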
### What is the query index algorithm?
- based on a query index (evaluation point), how to work out the other query indexes (eval points) required to verify the foldings?
- how does the concept of a colinearity test relate to this?
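My toy sketch of both questions over $\mathbb{F}_{17}$ (illustrative only, assuming the common layout where the pair $(x, -x)$ is stored at indices $i$ and $i + N/2$): a query at index $i$ forces a decommitment of index $i + N/2$ as well, and the folded value lands at index $i \bmod N/2$ of the next layer. The colinearity test then says the three points $(x, f(x))$, $(-x, f(-x))$, $(\alpha, f^\star(x^2))$ lie on one line:

```python
P = 17
inv = lambda a: pow(a % P, P - 2, P)

def fold_value(fx, fmx, x, alpha):
    # next layer's codeword value at x^2, from the decommitted pair (f(x), f(-x))
    return ((fx + fmx) * inv(2) + alpha * (fx - fmx) * inv(2 * x)) % P

def colinear(p0, p1, p2):
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    # cross-multiplied slope equality, so no field division is needed
    return ((y1 - y0) * (x2 - x0) - (y2 - y0) * (x1 - x0)) % P == 0

f = lambda x: (x**3 + 2 * x**2 + 3 * x + 4) % P
OMEGA8 = 9                          # order-8 element of F_17
i, alpha = 1, 5                     # query index and folding challenge
x = pow(OMEGA8, i, P)
fx, fmx = f(x), f((-x) % P)         # the pair at indices i and i + N/2
y = fold_value(fx, fmx, x, alpha)   # lands at index i mod N/2 of next layer

# the three points lie on one straight line
assert colinear((x, fx), ((-x) % P, fmx), (alpha, y))
```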
## How does the FRI commitment phase work?
- the verifier sends the prover a random number (the folding challenge $\alpha$) at each folding step
- *why is this random number needed when there are queries for random evaluation points?*
- this resource provides a relatively easy intuition behind the math: https://aszepieniec.github.io/stark-anatomy/fri#split-and-fold
- Commitments are based on the evaluations over a subgroup domain $D$ whose elements are powers $\omega^i$ of a generator $\omega$, where $i$ is the index into the domain.
- the evaluations are called codewords, which is the term used in Reed-Solomon coding. It uses a low-degree extension to extend the domain by a blowup factor, which I think is to make it hard for the prover to cheat.
- *For example, if the domain is not extended and the original polynomial is of degree less than 16, it can be represented by 16 evaluations, which is small and seems very easy to cheat with.*
- I think the term **codeword** is used to emphasize the error-checking aspect, which is the goal of the original Reed-Solomon technique. But here it is for checking whether the prover cheats.
- I see the blowup rate as how much longer the rows of a column represented by a polynomial get extended (aka the low-degree extension). In other words, it is how much redundancy to add to the original codewords, exaggerating any error that occurs so it is easier to detect.
- with the pattern $f(X) = f_E(X^2) + X \cdot f_O(X^2)$, the folded codeword can be computed purely from the evaluations (codewords) at the points $\omega^i$: $\lbrace f^\star(\omega^{2i})\rbrace_{i=0}^{N/2-1}$ $= \left\lbrace \frac{f(\omega^i) + f(-\omega^i)}{2} + \alpha \cdot \frac{f(\omega^i) - f(-\omega^i)}{2 \omega^i} \right\rbrace_{i=0}^{N/2-1}$
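The identity above can be checked numerically; here is a small self-contained check over $\mathbb{F}_{17}$ (my own toy example, with $f^\star(Y) = f_E(Y) + \alpha \cdot f_O(Y)$):

```python
P = 17
inv = lambda a: pow(a % P, P - 2, P)

# f(X) = f_E(X^2) + X * f_O(X^2), with f_E(Y) = 2Y + 4 and f_O(Y) = Y + 3,
# i.e. f(X) = X^3 + 2X^2 + 3X + 4.
f  = lambda x: (x**3 + 2 * x**2 + 3 * x + 4) % P
fE = lambda y: (2 * y + 4) % P
fO = lambda y: (y + 3) % P

OMEGA = 9            # order-8 element of F_17, so the domain size N is 8
N, alpha = 8, 5

# left side: evaluate f*(Y) = f_E(Y) + alpha * f_O(Y) at the squared points
lhs = [(fE(pow(OMEGA, 2 * i, P)) + alpha * fO(pow(OMEGA, 2 * i, P))) % P
       for i in range(N // 2)]

# right side: the folding formula, using only evaluations of f itself
rhs = []
for i in range(N // 2):
    x = pow(OMEGA, i, P)
    rhs.append(((f(x) + f(-x)) * inv(2)
                + alpha * (f(x) - f(-x)) * inv(2 * x)) % P)

assert lhs == rhs    # the identity holds point by point
```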
## How does the FRI query phase work?
I think the key to understanding how this query phase relates to the commitment phase above is that all the evaluations (including the intermediate evaluations produced during degree halving, aka the layers) are structured in a domain where the involved evaluation points can be induced from the query indexes.
Because the evaluations are positioned in a domain and can be retrieved to substitute the evaluation placeholders during degree halving, the evaluation points can be induced and checked against a merkle commitment.
- How is this related to the decommitments?
- during decommitment in the query phase, the prover provides the evaluations involved in a correct run of the degree-halving algorithm, so the verifier only needs to be exposed to a small set of evaluations and check the calculation at the last layer.
- the correctness of the low-degree bound test can be checked against the merkle tree over the evaluation points, which are simply induced from the query indexes
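A minimal toy of what "decommitment against a merkle tree" means here (sha256 and this tree layout are my choices for illustration; stone uses its own hash configuration): commit to one layer's codeword, then open a single queried evaluation with its authentication path:

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_tree(leaves):
    # levels[0] = hashed leaves, levels[-1] = [root]; leaf count must be 2^k
    level = [H(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def open_path(levels, index):
    # authentication path: one sibling per level, bottom to top
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])
        index //= 2
    return path

def verify_path(root, leaf, index, path):
    node = H(leaf)
    for sibling in path:
        node = H(node + sibling) if index % 2 == 0 else H(sibling + node)
        index //= 2
    return node == root

# commit to one layer's codeword, then decommit the evaluation at query index 5
codeword = [b"%d" % v for v in [10, 4, 11, 15, 2, 5, 10, 9]]
levels = merkle_tree(codeword)
root = levels[-1][0]
q = 5
path = open_path(levels, q)
assert verify_path(root, codeword[q], q, path)
assert not verify_path(root, b"99", q, path)   # a tampered evaluation fails
```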
## How are split proofs related to the L1 verifiers?
According to [VerifyInnerLayers](https://github.com/starkware-libs/stone-prover/blob/a78ff37c1402dc9c3e3050a1090cd98b7ff123b3/src/starkware/fri/fri_verifier.cc#L120), the original proof can be divided into smaller proofs: the main proof, the first FRI layer proof (which seems to prove the connection to the traces), and the other FRI layer proofs.
For each inner FRI layer (excluding the first and last layers), denoted as layer $i$, its proof contains the previous layer's commitments, which represent the evaluations $P_{i-1}(q)$. To prove the transformation from the previous layer to the current inner layer for the polynomial $P_{i}(C_{i-1}(q))$, where $C_{i-1}(q)$ denotes the coset of query index $q$ at layer $i - 1$, the commitments from the previous layer can be verified against the cosets.
- what is the role of the FRI step?
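My current reading, which is an assumption rather than something I've confirmed in the stone code: a `fri_step` of $s$ means $2^s$-to-1 folding between commitments, i.e. $s$ binary folds applied before the next layer is committed, so each query has to decommit a coset of $2^s$ sibling evaluations. A toy check over $\mathbb{F}_{17}$ that one folded value depends only on such a coset:

```python
P = 17
inv = lambda a: pow(a % P, P - 2, P)

def fold(codeword, domain, alpha):
    # one binary split-and-fold step; (x, -x) sits at indices i and i + N/2
    n = len(codeword) // 2
    return ([((codeword[i] + codeword[i + n]) * inv(2)
              + alpha * (codeword[i] - codeword[i + n]) * inv(2 * domain[i])) % P
             for i in range(n)],
            [(x * x) % P for x in domain[:n]])

f = lambda x: (x**3 + 2 * x**2 + 3 * x + 4) % P
domain = [pow(9, i, P) for i in range(8)]      # size-8 subgroup of F_17
codeword = [f(x) for x in domain]

# fri step of 2: two binary folds before the next commitment
c1, d1 = fold(codeword, domain, alpha=5)
c2, d2 = fold(c1, d1, alpha=7)

# c2[0] depends only on the 4-element coset {0, 2, 4, 6} of the first layer:
# indices 0 and 4 feed c1[0], indices 2 and 6 feed c1[2], and those feed c2[0].
a0 = ((codeword[0] + codeword[4]) * inv(2)
      + 5 * (codeword[0] - codeword[4]) * inv(2 * domain[0])) % P
a2 = ((codeword[2] + codeword[6]) * inv(2)
      + 5 * (codeword[2] - codeword[6]) * inv(2 * domain[2])) % P
v = ((a0 + a2) * inv(2) + 7 * (a0 - a2) * inv(2 * d1[0])) % P
assert v == c2[0]   # the query decommits only the coset, not the whole layer
```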