# [Deprecated] Translating FRI Soundness to Unique Decoding Radius
*Version 1 -- December 2023*
### **This doc has been deprecated in favor of [a newer version](https://hackmd.io/su_x78n1R5Wdb7qyCd9qcw#fn3).**
This note aims to articulate the soundness of the batched FRI protocol in the unique decoding radius. We begin with a naive approach and will worry about details later.
### Setup
The Prover runs *BATCHED FRI* on $f_1,\ldots,f_t$, where $f_i$ are polynomials over $\mathbb{F}_q$.
We use a single random element for batching, which we denote $z_B$, and we write $f^{(0)}$ for the result of batching $f_1,\ldots,f_t$.
We use a single random element for folding in each round of *FRI COMMIT*. We denote these as $z_0,\ldots,z_{r-1}$.
We assume $r$ rounds of *FRI COMMIT*, indexed $1,...,r$. In Round $i$ of *FRI COMMIT*, the Prover uses Verifier randomness $z_{i-1}$ to construct $f^{(i)}$.
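As a toy illustration of this setup, here is a sketch of batching and one round of folding in coefficient form. The specifics are illustrative assumptions, not fixed by this note: we batch with powers of $z_B$, fold two-to-one ($l^{(i)}=2$) via an even/odd split, and work over a small prime field.

```python
# Toy sketch of batching and folding (assumptions: powers-of-z_B batching,
# two-to-one folding, small prime field; real parameters differ).
Q = 2**31 - 1  # a toy prime; real deployments use much larger fields

def batch(polys, z_B):
    """f^(0) = f_1 + z_B*f_2 + z_B^2*f_3 + ... in coefficient form."""
    width = max(len(p) for p in polys)
    out = [0] * width
    power = 1
    for p in polys:
        for j, c in enumerate(p):
            out[j] = (out[j] + power * c) % Q
        power = (power * z_B) % Q
    return out

def fold(f, z):
    """One round of FRI COMMIT with l = 2: f^(i+1)(x) = f_even(x) + z*f_odd(x)."""
    even, odd = f[0::2], f[1::2]
    return [(e + z * o) % Q for e, o in zip(even, odd)]

f0 = batch([[1, 2, 3, 4], [5, 6, 7, 8]], z_B=10)  # -> [51, 62, 73, 84]
f1 = fold(f0, z=1)                                # degree halves each round
```

Each fold halves the number of coefficients, so after $r$ rounds the Prover is left with a constant (or very low-degree) function.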
### Assumption
We assume the inter-leaved distance of $f_1,\ldots,f_t$ to $V$ is large.[^interleaved]
### Claim
We want to show that, under this assumption, a Verifier making $s$ queries is very unlikely to accept the result of *BATCHED FRI*.
### Sketch
We'll be doing $s$ queries, and we hope that this will be sufficient to reach 100 bits of security.
We define $\epsilon_\mathsf{Q}$ as the probability that the Verifier accepts a single invocation of *FRI QUERY*. As long as $\epsilon_\mathsf{Q}$ is small enough, $s$ queries will suffice to reach 100 bits.
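If the $s$ invocations of *FRI QUERY* are independent, the Verifier accepts with probability at most $\epsilon_\mathsf{Q}^s$, so 100 bits of security requires $s \geq 100/\log_2(1/\epsilon_\mathsf{Q})$. A quick sketch (the sample values of $\epsilon_\mathsf{Q}$ are illustrative):

```python
import math

def queries_for_security(eps_q, bits=100):
    """Smallest s with eps_q**s <= 2**(-bits), i.e. s >= bits / log2(1/eps_q)."""
    return math.ceil(bits / math.log2(1 / eps_q))

# e.g. eps_q = 1/2 needs 100 queries; eps_q = 1/4 needs only 50
s_half = queries_for_security(0.5)     # -> 100
s_quarter = queries_for_security(0.25) # -> 50
```

This is why a smaller $\epsilon_\mathsf{Q}$ (e.g. from a larger folding factor or lower rate) directly reduces the query count needed.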
Then, we just need to bound the probability that $\epsilon_\mathsf{Q}$ is too large.
### Bounding the probability that $\epsilon_\mathsf{Q}$ is large
We want to show that it's very unlikely that $\epsilon_\mathsf{Q}$ is large.
The only reason $\epsilon_\mathsf{Q}$ will be large is if we get a "bad randomness" during the initial batching or during one of the rounds of *FRI COMMIT*. This is expressed formally in Equation (8.5) of the Proximity Gaps paper.[^8.5]
We use $E^{(0)}$ to denote the probability of "bad batching" and $E^{(i)}$ to denote the probability of "bad folding" in Round $i$ for $i=1,...,r$.
### Bounding $E^{(0)}$
We bound[^no-weights] $E^{(0)}$ by $\frac{|D^{(0)}|t}{q}$, where
- $|D^{(0)}|$ is the size of the initial FRI domain,
- $t$ is the number of polynomials being batched, and
- $q$ is the size of the field.
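For a sense of scale, here is the bound evaluated at illustrative parameters; the domain size, $t$, and field size below are assumptions for the example, not values from this note:

```python
from fractions import Fraction

# Illustrative parameters: a 2^20-point initial domain, 8 batched
# polynomials, and a field of size about 2^64.
D0 = 2**20
t = 8
q = 2**64

E0_bound = Fraction(D0 * t, q)  # = 2^23 / 2^64 = 2^-41, i.e. negligible
```

For these (hypothetical) parameters the batching error is about $2^{-41}$, far below the per-query acceptance probabilities that dominate the analysis.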
### Bounding $E^{(1)}$
$E^{(1)}$ is the probability of "bad randomness" in Round 1 of *FRI COMMIT*.
In Round 1, we split $f^{(0)}$ into $l^{(0)}$ pieces of size $\frac{|D^{(0)}|}{l^{(0)}}$, and we combine them using $z_0$.
We bound $E^{(1)}$ by $\frac{|D^{(0)}|(l^{(0)}-1)}{l^{(0)}q}$.[^3]
### Bounding $E^{(2)}$
$E^{(2)}$ is the probability of "bad randomness" in Round 2 of *FRI COMMIT*.
In Round 2, we split $f^{(1)}$ into $l^{(1)}$ pieces of size $\frac{|D^{(0)}|}{l^{(0)}l^{(1)}}$, and we combine them using $z_1$.
We bound $E^{(2)}$ by $\frac{|D^{(0)}|(l^{(1)}-1)}{l^{(0)}l^{(1)}q}$.
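The same pattern extends to later rounds: $E^{(i+1)}$ is bounded by $\frac{|D^{(0)}|(l^{(i)}-1)}{l^{(0)}\cdots l^{(i)}q}$. A small helper makes the pattern explicit (the parameter values in the comments are illustrative assumptions):

```python
from fractions import Fraction

def round_bound(D0, q, ls, i):
    """Bound on E^(i+1): |D^(0)| * (l^(i) - 1) / (l^(0) * ... * l^(i) * q).
    `ls` lists the folding factors l^(0), l^(1), ..."""
    denom = q
    for l in ls[: i + 1]:
        denom *= l
    return Fraction(D0 * (ls[i] - 1), denom)

# e.g. with D0 = 2^20, q = 2^64, and uniform folding factor l = 2:
#   round 1 bound = 2^20 / (2 * 2^64) = 2^-45
#   round 2 bound = 2^20 / (4 * 2^64) = 2^-46
```

Each successive round's bound shrinks by a factor of (roughly) $l$, which is what makes the total sum converge below.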
### Bringing it Together
Now, we can bound:
$$E^{(0)}+E^{(1)}+...+E^{(r)}\leq\frac{|D^{(0)}|t}{q}+\frac{|D^{(0)}|(l^{(0)}-1)}{l^{(0)}q}+\frac{|D^{(0)}|(l^{(1)}-1)}{l^{(0)}l^{(1)}q}+...+\frac{|D^{(0)}|(l^{(r-1)}-1)}{l^{(0)}...l^{(r-1)}q}$$
Note that in the $(i+1)$-th round of *FRI COMMIT*, $\frac{|D^{(0)}|}{l^{(0)}...l^{(i)}}$ is the size of the commitment domain and $l^{(i)}$ is the number of functions being folded together (contributing the $l^{(i)}-1$ factor). Continuing this algebra, and setting the folding factor $l^{(i)}=l$ for all $i$ for simplicity, we find:
$$= \frac{|D^{(0)}|}{q}t+\frac{|D^{(0)}|\cdot(l-1)}{q\cdot l}\left(1+\frac{1}{l}+...+\frac{1}{l^{r-1}}\right)$$
$$\leq \frac{|D^{(0)}|}{q}t+\frac{|D^{(0)}|\cdot(l-1)}{q\cdot l}\left(\frac{1}{1-\frac{1}{l}}\right)$$
$$=\frac{|D^{(0)}|}{q}t+\frac{|D^{(0)}|}{q}$$
$$=\frac{|D^{(0)}|}{q}(t+1)$$
Here $\frac{|D^{(0)}|}{q}t$ is the error for *batching* and $\frac{|D^{(0)}|}{q}$ is the combined error for all rounds of *FRI COMMIT*.
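The collapse of the geometric series can be sanity-checked with exact rational arithmetic; the parameter values below are illustrative assumptions:

```python
from fractions import Fraction

def total_bound(D0, q, t, l, r):
    """E^(0) + E^(1) + ... + E^(r) with a uniform folding factor l, summed exactly."""
    total = Fraction(D0 * t, q)  # batching term
    for i in range(r):
        # round i+1 of FRI COMMIT contributes |D0|(l-1) / (l^(i+1) * q)
        total += Fraction(D0 * (l - 1), l ** (i + 1) * q)
    return total

# The finite sum stays strictly below the closed form |D^(0)|(t+1)/q.
D0, q, t, l, r = 2**20, 2**64, 8, 2, 20
assert total_bound(D0, q, t, l, r) < Fraction(D0 * (t + 1), q)
```

The finite sum equals $\frac{|D^{(0)}|}{q}t + \frac{|D^{(0)}|}{q}(1 - l^{-r})$, so $\frac{|D^{(0)}|}{q}(t+1)$ is the limit as $r$ grows and a valid upper bound for any $r$.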
Aspirationally, we'd like to replace $\epsilon_\mathsf{C}$ from ProxGaps Lemma 8.2 with this expression $\frac{|D^{(0)}|}{q}(t+1)$. In the next section, we discuss where the analysis above seems to cut corners relative to the approach in the Proximity Gaps paper.
### Comparing the analysis above to the ProxGaps paper
The corresponding section of the ProxGaps analysis looks as follows: 
- We note that the analysis in the ProxGaps paper includes a $\frac{2m+1}{\sqrt{\rho}}$ term. It's not immediately clear whether this term will have an analogue in the unique decoding regime.
- We note that the ProxGaps analysis routes through Theorems 7.2/7.4, which include a $\max()$ argument that we have not considered in this analysis. Again, it's not immediately clear whether we need to worry about that $\max()$.
### Next Steps
- It seems that addressing the $\frac{2m+1}{\sqrt{\rho}}$ term and the question of whether we need to worry about the $\max()$ in Theorems 7.2/7.4 would render this analysis complete.
- Once analysis of FRI soundness is complete, it remains to integrate this into the context of a DEEP-ALI+FRI soundness argument, as in Theorem 8 from the *Summary on the FRI Low-Degree Test.*
[^interleaved]: In other words, we assume the correlated agreement density of $F=\mathrm{span}(f_1,...,f_t)$ with $V$ is not too large.
[^no-weights]: We note that this seems to follow from Theorem 6.1 directly. We're not sure why the ProxGaps paper cites Theorem 7.4 here; going through Theorem 7.4 (or 7.2) introduces a $\max()$ argument that complicates the simple presentation we give here.
[^3]: Here the $l^{(i)}-1$ term is the soundness degradation for parametric batching of $l^{(i)}$ functions.
[^8.5]: See Equation (8.5) of *Proximity Gaps for Reed-Solomon Codes*.