# Research updates May 21st
## On the soundness of FRI
Reasoning about the soundness of FRI is complicated. In curve-based ZKPs we can assume DLOG hardness and prove soundness with the Schwartz-Zippel lemma, but since FRI rests on information-theoretic arguments rather than cryptographic hardness assumptions, its soundness proof is considerably more involved than that of cryptographic protocols like Spartan or PLONK.
Fortunately, in the [ethSTARK paper](https://eprint.iacr.org/2021/582.pdf), there is a simplified way of calculating the security level of an FRI instantiation, so I think we can base our parameters on that.
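To make this concrete, here is a back-of-the-envelope sketch of the *conjectured* security level as I currently read the ethSTARK-style conjecture (roughly: queries × rate bits, plus grinding, capped by the field size). This is my reading, not a quote from the paper, and the names/parameters below are my own, not any library's API; we should double-check it against the spec before relying on it.

```rust
// Rough estimate of the *conjectured* FRI security level (my reading of the
// ethSTARK-style conjecture; double-check against the paper before relying on it).
// Function and parameter names are hypothetical, not library APIs.
fn conjectured_security_bits(
    log2_blowup: u32,   // log2(1/rho), i.e. the rate bits of the RS code
    num_queries: u32,   // number of FRI query repetitions
    grinding_bits: u32, // proof-of-work bits added before the query phase
    field_bits: u32,    // log2 of the field size, treated as a hard cap
) -> u32 {
    let conjectured = num_queries * log2_blowup + grinding_bits;
    conjectured.min(field_bits)
}

fn main() {
    // e.g. blowup 2^3, 27 queries, 16 grinding bits over a ~254-bit field
    println!("{} bits", conjectured_security_bits(3, 27, 16, 254));
}
```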

But I’ll note that Yi and Nalin reacted to the ethSTARK spec and the “conjectured soundness” above (in the screenshot) with some suspicion. The spec seems safe in practice, since it was written by StarkWare and is presumably what they use in production, but it seems that not all cryptographers are fully convinced.
## The prover-verifier tradeoff in FRI
In FRI, there are several parameters (e.g. number of rounds, folding factor, etc.) we can tweak to trade off proving cost, proof size, and verification cost. For example, we might be able to trade a little more proving time for cheaper on-chain verification. The details of the tradeoffs and possible optimizations can also be found in the ethSTARK spec.
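To make the knobs concrete, here is a hypothetical parameter struct and a rough query-count estimate under the conjectured bound above. The names and the exact formula are my assumptions, not the spartan-fri API: the point is simply that a larger blowup costs the prover more (a longer codeword to commit to) but reaches the same target security with fewer queries, which shrinks the proof and the on-chain verification cost.

```rust
// Hypothetical FRI parameter knobs, just to make the tradeoff concrete.
#[allow(dead_code)]
struct FriParams {
    log2_blowup: u32,    // rate bits: higher => bigger codeword, more prover work
    folding_factor: u32, // arity of each FRI folding round
    num_queries: u32,    // query repetitions: higher => bigger proof, costlier verifier
    grinding_bits: u32,  // prover-side PoW that "buys back" some queries
}

// Queries needed for a target security level under the conjectured bound:
// doubling the rate bits roughly halves the number of queries (and the proof size).
fn queries_for_target(target_bits: u32, log2_blowup: u32, grinding_bits: u32) -> u32 {
    let remaining = target_bits.saturating_sub(grinding_bits);
    (remaining + log2_blowup - 1) / log2_blowup // ceiling division
}

fn main() {
    // Same 100-bit target: blowup 2^1 needs ~84 queries, blowup 2^4 needs ~21.
    println!("{}", queries_for_target(100, 1, 16));
    println!("{}", queries_for_target(100, 4, 16));
}
```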
## What’s done, and what’s left
The main progress made this week is the [implementation of the batched FRI polynomial commitment scheme](https://github.com/DanTehrani/spartan-fri/commit/c3516c132bc86d4ebc29158d5f79c47a8f81e306), which is required to construct a multilinear polynomial commitment scheme (ML-PCS). In short, the ML-PCS computes a univariate evaluation proof **per variable** and batches the openings together. Therefore, even though the scheme requires multiple evaluation proofs to be computed, the verification cost is roughly the same as that of a single univariate evaluation proof.
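As I understand it, the batching step is a random linear combination: the verifier sends a challenge α and the prover folds all the committed codewords into a single one using powers of α, so one FRI low-degree test (and one set of query openings) covers all of them. A minimal sketch of that folding step follows; this is my understanding of the usual technique rather than exactly what the linked commit does, and the toy modular arithmetic and names are my own, not the spartan-fri API.

```rust
// Sketch of random-linear-combination batching. Toy modular arithmetic over the
// Goldilocks prime stands in for a real field implementation.

const P: u128 = 0xffff_ffff_0000_0001; // Goldilocks prime, for illustration only

fn add(a: u64, b: u64) -> u64 {
    ((a as u128 + b as u128) % P) as u64
}

fn mul(a: u64, b: u64) -> u64 {
    ((a as u128 * b as u128) % P) as u64
}

/// Fold several codewords into one: result = sum_i alpha^i * codeword_i.
/// A single FRI proof on the result then covers all the batched openings.
fn batch_codewords(codewords: &[Vec<u64>], alpha: u64) -> Vec<u64> {
    let n = codewords[0].len();
    let mut acc = vec![0u64; n];
    let mut coeff = 1u64; // alpha^0
    for cw in codewords {
        assert_eq!(cw.len(), n, "codewords must share the evaluation domain");
        for (a, &c) in acc.iter_mut().zip(cw) {
            *a = add(*a, mul(coeff, c));
        }
        coeff = mul(coeff, alpha);
    }
    acc
}

fn main() {
    let cws = vec![vec![1, 2, 3, 4], vec![5, 6, 7, 8]];
    println!("{:?}", batch_codewords(&cws, 3));
}
```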
I mainly referred to [this article](https://aszepieniec.github.io/stark-anatomy/fri) and [this paper](https://eprint.iacr.org/2019/1020.pdf) for implementation guidance.
### What’s left
Recall that Spartan-FRI consists of sum-checks and an ML-PCS. Before transforming the above batched FRI-PCS into an ML-PCS, **we need to implement the part that connects the sum-checks and the ML-PCS**, and connecting these two seems non-trivial. The hardest part is converting a multilinear polynomial from evaluation form to coefficient form. For univariate polynomials this can be done with an FFT, but for multilinear polynomials the conversion appears to be far more complicated. (ChatGPT claims it can be done with a method called the *Fourier-Walsh expansion*, but I couldn’t find any papers/articles describing it except for [this one](https://arxiv.org/pdf/1901.08839.pdf), which is highly theoretical.)
So to get a deeper understanding of the problem (and the potential solution), I’m following the Hyperplonk paper and its implementation, which has a component that connects sum-checks and an ML-PCS.
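For reference, here is my current (possibly naive) picture of the evaluation-to-coefficient conversion for a multilinear polynomial over the Boolean hypercube: a per-variable butterfly, O(n·2^n) overall, which I believe is in the same family as the Fourier-Walsh-style transforms mentioned above. I’m not yet sure this is the exact transform the FRI pipeline needs, so treat it as a sketch; plain integers stand in for field elements, and the names are mine.

```rust
/// Sketch: convert evaluations of a multilinear polynomial on {0,1}^n
/// (bit i of the index = value of x_i) into monomial-basis coefficients,
/// in place. Illustration only; not library code.
fn evals_to_coeffs(a: &mut [i64]) {
    let n = a.len();
    assert!(n.is_power_of_two());
    let num_vars = n.trailing_zeros();
    for i in 0..num_vars {
        let bit = 1usize << i;
        for x in 0..n {
            if x & bit != 0 {
                // the coefficient of a monomial containing x_i is a difference
                // of evaluations along the x_i direction
                a[x] -= a[x ^ bit];
            }
        }
    }
}

fn main() {
    // f(x1, x2) = 3 + 2*x1 + 5*x2 + 7*x1*x2, index = x1 + 2*x2
    let mut evals = vec![3, 5, 8, 17];
    evals_to_coeffs(&mut evals);
    assert_eq!(evals, vec![3, 2, 5, 7]); // [1, x1, x2, x1*x2]
}
```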
## Go-to-market
It is important to share our research and get feedback as frequently as possible. To do so, I believe we should publish Spartan-FRI **without** the following two major components:
1. EC-FRI, which is required for universal finite field compatibility
2. Batching Spartan-FRI proofs to reduce on-chain verification cost
The end goal of Spartan-FRI is to provide “right-field proving” for all finite fields and an economical EVM verifier; the former will require 1., and the latter relates to 2. But even without those two, Spartan-FRI will be a *usable* library, where developers can experiment with in-browser proving and on-chain verification using finite fields with a smooth multiplicative group (e.g. Pasta, BN254, etc.). At that point the library’s capabilities will be roughly equivalent to snarkjs’s, but without the trusted setup. And I think we should make it explicit that proof batching for cheaper on-chain verification and EC-FRI are WIP, and see how people react.