# Analysis and Optimization of the Brakedown Polynomial Commitment Scheme in Plonky3 and Binius

![OlaVM temp (1)](https://hackmd.io/_uploads/rJC8KgFcC.png)

## 1. Introduction

A polynomial commitment scheme is a cryptographic protocol that allows one party (the prover) to commit to a polynomial without revealing its coefficients. Subsequently, the prover can demonstrate to another party (the verifier) that the polynomial evaluates to a specific value at a particular point, without disclosing any other information about the polynomial. Polynomial commitment schemes are an essential component of many zero-knowledge proof systems, and their development strongly influences how those systems evolve.

Currently, the most common polynomial commitment schemes include FRI [[1]](https://drops.dagstuhl.de/storage/00lipics/lipics-vol107-icalp2018/LIPIcs.ICALP.2018.14/LIPIcs.ICALP.2018.14.pdf) (based on hash functions), KZG [2] (based on elliptic curves), and Bulletproofs [[3]](https://doc-internal.dalek.rs/bulletproofs/notes/inner_product_proof/index.html) (also based on elliptic curves). Initially, Plonky (2019) used KZG polynomial commitments. Plonky2 (2022) adopted FRI polynomial commitments, inspired by the design of STARKs. Plonky3 (2024) further introduced the Brakedown polynomial commitment scheme.

KZG polynomial commitments are built on elliptic curves. Their advantage is low verification cost for univariate polynomials. However, most of the prover's work consists of large FFTs (to construct polynomials) and MSMs (to construct commitments), resulting in slow proving. FRI polynomial commitments, based on hash functions, require no elliptic-curve MSMs. By recursively folding the problem, FRI balances prover and verifier costs depending on the Reed-Solomon [4] code rate used in the protocol. The Brakedown [[5]](https://eprint.iacr.org/2021/1043.pdf?ref=hackernoon.com) polynomial commitment scheme also builds on hash functions and integrates error-correcting codes, making it faster than FRI, though its proofs are significantly larger. Compared to KZG, FRI and Brakedown offer faster proving and plausible post-quantum security, but lack homomorphic properties, limiting their use in certain scenarios.

In 2021, Alexander Golovnev and colleagues proposed the Brakedown polynomial commitment scheme, drawing on the linear-time encoding of [6] and the Spartan linear-time interactive proof system of [7]. Brakedown is widely regarded as the fastest polynomial commitment scheme currently available; as a result, it has been referenced in both Orion [[8]](https://eprint.iacr.org/2022/1010) and Binius [[9]](https://eprint.iacr.org/2023/1784.pdf)[[10]](https://github.com/IrreducibleOSS/binius), and incorporated into the latest Plonky3. Its main drawback is the larger proof size.

## 2. Preliminaries

The Brakedown polynomial commitment scheme evolved from linear-code-based commitment schemes, primarily improving the encoding step by using expander graphs to achieve linear-time encoding. Its main technical components are illustrated in the diagram below.

![Brakedown](https://hackmd.io/_uploads/HJ62JcVtC.png)

### 2.1 Tensors and Tensor Products

In mathematics, a tensor can be understood as a multilinear map, i.e., a mapping from several linear spaces to a field.
The relationship between tensors of different orders (dimensions) and familiar data types is as follows:

* Scalar (order-0 tensor): a single numerical value.
* Vector (order-1 tensor): a one-dimensional array.
* Matrix (order-2 tensor): a two-dimensional array.
* Higher-order tensor: an array of three or more dimensions, such as a color image (each pixel containing RGB values).

There are various products defined on tensors, such as the outer product, inner product, direct product, and Kronecker product. Here we focus on the direct product (commonly referred to as the tensor product): every component of the first tensor is multiplied with every component of the second, and the collection of these products again forms a tensor, called the tensor product of the two. The order of the resulting tensor equals the sum of the orders of the factors. Here is an intuitive example:

$$\left[\begin{matrix}a_1\\a_2\end{matrix}\right]\otimes\left[\begin{matrix}b_1&b_2&b_3\end{matrix}\right]=\left[\begin{matrix}a_1b_1&a_1b_2&a_1b_3\\a_2b_1&a_2b_2&a_2b_3\end{matrix}\right]$$

Typically, when we refer to the tensor product we mean the direct product, which expands the dimension: in the example above, two one-dimensional vectors expand into a two-dimensional matrix. In zero-knowledge (ZK) systems, the tensor product used is the direct product, where a matrix composed of polynomial coefficients is decomposed into the tensor product of two vectors, as shown below:

$$\left[\begin{matrix}a_1b_1&a_1b_2&a_1b_3\\a_2b_1&a_2b_2&a_2b_3\\a_3b_1&a_3b_2&a_3b_3\end{matrix}\right]=\left[\begin{matrix}a_1\\a_2\\a_3\end{matrix}\right]\otimes\left[\begin{matrix}b_1&b_2&b_3\end{matrix}\right]=\mathbf{a}\otimes\mathbf{b}$$

### 2.2 Error-Correcting Codes and Linear Codes

Error-correcting codes (ECC) are techniques for controlling errors in data transmission or storage. They detect and correct errors by adding redundancy to the original data, ensuring data integrity and reliability, and are widely used in communications. The main stages are encoding, transmission/storage, decoding, and data recovery.

A simple example of an error-correcting code is the repetition code, where each data bit is sent three times. During decoding, the majority bit is taken as the correct one. This method can correct at most one flipped bit, or recover up to two erased bits. Here is a concrete example:

| Transmission Data | Encoded Data | Received Codeword | Decoded Data |
| ----------------- | ------------ | ----------------- | ------------ |
| 0 | 000 | 000, 001, 010, 100 | 0 |
| 1 | 111 | 111, 110, 101, 011 | 1 |

<p style="text-align: center">Table 1: Encoding and Decoding Process of Repetition Codes</p>

A significant branch of error-correcting codes is linear codes, which use linear algebra to detect and correct errors during transmission or storage. An important property of linear codes is that codewords are obtained as linear combinations of the message symbols; consequently, any linear combination of two codewords is still a codeword, which is very useful for batch processing in polynomial commitments (see the sketch below).
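To make the repetition code of Table 1 and the linearity property concrete, here is a minimal Python sketch over GF(2); the function names are illustrative, not taken from any library:

```python
# Minimal repetition-code sketch over GF(2) (illustrative, not a real ECC library).

def encode(bit):
    """Repetition code from Table 1: 0 -> 000, 1 -> 111."""
    return [bit] * 3

def decode(word):
    """Majority vote: corrects at most one flipped bit."""
    return 1 if sum(word) >= 2 else 0

# Decoding matches Table 1: a single flipped bit is corrected.
assert decode([0, 1, 0]) == 0 and decode([1, 0, 1]) == 1

# Linearity: the XOR (addition in GF(2)) of two codewords is again a codeword.
# This is the property Brakedown later exploits to batch-check many rows at once.
a, b = encode(1), encode(1)
combo = [(x + y) % 2 for x, y in zip(a, b)]
assert combo == encode((1 + 1) % 2)
print("repetition code: decoding and linearity checks pass")
```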
Several key parameters of linear codes include:

* Codeword: each valid encoded element of a linear code is called a codeword. Codewords are obtained by encoding the information symbols.
* Code length: the number of symbols in a codeword.
* Message bits: the number of symbols in the original information.
* Parity bits: additional symbols appended to the message for error detection and correction.
* Code rate: the ratio of message length to codeword length, indicating encoding efficiency.
* Hamming distance: the number of positions at which two codewords of the same length differ.

▲ BCH Codes: BCH codes are built on finite fields and polynomial arithmetic. An error-detecting polynomial can be constructed so that the receiver can detect errors. The basic idea of BCH encoding is: if the message is $m(x)$, it is multiplied by an encoding polynomial $p(x)$ to obtain the codeword $m(x)p(x)$. During transmission, the codeword may be corrupted by interference, which adds an error polynomial $e(x)$; the receiver thus sees $c(x)=m(x)p(x)+e(x)$. The receiver and sender agree on $p(x)$ in advance. Dividing $c(x)$ by $p(x)$, a zero remainder means there was no interference and the quotient is the correct message $m(x)$; otherwise the position of the error is determined from the remainder, which reveals $e(x)$. The polynomial $p(x)$ must be a primitive polynomial, and a single primitive polynomial can correct only one error.

▲ Reed-Solomon Codes: Reed-Solomon codes are a subset of BCH codes. They use a primitive polynomial called the generator polynomial:

$$p(x)=(x-\alpha^0)(x-\alpha^1)\cdots (x-\alpha^{2t-1})$$

Since Reed-Solomon encoding is also performed over a finite field, $\alpha$ here is a generator of the finite field's multiplicative group. The encoding is then defined as $c(x)=m(x)p(x)$, where $m(x)$ has length $k$, $p(x)$ has length $l = 2t$, and the codeword $c(x)$ has length $n = l + k$. For example, if $m(x)$ is given by the $k$ values $[m_0, m_1, \cdots, m_{k-1}]$, the corresponding polynomial is:

$$m(x)=\sum_{i=0}^{k-1}m_i x^i$$

In the commonly used systematic form, the Reed-Solomon encoding is:

$$c(x)=m(x)x^{2t}-\big((m(x)x^{2t}) \bmod p(x)\big)$$

Here, the degree of $(m(x)x^{2t}) \bmod p(x)$ is at most $2t-1$, and the degree of $c(x)$ is at most $n-1$. Thus, $c(x)$ has the form:

$$c(x)=\sum_{i=0}^{n-1}c_i x^i$$

and the encoded codeword is $[c_0, c_1, \cdots, c_{n-1}]$, whose high-order symbols are exactly the original message (see the sketch below).
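The systematic encoding above can be exercised in a few lines of Python over a prime field. Everything here (the field GF(929), the parameter $t$, the message, and all function names) is an illustrative assumption, not a parameter set used by Brakedown:

```python
# A sketch of systematic Reed-Solomon encoding over the prime field GF(929):
# c(x) = m(x)*x^{2t} - ((m(x)*x^{2t}) mod p(x)). All parameters are illustrative.

P = 929  # prime modulus; 928 = 2^5 * 29, used below to test primitivity

def primitive_root(p=P):
    """Brute-force a generator alpha of GF(929)* using the prime factors of 928."""
    for g in range(2, p):
        if pow(g, (p - 1) // 2, p) != 1 and pow(g, (p - 1) // 29, p) != 1:
            return g

def poly_mul(a, b, p=P):
    """Multiply polynomials given as low-to-high coefficient lists."""
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % p
    return res

def poly_mod(a, b, p=P):
    """Remainder of a(x) divided by b(x); b is monic here."""
    a = a[:]
    while len(a) >= len(b):
        if a[-1]:
            shift, factor = len(a) - len(b), a[-1]
            for i, bi in enumerate(b):
                a[shift + i] = (a[shift + i] - factor * bi) % p
        a.pop()
    return a

def generator_poly(alpha, two_t, p=P):
    """p(x) = (x - alpha^0)(x - alpha^1) ... (x - alpha^{2t-1})."""
    g = [1]
    for i in range(two_t):
        g = poly_mul(g, [(-pow(alpha, i, p)) % p, 1], p)
    return g

def rs_encode(msg, g, p=P):
    """Systematic codeword: parity symbols followed by the message itself."""
    two_t = len(g) - 1
    shifted = [0] * two_t + msg              # m(x) * x^{2t}
    rem = poly_mod(shifted, g, p)
    rem += [0] * (two_t - len(rem))
    return [(-r) % p for r in rem] + msg     # c(x) = m(x)x^{2t} - rem(x)

t, msg = 2, [3, 1, 4, 1, 5]                  # corrects up to t symbol errors
g = generator_poly(primitive_root(), 2 * t)
cw = rs_encode(msg, g)
assert not any(poly_mod(cw, g)), "a valid codeword is divisible by p(x)"
print("codeword:", cw)
```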
### 2.3 Hash Functions

A hash function takes input data, scrambles it, and produces a fingerprint called a hash value, typically represented as a short string of letters and digits. Mathematically, it is written as:

$$H_m=Hash(m)$$

where $m$ is a message of arbitrary length (length limits vary by algorithm), $Hash$ is the hash function composed of a series of permutations and mathematical operations, and $H_m$ is the fixed-length output hash value. The characteristics of hash functions are:

* One-way property: it is computationally infeasible to deduce the original input from the output (and, trivially, different hash values imply different inputs).
* Collision resistance: the map from inputs to outputs is not injective, so two equal hash values likely, but not necessarily, come from the same input. It must be hard to find a pair of messages $x_1\neq x_2$ with $H(x_1)=H(x_2)$.
* Tamper evidence: a slight change in the input produces a completely different hash value.

One of the evolutionary goals of zero-knowledge (ZK) systems is to find computation-friendly collision-resistant hash functions. Currently common hash algorithms include SHA256, Keccak, Blake3, and Poseidon2 (detailed in previous articles).

![Merkel](https://hackmd.io/_uploads/ryR1JJBtC.png)

<p style="text-align: center">Figure 2: Structure of a Merkle Tree</p>

Merkle trees are fundamental to blockchain technology. First, in a blockchain, all transactions are organized into a Merkle tree whose root is placed in the block header, ensuring the immutability of the transactions within the block. Second, any change in a leaf node changes the root hash, making it easy to locate modified data within a large dataset and making verification very efficient. Third, by leveraging the one-way property of hash functions, zero-knowledge proof systems can verify polynomial commitments through a small number of nodes without revealing the underlying content. Fourth, Merkle trees are used in the Proof-of-Work (PoW) consensus algorithm, whose main operation is to combine given data with a sought-after value so that the resulting root hash is smaller than a given target (commonly known as mining); cryptocurrencies like Bitcoin use PoW. In summary, the Merkle tree's ability to efficiently and securely verify data, detect changes, and support zero-knowledge proofs and consensus algorithms makes it a cornerstone of modern blockchain technology.

## 3. Brakedown Commitment

### 3.1 Linear-Code Polynomial Commitments

The foundation of Brakedown is a polynomial commitment built from linear codes, whose advantage is that both proof size and verification time scale as the square root of the polynomial length. Below is a simplified description of such a commitment.

Suppose the coefficient vector of the polynomial $g$ has length $n$. It can be zero-padded to length $d=k^2$ with $d \geq n$. Therefore:

$$\begin{aligned} g(x)&=a_0+a_1x+a_2x^2+\cdots+a_{d-1}x^{d-1}=\sum_{i=0}^{d-1}a_ix^i\\ &=[1,x,x^2,\cdots,x^{k-1}]\left[\begin{matrix}b_{1,1}&b_{1,2}&\cdots &b_{1,k}\\b_{2,1}&b_{2,2}&\cdots &b_{2,k}\\ \vdots &\vdots&\ddots &\vdots\\b_{k,1}&b_{k,2}&\cdots &b_{k,k}\end{matrix}\right]\left[\begin{matrix}1\\x^k\\ x^{2k}\\ \vdots\\x^{(k-1)k}\end{matrix}\right]\\ &=\sum_{p=1}^{k}\sum_{q=1}^{k}b_{p,q}x^{(p-1)+(q-1)k} \end{aligned}$$

By this equation, the $d=k^2$ polynomial coefficients are arranged into a $k\times k$ matrix $\mathbf{B}$. We aim to construct a polynomial commitment whose proof size is proportional to the row count $k=\sqrt{d}$, while still committing to all $d$ coefficients of the original polynomial. Achieving this requires linear error-correcting codes (introduced in Section 2.2; Brakedown's linear-time instantiation is described in Section 3.2) and Merkle trees.

First, apply a linear error-correcting code to each row $\mathbf{B}_i$ of the matrix, denoted $\mathbf{Enc}(\mathbf{B}_i)$, extending $\mathbf{B}$ to a $k\times m$ matrix $\mathbf{C}$, where $m$ is the codeword length. To keep $m$ from growing too large, a constant-rate code is typically used.
$$\left[\begin{matrix}b_{1,1}&b_{1,2}&\cdots &b_{1,k}\\b_{2,1}&b_{2,2}&\cdots &b_{2,k}\\ \vdots &\vdots&\ddots &\vdots\\b_{k,1}&b_{k,2}&\cdots &b_{k,k}\end{matrix}\right]\stackrel{\text{Linear Code}}{\Longrightarrow} \left[\begin{matrix}c_{1,1}&c_{1,2}&\cdots &c_{1,m}\\c_{2,1}&c_{2,2}&\cdots &c_{2,m}\\ \vdots &\vdots&\ddots &\vdots\\c_{k,1}&c_{k,2}&\cdots &c_{k,m}\end{matrix}\right]$$

Next, commit to each column of $\mathbf{C}$ (each holding $k$ values) with a Merkle tree. Then use the root hashes of the column trees as leaf nodes (totaling $m$ nodes) to build a second Merkle tree; the polynomial commitment is the root of this second tree.

A proof of evaluation requires two steps: a Proximity Test and a Consistency Test. The Proximity Test ensures that the matrix committed by the Merkle tree indeed consists of $k=\sqrt{d}$ codewords. The Consistency Test ensures consistency between the evaluation point, the committed matrix, and the claimed result.

**Proximity Test:** The verifier sends a random vector $r=[r_1,r_2,\cdots,r_k]$ of length $k$ to the prover. The prover computes the product of this random vector and the matrix $\mathbf{B}$, obtaining a vector $u'$ of length $k$ ($u'$ is the random linear combination of the rows of $\mathbf{B}$ under $r$), and returns $u'$ to the verifier. The verifier must check the consistency of $u'$ with the committed matrix $\mathbf{C}$, whose columns we write as $\hat{u}_j$ for $1\leq j \leq m$. First, the verifier encodes $u'$ to get $\mathbf{Enc}(u')$. Then, for any column index $j$, the verifier can compute the corresponding entry of the combined codeword directly from the committed column:

$$\nu_j=\sum_{i=1}^{k}{r_i\cdot c_{i,j}}$$

Next, the verifier selects $l$ random column indices (where $l=\Theta(\lambda)$) and checks whether $\mathbf{Enc}(u')_j=\nu_j$ at each of them. If all the points match, the Proximity Test passes. This step leverages a fundamental property of linear codes: any linear combination of codewords is still a codeword.

**Consistency Test:** This step is similar to the Proximity Test, but instead of a random vector $r$, a specific vector $\mathbf{q}_1$ is used. Let's see how $\mathbf{q}_1$ is obtained. First, the polynomial evaluation can be written as the inner product of a coordinate matrix and the coefficient matrix:

$$\begin{aligned} g(x)&=a_0+a_1x+a_2x^2+\cdots+a_{d-1}x^{d-1}\\ &=\langle\mathbf{X},\mathbf{B}\rangle\\ &=\Bigg\langle\left[\begin{matrix}x^{0}&x^{1}&\cdots &x^{k-1}\\x^{k}&x^{k+1}&\cdots &x^{2k-1}\\ \vdots &\vdots&\ddots &\vdots\\x^{(k-1)k}&x^{(k-1)k+1}&\cdots &x^{k^2-1}\end{matrix}\right],\left[\begin{matrix}b_{1,1}&b_{1,2}&\cdots &b_{1,k}\\b_{2,1}&b_{2,2}&\cdots &b_{2,k}\\ \vdots &\vdots&\ddots &\vdots\\b_{k,1}&b_{k,2}&\cdots &b_{k,k}\end{matrix}\right]\Bigg\rangle \end{aligned}$$

In this equation, the matrix $\mathbf{X}$ can be written as the tensor product of two vectors $\mathbf{q}_1$ and $\mathbf{q}_2$, each of length $k$. Thus, for any input $x$ to the polynomial, there exist two vectors $\mathbf{q}_1$ and $\mathbf{q}_2$ such that:

$$g(x)=\langle\mathbf{q}_1\otimes\mathbf{q}_2,\mathbf{B} \rangle$$

During the Consistency Test, $\mathbf{q}_1$ replaces $r$ from the Proximity Test. The product of $\mathbf{q}_1$ and the matrix $\mathbf{B}$ yields $u''$. If the prover is honest, then $u''$ must satisfy:

$$\langle u'',\mathbf{q}_2 \rangle=\langle \mathbf{q}_1 \otimes\mathbf{q}_2,\mathbf{B} \rangle=g(x)$$

If this condition is not met, the test fails. The complete commitment scheme is summarized as follows.

---

**Commit:** P → V: a committed matrix $\hat{u} \in \mathbf{F}^{k\times m}$ with rows $\hat{u}_1, ..., \hat{u}_k$. If P is honest, each row $\hat{u}_i$ is a codeword under $\mathbf{Enc}$.

**Proximity Test:**
V → P: a random vector $r \in \mathbf{F}^k$.
P → V: a vector $u' \in \mathbf{F}^k$ claimed to equal $\sum_{i=1}^k r_i \cdot \mathbf{B}_i \in \mathbf{F}^k$.
// Now V probabilistically checks consistency between $\hat{u}$ and $u'$.
V: chooses $Q$ to be a random set of column indices of size $l = \Theta(\lambda)$. For each $j \in Q$: V queries all $k$ entries of the corresponding column of $\hat{u}$, namely $\hat{u}_{1,j}, ..., \hat{u}_{k,j}$, and confirms $\mathbf{Enc}(u')_j = \sum_{i=1}^k r_i \cdot \hat{u}_{i,j}$, halting and rejecting if not.

**Consistency Test:**
Let $\mathbf{q}_1, \mathbf{q}_2 \in \mathbf{F}^k$ be such that $g(x) = \langle\mathbf{q}_1 \otimes \mathbf{q}_2, \mathbf{B}\rangle$. The evaluation phase is identical to the Proximity Test except that $r$ is replaced with $\mathbf{q}_1$ (and fresh randomness is used to choose a set $Q'$ of columns for the consistency checks). If all checks pass, V outputs $\langle u'', \mathbf{q}_2\rangle$ as $g(x)$.

---
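The whole scheme above can be exercised end to end in a short Python sketch. The field, the matrix size, and the use of a Reed-Solomon-style evaluation code (instead of Brakedown's expander code) are simplifying assumptions, and the Merkle trees are elided; the point is only to watch the proximity and consistency checks pass:

```python
# Toy linear-code polynomial commitment: prime field, K x K coefficient matrix,
# rows encoded with an RS-style evaluation code. Merkle hashing is omitted.
import random

P = 65537          # prime field (assumption for illustration)
K = 4              # matrix is K x K, so d = K^2 = 16 coefficients
M = 8              # codeword length (rate 1/2 here; Brakedown uses a constant rate)

def encode(row):
    """Linear encoding: evaluate the row, read as a degree-<K polynomial,
    at M fixed points. Any linear code works in this role."""
    return [sum(c * pow(j, t, P) for t, c in enumerate(row)) % P for j in range(M)]

# Prover: arrange the d = K^2 coefficients into B and encode each row.
coeffs = [random.randrange(P) for _ in range(K * K)]
B = [coeffs[i * K:(i + 1) * K] for i in range(K)]   # row p holds x^{pK+q} coefficients
C = [encode(row) for row in B]                      # columns of C would be Merkle-committed

def combine(vec):
    """Linear combination of B's rows under `vec` (length K)."""
    return [sum(vec[i] * B[i][q] for i in range(K)) % P for q in range(K)]

# Proximity Test: by linearity, Enc(r . B) must agree column-wise with r applied to C.
r = [random.randrange(P) for _ in range(K)]
enc_u = encode(combine(r))
for j in random.sample(range(M), 3):                # verifier spot-checks a few columns
    assert enc_u[j] == sum(r[i] * C[i][j] for i in range(K)) % P

# Consistency Test: for evaluation point x, g(x) = <q1 (x) q2, B> = <q1 . B, q2>.
x = random.randrange(P)
q1 = [pow(x, p * K, P) for p in range(K)]           # row "weights"  x^{pK}
q2 = [pow(x, q, P) for q in range(K)]               # column "weights" x^q
u2 = combine(q1)
g_x = sum(u2[q] * q2[q] for q in range(K)) % P
assert g_x == sum(c * pow(x, e, P) for e, c in enumerate(coeffs)) % P
print("proximity and consistency checks pass; g(x) =", g_x)
```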
### 3.2 Expander Graphs and Linear-Time Encodable Linear Codes

Linear-time encoding is a family of linear error-correcting codes whose core component is the expander graph, illustrated below:

![image](https://hackmd.io/_uploads/B1udSOu9C.png)

<p style="text-align: center">Figure 3: Expander Graph</p>

An expander graph is a sparse graph with strong connectivity, denoted $G_{n,m,d}$, where $n$ is the number of message nodes on the left, $m$ is the number of codeword nodes on the right, and $d$ is the number of right-side nodes each left-side vertex connects to. The graph illustrated above is $G_{6,9,3}$. The resulting code is linear and can be represented by an $n$-row, $m$-column generator matrix $\mathbf{M}_{n,m,d}$ in which each row has $d$ non-zero entries. Encoding completes in linear time because each source symbol on the left affects only a constant number $d$ of codeword symbols, making the encoding time proportional to the number of source symbols times the constant $d$ (see the sketch below).
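As a sketch of why this is linear-time, the following Python snippet multiplies a message by a sparse generator matrix for a $G_{6,9,3}$-style graph as in Figure 3; the random edge choices and weights are illustrative, not a real Brakedown graph sampler:

```python
import random

# Sparse generator matrix for a G_{6,9,3}-style expander over a prime field
# (illustrative parameters; Brakedown samples its graphs more carefully).
P, N, M, D = 65537, 6, 9, 3
random.seed(1)
rows = [
    {j: random.randrange(1, P) for j in random.sample(range(M), D)}  # D neighbours per left node
    for _ in range(N)
]

def expand(msg):
    """Codeword symbol j accumulates the weighted message symbols of its neighbours.
    Cost is O(N * D) field operations: linear in N for constant degree D."""
    out = [0] * M
    for i, row in enumerate(rows):
        for j, w in row.items():
            out[j] = (out[j] + msg[i] * w) % P
    return out

msg = [random.randrange(P) for _ in range(N)]
a, b = expand(msg), expand([2 * m % P for m in msg])
assert b == [2 * x % P for x in a]   # the map is linear, as a code must be
print("encoded:", a)
```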
![image](https://hackmd.io/_uploads/Bktoi_O5C.png)

Given a set of parameters $0<\alpha<1$, $0<\beta<\frac{\alpha}{1.28}$, $r\geq \frac{1+2\beta}{1-\alpha}> 1$, $c_n\geq 3$, $d_n\geq 3$, [5] specifies the linear-time encoding process of an expander graph, where:

$$\begin{aligned} c_n=\Bigg\lceil \min \Big(&\max(1.28\beta n,\ \beta n+4),\\ &\frac{1}{\beta \log_2{\frac{\alpha}{1.28\beta}}}\big( \frac{110}{n}+ H(\beta)+\alpha H(\tfrac{1.28\beta}{\alpha})\big) \Big)\Bigg\rceil \end{aligned}$$

$$d_n=\Bigg\lceil \min \Big( \big( 2\beta+\frac{(r-1)+110/n}{\log_2 q}\big)n,\ D\Big)\Bigg\rceil$$

$$\begin{aligned} D=\max\Bigg(&\frac{r\alpha H(\beta /r)+\mu H(\nu /\mu)+110/n}{\alpha \beta \log_2{\frac{\mu}{\nu}}},\\ &\frac{r\alpha H(\beta /(r\alpha))+\mu H((2\beta+0.03)/\mu)+110/n}{\beta \log_2{\frac{\mu}{2\beta+0.03}}},\\ &(2\beta+0.03)\Big(\frac{1}{\alpha r-\beta}+\frac{1}{\alpha \beta}+\frac{1}{\mu -2\beta -0.03}\Big)+1 \Bigg) \end{aligned}$$

Here, $H(x)=-x\log_2x-(1-x)\log_2(1-x)$ is the binary entropy function, $\mu =r-1-r\alpha$, and $\nu =\beta +\alpha \beta +0.03$.

The encoding algorithm is a recursive process that constructs a linear mapping $\mathbf{Enc}_n:\mathbb{F}^n\rightarrow\mathbb{F}^{rn}$, sending a field vector $x$ of length $n$ to a field vector $\omega$ of length $rn$. The codeword $\omega$ consists of three parts, $x$, $z$, and $\nu$ (so its first $n$ symbols are identical to the original message), with code rate $1/r$ and a fixed relative distance $\gamma = \beta /r$. The algorithm uses two expander-graph generator matrices, $\mathbf{M}_{n,\alpha n,c_n}$ and $\mathbf{M}_{\alpha rn,(r-1-r\alpha)n,d_n}$. A schematic of the algorithm is shown below.

![image](https://hackmd.io/_uploads/By2VlYuq0.png)

<p style="text-align: center">Figure 4: Linear-Time Encoding Based on Expander Graph</p>

Since linear-time encoding is recursive, a boundary condition is needed to terminate the recursion; in Brakedown, the recursion stops once the message length drops below 30.

Brakedown requires no trusted setup, and both commitment and prover time are $O(d)$, i.e., linear in the size of the polynomial. Additionally, Brakedown is plausibly post-quantum secure. Its main drawback is the relatively large proof size, which scales as $O(\sqrt{d})$ with substantial concrete overhead. Brakedown is a flexible commitment scheme, suitable for both univariate and multilinear polynomial commitments. With the development of new hash functions and error-correcting codes, there is significant room for further optimization.

## 4. Optimization of Brakedown

As analyzed above, the main drawback of the Brakedown polynomial commitment is its relatively large proof size, so optimization efforts focus on reducing it. Existing methods include:

① Parallel execution of the Proximity Test and Consistency Test: the two tests can be executed in parallel, and the same query data can be used for both testing and evaluation. Alternatively, the Proximity Test can be omitted entirely.

② Rearranging the polynomial coefficients: this approach, used by Ligero [11], alters the arrangement of the polynomial coefficients. Instead of a square matrix, the row-to-column ratio is tuned to the actual encoding and query parameters, which can significantly reduce the proof size, as the sketch below illustrates.
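To see why the aspect ratio matters, consider a toy cost model (an assumption for illustration, not Ligero's exact accounting): the proof contains one combined row of length `cols` plus `t` opened columns of height `rows`, with Merkle authentication paths ignored. A few lines of Python locate the best shape:

```python
import math

# Toy proof-size model: one combined row (cols field elements) plus t opened
# columns (rows elements each). Merkle paths and code-rate blowup are ignored.
d, t = 1 << 20, 100          # polynomial length and number of queried columns

def proof_size(rows):
    cols = d // rows         # rows * cols ~ d coefficients
    return cols + t * rows

square = int(math.isqrt(d))  # the square arrangement of Section 3.1
best_size, best_rows = min((proof_size(r), r) for r in range(1, 4 * square))
print(f"square {square}x{square}: {proof_size(square)} field elements")
print(f"best: {best_rows} rows -> {best_size} field elements")
```

Under this model the optimal matrix is much wider than it is tall (roughly $\sqrt{d/t}$ rows), cutting the proof several-fold versus the square arrangement.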
In addition to the optimizations mentioned above, this article considers another potential approach: expanding the dimensionality of the polynomial coefficient matrix and representing polynomial evaluations through higher-dimensional tensor products (to be developed).

---

References:

[1] E. Ben-Sasson, I. Bentov, Y. Horesh, and M. Riabzev. Fast Reed-Solomon interactive oracle proofs of proximity. In ICALP, 2018. https://drops.dagstuhl.de/storage/00lipics/lipics-vol107-icalp2018/LIPIcs.ICALP.2018.14/LIPIcs.ICALP.2018.14.pdf

[2] A. Kate, G. M. Zaverucha, and I. Goldberg. Constant-size commitments to polynomials and their applications. In ASIACRYPT, 2010.

[3] https://doc-internal.dalek.rs/bulletproofs/notes/inner_product_proof/index.html

[4] U. Haböck. A summary on the FRI low degree test. Cryptology ePrint Archive, Paper 2022/1216, 2022.

[5] A. Golovnev, J. Lee, S. Setty, J. Thaler, and R. S. Wahby. Brakedown: Linear-time and field-agnostic SNARKs for R1CS. Cryptology ePrint Archive, Paper 2021/1043, 2021. https://eprint.iacr.org/2021/1043.pdf

[6] J. Bootle, A. Chiesa, and J. Groth. Linear-time arguments with sub-linear verification from tensor codes. In TCC, 2020.

[7] S. Setty. Spartan: Efficient and general-purpose zkSNARKs without trusted setup. In CRYPTO, 2020.

[8] T. Xie, Y. Zhang, and D. Song. Orion: Zero knowledge proof with linear prover time. Cryptology ePrint Archive, Paper 2022/1010, 2022. https://eprint.iacr.org/2022/1010

[9] B. Diamond and J. Posen. Succinct arguments over towers of binary fields. Cryptology ePrint Archive, Paper 2023/1784, 2023. https://eprint.iacr.org/2023/1784.pdf

[10] https://github.com/IrreducibleOSS/binius

[11] S. Ames, C. Hazay, Y. Ishai, and M. Venkitasubramaniam. Ligero: Lightweight sublinear arguments without a trusted setup. In CCS, 2017.