# Plonky3 migration

This note describes a high-level plan for migrating Miden VM to Plonky3. The plan is a preliminary roadmap that can (and most likely will) be adjusted as we progress through the migration.

### Step 1: PoC migration

The first step is to create a PoC that can take a Miden VM execution trace and generate a Plonky3 proof from it. We can apply the following simplifications for the PoC:

1. The proof generation will take only the `main` trace as input (as generated by the [processor::execute()](https://github.com/0xMiden/miden-vm/blob/next/processor/src/lib.rs#L179) function). For the purposes of the PoC, the `aux` trace can be randomly generated.
2. The constraint evaluation step can work with just a single constraint. The simplest constraint to pick is the clock cycle increment constraint, i.e., $clk' = clk + 1$.

Once the PoC is implemented, we will use it to compare the performance of the existing Winterfell-based implementation to the Plonky3-based implementation. Specifically, we'd be interested in comparing:

- Single-threaded performance for traces of $2^{16}$ and $2^{20}$ rows on a high-end laptop (e.g., an Apple silicon chip).
- Multi-threaded performance for traces of $2^{16}$ and $2^{20}$ rows on high-end laptops (e.g., M4 Max) as well as server-grade machines with at least 64 cores (e.g., Graviton 4, AMD Zen 4/5).

To make the comparison "apples-to-apples", we will need to:

- Exclude the constraint evaluation step from comparisons.
- Use the same hash functions for both systems. Specifically, we should benchmark with both BLAKE3 and Poseidon2 hash functions.
- Use the same FRI folding factor for both systems. Since Plonky3 currently supports only a FRI folding factor of 2, this is what we'll have to use.

### Step 2: Potential optimizations

If we identify significant gaps in performance between Plonky3 and our current implementation, we will need to understand if and how we'll be able to bridge these gaps.
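As a concrete reference for what the PoC's constraint evaluation involves, the single transition constraint from step 1 can be sketched without any framework as a check over adjacent rows of the `clk` column. This is a minimal illustration, not Plonky3's `AirBuilder` form; the field reduction shown is a plain `u128` reduction by the 64-bit Goldilocks-style prime, and the function name is ours. Its triviality is also why step 1 excludes constraint evaluation from the performance comparisons.

```rust
// Goldilocks-style 64-bit prime, p = 2^64 - 2^32 + 1 (illustrative arithmetic).
const P: u128 = 0xFFFF_FFFF_0000_0001;

/// Evaluates the transition constraint clk' - (clk + 1) for every pair of
/// adjacent rows; a valid trace makes every evaluation zero.
fn eval_clk_constraint(clk_column: &[u64]) -> Vec<u64> {
    clk_column
        .windows(2)
        .map(|w| {
            // reduce inputs mod p before subtracting to avoid underflow
            let (clk, clk_next) = (w[0] as u128 % P, w[1] as u128 % P);
            // clk' - clk - 1 (mod p); zero iff the constraint holds
            ((clk_next + P - clk + P - 1) % P) as u64
        })
        .collect()
}

fn main() {
    // a valid clock column increments by 1 at every step
    assert!(eval_clk_constraint(&[0, 1, 2, 3]).iter().all(|&e| e == 0));
    // a skipped cycle produces a nonzero evaluation at the bad step
    assert!(eval_clk_constraint(&[0, 1, 3]).iter().any(|&e| e != 0));
}
```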
Overall, if Plonky3 performance is within 20% of our current performance, we can delay optimizations till later, but a gap bigger than this will require some immediate analysis. Specific areas of interest are:

- The low-degree extension step (i.e., the FFTs).
- The Merkle tree construction step (i.e., hashing).

### Step 3: WASM compatibility

Our current Winterfell-based implementation allows us to run the prover in WebAssembly. This feature is currently used by wallets that run in the browser and by browser extensions. Thus, we will need to make sure that the Plonky3-based implementation can be compiled to WASM and runs with roughly the same performance as what we get from Winterfell. If we discover that the performance of the Plonky3-based system is significantly worse, we will need to try to improve it, though at this stage we don't need to implement sophisticated optimization techniques such as multi-threading or WebGPU.

### Step 4: Miden VM integration

Once we have the PoC working and are satisfied with the performance, we will need to integrate the PoC into Miden VM. This will require the following:

1. We will need to make sure the `aux` trace is generated properly. For this, we will either need to use our current methodology relying on the `AuxTraceBuilder` or use Plonky3's `AirBuilder` with Interactions.
2. We will need to implement a simple FRI option for Plonky3 to support folding factors greater than 2. Specifically, we need to support folding factor $4$ for our recursive verifier.
3. We will need to update/customize Plonky3 components so that the proof generation process follows the same methodology as we use now. Specifically, this will include:
   a. Drawing random challenges using a sponge construction.
   b. Inverting element order when computing commitments to the OOD frame.
   c. Using algebraic batching.
   d. Anything else?

Once the above are implemented, we should be able to replace the Winterfell prover in Miden VM with Plonky3.
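To make the folding-factor requirement from step 4 concrete, here is a sketch of a factor-4 FRI fold shown in the coefficient domain for clarity (a real prover folds evaluations over a coset, and this is not Plonky3's actual FRI API). Writing $f(x) = f_0(x^4) + x f_1(x^4) + x^2 f_2(x^4) + x^3 f_3(x^4)$, the folded polynomial for a verifier challenge $\beta$ is $f_0 + \beta f_1 + \beta^2 f_2 + \beta^3 f_3$. The field arithmetic is a plain `u128` reduction by a Goldilocks-style prime.

```rust
// Goldilocks-style 64-bit prime (illustrative arithmetic, not a field crate).
const P: u128 = 0xFFFF_FFFF_0000_0001;

/// Folds a polynomial (in coefficient form) by a factor of 4 using the
/// verifier challenge `beta`; the result has a quarter of the length.
fn fold4(coeffs: &[u64], beta: u64) -> Vec<u64> {
    assert!(coeffs.len() % 4 == 0);
    let b = beta as u128 % P;
    let b2 = b * b % P;
    let b3 = b2 * b % P;
    coeffs
        .chunks(4)
        .map(|c| {
            // chunk i holds the i-th coefficients of f0, f1, f2, f3
            ((c[0] as u128 % P
                + c[1] as u128 % P * b % P
                + c[2] as u128 % P * b2 % P
                + c[3] as u128 % P * b3 % P)
                % P) as u64
        })
        .collect()
}

fn main() {
    // f(x) = 1 + 2x + 3x^2 + 4x^3 folds to the constant 1 + 2b + 3b^2 + 4b^3;
    // with beta = 2 that is 1 + 4 + 12 + 32 = 49.
    assert_eq!(fold4(&[1, 2, 3, 4], 2), vec![49]);
    // a degree-7 polynomial folds to degree 1 (8 coefficients -> 2)
    assert_eq!(fold4(&[1, 2, 3, 4, 5, 6, 7, 8], 1).len(), 2);
}
```

Composing two factor-2 folds with challenges $\beta$ and $\beta^2$ yields the same result, which is one way such an option could be layered on top of Plonky3's existing factor-2 FRI.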
### Step 5: Integration of constraint evaluation logic

Up to this point, the constraint evaluation logic for the Plonky3-based system would use the single constraint specified earlier in this note. However, we will also need to integrate proper constraint evaluation logic into the Plonky3-based prover. This is expected to be a simple drop-in of the constraint evaluation code generated by the upcoming Plonky3 backend of [AirScript](https://github.com/0xMiden/air-script), but we will probably need to define the interface for this.

### Step 6: Additional optimizations

Based on what we learn in steps 2, 3, and 5, we may need to perform additional performance optimizations. Specifically:

- If the performance degradation was within the acceptable margin, we may still want to find ways to bring performance back to the level we had before, or maybe even surpass it.
- If the constraint evaluation step in Plonky3 is significantly slower than what we are able to do in Miden VM, we will need to find ways to improve it.
- We may consider implementing more advanced optimizations for WASM (e.g., WebGPU, multithreading), if these optimizations lead to significant improvements.

### Bonus step 1: Lifted FRI

To prepare for the migration of Miden VM to multi-table STARKs, we will need to update our FRI implementation from step 4 to support Lifted FRI techniques (TODO: insert paper link).

### Bonus step 2: Perfect zero-knowledge

To enable true ZK for client-side proving, we will need to update our version of Plonky3 to support perfect ZK. This could follow the work done for Winterfell in [this PR](https://github.com/facebook/winterfell/pull/293).
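Whatever concrete scheme the linked PR settles on, one standard ingredient of perfect ZK is masking a committed polynomial with a random multiple of the vanishing polynomial of the trace domain: the masked polynomial agrees with the original on the trace domain but leaks nothing about it elsewhere. The sketch below is our own illustration of that idea (names, field arithmetic, and layout are ours, not Plonky3's or Winterfell's).

```rust
// Goldilocks-style 64-bit prime (illustrative arithmetic).
const P: u128 = 0xFFFF_FFFF_0000_0001;

/// Masks f(x) (coefficient form, len = trace-domain size n) by adding
/// r(x) * (x^n - 1). Since x^n - 1 vanishes on the size-n trace domain,
/// the masked polynomial agrees with f there, while its values off the
/// domain are re-randomized by r.
fn mask(f: &[u64], r: &[u64]) -> Vec<u64> {
    let n = f.len();
    let mut out = vec![0u128; n + r.len()];
    for (i, &c) in f.iter().enumerate() {
        out[i] = c as u128 % P;
    }
    for (j, &rc) in r.iter().enumerate() {
        let rc = rc as u128 % P;
        out[n + j] = (out[n + j] + rc) % P; // + r_j * x^(n+j)
        out[j] = (out[j] + P - rc) % P; // - r_j * x^j
    }
    out.into_iter().map(|c| c as u64).collect()
}

/// Horner evaluation of a polynomial at x, mod p.
fn eval_at(coeffs: &[u64], x: u64) -> u64 {
    let x = x as u128 % P;
    let mut acc: u128 = 0;
    for &c in coeffs.iter().rev() {
        acc = (acc * x + c as u128) % P;
    }
    acc as u64
}

fn main() {
    let f = [3u64, 1, 4, 1]; // n = 4
    let masked = mask(&f, &[5]); // r(x) = 5
    // x = 1 and x = -1 are 4th roots of unity, so the values agree there...
    assert_eq!(eval_at(&masked, 1), eval_at(&f, 1));
    assert_eq!(eval_at(&masked, (P - 1) as u64), eval_at(&f, (P - 1) as u64));
    // ...but differ off the trace domain (here at x = 2).
    assert_ne!(eval_at(&masked, 2), eval_at(&f, 2));
}
```

Note that in a real prover the randomizer coefficients would come from a cryptographic RNG, and the FRI/DEEP queries would also need to stay off the trace domain for the hiding property to hold.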