# Milestones of Machina iO from Q1 2026 to Q2 2027

As discussed in [our position paper](https://eprint.iacr.org/2025/2139), iO is the only way to build trustless confidential smart contracts, namely ones that do not require any trust beyond the existing Ethereum validators, and it is essential for making confidential smart contracts scalable without sacrificing their security. This document describes our team’s milestones from Q1 2026 through Q2 2027 toward realizing practical iO, as well as the research collaborations we seek.

## Milestones

```graphviz
digraph milestones {
  rankdir=LR;
  newrank=true;
  nodesep=1.0; // increases the separation between nodes
  node [color=Black, fontname=Courier, shape=box]; // All nodes will have this shape and colour
  edge [color=Blue, style=dashed]; // All the lines look like this

  // ---- Node definitions (IDs + labels) ----
  bgg_mul [label="Milestone 1:\nFHE Multiplication over\nKey-Homomorphic Encodings\n(Q1 2026)"];
  prf_1 [label="Milestone 2:\nBlind PRF over\nKey-Homomorphic Encodings with\nImplementable Parameters\n(Q2 2026)"];
  prf_2 [label="Milestone 3:\nExecutable Blind PRF over\nKey-Homomorphic Encodings\n(Q3 2026)"];
  obf_nontriv [label="Milestone 4:\nObfuscation with\nNontrivial Input Size\n(Q3 2026)"];
  snark_ver [label="Milestone 5:\nSNARK Verification over\nKey-Homomorphic Encodings\n(Q4 2026 - Q1 2027)"];
  // blind_snark [label="Blind SNARK i.e.,\nSNARK inside FHE\n(Q1 2026)"];
  we_2027q1 [label="Milestone 6:\nWitness Encryption\n(Q1 2027 - Q2 2027)"];
  obf_cond_fhe [label="Obfuscation for\nConditional FHE Decryption"];
  // snark_rec [label="Blind SNARK recursion\n(Q2 2026 - Q3 2026)"];
  verify_fhe [label="Verifiable FHE"];

  // ---- Clusters ----
  subgraph cluster_input_scaling {
    label = "Input Scaling";
    style = rounded;
    color = blue;
    bgg_mul; prf_1; prf_2;
  }
  // subgraph cluster_snark {
  //   label = "Blind SNARK";
  //   style = rounded;
  //   color = gray;
  //   snark_ver;
  //   blind_snark;
  //   snark_rec;
  // }
  subgraph cluster_products {
    label =
"Products";
    style = rounded;
    color = red;
    obf_nontriv; we_2027q1;
  }
  subgraph cluster_out {
    label = "Outside of Milestones";
    style = rounded;
    color = gray;
    verify_fhe; obf_cond_fhe;
  }

  // ---- Edges ----
  bgg_mul -> prf_1;
  prf_1 -> prf_2;
  prf_2 -> obf_nontriv;
  obf_nontriv -> we_2027q1;
  we_2027q1 -> obf_cond_fhe;
  snark_ver -> we_2027q1;
  verify_fhe -> obf_cond_fhe;
  // blind_snark -> snark_rec;
  // snark_rec -> obf_cond_fhe;

  // ---- Ranks ----
  // { rank = same; nr_dummy_prf; }
  // { rank = same; blind_prf; }
  // { rank = same; obf_nontriv; snark_ver; }
}
```

### Milestone 1: FHE Multiplication over Key-Homomorphic Encodings

We implement a circuit evaluated over key-homomorphic encodings that simulates FHE multiplication. The implementation can target an arbitrary lattice-based FHE scheme. We will first demonstrate the usefulness of the circuit by implementing predicate encryption or laconic functional evaluation (LFE), which are weaker primitives than iO but have remained unimplemented for over 10 years.

**Connection to subsequent milestones**: We expect that this circuit implementation will serve as a foundation for the Milestone 3 implementation. Specifically, if the PRF can be expressed as a polynomial in the PRF key, then the blind PRF computation described below could be realized by repeatedly performing FHE multiplications (and additions) over the key-homomorphic encoding.

**Timeline**: Q1 2026

**Implementation**: An implementation of FHE multiplication over key-homomorphic encodings.

**Dissemination**: We will publish a paper that demonstrates the first implementation of predicate encryption (or LFE), along with new techniques that enable the efficient simulation of FHE multiplication over key-homomorphic encodings. The paper could be submitted to [ACM CCS 2026](https://www.sigsac.org/ccs/CCS2026/call-for/call-for-papers.html).
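As a structural illustration of the connection above (no encodings involved, and purely a sketch): if the PRF output is a polynomial in the PRF key, its evaluation is a chain of multiply-and-add steps (Horner's rule), and over key-homomorphic encodings each step would become one FHE multiplication plus one addition on the encoded key.

```python
# Structural sketch only: a polynomial in the key evaluated by repeated
# multiply-and-add (Horner's rule). The modulus is illustrative.
q = 2**16 + 1

def horner(coeffs, key):
    """Evaluate the polynomial with the given coefficients (highest degree
    first) at `key`, modulo q, using one multiply and one add per step."""
    acc = 0
    for c in coeffs:
        acc = (acc * key + c) % q
    return acc

assert horner([1, 0, 0], 5) == 25       # x^2 at x = 5
assert horner([3, 2, 1], 10) == 321     # 3x^2 + 2x + 1 at x = 10
```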
### Outcomes of Milestone 1

Although we have not yet been able to publish the paper, we have completed the implementation planned for Milestone 1. The main implementation updates are as follows.

#### GPU Implementation

We have developed GPU implementations of the fundamental operations underlying BGG+ encodings and lattice-based iO. Specifically, these include:

* arithmetic operations on polynomials and digit decomposition,
* arithmetic operations on polynomial matrices, and
* preimage sampling.

Notably, AI coding tools wrote almost all of the GPU code. Compared with the CPU implementation, the GPU implementation achieved roughly a 1,000-fold speedup for polynomial matrix multiplication and roughly a 20-fold speedup for preimage sampling. The benchmark code is available here:

* https://github.com/MachinaIO/mxx/blob/main/benches/bench_matrix_mul_cpu.rs
* https://github.com/MachinaIO/mxx/blob/main/benches/bench_matrix_mul_gpu.rs
* https://github.com/MachinaIO/mxx/blob/main/benches/bench_preimage_cpu.rs
* https://github.com/MachinaIO/mxx/blob/main/benches/bench_preimage_gpu.rs

#### New Lookup Table Evaluation over BGG+ Encodings

Last year, we introduced a new technique to evaluate lookup tables (LUTs) over encodings [[SB25]](https://eprint.iacr.org/2025/1870). However, for large circuits, this is still far from practical. One bottleneck is that, if we let $T$ denote the size of a LUT and $G_L$ the number of gates that evaluate the LUT, then an evaluation key generated by a trusted party, which is part of the obfuscated circuit, must contain $\mathcal{O}(T G_L)$ lattice preimages. To address this bottleneck, we built a new LUT evaluation technique that reduces the number of these preimages to $\mathcal{O}(T + G_L)$. We believe the construction is secure under the LWE and private-coin evasive LWE assumptions.
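To give a feel for what the preimage-count reduction buys, here is a back-of-the-envelope comparison; the sizes are hypothetical stand-ins (a 16-bit table, and a LUT-gate count borrowed from our CKKS multiplication circuit), not measured values.

```python
# Illustrative arithmetic only: compare preimage counts if the evaluation
# key scales multiplicatively in T and G_L versus additively.
T = 2**16       # hypothetical LUT size (a 16-bit table)
G_L = 75_663    # hypothetical LUT-gate count (borrowed from our CKKS circuit)

per_gate = T * G_L   # every gate carrying its own copy of the T entries
shared = T + G_L     # entries and gates handled separately

print(f"per-gate: {per_gate:,}; shared: {shared:,}; "
      f"reduction: {per_gate / shared:,.0f}x")
```

At these sizes the additive scheme needs roughly four orders of magnitude fewer preimages.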
The implementation is publicly available here: https://github.com/MachinaIO/mxx/tree/main/src/lookup/ggh15

#### Evaluating Modulo-q Multiplication over BGG+ Encodings with Large Parameters

Using the new LUT evaluation technique, we succeeded in evaluating a modulo-$q$ multiplication over BGG+ encodings, where $q$ is a prime factor of the encoding modulus $Q$ and is about 24 bits. The parameters used in the evaluation were sufficiently large for correctness and security; in particular, the ring dimension is $n = 2^{14}$, and the encoding modulus size is $\log_2 Q = 264$ bits. However, the encoding modulus size was chosen with only correctness in mind, so the actual modulus size may need to be larger in order to ensure security (e.g., for noise flooding).

With 8 GPUs (NVIDIA RTX PRO 6000) running in parallel, preprocessing (corresponding to obfuscation in iO) took about 1.5 hours, and online evaluation (corresponding to obfuscated-circuit evaluation) finished in under 1 minute. The preprocessing output size, corresponding to a part of an obfuscated circuit, was 731 GB. Notably, we confirmed that the main preprocessing bottleneck, preimage sampling for all LUT entries (23,114 entries in our experiment), parallelizes well. This implies that the preprocessing time can be reduced by scaling out the number of identical GPUs.

#### Slotwise Operations over BGG+ Encodings

To ultimately support FHE multiplication, we extended modulo-$q$ multiplication over our BGG+ encodings to polynomial multiplication, NTT/inverse NTT, and coefficient modulus switching for polynomials. To achieve this, the BGG+ encodings had to support encoding the value of each evaluation slot of a polynomial separately, as well as a slot transfer operation that moves values across slots.
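As background on evaluation slots, here is a self-contained toy (tiny parameters chosen for readability, not related to our implementation): in the negacyclic ring $\mathbb{Z}_q[x]/(x^n + 1)$ with $q \equiv 1 \pmod{2n}$, the slots of a polynomial are its values at the odd powers of a primitive $2n$-th root of unity, and ring multiplication acts slotwise.

```python
# Toy negacyclic ring Z_q[x]/(x^n + 1) with q = 17, n = 8.
# Since q ≡ 1 (mod 2n), x^n + 1 splits into linear factors; the "slots"
# of a polynomial are its evaluations at the odd powers of a primitive
# 2n-th root of unity, and multiplication mod (x^n + 1) acts slotwise.
q, n = 17, 8
omega = 3  # primitive 16th root of unity mod 17 (3^8 ≡ -1 mod 17)
roots = [pow(omega, k, q) for k in range(1, 2 * n, 2)]  # odd powers

def slots(poly):
    """Evaluate poly (coefficients, low degree first) at each root."""
    return [sum(c * pow(r, i, q) for i, c in enumerate(poly)) % q
            for r in roots]

def negacyclic_mul(a, b):
    """Schoolbook multiplication in Z_q[x]/(x^n + 1)."""
    res = [0] * (2 * n)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % q
    return [(res[i] - res[i + n]) % q for i in range(n)]  # x^n ≡ -1

a = [1, 2, 0, 0, 3, 0, 0, 0]
b = [5, 0, 1, 0, 0, 0, 0, 2]
assert slots(negacyclic_mul(a, b)) == [(x * y) % q
                                       for x, y in zip(slots(a), slots(b))]
```

A slot transfer operation permutes or copies values across these slot positions, which is exactly what plain ring arithmetic cannot do on its own.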
To enable these new features without the preprocessing output size for LUTs growing with the number of slots, we introduced a variant of BGG+ encodings in which the public key is shared across all slots, while the secret key is specific to each slot. Furthermore, by introducing a mechanism for switching the secret key across different slots, we realized slot transfer for this variant of the encodings. The preprocessing data size for slot transfer is bounded by $\mathcal{O}(N G_S)$, where $G_S$ is the number of slot transfer gates, and $N$ is the number of slots, namely the ring dimension.

#### CKKS FHE Multiplication over BGG+ Encodings with Small Parameters

By combining all of the above features, we succeeded in implementing a circuit for CKKS FHE multiplication that can be evaluated over BGG+ encodings. The circuit size for a single FHE multiplication is 216,105 gates, which includes 512 multiplication gates, 75,663 LUT gates, and 1,632 slot transfer gates. The depth of the circuit, excluding addition and small scalar multiplication gates, is 104, which calls for further optimization of the circuit design. We have confirmed that, when evaluating this circuit over raw polynomials (rather than BGG+ encodings), the difference between the decryption of the circuit output and the expected output plaintext is below an estimated bound. We are now running this circuit over BGG+ encodings.

#### GSW FHE Multiplication over BGG+ Encodings

We found that the CKKS FHE circuit is impractical: its depth is so large that the modulus needed to absorb the noise growth exceeds the practical range. To address this inefficiency, we next implemented a circuit for GSW FHE evaluation that can be evaluated over BGG+ encodings. As a concrete example, we benchmarked the computation of Goldreich's PRG on input an encoding of an encrypted seed. The evaluation was conducted on a single H200 GPU with ring dimension $n = 2^{16}$ and modulus size $\log_2 Q = 1120$ bits.
Because the circuit size and total computational cost were too large to complete the full computation, we evaluated only the smallest parallelizable unit, such as the evaluation of a single gate in each circuit layer, and estimated the benchmark results by multiplying these measurements by the corresponding number of parallel computation units. The estimated benchmark results are as follows:

* In preprocessing (corresponding to obfuscation in iO), the minimum latency is 10 minutes when more than $10^{12}$ GPUs are available.
* In online evaluation (corresponding to evaluation in iO), the minimum latency is 51 minutes when more than $10^{18}$ GPUs are available.

These estimates imply a shift from an era in which the central question was “whether iO can be run on real hardware within a practical amount of time” to one in which the question is “how far the total computational cost can be kept within realistic bounds.”

#### Unfinished Items

* We believe the LUT evaluation and slot transfer techniques we have introduced can be proven secure under the private-coin evasive LWE and all-product LWE assumptions, which are the same assumptions underlying Diamond iO, but we have not completed the full security proofs of these techniques.
* We had originally planned to present the outcome of Milestone 1 as the first implementation of predicate encryption, but we realized that this would be difficult from a security standpoint. The reason is that the new LUT evaluation requires the secret matrices used in preprocessing to be sampled by a single trusted party. While this is acceptable in the iO setting, it is not acceptable for encryption schemes in which the encryptor and the key generator are different entities. Nevertheless, we will publish a paper that presents the formal specification and security proof of this new LUT evaluation technique, likely in another suitable form.
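The extrapolation methodology above, measuring the smallest parallelizable unit and multiplying by the number of units, can be sketched as follows; every number in the sketch is a placeholder, not one of our actual measurements.

```python
# Hypothetical sketch of the benchmark extrapolation: layers run
# sequentially, and within each layer the parallel units are spread
# across the available GPUs. All numbers are placeholders.
import math

def estimated_latency_s(unit_time_s, units_per_layer, num_layers, num_gpus):
    """Estimated end-to-end latency from a single per-unit measurement."""
    batches = math.ceil(units_per_layer / num_gpus)  # sequential batches per layer
    return num_layers * batches * unit_time_s

# With enough GPUs, latency bottoms out at num_layers * unit_time:
assert estimated_latency_s(1.0, 10**6, 60, 10**7) == 60.0
# With fewer GPUs, the same workload takes proportionally longer:
assert estimated_latency_s(1.0, 10**6, 60, 10**3) == 60_000.0
```

This also makes explicit why a minimum latency exists at all: the sequential layer dependency survives no matter how many GPUs are added.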
<!--
### Milestone 2: Noise Refreshing with Dummy PRF

We implement noise refreshing of GGH15 encodings, described in Subsection 4.2 of the position paper, using a dummy implementation for a blind PRF over key-homomorphic encodings (described below). This milestone has the following two purposes.

1. We will confirm that, assuming the blind PRF over key-homomorphic encodings, the idea of noise refreshing actually works in practice. In more detail, we need to make sure that **parameters (especially the bit size of the modulus $q$) grow only polylogarithmically in the input size**.
2. By evaluating the performance of noise refreshing while varying the circuit size and depth of a dummy implementation of the blind PRF over key-homomorphic encodings, **we will determine concrete numerical targets for how small the circuit size and depth of the blind PRF must be so that we can support the obfuscation of circuits with sufficiently large input size within a practical running time**.

**Timeline**: Q2 2026

**Implementation**: An implementation of noise refreshing with a dummy implementation of blind PRF. The design of the implementation should be modular so that the blind PRF implementation can be easily replaced later.

**Dissemination**: ~~We will publish a paper that formally describes the construction of noise refreshing and its security proof. The paper publication might be delayed to Q3 2026.~~ The noise-refreshing process for encodings, which is central to this milestone, will first be introduced in the revised version of the Diamond iO 1 paper, where it is used to instantiate a PRF with practical parameters for Milestone 3-1.
-->

### Milestone 2: Blind PRF over Key-Homomorphic Encodings with Implementable Parameters

We build an implementation that evaluates a PRF over key-homomorphic encodings of an FHE-encrypted PRF key, without revealing either the PRF key or the PRF output. We refer to this primitive as a blind PRF.
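Evaluating such a PRF homomorphically accumulates noise in the encodings. The following toy noise-growth model (all numbers hypothetical) illustrates why a refreshing step that resets the accumulated error keeps the required modulus size independent of the total circuit depth, analogous to bootstrapping in FHE.

```python
# Toy noise-growth model, all numbers hypothetical: each homomorphic
# multiplication adds a fixed number of bits to the noise bound, and a
# refresh resets the noise to a base level, so the modulus only needs to
# track the depth between refreshes, not the total depth.
def worst_noise_bits(depth, growth_per_mul=10, base=20, refresh_every=None):
    noise = worst = base
    for level in range(1, depth + 1):
        noise += growth_per_mul
        worst = max(worst, noise)
        if refresh_every and level % refresh_every == 0:
            noise = base  # homomorphically discard the accumulated error
    return worst

assert worst_noise_bits(100) == 20 + 100 * 10             # grows with depth
assert worst_noise_bits(100, refresh_every=4) == 20 + 40  # depth-independent
```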
This implementation must minimize noise growth so that it can ultimately be instantiated within an implementable range of lattice parameters. However, it will likely require too much total computation to complete the full execution in practice.

Our current understanding is that such a blind PRF can be implemented using a noise-refreshing technique that homomorphically discards the error accumulated in a given encoding. Moreover, this technique is useful not only for PRFs, but also for arbitrary computation over encodings, as it makes the lattice parameters independent of the circuit depth. In this sense, it plays the same role as bootstrapping in FHE.

**Timeline**: Q2 2026

**Implementation**: An implementation of the blind PRF and noise refreshing with the above requirements. We will incorporate this blind PRF into the Diamond iO implementation to address a missing safeguard in the current implementation.

**Dissemination**: The PRF and noise-refreshing techniques will be introduced in the revised version of [the Diamond iO 1 paper](https://eprint.iacr.org/2025/236). We will also estimate the performance of the updated Diamond iO implementation and present the benchmark results on a dashboard website.

<!-- The paper could be submitted to [ACM CCS 2026](https://www.sigsac.org/ccs/CCS2026/call-for/call-for-papers.html), but it is not easy to explain a contribution of this work because it does not improve the asymptotic efficiency over existing theoretical iO constructions, and we cannot be sure that the concrete efficiency is improved until the blind PRF implementation becomes practical. -->

<!--
### Blind SNARK i.e., SNARK inside FHE

We implement a scheme that produces a SNARK proof within FHE (blind SNARK). Fortunately, a blind SNARK scheme with nearly acceptable performance has already been proposed.
We will first study [the state-of-the-art scheme](https://eprint.iacr.org/2025/302.pdf) and organize the remaining open problems. We will then implement a scheme that addresses these issues. **This implementation itself will be useful for proof delegation, i.e., enabling a client to outsource the generation of zero-knowledge proofs to a server for privacy-preserving purposes.**

**Update:** PSE has already worked on blind SNARK (FHE-SNARK), https://pse.dev/blog/const-depth-ntt-for-fhe-based-ppd, but we have not built the end-to-end implementation.

**Timeline**: Q1 2026

**Expected deliverables**: an implementation of noise refreshing (with a dummy blind PRF) and a paper describing its formal specification and security proof.
-->

### Milestone 3: Executable Blind PRF over Key-Homomorphic Encodings

<!--
We implement a circuit evaluated over key-homomorphic encodings that simulates a PRF without revealing the PRF key or the PRF output, called a blind PRF. The implemented scheme should be useful for the noise refreshing and be proven secure under well-defined (but probably nonstandard) cryptographic assumptions. This milestone is divided into two phases.

1. **Milestone 3-1 (Q2 2026)**: we aim to implement a blind PRF with practical parameter ranges and realistic latency. However, the implementation developed in this phase will likely require too much total computation to complete the full execution in practice.
2. **Milestone 3-2 (Q3 2026)**: we will address this computational-cost bottleneck and make blind PRF evaluation over encodings executable in practice. The key technical point for addressing this issue is a two-layer SIMD optimization: for lattice dimension $n$, a single encoding can handle $O(n)$ integers, namely one polynomial of an FHE ciphertext, and this in turn enables homomorphic computation over $O(n)$ integers in parallel.
-->

Recall that the blind PRF implementation developed in Milestone 2 is expected to remain impractical, because the total computational cost, measured by the number of required GPUs, will be too large. In Milestone 3, we address this inefficiency primarily by improving the asymptotic complexity of the algorithm, making blind PRF evaluation over encodings executable with a realistic number of GPUs. As a reference, the following table estimates the hourly cost of renting NVIDIA H200 GPUs from several cloud GPU providers.

| Number of H200 GPUs | Vast.ai<br>$2.32/GPU-hr | RunPod<br>$3.99/GPU-hr | RunPod Instant Clusters<br>$4.31/GPU-hr | AWS p5e Capacity Blocks<br>$4.975/GPU-hr | CoreWeave HGX H200<br>$6.305/GPU-hr | Google Cloud A3 Ultra<br>$10.601/GPU-hr |
|---:|---:|---:|---:|---:|---:|---:|
| 1 | $2.32 | $3.99 | $4.31 | $4.98 | $6.31 | $10.60 |
| 5 | $11.60 | $19.95 | $21.55 | $24.88 | $31.53 | $53.00 |
| 10 | $23.20 | $39.90 | $43.10 | $49.75 | $63.05 | $106.01 |
| 50 | $116.00 | $199.50 | $215.50 | $248.75 | $315.25 | $530.04 |
| 100 | $232.00 | $399.00 | $431.00 | $497.50 | $630.50 | $1,060.09 |
| 500 | $1,160.00 | $1,995.00 | $2,155.00 | $2,487.50 | $3,152.50 | $5,300.43 |
| 1,000 | $2,320.00 | $3,990.00 | $4,310.00 | $4,975.00 | $6,305.00 | $10,600.86 |
| 5,000 | $11,600.00 | $19,950.00 | $21,550.00 | $24,875.00 | $31,525.00 | $53,004.32 |

The key technical point for addressing this issue is a two-layer SIMD optimization: for lattice dimension $n$, a single encoding can handle $O(n)$ integers, namely one polynomial of an FHE ciphertext, and this in turn enables homomorphic computation over $O(n)$ integers in parallel. The success of this milestone depends on whether we can invent such a combination of key-homomorphic encodings and FHE.

**Timeline**: Q3 2026

**Implementation**: An implementation of the blind PRF and noise refreshing with the above requirements.
**Dissemination**: No paper for this milestone, because its result will be introduced in the paper for Milestone 4.

<!-- a paper to claim that we make a variant of the [AKY24 FE scheme](https://eprint.iacr.org/2024/1719.pdf) practical by employing this blind PRF. This is aligned with our goal

**Expected deliverables**: an implementation of the required circuit. -->

<!--
### Blind SNARK recursion

We implement a process that homomorphically converts the proof of blind SNARK into that of another SNARK scheme verifiable on key-homomorphic encodings. In other words, the implementation performs proof recursion inside FHE.

**Timeline**: Q2 2026 - Q3 2026

**Expected deliverables**: an implementation of the required process.
-->

### Milestone 4: Obfuscation with Nontrivial Input Size

We realize an obfuscation of a circuit with at least 64 input bits. This obfuscated circuit itself is not directly useful. However, it represents an important scientific milestone: when the number of input bits is small, one can build an obfuscated circuit using a simple lookup table, whereas once the input size reaches 64 bits or more, constructing such a lookup table becomes practically impossible. As long as all input bits are meaningfully used, we do not impose any particular requirement on the functionality of the circuit. If the hardware cost of obfuscating and evaluating such a circuit with sufficiently secure parameters is beyond our budget, we will instead measure the cost using smaller parameters and estimate the cost and performance for larger parameters.

**Connection to prior milestones**: This milestone will be achieved by 1) integrating the noise-refreshing technique built in Milestone 2 into the input-insertion process of Diamond iO and 2) employing the executable blind PRF built in Milestone 3.

**Connection to subsequent milestones**: Real applications, including witness encryption in Milestone 6, will require more than 64 input bits.
However, since noise refreshing makes the parameters (especially the bit size of the modulus $q$) grow only polylogarithmically in the input size, we can support input sizes larger than 64 bits simply by allowing the running time to grow linearly. Alternatively, by parallelizing, e.g., using GPUs or performing input insertion in a tree structure rather than sequentially, it could be handled with sublinear growth of the running time. Therefore, as long as noise refreshing is available, **increasing the supported input size is an engineering and computational-resources problem rather than a theoretical one**.

**Timeline**: Q3 2026

**Implementation**: An implementation to produce and evaluate obfuscated circuits with at least 64 input bits.

**Dissemination**: We will publish a paper to claim that we are the first to realize iO with practical performance for nontrivial input sizes. This paper will also cover the new techniques introduced in Milestone 1. We might submit it to conferences whose submission deadlines are in October or November 2026, e.g., Eurocrypt or IEEE S&P. We will also present this result at Devcon 2026.

### Milestone 5: SNARK Verification over Key-Homomorphic Encodings

We implement a circuit evaluated over key-homomorphic encodings that simulates the verification algorithm of a SNARK scheme. This milestone includes selecting, or if necessary newly designing, a SNARK scheme whose proofs can be easily verified over the encodings.

There is an efficiency trade-off between public-verifier (PV) and designated-verifier (DV) SNARK schemes. While a proof of a PV scheme is publicly verifiable, proof verification in a DV scheme requires a private verifying key. The latter is acceptable for iO because this key can be hardcoded into obfuscated circuits; however, the scheme must guarantee strong reusability of the verifying key.
From the viewpoint of efficiency, a DV scheme has a smaller proof size and lower verification complexity than a PV scheme, but it requires simulating, over BGG+ encodings, the process of executing the verification algorithm homomorphically using FHE, whereas the verification algorithm of a PV scheme can be simulated over the encodings without FHE. Therefore, the choice in this trade-off will mainly depend on the magnitude of the overhead incurred by simulating FHE over the encodings.

To the best of our knowledge, [[ADY25]](https://eprint.iacr.org/2025/2160.pdf) achieves the shortest proof size, in particular 768 bits, among PV SNARK schemes. In contrast, although the DV SNARK scheme in [[ADI25]](https://eprint.iacr.org/2025/517.pdf) enables a shorter proof size than that of [ADY25], it does not guarantee strong reusability. The DV scheme in [[BIOW20]](https://eprint.iacr.org/2020/1319.pdf) guarantees strong reusability, but it relies on the Paillier encryption scheme for a negligible soundness error, incurring a proof size larger than 1,000 bits.

The other path is to leverage garbled circuits (randomized encodings) optimized for Groth16 verification, such as [Argo](https://eprint.iacr.org/2026/049.pdf). Specifically, the circuit evaluated over the BGG+ encodings just outputs garbled circuits whose randomness is derived from the blind PRF. Then, an evaluator evaluates the output garbled circuit, which, in the context of witness encryption, outputs a secret value if the input proof is valid.

**Timeline**: Q4 2026 - Q1 2027

**Implementation**: An implementation of the SNARK verification circuit, along with that of the corresponding SNARK prover.

**Dissemination**: No paper for this milestone, because its result will be introduced in the paper for Milestone 6.

### Milestone 6: Witness Encryption

We realize witness encryption (WE) as an obfuscation that, on input a SNARK proof, outputs a hardcoded message if and only if the proof is valid.
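Structurally, this is proof-gated decryption. The toy below captures only the shape of the interface: a hash comparison stands in for SNARK verification, and a Python closure stands in for the obfuscated program, neither of which provides any security.

```python
# Structural toy of WE via an obfuscated checker. A real construction
# obfuscates a SNARK verifier with the verifying key and message
# hardcoded; here a SHA-256 comparison is a (completely insecure)
# stand-in for proof verification.
import hashlib

def we_encrypt(statement_digest: bytes, message: bytes):
    """Return a 'program' that releases message only on a valid 'proof'."""
    def obfuscated(proof: bytes):
        # stand-in for: SNARK.verify(vk, statement, proof)
        if hashlib.sha256(proof).digest() == statement_digest:
            return message
        return None
    return obfuscated

witness = b"secret witness"
statement = hashlib.sha256(witness).digest()
prog = we_encrypt(statement, b"hardcoded message")
assert prog(witness) == b"hardcoded message"
assert prog(b"wrong") is None
```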
Although WE is weaker than iO, it already enables compelling applications such as the following:

- [Trustless Bitcoin bridge](https://ethresear.ch/t/trustless-bitcoin-bridge-creation-with-witness-encryption/11953)
- Trustless encrypted mempool for protection against MEV attacks
- [Trustless one-time programs](https://eprint.iacr.org/2017/935.pdf)
- [Trustless time-lock encryption](https://eprint.iacr.org/2015/482)

**Connection to prior milestones**: There are two challenges to overcome in order to achieve this milestone.

1. The input size needs to be large enough to cover the size of the SNARK proof. For example, the proof size in [[ADY25]](https://eprint.iacr.org/2025/2160.pdf) is 768 bits at a 128-bit security level.
2. A circuit evaluated over key-homomorphic encodings needs to verify the provided SNARK proof.

The first and second challenges are addressed by Milestones 4 and 5, respectively. Therefore, we can realize WE essentially by combining these results.

**Timeline**: Q1 2027 - Q2 2027

**Implementation**: Implementations of witness encryption and its demo application.

**Dissemination**: We will publish a paper to claim that we are the first to realize WE with practical performance for general NP problems.

<!--
### Obfuscation for Conditional FHE Decryption

We implement an obfuscation of a circuit that, on input an FHE encryption of some message and a SNARK proof, outputs the decrypted message if and only if the decrypted proof is valid.

**Timeline**: Q2 2027

**Expected deliverables**: an implementation to produce and evaluate the required obfuscated circuits.
-->

The following is outside the scope of the milestones planned through Q2 2027, but we describe how the iO construction could be further developed in the future for reference.

## Out of scope in this roadmap

### Verifiable FHE

This produces a SNARK proof that an FHE computation was honestly performed.
### Obfuscation for Conditional FHE Decryption

This obfuscates a circuit that, on input an FHE encryption of some message and a SNARK proof, outputs the decrypted message if and only if the decrypted proof is valid. The input proof is assumed to be that of verifiable FHE.

## Collaborative research

We would like to collaborate with academic researchers on security analysis and efficiency improvements. <!-- However, this does not mean that we need to collaborate with everyone on the list; it is sufficient if we can work with at least one person from each list. -->

The scope of the security analysis is as follows:

- Cryptanalysis of the all-product (Ring-)LWE assumption
- Cryptanalysis of the private-coin evasive LWE assumption with the specific parameter ranges and samplers used in [Diamond iO](https://eprint.iacr.org/2025/236)
- Cryptanalysis of new assumptions that might be introduced to improve the efficiency of key-homomorphic encodings

A paper on these cryptanalyses is expected to be published before Devcon 2026.

<!--
### Collaboration for Security Analysis

The purpose of collaborating is to have them evaluate the security of the non-standard cryptographic assumptions we rely on.
-->

<!-- * Amit Sahai * Aayush Jain * Huijia (Rachel) Lin * Hoeteck Wee -->

<!--
Specifically, we ask our collaborators to work on the following tasks:

- Cryptanalysis of the all-product (Ring-)LWE assumption
- Cryptanalysis of the private-coin evasive LWE assumption with specific parameter ranges and samplers used in [Diamond iO](https://eprint.iacr.org/2025/236)
- Cryptanalysis of new assumptions introduced in Milestone 2
- A regular meeting with us every two weeks

A paper on these cryptanalyses is expected to be published before Devcon 2026.

### Collaboration for Efficiency Improvements

The purpose of collaborating is to jointly explore new ideas for improving the efficiency of key-homomorphic encodings.
-->

<!-- * Yuriy Polyakov * Shweta Agrawal * Dan Boneh * Hoeteck Wee * Huijia (Rachel) Lin -->