# Low Cost Commitments in Halo2

> Disclaimer: this is a more technical post that requires some prior knowledge of how Halo2 operates, and in particular of how its API is constructed. For background reading we highly recommend the [Halo2 book](https://zcash.github.io/halo2/) and [Halo2 Club](https://halo2.club/).

A common design pattern in a zero-knowledge (zk) application is thus:

- A prover has some data which is used within a circuit.
- This data, as it may be high-dimensional or somewhat private, is pre-committed to using some hash function.
- The zk-circuit which forms the core of the application then proves (paraphrasing) a statement of the form:

> "I know some data D which, when hashed, corresponds to the pre-committed value H + whatever else the circuit is proving over D."

![](https://hackmd.io/_uploads/HJpy6DFzp.png)

From our own experience, we've implemented such patterns using snark-friendly hash functions like [Poseidon](https://www.poseidon-hash.info/), for which there is a relatively well vetted [implementation](https://docs.rs/halo2_gadgets/latest/halo2_gadgets/poseidon/index.html) in Halo2. Even then, these hash functions can introduce a lot of overhead and can be very expensive to generate proofs for if the dimensionality of the data D is large.

You can also implement such a pattern using Halo2's `Fixed` columns. These are Halo2 columns (i.e. in reality just polynomials) that are left unblinded (unlike the blinded `Advice` columns), and whose commitments are shared with the verifier by way of the verifying key for the application's zk-circuit. These commitments are much cheaper to generate than evaluating a hash function, such as Poseidon, within a circuit.
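To make this concrete, here is a minimal sketch of how a `Fixed` column holding pre-committed parameters might be declared and assigned via the Halo2 API. The struct, function, and region names are illustrative rather than ezkl's actual code, and exact import paths vary between Halo2 forks and versions.

```rust
use halo2_proofs::{
    arithmetic::Field,
    circuit::{Layouter, Value},
    plonk::{Column, ConstraintSystem, Error, Fixed},
};

/// Configuration holding a single fixed column for pre-committed parameters.
#[derive(Clone, Debug)]
struct ParamsConfig {
    params: Column<Fixed>,
}

impl ParamsConfig {
    fn configure<F: Field>(meta: &mut ConstraintSystem<F>) -> Self {
        // The commitment to this column is baked into the verifying key, so
        // changing its contents means regenerating (and redistributing) that key.
        Self {
            params: meta.fixed_column(),
        }
    }

    fn assign_params<F: Field>(
        &self,
        mut layouter: impl Layouter<F>,
        params: &[F],
    ) -> Result<(), Error> {
        layouter.assign_region(
            || "pre-committed parameters",
            |mut region| {
                // Place each parameter in its own row of the fixed column.
                for (offset, p) in params.iter().enumerate() {
                    region.assign_fixed(|| "param", self.params, offset, || Value::known(*p))?;
                }
                Ok(())
            },
        )
    }
}
```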
> **Note:** Blinding is the process whereby a certain set of the final elements (i.e. rows) of a Halo2 column are set to random field elements. This is the mechanism by which Halo2 achieves its zero-knowledge properties for `Advice` columns.

By contrast, `Fixed` columns aren't zero knowledge, in that they are vulnerable to dictionary attacks in the same manner a hash function is. Given some set of known or popular data D, an attacker can attempt to recover the pre-image of a hash by running D through the hash function and checking whether the output matches a public commitment. Such attacks aren't "possible" on blinded `Advice` columns.

The annoyance in using `Fixed` columns comes from the fact that they require generating a new verifying key every time a new set of commitments is created.

> **Example:** Say an application leverages a zero-knowledge circuit to prove the correct execution of a neural network. Every week the neural network is fine-tuned or retrained on new data. If the architecture remains the same, then committing to the new network parameters, along with a new proof of performance on a test set, would be an ideal setup. If we leverage `Fixed` columns to commit to the model parameters, each new commitment requires re-generating a verifying key and sharing the new key with the verifier(s). This is poor UX and can become expensive if the verifier is deployed on-chain.

An ideal commitment would thus have the low cost of a `Fixed` column but wouldn't require regenerating a verifying key for each new commitment.

### Unblinded Advice Columns

A first step in designing such a commitment is to allow for optionally unblinded `Advice` columns within the Halo2 API. These aren't included in the verifying key, AND are blinded with a constant factor of `1` -- such that anyone who knows the pre-image of the commitment can reproduce it by running the pre-image through the corresponding polynomial commitment scheme (in ezkl's case, [KZG commitments](https://dankradfeist.de/ethereum/2020/06/16/kate-polynomial-commitments.html)).

If you're familiar with the Halo2 API, the pseudo-code for this is akin to:

```rust
// For each advice column, fill the final (unusable) rows with blinding values.
for (column_index, column) in advice_values.iter_mut().enumerate() {
    for cell in &mut column[unusable_rows_start..] {
        if witness.unblinded_advice.contains(&column_index) {
            // Unblinded: use the deterministic default blind (a constant factor of 1),
            // so the resulting commitment is reproducible from the pre-image.
            *cell = Blind::default().0;
        } else {
            // Blinded: use random field elements, which is what gives zero knowledge.
            *cell = Scheme::Scalar::random(&mut rng);
        }
    }
}
```

Now that we have a commitment that can be consistently reproduced given a known pre-image -- where do we put it within the plonkish grid of Halo2?

### Grid Engineering

In the proof transcript that Halo2 produces, the commitments to the advice columns are the first serialized elements. Further, these points are serialized in the order in which the advice columns are initialized. To avoid having to do complex indexing later on (see the next section), we want to put the values that are being committed to in the first columns of the plonk grid. Say we're committing to three separate pieces of data, `D1, D2, D3`: we'll want to assign each of these to the rows of the unblinded advice columns `a1, a2, a3` (a sketch of this assignment is given at the end of this section).

![](https://hackmd.io/_uploads/B14IuGgMp.png)

The proof transcript is then generated, and the affine projections of the curve points that correspond to the commitments to `D1, D2, D3` are its first elements. Given that these commitments don't correspond to instances ... how then does a verifier check that they correspond to publicly committed values?
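As promised, here is a minimal sketch of that assignment step, assuming the columns `a1, a2, a3` were created before any other advice columns (so their commitments are the first advice points written to the transcript). The function, column, and region names are illustrative rather than ezkl's actual code.

```rust
use halo2_proofs::{
    arithmetic::Field,
    circuit::{Layouter, Value},
    plonk::{Advice, Column, Error},
};

/// Assign `D1, D2, D3` to the rows of the unblinded advice columns `a1, a2, a3`.
fn assign_committed_data<F: Field>(
    mut layouter: impl Layouter<F>,
    columns: [Column<Advice>; 3], // a1, a2, a3 -- created first, and left unblinded
    data: [&[F]; 3],              // D1, D2, D3
) -> Result<(), Error> {
    layouter.assign_region(
        || "committed data",
        |mut region| {
            for (column, datum) in columns.iter().zip(data.iter()) {
                for (offset, value) in datum.iter().enumerate() {
                    // Each datum occupies consecutive rows of its own column.
                    region.assign_advice(|| "datum", *column, offset, || Value::known(*value))?;
                }
            }
            Ok(())
        },
    )
}
```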
### Verifier proof slicing

Given that the commitments are the first elements written to the transcript, and that the verifier knows how many values are being committed to, the verification process is thus:

- Given a set of public commitments, the verifier recreates a transcript consisting solely of those publicly committed values, producing a transcript of N bytes.
- Upon receiving a proof, the verifier swaps out the first N bytes of the proof for the verifier's transcript bytes.
- Verification then proceeds as usual.

Again, if you like Rust code:

```rust
// KZG commitments are the first set of points in the proof, so these will
// always correspond to the first set of advice columns.
for commit in commitments {
    transcript_new
        .write_point(*commit)
        .map_err(|_| "failed to write point")?;
}
let proof_first_bytes = transcript_new.finalize();
snark_new.proof[..proof_first_bytes.len()].copy_from_slice(&proof_first_bytes);
```

If the verifier or data provider needs to generate the commitments themselves, given some pre-image, this can easily be done using `ezkl`, and only requires access to the verifying key and the public SRS (if using KZG commitments). A sketch of what this boils down to in Halo2 terms is given at the end of this post.

### How can I use this today?

You can already use this in `ezkl`, and we have an example of doing so [here](https://colab.research.google.com/github/zkonduit/ezkl/blob/main/examples/notebooks/kzg_vis.ipynb). If you're using the CLI, all it requires is setting the `kzgcommit` visibility on whichever part of the computational graph you want to commit to. For instance, to pre-commit to the model parameters, set their visibility when generating the settings:

```bash
ezkl gen-settings -M network.onnx --param-visibility "kzgcommit"
```

If you want to publish the commitments, you can generate them using:

```bash
ezkl gen-witness -D input.json -M network.compiled -V vk.key -P kzg.srs
```

where `-V` and `-P` are optional arguments that are only needed when generating the commitments.

> **Note:** the commitments can also be generated standalone using python (see below) as:

```python
# if you want to generate the commitments separately
ezkl.kzg_commit(message, srs_path, vk_path, settings_path)
```

As the verifier, you can easily swap the proof bytes for the pre-committed ones as such:

```bash
ezkl swap-proof-commitments --proof-path proof.proof -W witness.json
```

where `witness.json` is a `.json` file that contains the commitments.

All of these commands have python analogues:

```python
run_args = ezkl.PyRunArgs()
# this tells the ezkl compiler to leave the columns corresponding to the parameters unblinded
run_args.param_visibility = "kzgcommit"
ezkl.gen_settings(model_path, settings_path, py_run_args=run_args)
...
# gen commits
ezkl.gen_witness(data_path, compiled_model_path, witness_path, vk_path=vk_path, srs_path=srs_path)
...
# swap out proof bytes
ezkl.swap_proof_commitments(proof_path, witness_path)
...
# if you want to generate the commitments separately
ezkl.kzg_commit(message, srs_path, vk_path, settings_path)
```
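For the curious, regenerating a commitment from a known pre-image essentially boils down to committing to the column's Lagrange-basis polynomial, which is presumably what `ezkl.kzg_commit` does under the hood. Here is a rough sketch against the PSE Halo2 KZG backend; the function name is hypothetical, and import paths, trait locations, and the exact padding of the column may differ between versions.

```rust
use halo2_proofs::{
    halo2curves::bn256::{Bn256, Fr, G1},
    poly::{
        commitment::{Blind, Params},
        kzg::commitment::ParamsKZG,
        EvaluationDomain,
    },
};

/// Recompute the KZG commitment to an unblinded advice column.
///
/// `column_values` is assumed to be laid out exactly as the column was during
/// proving: the data in the first rows, zeros in the unused rows, the final
/// "blinding" rows set to the deterministic default blind, and the whole vector
/// padded to the full domain size (2^k rows). The evaluation domain itself can
/// be recovered from the verifying key.
fn recompute_commitment(
    params: &ParamsKZG<Bn256>,
    domain: &EvaluationDomain<Fr>,
    column_values: Vec<Fr>,
) -> G1 {
    // Interpret the column as a polynomial in the Lagrange (evaluation) basis ...
    let poly = domain.lagrange_from_vec(column_values);
    // ... and commit to it. Because the column's contents are deterministic,
    // anyone holding the pre-image and the SRS can reproduce this point, whose
    // affine form is what appears at the start of the proof transcript.
    params.commit_lagrange(&poly, Blind::default())
}
```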