This document presents the findings of the QEDIT team during the review of the Zcash NU5 protocol update. Specifically, the review focuses on (1) the Halo 2 proving system with its latest PLONK-style arithmetization for representing constraint systems; (2) the Orchard protocol, which, although based on Sapling, differs considerably from it; (3) the underlying primitives of Orchard, from hash functions and commitments to signature schemes; (4) the latest key structure and derivation system; (5) the underlying field and elliptic curve arithmetic and parameter generation; and (6) the actual circuit implementation that describes the Orchard Action statement. Note that we explicitly exclude any blockchain-related component not associated with Orchard or its interface (consensus, network, block generation, etc.).
For this review, we evaluated the specification of the protocol design for security and audited the implementation for vulnerabilities that could affect the system as a whole.
This review spanned roughly five months, intermittently, from April 21st until September 24th, 2021.
Resources used in the review
In this section we include the resources used to conduct the review of NU5, including specification, documentation, implementations, test vectors and individual ZIPs.
Specification, Design & Security
The Orchard / NU5 specification, which best explains the low-level details of Orchard, its primitives, security requirements, encodings and more.
Halo2 Book, which best explains the Halo 2 construction and the PLONKish arithmetization.
Orchard Book, which best explains nullifier security and the high-level Orchard protocol.
Zcash Improvement Proposals (ZIPs). This is a list of all the ZIPs included in the NU5 upgrade, of which we reviewed the following both in design and in code:
The following table describes which reviewed components are included in each of the repositories.
Summary of Findings
Most importantly, we did not find any critical issues that could affect the well-being of the deployed system or the funds, anonymity or confidentiality of Zcash users. Only a few of the reported issues were found to be exploitable, and only under specific (preventable) circumstances (such as QZ2-305).
In general, the protocol is well specified and the code is mostly well written. Although the circuit-generating code is well documented, many other components are not. Overall, the implementation is in accordance with the detailed specification.
It is important to note that reviewing the security of such a complex system and its implementation is a challenge. In particular, when trying to ensure the privacy and integrity of the Zcash blockchain, one needs to analyze different layers, both individually and in conjunction with one another. Integrity is provided by the soundness of the proving system, and privacy by its zero-knowledge property. Both, however, depend on the robustness of the circuit design and implementation, and on the underlying security of the cryptographic primitives used. For example:
If any of the required constraints are missing (such as checking that a variable is zero), it could enable malicious actors to steal or generate illegitimate funds.
If any of the circuit variables are not properly referenced in each of the repeated uses, one could generate a proof of a non-verifying statement.
If the signature scheme is broken (e.g.: DDH is broken), then malicious actors could potentially spend your funds (especially if they have the viewing keys).
Privacy, however, is especially tricky since data leaked on-chain or via the peer-to-peer network is persistent and can be analyzed retroactively. Here is a (non-exhaustive) list of components that need to be reviewed as they affect the overall privacy of the system:
No secret witness data should be leaked into the ZKP instance or transaction records.
The Halo 2 scheme and its implementation achieve zero-knowledge (given a good randomness source).
The source of randomness used across the system is uniform and never reset or reused.
Primitives (commitments, hashes, signatures) are secure and properly implemented (e.g.: if commitments / transactions could be linked, the whole transaction graph would be recovered).
Elliptic curves are properly generated and implemented, according to a secure instantiation.
There is proper canonicalization of all transmitted values, so that each value has a unique representation and a unique encoding in the circuit code.
Example: ensuring modular reduction for finite field values
Example: unique EC point representation
Leakage via the number of actions could compromise the privacy of the system, but this was out of scope for this audit.
Example: dust attack where sending many small payments to an address may induce sends from that address to have abnormally many actions
Side channel attacks could compromise the privacy of the system, but this was out of scope for this audit.
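The canonical-encoding item in the list above can be sketched as a decoder that rejects non-reduced encodings. This is a toy illustration over a 16-bit field; the modulus, function name and sizes are invented for the example, not taken from the Zcash code:

```rust
// Toy canonical decoder: accept only the unique reduced little-endian
// encoding of a field element; reject everything else.
const P: u32 = 65_521; // toy field modulus, illustrative only

fn decode_canonical(le_bytes: [u8; 4]) -> Option<u32> {
    let v = u32::from_le_bytes(le_bytes);
    // Accept only v < P, so every field element has exactly one encoding.
    if v < P { Some(v) } else { None }
}

fn main() {
    assert_eq!(decode_canonical(65_520u32.to_le_bytes()), Some(65_520));
    assert_eq!(decode_canonical(65_521u32.to_le_bytes()), None); // P itself
    assert_eq!(decode_canonical(70_000u32.to_le_bytes()), None); // non-reduced
    println!("canonical decoding ok");
}
```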
EDIT 29/09/2021: The ECC team found a warning issue related to checking non-zero values of specific witnessed points, as per the requirement outlined in issue QZ2-514. See the official release announcement.
Legend and stats
In this audit, we report on four types of findings, each identified by a unique identifier; e.g., [QZ1-205] denotes an issue reported in Phase 1 ("QZ1") that is the 5th issue found in subsection 2 (Orchard Primitives) of this section.
In red we describe Critical Issues, findings that we have demonstrated can be exploited to attack the Zcash system.
We have not found any critical issues in this review.
In yellow we describe Warning Issues, findings that we believe could result in exploits, but whose impact on the system has not been demonstrated.
In blue we describe Minor Issues, findings that have no meaningful impact on the system but should be fixed, such as typos, spec <-> code inconsistencies, styling and naming conventions, potential optimizations, etc.
This is incorrect from the cryptographic point of view: in Jacobian coordinates, the point at infinity should be defined as (t^2 : t^3 : 0) for any t != 0, with (1 : 1 : 0) as the usual representative. Nevertheless, from an engineering point of view, it is correct provided that proper checks are performed wherever the point at infinity could be used (or computed).
Recommendation: Replace the function implementation by the following one:
Rationale: Using the correct definition of the point at infinity allows EC operations to be implemented as usual, with no specific check for this case (checks can still be done for optimization purposes, though).
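A minimal sketch of the recommended representation, using a toy point type in place of the real base field type (all names and values here are illustrative):

```rust
// Toy Jacobian point. The identity is (t^2, t^3, 0) for any t != 0, with
// (1, 1, 0) as the usual representative, so the identity test is just z == 0.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Jacobian { x: u64, y: u64, z: u64 }

impl Jacobian {
    fn identity() -> Self {
        Jacobian { x: 1, y: 1, z: 0 } // representative with t = 1
    }
    fn is_identity(&self) -> bool {
        self.z == 0 // no special (0, 0, 0) case needed
    }
}

fn main() {
    assert!(Jacobian::identity().is_identity());
    // (4, 8, 0) is the same identity class, with t = 2.
    assert!(Jacobian { x: 4, y: 8, z: 0 }.is_identity());
    assert!(!Jacobian { x: 3, y: 5, z: 1 }.is_identity());
    println!("identity representation ok");
}
```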
Description: Equality testing for EC points can be optimized. Currently, the implementation "normalizes" points to affine coordinates. A first test determines whether they are points at infinity. Then it returns the following boolean:
```rust
(self_is_zero & other_is_zero)           // Both points at infinity
    | ((!self_is_zero) & (!other_is_zero)
        & x1.ct_eq(&x2) & y1.ct_eq(&y2)) // Neither point at infinity, coordinates are the same
```
Recommendation: Provided that [QZ1-001] is implemented, this could be optimized by keeping only the normalization and comparison, and removing all is_zero tests.
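To illustrate why [QZ1-001] enables this, here is a toy sketch of branch-free Jacobian equality via cross-multiplication. It assumes the (t^2, t^3, 0) identity convention, and uses a small prime field and a plain (non-constant-time) == in place of ct_eq:

```rust
const P: u64 = 65_537; // toy field modulus

fn mulm(a: u64, b: u64) -> u64 {
    (a * b) % P // inputs < P, so the product fits in u64
}

#[derive(Clone, Copy)]
struct Jacobian { x: u64, y: u64, z: u64 }

// P1 == P2  iff  X1*Z2^2 == X2*Z1^2  and  Y1*Z2^3 == Y2*Z1^3.
// With the identity as (t^2, t^3, 0), t != 0, both identity cases fall out of
// the same formula, so no is_zero branches are needed.
fn eq_jacobian(p: &Jacobian, q: &Jacobian) -> bool {
    let (z1z1, z2z2) = (mulm(p.z, p.z), mulm(q.z, q.z));
    let x_eq = mulm(p.x, z2z2) == mulm(q.x, z1z1);
    let y_eq = mulm(p.y, mulm(z2z2, q.z)) == mulm(q.y, mulm(z1z1, p.z));
    x_eq && y_eq
}

fn main() {
    let id1 = Jacobian { x: 1, y: 1, z: 0 }; // identity, t = 1
    let id2 = Jacobian { x: 4, y: 8, z: 0 }; // identity, t = 2
    let p = Jacobian { x: 3, y: 5, z: 1 };   // a finite point (toy coordinates)
    let p2 = Jacobian { x: mulm(3, 4), y: mulm(5, 8), z: 2 }; // same point, z = 2
    assert!(eq_jacobian(&id1, &id2)); // identity representatives are equal
    assert!(!eq_jacobian(&id1, &p)); // identity != finite point
    assert!(eq_jacobian(&p, &p2)); // equivalence classes compare equal
    println!("jacobian equality ok");
}
```

A real implementation would replace == with ct_eq over field elements to stay constant time.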
Description: The implementation of the low-level functions (adc, sbb, mac) and of the modular functions has been checked and tested against test vectors generated using PARI-GP.
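For reference, these carry-propagation helpers are commonly implemented with u128 intermediates, along the following lines (a sketch of the standard pattern, not a verbatim copy of the audited code):

```rust
/// Add with carry: returns (sum, carry-out).
const fn adc(a: u64, b: u64, carry: u64) -> (u64, u64) {
    let t = (a as u128) + (b as u128) + (carry as u128);
    (t as u64, (t >> 64) as u64)
}

/// Subtract with borrow: returns (difference, borrow-out);
/// the borrow word is all-ones on underflow.
const fn sbb(a: u64, b: u64, borrow: u64) -> (u64, u64) {
    let t = (a as u128).wrapping_sub((b as u128) + ((borrow >> 63) as u128));
    (t as u64, (t >> 64) as u64)
}

/// Multiply-accumulate: returns the low and high words of a + b * c + carry.
const fn mac(a: u64, b: u64, c: u64, carry: u64) -> (u64, u64) {
    let t = (a as u128) + (b as u128) * (c as u128) + (carry as u128);
    (t as u64, (t >> 64) as u64)
}

fn main() {
    assert_eq!(adc(u64::MAX, 1, 0), (0, 1)); // carry propagates
    assert_eq!(sbb(0, 1, 0), (u64::MAX, u64::MAX)); // underflow sets borrow
    assert_eq!(mac(1, 2, 3, 4), (11, 0)); // 1 + 2*3 + 4
    assert_eq!(mac(0, u64::MAX, u64::MAX, 0), (1, u64::MAX - 1));
    println!("adc/sbb/mac ok");
}
```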
Description: The square root computation using Sarkar2020 has been checked for consistency and correctness, and validated against test vectors generated using PARI-GP.
Description: The macro new_curve_impl and all sub-macros have been reviewed carefully to ensure the proper implementation of elliptic curve operations. These functions have been tested against a PARI-GP implementation for both Jacobian and affine coordinates.
Description: The implementation of the import/export functions of field elements / EC points from/to binary buffers is correct and matches the specification. The same attention has been paid to the point compression techniques.
Description: Benchmarks have been implemented to check whether the implementation is indeed constant time, with the following code.
```rust
fn bench_test_points_equality(c: &mut Criterion) {
    use rand::rngs::ThreadRng;
    let mut group = c.benchmark_group("ct-points-equality");
    // Initialize the benchmark data.
    let not_zero = (0..100000)
        .map(|_| Point::random(ThreadRng::default()))
        .collect::<Vec<Point>>();
    let only_zero = (0..100000)
        .map(|_| Point::identity())
        .collect::<Vec<Point>>();
    // Let the bencher compute the average execution time.
    group.bench_function("not-null", |b| {
        b.iter(|| check_point_equality_vectors(&not_zero, &not_zero))
    });
    group.bench_function("one-null", |b| {
        b.iter(|| check_point_equality_vectors(&not_zero, &only_zero))
    });
    group.bench_function("only-null", |b| {
        b.iter(|| check_point_equality_vectors(&only_zero, &only_zero))
    });
}
```
Results are good, and the differences are within the noise threshold:
```text
points-equality/not-null   time: [28.527 ms 28.552 ms 28.582 ms]
points-equality/one-null   time: [28.358 ms 28.372 ms 28.388 ms]
points-equality/only-null  time: [27.902 ms 27.968 ms 28.044 ms]
```
Description: The parameters for the Pallas & Vesta curves have been checked (primality, group order, generators, etc.), and their implementation has been verified.
No issue found.
NB: No security review has been done on the Pasta parameters / generation to ensure that they can be considered as a DLP-secure group with the desired level of security (as required for the Halo 2 proving system).
Description: The implementation of the Pasta curves and their isogenous curves matches the specification. Careful attention has been paid to checking that all the constants involved indeed match the specification.
Description: Applying the isogeny map to the point at infinity should output the point at infinity on the output curve. In the current algorithm, even though the implementation matches the specification, applying it to the point at infinity (in Jacobian coordinates) does not yield the point at infinity.
Recommendation: This is strange behaviour, and the original algorithm should be reviewed for completeness.
Description: At each step, the current implementation computes the next power of the point as cur *= &point; at the following step it multiplies that power by the coefficient and adds it to the accumulator as acc += &(cur * coeff);.
```rust
pub fn eval_polynomial<F: Field>(poly: &[F], point: F) -> F {
    // TODO: parallelize?
    let mut acc = F::zero();
    let mut cur = F::one();
    for coeff in poly {
        acc += &(cur * coeff);
        cur *= &point;
    }
    acc
}
```
Recommendation: In the serial implementation, this can be optimized by using Horner's method to evaluate the polynomial: a_0 + x·(a_1 + x·(a_2 + … + x·a_{n-1})).
One can see that this requires one less multiplication per step. NB: this optimization is still feasible in a parallelized way using a tree structure (particularly efficient if the degree of the polynomial is a power of 2, which is the case for the polynomials used here).
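A side-by-side sketch of the two evaluation strategies, over a toy prime field instead of a generic Field (names are ours):

```rust
const P: u64 = 1_000_003; // toy prime modulus

// Current style: one multiplication for the term, one for the next power.
fn eval_naive(poly: &[u64], point: u64) -> u64 {
    let (mut acc, mut cur) = (0u64, 1u64);
    for &coeff in poly {
        acc = (acc + cur * coeff) % P;
        cur = (cur * point) % P;
    }
    acc
}

// Horner: a_0 + x*(a_1 + x*(a_2 + ...)), one multiplication per step.
fn eval_horner(poly: &[u64], point: u64) -> u64 {
    poly.iter().rev().fold(0u64, |acc, &coeff| (acc * point + coeff) % P)
}

fn main() {
    let poly = [1, 2, 3]; // 1 + 2x + 3x^2
    assert_eq!(eval_naive(&poly, 5), 86);
    assert_eq!(eval_horner(&poly, 5), 86);
    let poly2 = [7, 0, 5, 11];
    assert_eq!(eval_naive(&poly2, 123), eval_horner(&poly2, 123));
    println!("horner ok");
}
```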
[QZ1-107] Typo in specification §5.4.2 on computing OVK
Description: There is a typo in the use of BLAKE2b-256 versus BLAKE2b-512 as seen below
Recommendation: Fix it.
[QZ2-108] Ordering of Instance / Advice / Fixed variable is forced in the code, not in the spec
Description: halo2/plonk/circuit.rs l.45 forces the ordering Advice < Instance < Fixed. This may break interoperability if another implementation uses a different convention.
Description: This function is complex (150+ lines) and could be simplified. For example, this possible simplification has been identified in the creation of the selector_map and selector_replacements variables:
This code seems to first create a vector of Option values and then iterate over each value to call .unwrap().
Recommendation: Simplify the code by removing useless operations.
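A hypothetical miniature of the observed pattern and its simplification (the real code builds selector_map / selector_replacements; the types here are stand-ins):

```rust
// Observed pattern: build a Vec<Option<T>>, then unwrap every entry.
fn current_style(n: usize) -> Vec<usize> {
    let tmp: Vec<Option<usize>> = (0..n).map(Some).collect();
    tmp.into_iter().map(|v| v.unwrap()).collect()
}

// Simplification: collect the values directly, no intermediate Option layer.
fn simplified(n: usize) -> Vec<usize> {
    (0..n).collect()
}

fn main() {
    assert_eq!(current_style(5), simplified(5));
    println!("simplification ok");
}
```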
[QZ2-111] Generation of Proving / Verifying Key halo2: keygen.rs
Description: The generation of the proving/verifying keys has been reviewed. The implementation is straightforward and follows the specification. Some optimizations were suggested in the code as comments.
No issue found. Nevertheless, implementing the optimizations suggested in the code comments would be good.
Description: This Rust code computes the required degree of an argument from its input/table polynomial expressions. The code is very naive and uses unnecessary for loops, whereas Rust iterator idioms could be used instead, which may also increase readability. E.g.:
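A hypothetical sketch of the iterator-based style suggested above (the Expr type and the degree rule are invented stand-ins, not the halo2 types):

```rust
// Invented stand-in for a polynomial expression.
struct Expr { degree: usize }

// Required degree as a single iterator chain instead of explicit for loops.
fn required_degree(inputs: &[Expr], tables: &[Expr]) -> usize {
    inputs
        .iter()
        .chain(tables.iter())
        .map(|e| e.degree)
        .max()
        .unwrap_or(1) // floor of 1 for empty argument lists
}

fn main() {
    let inputs = [Expr { degree: 2 }, Expr { degree: 5 }];
    let tables = [Expr { degree: 3 }];
    assert_eq!(required_degree(&inputs, &tables), 5);
    assert_eq!(required_degree(&[], &[]), 1);
    println!("degree ok");
}
```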
Description: The implementation of polynomial commitment (prove/verify) has been reviewed and matched against the Halo2 book. Implementation is straightforward and follows the spec.
Description: The implementation of lookup argument (prove/verify) has been reviewed and matched against the Halo2 book. Implementation is straightforward and follows the spec.
Description: The implementation of permutation argument (prove/verify) has been reviewed and matched against the Halo2 book. Implementation is straightforward and follows the spec.
Description: The implementation of vanishing argument (prove/verify) has been reviewed and matched against the Halo2 book. Implementation is straightforward and follows the spec.
No issue found
Orchard Primitives [ID QZ*-2**]
This diagram represents the different components used in the PRFs, Hash functions and commitments deriving from the GroupHash functionality. A connecting line originates at the end with a diamond and lands on the "empty" end of the line. This means that the box at the end of the line is instantiated using the primitive in the box at the beginning of this line. There is no specific color coding, except that security requirements of connecting primitives are displayed in pink.
Description: The function prototype does not match the specification. In the spec, hash_to_field takes a DST, which is derived from the curve name and the domain prefix; in the code, it directly takes the domain_prefix and the curve name.
Recommendation: Change the function prototype in the code to take the DST as input instead of the domain prefix.
Description: The function implementation is error prone. hash_to_field must compute three field elements using the DST. Instead of building the DST once and reusing it for the three computations, it is built three times.
Recommendation: Build the DST once from the needed parameters, and reuse it wherever it is required.
NB: If the recommendation of [QZ1-201] is implemented, then this issue can be ignored.
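A sketch of the recommendation, with an invented DST layout and a toy expansion function standing in for the real BLAKE2b-based one:

```rust
// Build the DST once and reuse it for all three computations.
// The DST layout below is illustrative only; the real format is fixed by the
// hash-to-curve specification.
fn build_dst(curve_id: &str, domain_prefix: &str) -> Vec<u8> {
    format!("{}-{}_XMD:BLAKE2b_SSWU_RO_", domain_prefix, curve_id).into_bytes()
}

fn hash_to_field_all(curve_id: &str, domain_prefix: &str, msg: &[u8]) -> [u64; 3] {
    let dst = build_dst(curve_id, domain_prefix); // built exactly once
    [0u64, 1, 2].map(|ctr| toy_expand(&dst, msg, ctr)) // reused three times
}

// Toy stand-in for the expand_message step; NOT a real hash.
fn toy_expand(dst: &[u8], msg: &[u8], ctr: u64) -> u64 {
    let mut acc = ctr;
    for &b in dst.iter().chain(msg) {
        acc = acc.wrapping_mul(31).wrapping_add(b as u64);
    }
    acc
}

fn main() {
    let out = hash_to_field_all("pallas", "z.cash:test", b"msg");
    // Distinct counters give distinct outputs; same inputs are deterministic.
    assert!(out[0] != out[1] && out[1] != out[2] && out[0] != out[2]);
    assert_eq!(out, hash_to_field_all("pallas", "z.cash:test", b"msg"));
    println!("dst reuse ok");
}
```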
[QZ1-203] Security Comment on Sinsemilla Instantiation §5.4.8.4
Description: In the instantiation of the SinsemillaHashToPoint function, an incomplete addition formula is used, with the claim that "the probability of returning bottom is insignificant" but no further explanation. Yet failing to check the output properly can produce a non-trivial discrete logarithm relation, as proved in Theorem 5.4.4.
Description: The Sinsemilla hash (which in turn is instantiated using the SinsemillaHashToPoint function) replaces the Bowe-Hopwood Pedersen hashes and is used in three main components of the protocol:
to compute the tree path elements, which requires collision resistance
to generate the hiding and binding commitment to be added as a leaf of the Merkle tree
to derive the incoming viewing key
The main security requirement on the function is collision resistance between inputs of fixed length. Theorem 5.4.3 proves this is the case when the underlying hash acts as a random oracle. The hiding property of the note commitment is derived from the pseudo-randomness provided by the blinding component.
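The bottom/Option concern from [QZ1-203] can be illustrated with a toy accumulator that mirrors the Sinsemilla step shape acc' = (acc + S[m]) + acc, where each incomplete addition may fail and the failure must be propagated to the caller (the group operation here is a toy, not EC addition):

```rust
// Toy "incomplete addition": fails on its exceptional case (equal inputs).
fn incomplete_add(a: u64, b: u64) -> Option<u64> {
    if a == b { None } else { Some(a ^ b) } // toy operation, not EC addition
}

// Mirrors the step shape acc' = (acc + S[m]) + acc, threading failure through
// Option so the caller is forced to check for bottom.
fn sinsemilla_like(acc0: u64, chunks: &[u64]) -> Option<u64> {
    chunks.iter().try_fold(acc0, |acc, &m| {
        let t = incomplete_add(acc, m)?;
        incomplete_add(t, acc)
    })
}

fn main() {
    assert_eq!(sinsemilla_like(1, &[2, 3]), Some(3));
    assert_eq!(sinsemilla_like(1, &[1]), None); // bottom must not be ignored
    println!("option propagation ok");
}
```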
[RedPallas SpendAuthSig Group Generator] The GroupHash function is used to generate the base generator for the RedPallas signature scheme, and can be treated as pseudorandom generation since GroupHash can be used as a random oracle. This prevents anyone from generating a second, dependent generator of the group together with its discrete log, as required by the scheme.
[RedPallas BindingSig re-randomization] The re-randomization algorithms, both for key generation and signing/verifying are accurately described in the specification, including bit encodings and endianness. The arithmetic of the key-homomorphic property is correct for the instantiation in BindingSig.
No issues found.
Recommendation: Explain further the security requirements and how they are instantiated for RedPallas in Orchard.
Description: The GroupHash function is used widely across the Orchard protocol to generate random-like elliptic curve group elements, making it one of the most sensitive pieces of the system. In most cases the GroupHash must behave as a random oracle in order to produce safe group elements and independent generators. It is instantiated using the Blake2s hash function.
The diagram above is a recreation of the typical key structure diagram in the spec, yet this diagram includes the type of variables, the functions invoked that map each of the variables, as well as a color code to show what is private / public. Here is a more concrete legend:
Variable types
circles represent field elements / elliptic curve scalars
triangles represent byte strings
rectangles with round corners represent elliptic curve group elements
the parallelogram represents base field elements of the Pallas curve
Color coding
red represents private elements, not shareable with anyone else
green represents public elements, shareable with all the network
yellow represents elements that must be shared with the receiver of the transfer
orange represents elements that can be shared with external parties but that imply a loss in privacy
blue represents intermediate computations
[QZ1-301] Tests are not triggered in keys.rs and other files.
Description: A new strategy is defined using the prop_compose macro, but it is never triggered at the higher level. Generally speaking, the code has low test coverage.
Recommendation: Make sure the existing tests are being triggered. Consider copying some of the test vectors from the Python implementation and encode them as tests in this repository.
[QZ1-302] Numeric domain separators are used directly.
Description: The domain separators are being used as literals directly in a manner that makes it hard to understand the origin of the numbers without reading the spec.
Recommendation: Consider extracting the domain separators into named constants.
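A hypothetical illustration of the recommendation; the constant names and byte values below are invented for the example, not taken from the spec:

```rust
// Hypothetical constant names and values; the real separators are defined in
// the protocol specification.
const DOMAIN_DIVERSIFIER_KEY: u8 = 0x01; // illustrative value
const DOMAIN_NULLIFIER_KEY: u8 = 0x02; // illustrative value

// Toy stand-in for a PRF^expand-style call (the real one is BLAKE2b based).
fn prf_expand(sk: &[u8], domain: u8, input: &[u8]) -> Vec<u8> {
    let mut msg = Vec::with_capacity(sk.len() + 1 + input.len());
    msg.extend_from_slice(sk);
    msg.push(domain); // the named constant documents intent at the call site
    msg.extend_from_slice(input);
    msg
}

fn main() {
    let out = prf_expand(b"sk", DOMAIN_NULLIFIER_KEY, b"input");
    assert_eq!(out[2], DOMAIN_NULLIFIER_KEY);
    assert_ne!(DOMAIN_DIVERSIFIER_KEY, DOMAIN_NULLIFIER_KEY);
    println!("domain constants ok");
}
```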
[QZ2-303] Security of the derivation of keys and addresses
Description: As shown in the diagram above, the keys and addresses have been restructured in Orchard, removing or unifying some of the keys previously used in Sapling. This issue refers to the review done on the key derivations and their security:
The viewing keys ivk and ovk provide visibility access to the transaction details (incoming and outgoing, respectively) and do not allow for spend authorization
The re-randomization process of ak and ask is secure under pre-image resistance, and collisions cannot be found
The nullifier key is unique and not derivable from other shareable keys
The Orchard addresses are properly diversified to prevent linkability across transactions, even from a sender or receiver to the transaction.
Description: Unified Addresses (UAs) specify a standard way to convey several methods of payment to a recipient's wallet, based on any of the active Zcash protocols. The sender's wallet can then non-interactively select the method of payment, based on the protocol prioritization of the UA. We note that UAs are implemented as a component external to the main Zcash (Orchard, Sapling, Transparent) protocols. In fact, they interface with the different protocols only through the address structures; UAs themselves are handled at the level of third-party wallets. It is still important to review them to ensure that no impactful malicious behaviour can be put in place.
The general design of the UA protocol is correct and secure.
Given the encoding and the invertibility of the F4Jumble, a UA can be correctly decoded by any third party wallet / user, and wrong addresses are ignored.
A given UA is unique to the prioritization of the protocol used as a form of payment.
Same applies to the Unified Full Viewing Keys and Unified Incoming Viewing Keys.
No issues found.
[QZ2-305] UA address prioritization not enforceable on-chain
Description: Given that the UA is never verified on-chain by consensus rules, the receiver of an Orchard, Sapling or transparent transfer does not have a formal way to verify that the protocol of UA prioritization was enforced.
A simple attack that could break some privacy: for a UA containing an Orchard address as first priority and a transparent address as second priority, the sender could choose to send to the transparent address, even if its wallet supported Orchard.
The UA is not on-chain, so no validation is done on the blockchain consensus level.
Recommendations: As long as nothing about UAs is verified on-chain, it is difficult to enforce such properties. Even then, prioritization is very difficult to enforce; the UA protocol would have to be made more robust. Note that this problem, at least the privacy leakage, is solved once transparent addresses are gone.
[QZ2-306] Near second pre-image collision of the encoding of Unified Addresses and Unified Viewing Keys
Description: As described in the address hardening section of ZIP-316, one of the security goals is to prevent a consumer of a UA from generating a colliding UA in which one of the underlying addresses is either changed or removed completely.
The UA encoding approximates a random permutation: thanks to the 4-round Feistel construction, a small change in an underlying address produces a large change in the UA encoding, leaving no attack more efficient than brute force.
We checked that a producer has no way to form a valid but not legal UA. Although there does not seem to be a meaningful attack, producers cannot even find a partial collision between two UAs that they generate.
No issues found.
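For intuition, here is a toy 4-round Feistel network of the shape used by F4Jumble (the real construction uses BLAKE2b-based round functions over much larger halves; the round function below is a non-cryptographic toy):

```rust
// Toy round function: mixes one half with the round index. NOT cryptographic.
fn round_fn(half: u64, round: u64) -> u64 {
    half.wrapping_mul(0x9E37_79B9_7F4A_7C15).rotate_left(17) ^ round
}

fn feistel_forward(mut left: u64, mut right: u64) -> (u64, u64) {
    for round in 0..4 {
        let (l, r) = (left, right);
        left = r;
        right = l ^ round_fn(r, round);
    }
    (left, right)
}

// The network is a permutation: it can be run backwards, as F4Jumble must be
// for decoding.
fn feistel_backward(mut left: u64, mut right: u64) -> (u64, u64) {
    for round in (0..4).rev() {
        let (l, r) = (left, right);
        right = l;
        left = r ^ round_fn(l, round);
    }
    (left, right)
}

fn main() {
    let (l, r) = feistel_forward(1, 2);
    assert_eq!(feistel_backward(l, r), (1, 2)); // invertible
    // A permutation maps distinct inputs to distinct outputs.
    assert_ne!(feistel_forward(1, 2), feistel_forward(1, 3));
    println!("feistel ok");
}
```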
[QZ2-307] Non-malleability of Unified Addresses and Unified Viewing Keys §
Description: The use of the F4Jumble algorithm in the address hardening process adds enough redundancy to the UA encoding that no malleability of the underlying addresses and/or prioritization is possible.
A consumer of a UA is not able to malleate the UA to add an address under their control; such malleation is prevented by F4Jumble.
As discussed in ZIP-316 and in the Gentry and Ramzan paper, a four-round Feistel network is enough to approximate a random permutation. The paper argues that the construction is secure under the strong security property, preventing even partial collisions.
No issues found.
Orchard Actions & Protocol [ID QZ*-4**]
[QZ1-401] Lack of Orchard Shielded Protocol Proof of Security
Description: Throughout the specification there are very good intuitive notes and arguments for how the different components of the system achieve security. However, there is no overarching proof of security, or even a high-level sketch of one, as was done for the Sapling protocol [GabizonHopwood]. Without such a proof, one may miss important security issues for requirements or security properties such as:
non-malleability,
unlinkability / indistinguishability,
balance,
spendability / ownership
This is especially true given some of the changes from the Sapling protocol (non-malleability of the transaction hash, unified addresses, etc.).
Recommendation: Provide an overarching proof of security, or at least a sketch at the end of the Orchard protocol description. Another example for such a proof is in Appendix B of Decentralized Anonymous Micropayments.
[QZ1-402] Nullifier Generation Clarification Needed in the specification §4.16
Description: The generation of the nullifier in the Orchard protocol is one of its most sensitive components, since many potential attacks become possible if any part of the derivation is not robustly secured. The nullifier creates a linked chain over the asset's life, while incorporating the sender-controlled randomness needed to ensure the nullifier's uniqueness:
As defined in ZIP 224, nullifiers are generated using the Poseidon hash, as a combination of the nullifier key nk, ρ, ψ, and the note commitment itself, cm: nf = Extract_P([(PRF^nfOrchard_nk(ρ) + ψ) mod q_P] · K^Orchard + cm)
In Section §4.17.4 it appears that ρ^new = nf^old, for which a new random element ψ^new must be generated to make the nullifier unique.
As explained in §3.5 of the Orchard book, the nullifier derivation process must ensure several properties:
Balance of funds
Note privacy (in and out of band)
Spend unlinkability
Faerie-gold attack resistance
Recommendation: follow similar arguments in the spec to those outlined in §3.5 of the Orchard book for arguing why the nullifier generation yields a secure protocol.
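A heavily simplified sketch of the nullifier shape, replacing the Pallas group by the additive group of integers modulo a small prime and the Poseidon-based PRF by a toy function (illustrative only):

```rust
// Toy parameters: an additive group Z_Q replaces the Pallas group.
const Q: u64 = 1_000_003; // toy group order
const K: u64 = 7; // toy fixed base (K^Orchard in the real scheme)

// Stand-in for the Poseidon-based PRF^nfOrchard.
fn toy_prf(nk: u64, rho: u64) -> u64 {
    nk.wrapping_mul(31).wrapping_add(rho) % Q
}

// nf = Extract((PRF_nk(rho) + psi) * K + cm); Extract is the identity here.
fn nullifier(nk: u64, rho: u64, psi: u64, cm: u64) -> u64 {
    let scalar = (toy_prf(nk, rho) + psi % Q) % Q;
    (scalar * K + cm) % Q
}

fn main() {
    // Deterministic for fixed inputs...
    assert_eq!(nullifier(1, 2, 3, 4), nullifier(1, 2, 3, 4));
    // ...and the sender-controlled psi changes the nullifier.
    assert_ne!(nullifier(1, 2, 3, 4), nullifier(1, 2, 4, 4));
    println!("nullifier shape ok");
}
```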
Description: The Action statement for the Orchard protocol differs from Sapling's Spend & Output statements in various ways, as outlined in ZIP 224. One of these is that the two proofs are combined into a single statement, proving at once the integrity of the spend and output notes, commitments, and derived values. The two subcircuits are mostly maintained: the spend side checks the note commitment, Merkle path and nullifier integrity, while the output side checks the new note commitment integrity. We checked the following integrity checks:
Both spend and output note commitments are generated correctly, fixing the owner, value and nullifier randomness
The commitment belongs to the set (tree) of existing note commitments
The spend note produces a valid nullifier; its uniqueness is critical for preventing double-spend attacks. The validator nodes MUST verify that the nullifier has not yet been published to the blockchain.
The re-randomized signature public key, rk, was generated properly from the spend note owner's unique signature public key, ak. This ensures that the transaction signature authorizing the transfer is indeed produced by the owner of the note being spent.
The "diversified address integrity" check performs a critical role: ensuring that the diversified address components are all related to the same spending key. If this check were not in place, a person could spend notes that they do not own.
The values of all the notes have been committed correctly, using the same values as in the note commitments above. The value commitment is computed by taking the difference v_old - v_new and committing to it. The validator nodes MUST verify that the sum of all value commitments is equivalent to a commitment to zero; this is done by verifying a homomorphic signature.
No issue found.
Recommendation: For each of the non-normative notes in Section §4.17.4, there should be a sketch or proof of security addressing it. Furthermore, it is important to clarify that the integrity of the statement is not purely contained in the proof being verified, but also in further verifications (either by the whole network or by the receiver of the transfer). Some examples include the nullifier list check and the SpendSignature and BalanceSig verifications. These are critical and deserve their own mention and security sketch.
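The homomorphic balance check can be sketched in a toy additive group (the real scheme commits on the Pallas curve; the bases and modulus here are invented):

```rust
const Q: u64 = 1_000_003; // toy group order
const V: u64 = 5; // toy value base
const R: u64 = 11; // toy randomness base

// cv = v*V + rcv*R in the toy additive group (rcv assumed < Q).
fn commit(v: i64, rcv: u64) -> u64 {
    let v_mod = v.rem_euclid(Q as i64) as u64;
    (v_mod * V % Q + rcv * R % Q) % Q
}

fn main() {
    // Values net to zero, so the sum of commitments is a commitment to zero
    // under randomness bsk = sum of the rcv values.
    let sum = (commit(10, 100) + commit(-10, 200)) % Q;
    assert_eq!(sum, commit(0, 300));
    // If the values do not balance, the check fails.
    assert_ne!((commit(10, 100) + commit(-9, 200)) % Q, commit(0, 300));
    println!("balance check ok");
}
```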
[QZ2-409] RNG and source of entropy on virtual machine (VM) in the Cloud
Description: Privacy of transaction and zero-knowledge property may not be ensured if the prover runs on an improperly-configured VM.
Random numbers are required everywhere to ensure zero-knowledge property / privacy.
On Intel Machines (and some AMD as well), a call to getrandom is remapped onto the RDRAND CPU instruction. But when launched on a VM, this remapping is handled by the hypervisor… only if it has been properly configured.
The internal RNG uses a syscall that reads /dev/urandom on Unix systems.
/dev/urandom provides lower (but still high) quality randomness, generated by an intermediate Cryptographically (Secure) Random Number Generator (CRNG); it blocks until it is properly seeded from the entropy pool (at boot time) but not afterwards (albeit with decreasing randomness quality if too much data is read too fast, before it gets re-seeded).
In cloud environments, an attacker owning a machine on the same host could drain the entropy available to /dev/urandom by requesting a huge amount of random data. This decreases the randomness quality of subsequent calls and may lower security.
Recommendation: We do not think these issues can be fixed here, since the running environment is out of scope and cannot be controlled; trust in the environment is therefore key.
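For reference, the syscall path discussed above can be exercised directly with the standard library on Unix (a sketch; real code goes through the getrandom machinery):

```rust
use std::fs::File;
use std::io::Read;

// Read n bytes of entropy directly from /dev/urandom (Unix only).
fn urandom_bytes(n: usize) -> std::io::Result<Vec<u8>> {
    let mut buf = vec![0u8; n];
    File::open("/dev/urandom")?.read_exact(&mut buf)?;
    Ok(buf)
}

fn main() -> std::io::Result<()> {
    let a = urandom_bytes(32)?;
    let b = urandom_bytes(32)?;
    assert_eq!(a.len(), 32);
    assert_ne!(a, b); // two independent 256-bit draws virtually never collide
    println!("urandom ok");
    Ok(())
}
```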
[QZ2-410] Randomness reuse in restored virtual machines
Description: The privacy of transactions and the zero-knowledge property may not be maintained when transactions are sent from restored VMs.
As pointed out by QZ2-409 the internal RNG uses a syscall that reads /dev/urandom on Unix systems, which (in the absence of fresh entropy) behaves deterministically.
Thus, if the VM state is saved, a random value is drawn, and the VM is then reset to the saved state, randomness is re-generated and a random number is drawn again (quickly enough); the two values may then be identical with high probability. (See, e.g., the When Good Randomness Goes Bad paper.)
Such randomness reuse may break the zero-knowledge properties of the proving system, or other privacy properties of the Orchard protocol.
It affects commitments, hashes and other primitives, except for the signatures, which are deterministic.
Recommendation: Find and implement best practices.
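One common best practice is to "hedge" randomness by mixing the OS output with VM-reset-sensitive local inputs before use. The sketch below illustrates the idea only; DefaultHasher is not cryptographic, and a real implementation would use a cryptographic hash such as BLAKE2b:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::sync::atomic::{AtomicU64, Ordering};
use std::time::{SystemTime, UNIX_EPOCH};

static COUNTER: AtomicU64 = AtomicU64::new(0);

// Mix OS-provided randomness with local inputs that diverge after a VM
// restore: wall-clock time and a process-local counter.
fn hedged(os_random: u64) -> u64 {
    let nanos = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("clock before epoch")
        .subsec_nanos();
    let ctr = COUNTER.fetch_add(1, Ordering::Relaxed);
    let mut h = DefaultHasher::new(); // toy mixer, NOT cryptographic
    (os_random, nanos, ctr).hash(&mut h);
    h.finish()
}

fn main() {
    // Even with identical "OS randomness" (as after a VM restore), the local
    // counter makes the hedged outputs differ.
    assert_ne!(hedged(42), hedged(42));
    println!("hedged randomness ok");
}
```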
Circuits [ID QZ*-5**]
This section includes the issues reported on the Action circuit, which can be found at this link.
This diagram is a visual representation of the Action circuit and the relations between the different variables used. It includes the type of variables, the functions invoked that map each of the variables, as well as a color code to show what is private / public. Here is a more concrete legend:
Variable types
parallelograms represent base field elements
triangles represent byte strings
rectangles with round corners represent Pallas elliptic curve group elements
the marquise shapes represent values in a given range
Color coding
red represents private elements, not shareable with anyone else
green represents public elements, shareable with all the network as part of the transaction
yellow represents elements that are known by the receiver of the transfer
orange represents elements that can be shared with external parties but that imply a loss in privacy
blue represents intermediate computations that must be verified as constraints in the circuit
**EDIT:** There were last-minute changes to the types of the diagram variables, to adapt to the finding reported by the ECC team of an ill-typed MerkleCRH.
[QZ2-501] Naming style and convention of gate definition and usage
Description: There is a kind of "disconnect" between gate definition and usage. The calling convention involves enabling a selector on the right row, plus low-level copies of arguments into anonymous columns and offsets (e.g., advices[4], a uint offset vs. the Rotation type).
Recommendation: Give names to every entity (columns, cells, offsets), using named functions with named arguments, and possibly common getter functions to access cells. More generally, try to unify in the type system the "config" items shared between definitions and invocations (most other concepts already enjoy a good level of type safety).
Examples: the v_net logic (still simple, but it does not belong in the main synthesize function as an anonymous block); note commit (very complex, including very indirect contracts on value range checks); short fixed-base multiplication (very indirect, with magnitude.y = y_qr at offset+1).
[QZ2-502] Naming style for regions and columns
Description: In the circuit code dealing with regions and columns, there are a few naming style issues:
SingleChipLayouter.columns actually holds column sizes and regions holds region starts, so the names do not convey their real meaning.
Some of the functions named assign_region actually mean allocate_region, while others mean assign_into_region.
Recommendations:
Wrap indexing into a function (layouter.regions[region_index] + offset), which will also help to ensure type safety
Give names of locations in regions instead of e.g., (config.advices[0], offset 0)
[QZ2-503] Unfixed offset for the instance column count
Recommendation: In the same way that the advice column count is a named constant, make the instance column count (currently the literal 9) a named constant.
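A minimal sketch of the recommendation, with illustrative constant names of our choosing (the real circuit code may pick different ones):

```rust
// Illustrative constant names; not taken from the orchard codebase.

/// The advice column count is already a fixed, named quantity.
const NUM_ADVICE_COLUMNS: usize = 10;
/// Naming the instance column count removes the magic number 9.
const NUM_INSTANCE_ELEMENTS: usize = 9;

fn main() {
    // Bounds checks now read against a named constant instead of a
    // bare literal scattered through the code.
    assert!(8 < NUM_INSTANCE_ELEMENTS);
    assert!(NUM_ADVICE_COLUMNS > NUM_INSTANCE_ELEMENTS);
}
```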
[QZ2-504] Minor inefficiency in cloning patterns
Description: In several places (e.g., ecc_chip, sinsemilla_config), clone() is used to copy large structures, which causes minor inefficiency.
Recommendation: Use references where possible.
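A minimal sketch of the cost difference (LargeConfig is a stand-in of ours for structures like ecc_chip or sinsemilla_config):

```rust
// Illustrative sketch; LargeConfig stands in for large cloned
// structures such as ecc_chip or sinsemilla_config.

#[derive(Clone)]
struct LargeConfig {
    table: Vec<u64>, // imagine a large lookup table
}

/// Taking the struct by value forces callers to clone the whole table.
fn uses_clone(config: LargeConfig) -> usize {
    config.table.len()
}

/// Borrowing gives the same answer with no copy at all.
fn uses_ref(config: &LargeConfig) -> usize {
    config.table.len()
}

fn main() {
    let config = LargeConfig { table: vec![0; 1 << 16] };
    let a = uses_clone(config.clone()); // deep copy of 65536 entries
    let b = uses_ref(&config);          // no copy
    assert_eq!(a, b);
}
```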
[QZ2-505] note_commit_config region assignment
Description: The layouter.namespace of the note commitment configuration is the same for the old and the new note, which is error-prone and produces confusing logs.
Recommendation: Use a distinct namespace for each of the old-note and new-note commitments.
[QZ2-506] No unintended consequences of v_old and unconstrained anchor
Description: With v_old = 0:
the old note does not go through the "new note integrity" checks;
however, "old note integrity" repeats all the checks of "new note integrity" (g_d and pk_d are points, the note commitment is computed, and the value fits in 64 bits).
No issues found.
[QZ2-507] v_net and anchor logic check
Description:
The connection between v_old, v_new, magnitude, sign, anchor is verified and the same variables are used in all computations.
magnitude and sign range check. Magnitude 66 bits in decompose. Then in q_mul_fixed_short, last_window_check 0 or 1 (advices[5]), and sign +1/-1 (window advices[4]).
v_old and v_new are in range 64 bits (note_commit.assign_region).
Part of the q_canon_1 gate (advices[2]).
Combination of d_2, z1_d, e_0.
d_2 (advices[3]) in range 8 bits, witness_short_check.
z1_d = d_3 (advices[4]) in range 50 bits.
e_0 (advices[5]) in range 6 bits, witness_short_check.
v_old = 0 or enable_spends = 1
v_new = 0 or enable_outputs = 1; value=1 to enable, anything else to disable.
v_old - v_new = magnitude * sign.
anchor matches public input if v_old != 0.
correct connectivity of circuit variables to chip.
No issues found.
Recommendation: implement as a sequence of simple add/mul gates (4 gates for q_orchard and 1 for the addition in nullifiers), instead of using a custom gate.
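The core balance relation verified above, v_old - v_new = magnitude * sign with sign in {+1, -1} and a range-checked magnitude, can be sketched outside the circuit with plain integer arithmetic (a toy model of ours; the real check is expressed as field constraints over Pallas):

```rust
// Toy model of the v_net balance check; plain i128 arithmetic stands
// in for Pallas base-field constraints. Names are illustrative.

fn check_v_net(v_old: u64, v_new: u64, magnitude: u64, sign: i128) -> bool {
    // The circuit range-checks the magnitude (66 bits in decompose)
    // and constrains sign to +1 or -1 via the last-window check.
    let sign_ok = sign == 1 || sign == -1;
    let net = v_old as i128 - v_new as i128;
    sign_ok && net == magnitude as i128 * sign
}

fn main() {
    // Spend 100, create 60: net value 40 leaves the shielded pool.
    assert!(check_v_net(100, 60, 40, 1));
    // Spend 0, create 25: net value -25, encoded as (25, -1).
    assert!(check_v_net(0, 25, 25, -1));
    // An inconsistent (magnitude, sign) witness is rejected.
    assert!(!check_v_net(100, 60, 40, -1));
}
```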
[QZ2-508] Value commitment integrity check
Description:
Negation of a point by negation of its y-coordinate.
Connection between magnitude_mul and offset+1, y_qr, sign, and the output point y_p.
signed point (already seen for magnitude_sign range check).
Commit and blind with fixed-base mul short (magnitude,sign), fixed-base mul full non-canonical rcv, complete addition, ([v_net] ValueCommitV + [rcv] ValueCommitR).
Connection to public inputs.
No issues found.
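The homomorphic structure of the value commitment cv = [v_net] ValueCommitV + [rcv] ValueCommitR can be illustrated with a toy additive group (the modulus and stand-in generators are ours; this is not a secure commitment, only a sketch of the algebra):

```rust
// Toy model of the Pedersen-style value commitment. Scalar
// multiplication on Pallas is replaced by multiplication in the
// additive group Z_p; illustrative only, NOT a secure commitment.

const P: i128 = 1_000_000_007;  // toy modulus, stands in for the curve group
const VALUE_GEN: i128 = 48_271; // stands in for ValueCommitV
const BLIND_GEN: i128 = 16_807; // stands in for ValueCommitR

/// cv = [v_net] V + [rcv] R, reduced into [0, P).
fn value_commit(v_net: i128, rcv: i128) -> i128 {
    ((v_net * VALUE_GEN + rcv * BLIND_GEN) % P + P) % P
}

fn main() {
    // Homomorphism: commitments to v1 and v2 add up to a commitment to
    // v1 + v2 under blinding rcv1 + rcv2, which is what lets a verifier
    // check the net balance of a transaction.
    let (v1, r1) = (40, 123_456);
    let (v2, r2) = (-15, 654_321);
    let sum = (value_commit(v1, r1) + value_commit(v2, r2)) % P;
    assert_eq!(sum, value_commit(v1 + v2, r1 + r2));
}
```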
[QZ2-509] Note commitment check
Description: We verified that the following properties hold
Old note commitment
cm, g_d, ak are points.
psi_old connection between commitment and nullifier.
Connection of cm_old between commit, nullifier and leaf of Merkle path.
No issues found.
[QZ2-510] Nullifier old check
Description: We verified that the nullifier is properly generated:
The poseidon_hash(nk, rho_old) is computed correctly; in particular, we checked the connection of nk across the circuit (advices[0], offset 1 in q_commit_ivk).
The modular addition is done correctly in the circuit.
The EC exponentiation of NullifierK is computed correctly.
Add cm_old with complete addition.
No issues found.
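The sequence of operations listed above can be sketched with toy stand-ins (toy_prf replaces Poseidon, and modular multiplication replaces the Pallas scalar multiplication of NullifierK; all constants are ours and purely illustrative):

```rust
// Toy model of the nullifier derivation steps checked above:
// nf = Extract([ (PRF(nk, rho_old) + psi_old) mod q ] * K + cm_old).
// Poseidon and Pallas arithmetic are replaced by small-integer
// stand-ins; only the sequence of operations is meaningful here.

const Q: u64 = 2_147_483_647; // toy prime, stands in for the Pallas order

/// Stand-in for poseidon_hash(nk, rho_old).
fn toy_prf(nk: u64, rho: u64) -> u64 {
    nk.wrapping_mul(6_364_136_223_846_793_005).wrapping_add(rho) % Q
}

fn toy_nullifier(nk: u64, rho_old: u64, psi_old: u64, cm_old: u64) -> u64 {
    let h = toy_prf(nk, rho_old); // 1. Poseidon hash of (nk, rho_old)
    let s = (h + psi_old) % Q;    // 2. modular addition of psi_old
    let k = 48_271u64;            // stands in for the NullifierK base
    let point = (s as u128 * k as u128 % Q as u128) as u64; // 3. scalar mul
    (point + cm_old) % Q          // 4. complete addition with cm_old
}

fn main() {
    // Deterministic: the same witness always yields the same nullifier,
    // which is what makes double-spend detection possible.
    let nf = toy_nullifier(7, 11, 13, 17);
    assert_eq!(nf, toy_nullifier(7, 11, 13, 17));
}
```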
[QZ2-511] Elliptic curve operations as constraints
Description: We verified that the following properties hold
ECAdd region assigned + operations
ECIncompAdd region assigned + operations
ECMul[Fixed] region assigned + operations
Witnesses are Pallas Points
No issues found.
[QZ2-512] Sinsemilla chip and gadget check
Description: We verified the following
Regions assignment coherency
Parameters constraints
SubChips:
Merkle Chips
CommitIvk
NoteCommit
No issues found.
[QZ2-513] New note integrity check
Description: We verified that the following properties hold
psi_new and rcm_new, no constraints necessary.
public input connections. (Only X coordinate).
g_d_new and pk_d_new are points.
old nullifier connection.
value v_new connection.
commitment with full-width blinding factor.
No issues found.
[QZ2-514] Diversified address integrity check
Description: We verified that the following properties hold
Connection to gd_old in note commitment.
gd_old cannot be the zero point
it is declared as a non-zero point g_d_old: Option<NonIdentityPallasPoint>
the default value is the generator of the curve (which is obviously non-zero)
and when imported from a byte buffer, it's checked again against the zero point
Connection to ak in spend authority.
Connection to nk in nullifier.
Derivation of ivk from nk and ak (X coordinate).
ivk cannot wrap around the scalar field modulus (the base field order is smaller than the scalar field order).
Derivation of pk_d_old from gd_old and ivk. Variable mul with scalar (Pallas base field).
pk_d_old is a point.
Connection of pk_d_old to note commitment (X, Y coordinates).
No issues found.
EDIT 29/09/2021: a warning issue was found by the ECC team after the report was shared. In fact, the circuit did not originally check that gd_old is not the zero point, contrary to our report. For more information, see this PR.
[QZ2-515] Spend authority check
Description: We verified that the following properties hold
Exponentiation with full-width scalar (no constraint on alpha needed).
Add ak with complete addition.
ak is a point.
Connection of ak to ivk (X coordinate).
Public input connection of rk. (X and Y coordinates).
No issues found.
[QZ2-516] Typo in circuit comment
Description: Wrong comment in ecc/chip.rs constrain_equal ("Constraint x-coordinate").