After understanding the entire flow of the Verkle Cryptography API and how it's supposed to work in Nim for the Nimbus client, I raised an issue in Constantine and broke my work down into 15 objective tasks.
They were then listed in my PR. The tasks were as follows:
Fixes #275
This to-do list was last updated on 7/10/2023.
For the past few weeks I have been racking my brain over how to implement the precompute approach for Lagrange interpolation of the polynomials. However, I eventually figured it out, thanks to this article.
The following is an excerpt from Kev's article.
We briefly restate the formula for a Lagrange polynomial:

$$L_i(x) = \prod_{j \neq i} \frac{x - x_j}{x_i - x_j}$$

The i'th Lagrange polynomial evaluated at $x_i$ is 1, and 0 everywhere else on the domain.

We introduce the polynomial

$$A(x) = \prod_{i=0}^{d-1} (x - x_i)$$

We also introduce the derivative of $A(x)$:

$$A'(x) = \sum_{j=0}^{d-1} \prod_{i \neq j} (x - x_i)$$

You can derive this yourself by generalising the product rule: https://en.wikipedia.org/wiki/Product_rule#Product_of_more_than_two_factors

In general this derivative does not have a succinct/sparse form. We do have a succinct form if the domain is the roots of unity: there $A(x) = x^d - 1$, so $A'(x) = d \cdot x^{d-1}$.

Now note that

$$A'(x_j) = \prod_{i \neq j} (x_j - x_i)$$

If we plug $x_j$ into $A'(x)$, all the terms with the factor $(x - x_j)$ will vanish; this is why the sum disappears into a single product.

We can use $A(x)$ and $A'(x_i)$ to rewrite the i'th Lagrange polynomial:

$$L_i(x) = \frac{A(x)}{A'(x_i) \cdot (x - x_i)}$$

Looking at the original Lagrange formula, $A'(x_i)$ is the denominator and $\frac{A(x)}{x - x_i}$ is the numerator.

The first barycentric form for a polynomial $f$ is then:

$$f(x) = A(x) \cdot \sum_{i=0}^{d-1} \frac{f(x_i)}{A'(x_i) \cdot (x - x_i)}$$
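To make the weights concrete, here is a small worked example of my own (not from Kev's article) on the toy domain $\{0, 1, 2\}$:

$$A(x) = x(x-1)(x-2)$$

$$A'(0) = (0-1)(0-2) = 2, \quad A'(1) = (1-0)(1-2) = -1, \quad A'(2) = (2-0)(2-1) = 2$$

so, for a polynomial given by its evaluations $f(0), f(1), f(2)$,

$$f(x) = A(x) \left( \frac{f(0)}{2x} - \frac{f(1)}{x-1} + \frac{f(2)}{2(x-2)} \right)$$

Plugging in $f(x) = x^2$ (evaluations $0, 1, 4$) and simplifying indeed gives back $x^2$.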
The Nim equivalent is stated here.
One of the main portions of the code is as follows:
func newPrecomputedWeights*(res: var PrecomputedWeights) =
  ## Fills `res` with the barycentric weights A'(x_i) and their inverses
  ## 1/A'(x_i), followed by the inverted-domain values 1/k and -1/k.
  const midpoint = DOMAIN                       # 256 points in the domain
  var barycentricWeightsInst {.noInit.}: array[2 * midpoint, EC_P_Fr]

  for i in 0 ..< midpoint:
    var weights {.noInit.}: EC_P_Fr
    weights.barycentricWeights(uint64(i))       # A'(x_i)

    var inverseWeights {.noInit.}: EC_P_Fr
    inverseWeights.inv(weights)                 # 1/A'(x_i)

    barycentricWeightsInst[i] = weights
    barycentricWeightsInst[i + midpoint] = inverseWeights

  # 0 has no inverse, so the inverted domain only covers k in [1, 255].
  const invMidpoint = DOMAIN - 1                # 255
  var invertedDomain {.noInit.}: array[2 * invMidpoint, EC_P_Fr]

  for i in 1 .. invMidpoint:
    var k {.noInit.}: EC_P_Fr
    k.fromUint(uint64(i))
    k.inv(k)                                    # 1/k

    var zero {.noInit.}: EC_P_Fr
    zero.setZero()

    var neg_k {.noInit.}: EC_P_Fr
    neg_k.diff(zero, k)                         # -(1/k)

    invertedDomain[i - 1] = k
    invertedDomain[(i - 1) + invMidpoint] = neg_k

  res.barycentricWeights = barycentricWeightsInst
  res.invertedDomain = invertedDomain
Here is how the precomputed weights are stored:

type
  PrecomputedWeights* = object
    # This stores A'(x_i) and 1/A'(x_i)
    barycentricWeights: array[2 * DOMAIN, EC_P_Fr]
    # This stores 1/k and -1/k for k in [1, 255]
    invertedDomain: array[2 * (DOMAIN - 1), EC_P_Fr]
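As a quick sanity check, this is roughly how I expect the precompute to be constructed and indexed (a minimal sketch of my own, not code from the PR):

  var weights: PrecomputedWeights
  weights.newPrecomputedWeights()
  # weights.barycentricWeights[i]           -> A'(x_i)
  # weights.barycentricWeights[i + DOMAIN]  -> 1/A'(x_i)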
The theoretical equivalent of this is as follows.

For our case, the domain is the discrete interval $[0, 255]$, so

$$A(x) = \prod_{i=0}^{255} (x - i)$$

and, unlike the roots-of-unity case, $A'(x)$ has no succinct form. We only need to precompute the weights $A'(x_i)$ together with their inverses $1/A'(x_i)$.

How would we look up these values? First we imagine that we have stored the values in an array, weights first and inverses after:

$$[A'(x_0), \ldots, A'(x_{255}), 1/A'(x_0), \ldots, 1/A'(x_{255})]$$

We first note that we can easily get from a weight to its inverse: it sits exactly one midpoint (256 positions) further along in the same array.

Example

We also want to compute $1/(x_i - x_j)$ for two domain points. Since both points lie in $[0, 255]$, the difference is some $k$ or $-k$ with $k \in [1, 255]$; in practice, we can use an if statement to check whether $x_i$ or $x_j$ is larger, and subtract accordingly. With the roots of unity, we did not need this optimisation.

For this, we will need to store precomputed values if we want to compute $1/(x_i - x_j)$ efficiently, without a field inversion every time.

(Dankrad): We precompute $1/k$ and $-1/k$ for every $k \in [1, 255]$.

How would I look up these values? Similar to the previous optimisation, we store them in a single array:

$$[1/1, 1/2, \ldots, 1/255, -1/1, -1/2, \ldots, -1/255]$$

Example

We want to compute $1/(x_i - x_j)$ with $x_i < x_j$: the difference is negative, so we look up the entry for $k = x_j - x_i$ in the negated half of the array. A sketch of this lookup is shown below.
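Putting the two tables together, the inverse of a domain difference becomes a single table lookup. This is a minimal sketch under my own assumptions about the layout above; the helper name getInvertedDomainElement and the direct field access are illustrative, not necessarily what the final PR exposes:

  func getInvertedDomainElement(w: PrecomputedWeights, x_i, x_j: int): EC_P_Fr =
    ## Returns 1/(x_i - x_j) for two distinct domain points in [0, 255],
    ## using only the precomputed table (no field inversion at runtime).
    doAssert x_i != x_j
    if x_i > x_j:
      # Positive difference k = x_i - x_j: 1/k lives at index k - 1.
      result = w.invertedDomain[(x_i - x_j) - 1]
    else:
      # Negative difference: -1/k lives in the second half, offset by DOMAIN - 1.
      result = w.invertedDomain[(x_j - x_i) - 1 + (DOMAIN - 1)]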
This optimisation is currently followed only by the Go implementation, apart from the Nim (Constantine) implementation, which is newly being developed.
Most of these functions exist in the common_utils.nim
file in my Constantine PR.
One of the most important tasks for me here was writing a function to derive the generator elements for the Pedersen commitment.
Below is the reference Rust implementation, along with the dependencies it uses.
The current seed being used is: eth_verkle_oct_2021
fn generate_random_elements(num_required_points: usize) -> Vec<EdwardsAffine> {
    use bandersnatch::Fq;
    use sha2::{Digest, Sha256};

    let mut hasher = Sha256::new();
    hasher.update(b"eth_verkle_oct_2021");
    let bytes = hasher.finalize().to_vec();

    let u = bandersnatch::Fq::from_be_bytes_mod_order(&bytes);
    let choose_largest = false;

    (0..)
        .into_iter()
        .map(|i| Fq::from(i as u128) + u)
        .map(|x| EdwardsAffine::get_point_from_x(x, choose_largest))
        .filter_map(|point| point)
        .filter(|point| point.is_in_correct_subgroup_assuming_on_curve())
        .take(num_required_points)
        .collect()
}
For Nim we use projective coordinates, as that's what we've been using in order to make the library comparatively as fast as the Go implementation. Here is what the code looks like:
func generate_random_elements*(points: var openArray[EC_P], num_points: uint64) =
  ## Derives `num_points` group elements deterministically from the seed by
  ## hashing seed || counter and trying to decode the digest as a point.
  var point_count = 0
  var incrementer: uint64 = 0

  while uint64(point_count) != num_points:
    var digest: sha256
    digest.init()
    digest.update(seed)                 # seed = "eth_verkle_oct_2021"

    # Mix the counter into the hash as 8 big-endian bytes, so every
    # iteration produces a fresh candidate: H(seed || incrementer).
    var b {.noInit.}: array[8, byte]
    for j in 0 ..< 8:
      b[j] = byte((incrementer shr (8 * (7 - j))) and 0xff)
    digest.update(b)

    var hash {.noInit.}: array[32, byte]
    digest.finish(hash)
    incrementer = incrementer + 1

    # Try to deserialize the digest as a compressed point; skip candidates
    # that do not decode to a valid group element.
    var point_found {.noInit.}: EC_P
    if point_found.deserialize(hash) != cttCodecEcc_Success:
      continue

    points[point_count] = point_found
    point_count = point_count + 1
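As a usage sketch (my own illustration, not code from the PR), deriving the 256 basis points used for committing to 256-wide polynomials would then look roughly like this:

  # CRS for Pedersen commitments over the 256-element evaluation domain.
  var crs: array[256, EC_P]
  crs.generate_random_elements(256)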
Sorry for only writing articles biweekly; I just get lost in the implementation sometimes. The next update will be about how I'll implement splitting and folding of scalars
while performing Pedersen commitments inside the Inner Product Argument setting.
I'll also cover how I'll create IPA proofs, build a multiproof from those IPA proofs, and finally verify those proofs with a multiproof verification mechanism.
Till then, best wishes