Agnish Ghosh

@agnxsh

Joined on Jul 5, 2023

  • PeerDAS POC to-do list:
    - Add the data-column sidecar type to the protobuf/networking layer
    - Add its SSZ-related types and methods
    - Add the newly released spec constants relevant to data-column sidecars
    - Add the new specs to the config YAML file; they should include NumberOfColumns, DataColumnSidecarSubnetCount, SamplesPerSlot and CustodyRequirement
    - Enable a peer to compute its assigned custody subnets and record these changes in an in-memory cache.
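    The custody-subnet step above can be sketched roughly as follows. This is a hypothetical illustration only: the constant values and the hashing scheme are assumptions for the sketch, not the actual PeerDAS spec function, which differs in detail. It shows just the idea of deriving a deterministic subnet set from a node's ID.

```python
import hashlib

DATA_COLUMN_SIDECAR_SUBNET_COUNT = 32  # assumed illustrative config value
CUSTODY_REQUIREMENT = 1                # assumed illustrative config value

def compute_custody_subnets(node_id: int, custody_subnet_count: int = CUSTODY_REQUIREMENT):
    """Hypothetical sketch: hash node_id with an incrementing counter until
    enough distinct subnet ids are collected. Deterministic, so the result
    can be recorded once in an in-memory cache and reused."""
    subnets, i = set(), 0
    while len(subnets) < custody_subnet_count:
        digest = hashlib.sha256(
            node_id.to_bytes(32, "little") + i.to_bytes(8, "little")
        ).digest()
        subnets.add(int.from_bytes(digest[:8], "little") % DATA_COLUMN_SIDECAR_SUBNET_COUNT)
        i += 1
    return sorted(subnets)
```

    Because the derivation depends only on the node ID, any peer can recompute another peer's custody assignment without extra communication.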
  • Context: Most clients in today's Ethereum L1 ecosystem use hash-based storage, which adds new nodes to the database on every update, resulting in significant growth in database size. A path-based DB essentially enforces that there is only ONE version of the world state at any given time, thereby avoiding the accumulation of stale, reorged-out versions. Approach (initial): The database would store verkle nodes based on their location in the trie, which basically means the path associated with each node, something very frequently called the stem in the verkle world. Advantage
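    The hash-keyed vs path-keyed trade-off described above can be illustrated with a toy sketch, with plain dictionaries standing in for the real database:

```python
import hashlib

# Hash-keyed store: every update inserts a new node keyed by its hash,
# so old versions accumulate and the database only grows.
hash_db = {}
def put_hash_based(node: bytes) -> bytes:
    key = hashlib.sha256(node).digest()
    hash_db[key] = node
    return key

# Path-keyed store: a node is keyed by its location (stem) in the trie,
# so an update overwrites in place and only ONE version survives.
path_db = {}
def put_path_based(stem: bytes, node: bytes) -> None:
    path_db[stem] = node

put_hash_based(b"balance=1")
put_hash_based(b"balance=2")
put_path_based(b"\x00\x01", b"balance=1")
put_path_based(b"\x00\x01", b"balance=2")
assert len(hash_db) == 2   # both versions kept
assert len(path_db) == 1   # single latest version
```

    The flip side, of course, is that a path-keyed store cannot serve two competing versions of state at once, which is exactly the single-state property described above.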
  • Designing cryptosystems that can run on today's classical computers and are secure against quantum attacks. What's the rush in developing post-quantum cryptosystems? Big QCs probably won't exist at a commercial level for several years; however:
    - Harvesting attacks, i.e. SNDL (Store Now, Decrypt Later): attackers all over the world are already following this strategy, storing today's ciphertexts and public keys in the hope of breaking them once QCs become a common thing and are commercially available.
    - Rewriting past timestamps: fork history and rewrite transactions that happened in the past, thereby making blockchains mutable.
    - Deploying new cryptography at scale takes about 10 years.
    Lattices. Why lattices? Their operations are linear and highly parallelizable by the computer.
  • Nimbus Team: This week was mostly spent writing tests, divided into the following parts:
    [x] Helper math functions (in progress)
    [ ] IPA proof creation and verification tests (in progress)
    [ ] Multiproof creation and checking tests
    [x] Polynomial interpolation tests, both with and without the Barycentric precompute optimisation (in progress)
    [x] Transcript generation tests and tests for generating a random scalar.
    Apart from that, I found a small security vulnerability in Go-IPA. I spoke to Ignacio about it and he has already raised an issue along with a PR; I think it's closed by now.
  • After understanding the entire flow of the Verkle Cryptography API and how it's supposed to work in Nim for the Nimbus client, I raised an issue in Constantine and broke my work down into 15 objective tasks, which were soon listed in my PR. The tasks were as follows: Fixes #275
    [x] Transcript generation using the Fiat–Shamir style
    [x] Barycentric form using the precompute optimisation over the domain
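    The Fiat–Shamir transcript task above can be illustrated with a toy sketch. The class below is purely an assumption for illustration and does not reflect Constantine's actual API; it only shows the pattern of absorbing labelled protocol messages and squeezing deterministic challenge scalars out of the running hash state.

```python
import hashlib

class Transcript:
    """Toy Fiat-Shamir transcript: the verifier's "random" challenge is
    fully determined by the protocol messages absorbed so far."""
    def __init__(self, label: bytes):
        self.state = hashlib.sha256(label)

    def absorb(self, label: bytes, message: bytes) -> None:
        self.state.update(label + message)

    def challenge_scalar(self, label: bytes, modulus: int) -> int:
        self.state.update(label)
        digest = self.state.digest()
        self.state.update(digest)  # ratchet so successive challenges differ
        return int.from_bytes(digest, "little") % modulus

# usage: both prover and verifier build the same transcript
t = Transcript(b"ipa-demo")
t.absorb(b"commitment", b"\x01\x02")
r = t.challenge_scalar(b"r", 2**255 - 19)
assert 0 <= r < 2**255 - 19
```

    The ratcheting step matters: without it, asking for two challenges in a row would return the same scalar twice.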
  • After going through Dankrad's IPA note with Pedersen commitments and then a few more resources on IPA, I finally realised how IPA multipoint really works! Intro: Here I'll talk about how we can open multiple polynomials at different points, finally aggregating everything into 1 proof, 1 commitment and 1 scalar. Definition: Given $m$ IPA commitments $C_0 = [f_0(X)], ..., C_{m-1} = [f_{m-1}(X)]$, prove the evaluations: $$f_i(z_i) = y_i$$
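    A minimal sketch of the aggregation idea, assuming a toy prime field and the standard random-linear-combination form $g(X) = \sum_i r^i \frac{f_i(X) - y_i}{X - z_i}$ (the modulus and names here are illustrative; the real scheme works over the curve's scalar field with a transcript-derived $r$):

```python
P = 97  # toy prime modulus, illustrative only

def poly_eval(coeffs, x):
    """Evaluate a coefficient-form polynomial at x (Horner), mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def divide_by_linear(coeffs, z):
    """Synthetic division: quotient of (f(X) - f(z)) / (X - z), exact."""
    n = len(coeffs) - 1
    q = [0] * n
    q[n - 1] = coeffs[n]
    for i in range(n - 1, 0, -1):
        q[i - 1] = (coeffs[i] + z * q[i]) % P
    return q

def aggregate(fs, zs, ys, r):
    """g(X) = sum_i r^i * (f_i(X) - y_i) / (X - z_i)  (mod P)."""
    g = [0] * (max(len(f) for f in fs) - 1)
    power = 1
    for f, z, y in zip(fs, zs, ys):
        assert poly_eval(f, z) == y % P  # claimed evaluation must hold
        for j, c in enumerate(divide_by_linear(f, z)):
            g[j] = (g[j] + power * c) % P
        power = power * r % P
    return g
```

    The prover then commits to $g$ once, and the verifier checks a single opening of it, which is how $m$ claims collapse into 1 proof, 1 commitment and 1 scalar.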
  • Vector Commitments for Multipoint: Intro. Basic refresher: Monomial basis. A polynomial can be represented in different bases, one of which is the monomial basis. In the monomial basis, a polynomial $P(t)$ of degree $n$ is expressed as: $$P(t) = a_0 + a_1t + a_2t^2 + ... + a_nt^n$$ To evaluate $P(t)$ at a specific point $t=t_0$, we simply substitute $t_0$ into the polynomial and perform the indicated operations: $$P(t_0) = a_0 + a_1t_0 + a_2t_0^2 + ... + a_nt_0^n$$
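    Evaluation in the monomial basis is usually done with Horner's rule rather than by recomputing each power of $t_0$; a quick sketch:

```python
def eval_monomial(coeffs, t0):
    """Evaluate P(t) = a_0 + a_1 t + ... + a_n t^n at t = t0 using
    Horner's rule: n multiplications instead of ~n^2 naively."""
    acc = 0
    for a in reversed(coeffs):
        acc = acc * t0 + a
    return acc

# P(t) = 1 + 2t + 3t^2, so P(2) = 1 + 4 + 12 = 17
assert eval_monomial([1, 2, 3], 2) == 17
```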
  • Efficient Hybrid Exact/Relaxed Lattice Proofs and its Applications to Rounding and VRFs (Notes). Basics of a lattice: A lattice is a discrete, linear structure extending infinitely in every direction. Mathematically, given a set of linearly independent vectors $b_1, b_2, b_3, ..., b_n$ in $\mathbb{R}^m$, the lattice $\Lambda$ they generate can be defined as: $$\Lambda = \left\{ \sum_{i=1}^{n} z_i b_i \;:\; z_i \in \mathbb{Z} \right\}$$ In the cryptographic setting, the public matrix $A$ is often a random $n \times m$ matrix over some finite field or ring, like $\mathbb{Z}/q\mathbb{Z}$ where $q$ is a large prime; it serves as the public parameter for the system. The vector $s$ is a random vector chosen from some distribution, often uniform or Gaussian over $\mathbb{Z}/q\mathbb{Z}$. This vector is kept secret and forms the basis for cryptographic operations. Addressing the Fundamental Lattice Relation
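    The definition of $\Lambda$ can be made concrete with a tiny sketch in $\mathbb{R}^2$ (the basis vectors below are illustrative values chosen for the example):

```python
from itertools import product

# Toy basis for a lattice in R^2 (illustrative values)
b1, b2 = (2, 0), (1, 2)

def lattice_points(bound):
    """Enumerate the points z1*b1 + z2*b2 with integer |z_i| <= bound,
    i.e. a finite window into the infinite lattice they generate."""
    pts = set()
    for z1, z2 in product(range(-bound, bound + 1), repeat=2):
        pts.add((z1 * b1[0] + z2 * b2[0], z1 * b1[1] + z2 * b2[1]))
    return pts

pts = lattice_points(1)
assert (0, 0) in pts      # the origin is always a lattice point
assert (3, 2) in pts      # b1 + b2
assert (1, 1) not in pts  # not an integer combination of b1, b2
```

    The discreteness is visible here: points like $(1, 1)$ fall between lattice points, and finding the lattice point nearest to an arbitrary target is exactly the kind of problem believed hard in high dimensions.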
  • Hi there, it's Agnish again. After racking my head over a LOT of topics, and even more GitHub issues, I was finally able to narrow down what I REALLY want to contribute to and work on for this EPF cohort. So, this week I dove deep into Verkle trees and how the roadmap of Stateless Ethereum is being planned; here is an overall update of what I've learnt in this domain over the past week. General background: I started off with Vitalik's first Verkle Tree post, in which he repeatedly referred to the first Verkle Tree paper by Kuszmaul back in 2018. He also discussed the merits of VKTs over Merkle trees, the most important ones being: Merkle tree proofs include data from each of the siblings along a node's path, while Verkle tree proofs don't. In practice, a (hexary) Merkle tree node has 15 siblings at each level, whereas the average depth of a Verkle Tree is roughly ~4; VKTs instead rely on the index of the particular child and its path leading towards the root. So here's a basic comparison table stating the exhaustive details of PROOF SIZES for both MKTs and VKTs:
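    As a back-of-envelope sketch of why the proof sizes differ so much, assuming illustrative (not measured) numbers: a fixed depth of 8 for the hexary Merkle tree, the ~4 average depth quoted above for the VKT, and an assumed fixed IPA proof size of 576 bytes:

```python
HASH_SIZE = 32  # bytes per hash / commitment

def mpt_proof_size(depth, branching=16):
    # each level of a hexary Merkle proof reveals the 15 sibling hashes
    return depth * (branching - 1) * HASH_SIZE

def verkle_proof_size(depth, commitment_size=32, ipa_proof=576):
    # one commitment per level on the path plus a single fixed-size
    # IPA proof, independent of the number of siblings
    return depth * commitment_size + ipa_proof

assert mpt_proof_size(8) == 8 * 15 * 32      # 3840 bytes
assert verkle_proof_size(4) == 4 * 32 + 576  # 704 bytes
```

    Even with these rough assumptions, the sibling data dominates the Merkle proof, while the verkle proof stays small because the IPA proof does not grow with the branching factor.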
  • Hey there, Agnish here. This is my update for Week 5 of EPF. Following up on last week, this week I mainly focused on drafting my project proposal and thinking about what I can realistically take up so that I can finish the cohort with some significant contributions.
    I started the week by reading further about how Verkle Tries are supposed to work with Bonsai, in another blog by Karim Taam. I went through the Bonsai codebase and saw how account hashes are effectively mapped to a flat array of trie leaves in RocksDB, which saves a lot of query-processing time: whenever the storage tries are sparse enough, this flattening lets Bonsai avoid redundant traversal of the tries. I read further about how preload caching works in Bonsai: part of the Storage Trie is always cached during the initial read/write operations, and as block processing happens, new transaction information is added simply by adding new branches to the Storage Trie. Along with that, TrieLogs, Accumulators and Layered States help roll states forward and backward for Ethereum, as Bonsai always keeps exactly ONE persisted state for the entire chain. Together with a few members of the Only Dust team, I also scheduled a call with Karim Taam, where he walked us through some of the internals of how the Bonsai code is structured. Furthermore, I spent some time deep-diving into the codebase as well.
    Additionally, this week I studied the Barycentric Formula used to compute Inner Product Arguments for Multiproofs with the Pedersen Commitment. This formula lets us evaluate a polynomial from its vector of evaluations at any point in the domain, without converting to the corresponding coefficient form, thereby optimising such operations from $O(d^2)$ to $O(d)$, where $d$ is the size of the evaluation domain. You can read about it in this blog of Dankrad's.
    Lastly, I was also added as a collaborator on Verkle Tries Besu, and I've already started working on an issue.
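    The barycentric evaluation described above can be sketched over the rationals. The $O(d^2)$ weight precompute is done once per domain; after that, each evaluation costs only $O(d)$ operations:

```python
from fractions import Fraction

def barycentric_weights(domain):
    """Precompute w_i = 1 / prod_{j != i} (x_i - x_j); once per domain."""
    ws = []
    for i, xi in enumerate(domain):
        w = Fraction(1)
        for j, xj in enumerate(domain):
            if j != i:
                w /= (xi - xj)
        ws.append(w)
    return ws

def barycentric_eval(domain, evals, weights, z):
    """Evaluate the interpolating polynomial at z in O(d), directly from
    the evaluations, with no conversion to coefficient form."""
    if z in domain:
        return Fraction(evals[domain.index(z)])
    a = Fraction(1)
    for xj in domain:
        a *= (z - xj)  # A(z) = prod_j (z - x_j)
    return a * sum(Fraction(y) * w / (z - xi)
                   for xi, y, w in zip(domain, evals, weights))

# f(x) = x^2 on the domain {0, 1, 2}: evaluations [0, 1, 4]; f(5) = 25
dom = [0, 1, 2]
ws = barycentric_weights(dom)
assert barycentric_eval(dom, [0, 1, 4], ws, 5) == 25
```

    In the real setting the same formula runs over the scalar field, and the weights for the fixed verkle domain are baked in at build time.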
  • Hi there, Agnish here. The last 2 weeks have been pretty hectic for me; most of my time was consumed by some family urgencies. However, I managed to cover some decent portions of Verkle tries, both for the Nimbus client in the Nim programming language and for Hyperledger Besu in Java (with the crypto primitives in Rust). I studied the fundamental difference between Bandersnatch and Banderwagon, and how Banderwagon specifically solves the cofactor issue of the Twisted Edwards representation of the Bandersnatch curve. We can find it here and read more about it; most of these blogs are by Ignacio or Kev. Apart from this, I attended a call with the Nimbus team working on Verkle Tries and went through Mamy's repo Constantine, a constant-time crypto library in Nim. I took some time to dig deeper into the codebase, because I've been working on the Inner Product Arguments implementation in Nim. Additionally, I got myself added as a collaborator to the Nimbus verkle repo to start work on the Verkle Trie library. I also had some significant discussions on Telegram about how to go about the Besu implementation of Verkle Tries, and scheduled a call for the following week with Karim Taam from the Besu team and Only Dust, who have taken up the work of integrating Besu with the Java library for the same.
  • Refer to VerkleTrie_Besu. The current version uses Banderwagon and a Banderwagon-compatible IPA Multipoint implementation; refer to crate-crypto. Open questions:
    - How are we planning to use Banderwagon, and not Bandersnatch, for VerkleTries_Besu? Edwards projective coordinates do reduce the number of inverse operations during MSM; however, there's no separate Edwards projective crate for Banderwagon, we just have a plain field element with prime r (Fr). How do we go about this?
    - Is there a stable MSM crate (Rust) / Java library for Verkle Tries that supports the Montgomery trick for batch inversion (refer to Ignacio's blog)?
    - What amount of change is needed to the Bonsai Trie operations / calling the Bonsai Trie interface, as Matkt mentioned?
    - Do any RocksDB changes need to be made?
  • Hi, Agnish here. I finally decided upon a couple of things, but let's first talk about what I learnt the previous week. Sorry for delaying this so much; I was really overwhelmed with school and my day job. What I learnt / did: I dived deeper into Ignacio's notes and learnt more about affine and projective coordinates and how doubling, multiplication and inversion operations work on them. I studied the different forms in which curves like BLS12_381 and Bandersnatch can be represented, namely the Twisted Edwards, Weierstrass and Montgomery forms. Again, there's a Montgomery trick which often works for BLS12_381 but not Bandersnatch; you can read about it in Ignacio's blog and find my comment on the Montgomery trick, which explains how our goal should be to maximise multiplication ops and minimise inversion ops, as an inversion is ~100x costlier than a multiplication. I learnt ways to effectively benchmark MKTs and VKTs. I spoke to the Only Dust team from Quadratic Labs about proceeding further with the development of Java libraries of VKT for Besu, and asked Karim Taam, someone who actively contributed to creating Bonsai Tries for Hyperledger Besu, to be my mentor for the same. I had one meeting with them to understand the proper workflow for development and integration. I also had a chat with Zahary from Nimbus, and a call with the Nimbus team, where I decided to work on the Nim library for Inner Product Arguments and Multiproofs for the Verkle Trie library in Nim, for the Nimbus client.
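    The batch-inversion idea behind the Montgomery trick mentioned above trades many inversions for multiplications plus a single inversion. A sketch mod a small prime (the real code would run over the curve's base field; the prime here is illustrative):

```python
def batch_inverse(xs, p):
    """Invert every element of xs mod prime p using one inversion total:
    build prefix products, invert the grand product once, then peel the
    individual inverses off in reverse order."""
    prefix = [1]
    for x in xs:
        prefix.append(prefix[-1] * x % p)
    inv_total = pow(prefix[-1], p - 2, p)  # the single inversion (Fermat)
    invs = [0] * len(xs)
    for i in range(len(xs) - 1, -1, -1):
        invs[i] = prefix[i] * inv_total % p   # = (x_i)^-1
        inv_total = inv_total * xs[i] % p     # drop x_i from the running inverse
    return invs

p = 97
xs = [2, 3, 5, 11]
assert all(x * inv % p == 1 for x, inv in zip(xs, batch_inverse(xs, p)))
```

    For $n$ elements this costs $3(n-1)$ multiplications and one inversion, which is exactly why it pays off when an inversion is ~100x the cost of a multiplication.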
  • This is the week 1 update from me, Agnish. This week was pretty hectic, as I mostly spent my time balancing my day job and my EPF cohort work. So let's get straight to what I went through and learnt this week.
    18th July, 2023: Just got off the EPF meeting, and I decided that I now need to go through the individual projects and really set my mind to one. I started with the Ephemery Testnet by Mario Havel. At first I was a bit unsure about the terms ForkChoice and ForkDigest; then I realised that these are pretty essential concepts when it comes to the Consensus Layer of Ethereum. On further reflection, I understood that I pretty much had to dive deeper in order to understand how Proof of Stake really works.
    19th July, 2023: Verkle Trees in the Nimbus Execution Client was another major project that I had shortlisted lately and wanted to contribute to. So I watched some further videos on Verkle Trees, the most recent one hosted at ETHCC'23. For further reading and a deeper dive I went through Paritosh's notes on the Verkle transition. To finish off, I checked out some recent code implementations of Verkle Trees; the notable ones I found were one in Rust and, via Weihan's notes, one in Golang.
    20th July, 2023: Did further reading through Eth2book, starting with preliminary concepts like Practical Byzantine Fault Tolerance.
  • 8th July, 2023 - Saturday. Agnish:
    Reading:
    - Ethereum World State
    - Nodes and Clients
    - RLP encoding basics
    - Ethereum Yellow Paper Walkthrough Blogs 1-7
    Watching:
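    The RLP encoding basics on the reading list fit in a few lines; a minimal sketch covering only the short-form rules from the Yellow Paper (payloads up to 55 bytes, which is enough for the examples below):

```python
def rlp_encode(item):
    """Minimal RLP encoder for byte strings and (nested) lists,
    short forms only."""
    if isinstance(item, bytes):
        if len(item) == 1 and item[0] < 0x80:
            return item  # a single byte below 0x80 encodes as itself
        assert len(item) <= 55, "long-form strings not handled in this sketch"
        return bytes([0x80 + len(item)]) + item
    if isinstance(item, list):
        payload = b"".join(rlp_encode(x) for x in item)
        assert len(payload) <= 55, "long-form lists not handled in this sketch"
        return bytes([0xC0 + len(payload)]) + payload
    raise TypeError("expected bytes or list")

assert rlp_encode(b"dog") == b"\x83dog"
assert rlp_encode([b"cat", b"dog"]) == b"\xc8\x83cat\x83dog"
assert rlp_encode(b"") == b"\x80"
```

    The long forms (payloads over 55 bytes) add a length-of-length prefix byte, but the structure is the same.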
  • Hello there, my name is Agnish. I'm a permissionless participant here, only to prove my value. I've previously been associated with university research in NLP and LLMs. It's been a year since I moved into core Ethereum; before that I mainly did smart contract development. I really look forward to contributing to the Ethereum Protocol Fellowship - Cohort 4, and I'll be posting updates here on a weekly basis. As this was week 0 and mostly prerequisite stuff, I felt like utilising the week fruitfully, so pushing an update before the kickoff meeting (July 13th, 2023, 15:00 UTC) felt really important. I started by gathering up resources on several crucial topics, then started covering them one by one...
    July 8th, 2023: Pretty much unsure where to start, I began looking into some project ideas on the cohort repo. Figuring out what I don't know yet, yes, this was needed, so I started reading the Ethereum Yellow Paper right away. At first I was a bit unsure of the technical notation and language, hence I used something lighter to get a gist of it, part by part: I went through the Ethereum Yellow Paper Walkthrough, Parts 1-7. For further reading about the Ethereum World State, I went through this. Here I realised the following things:
  • This blog aims to give the reader a developer's view of KZG polynomial commitment schemes, as they are a more optimal approach than normal Merkle proofs. But before diving right in, we need to understand what a Blob is on Ethereum. Well, Blobs can be defined as large amounts of data, about 4096 × 32 bytes (around 128 KB). I hope everyone gets a better understanding after reading this :) Optimisation: Our job is to: 1) commit to the data succinctly, basically attesting to the existence and immutability of the data, and 2) prove values in the data set at a given position, say transaction number 27 on blob number 10, something like that. Previously, before EIP-4844
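    The blob-size figure and the size advantage over Merkle proofs both check out arithmetically; 48 bytes below is the size of a compressed BLS12-381 G1 point, which is what a KZG opening proof consists of:

```python
import math

FIELD_ELEMENTS_PER_BLOB = 4096
BYTES_PER_FIELD_ELEMENT = 32

blob_size = FIELD_ELEMENTS_PER_BLOB * BYTES_PER_FIELD_ELEMENT
assert blob_size == 131072       # bytes
assert blob_size == 128 * 1024   # i.e. exactly 128 KiB

# A Merkle proof over the 4096 elements needs log2(4096) = 12 sibling
# hashes per opening, while a KZG opening proof is a single 48-byte
# group element regardless of blob size.
merkle_depth = int(math.log2(FIELD_ELEMENTS_PER_BLOB))
assert merkle_depth == 12
assert merkle_depth * 32 == 384  # Merkle proof bytes per opening
KZG_PROOF_SIZE = 48              # bytes, one G1 point
assert KZG_PROOF_SIZE < merkle_depth * 32
```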