"Streaming" Aggregation techniques for Testudo
Goal
The goal is to allow a prover to prove the same circuit multiple times with different witnesses while only using memory linear in one proof, or with a small constant per proof.
In other words, we would like a snarkpack-like aggregation technique for Testudo. In Snarkpack, the prover generates Groth16 proofs sequentially and only keeps around each individual Groth16 proof before aggregating them all.
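Concretely, the target memory profile would be roughly \(O(|w| + K \cdot |\pi|)\) instead of \(O(K \cdot |w|)\): one witness resident at a time, plus the \(K\) small per-proof states kept for the final aggregation (this framing is ours, not Snarkpack's).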
## Context
Using the uniformity of the circuit already brings the cost down a lot for uniform circuits. However,
sometimes we need to compute many of these proofs at once. For example, in Snarkpack, miners first compute thousands of Groth16 proofs and then aggregate them with Snarkpack to publish only one aggregated proof onchain. The nice property of this model is that the memory required by the prover is linear but very small: only one Groth16 proof per statement.
Goal: have a snarkpack-like aggregation mechanism where the prover's memory can be linear but small, i.e. the prover is not required to keep the whole witness for each proof.
## High Level view of current Testudo
Testudo is composed of the following:
We are using Testudo on BLS12-377 and BLS12-381; both curves are part of a 2-chain model. More specifically, we can build another proof on top of a Testudo proof to "compress" the original proof. For BLS12-377 it would be Groth16 over BW6, and for BLS12-381 it is to be defined (an outer curve exists, but it is unclear which proof system to use yet).
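Recall what the 2-chain buys us: the outer curve's scalar field equals the inner curve's base field, e.g. \(r_{\mathrm{BW6\text{-}761}} = q_{\mathrm{BLS12\text{-}377}}\), so a Groth16 circuit over the outer curve expresses the inner curve's group arithmetic natively, with no non-native field emulation.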
## Strawman 1
Pros: simple to get.
Cons: limited in the number of aggregations:
* verifier work is linear in the number of aggregated proofs
* (a) circuit size will be the limiting factor
## Strawman 2
The problem is that the prover needs to keep all the individual polynomials (witnesses) in memory
before deriving the randomness \(r\), so it doesn't solve the problem.
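Presumably the combination step is a standard random linear combination, something like \(r = H(C_1, \ldots, C_K)\) and \(w(X) = \sum_{i=1}^{K} r^{i-1} w_i(X)\): since \(r\) only becomes available after all commitments \(C_i\) are produced, every \(w_i\) must still be resident at that point. This is exactly what the final proposition avoids by keeping only a small state per witness.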
## Final Proposition
To prove \(K\) relations, the prover does not need to keep all \(K\) witnesses, only \(K\) smaller states.
We call \(\pi_{PSC}\) the "Spartan Sumcheck Core": the two sumcheck proofs currently proved in Spartan. You can also think of it as the witness to our current Groth16.
The protocol:
On reception of witness \(w_i\) ("keeping" denotes keeping as state):
When the steps above are done for all witnesses:
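As a rough illustration of the intended shape of this loop, here is a minimal Rust sketch. All names (`PerProofState`, `commit`, `prove_psc`, `aggregate`) are hypothetical placeholders, not the actual Testudo API:

```rust
/// Small state kept per proof; the full witness is dropped after each step.
struct PerProofState<C, P> {
    commitment: C, // commitment to the witness polynomial
    pi_psc: P,     // the "Spartan Sumcheck Core" proof kept as state
}

/// Streaming prover loop: memory is one witness at a time plus K small states.
fn prove_many<W, C, P>(
    witnesses: impl Iterator<Item = W>,
    commit: impl Fn(&W) -> C,
    prove_psc: impl Fn(&W) -> P,
    aggregate: impl Fn(Vec<PerProofState<C, P>>) -> Vec<u8>,
) -> Vec<u8> {
    let mut states = Vec::new();
    for w in witnesses {
        // Commit to the witness and run the sumcheck core, then drop `w`:
        // only the small per-proof state survives across iterations.
        states.push(PerProofState {
            commitment: commit(&w),
            pi_psc: prove_psc(&w),
        });
    }
    // Only now, with all K commitments available, derive the aggregation
    // randomness (e.g. by hashing the commitments) and build the final proof.
    aggregate(states)
}
```

The point of this shape is that each witness never outlives its own iteration, so peak memory is one witness plus the accumulated small states, matching the memory profile stated in the Goal.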
Optimizations:
* send the committed \(U_i\)'s in the proof
## Verifier
\(K\) is the number of proofs to aggregate, \(N\) is the circuit size.
Either natively or in circuit using an "outer curve":
## State
\(O(K \sqrt{N})\)
## Proof size
\(O(K \log N)\)
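To get a feel for these numbers (an illustrative instance, not a benchmark): with \(K = 2^{10}\) proofs over a circuit of size \(N = 2^{20}\) and 32-byte field elements, the state is about \(K \sqrt{N} = 2^{20}\) elements, roughly 32 MB, whereas keeping all the witnesses would take \(K \cdot N = 2^{30}\) elements, roughly 32 GB.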
## Perf estimation (TODO)
We need the following numbers (optimization (a)):
A reasonable lower bound on the estimated proving time should be:
\(K \cdot (M_c + S + M_p) + C_c + P + M_m\)
The verification cost will be higher, but we plan to have a final proof system that verifies all
of this, for "compression".