The smart contract state contains the list of registered `pk`s.
| Method | Operation cost | Calldata cost | Estimate per pubkey |
|---|---|---|---|
| Single Deposit | `store_word + base_fee` | `word` | 40k |
| Batch Deposit | `store_word * B` | `word * B` | 20k |
| Single Withdraw | `hash + base_fee` | `word` | 40k |
| Batch Withdraw | `B * hash` | `word * B` | 20k |
Estimates are for `depth = 20` (this table is borrowed from https://hackmd.io/JoxnlDq3RT6WhtA-KBxtYg?both#A-Pubkey-map; the depth has no effect on the gas cost).
In the description below, an epoch equals `M` seconds.
The user maintains the entire tree. The full tree with depth 20 requires 67 MB of storage. She can also maintain a partial tree, which lowers the storage overhead to 0.128 KB. However, slashing becomes a problem when users only hold the partial tree. The user always persists the tree roots for the last `E` epochs.
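
As a sanity check, the 67 MB figure above follows from storing every node of a complete binary tree of depth 20 as a 32-byte hash; a minimal sketch:

```python
# Back-of-the-envelope storage estimate for the full tree, assuming
# each node (leaf or internal) is a 32-byte hash value.
depth = 20
num_nodes = 2 ** (depth + 1) - 1      # leaves + internal nodes + root
print(num_nodes * 32 / 1e6)           # ~67.1 MB, matching the 67 MB figure
```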
The user sends a transaction containing her `pk` and `X` ETH (in addition to the transaction fee) to the smart contract. That transaction invokes the register function of the contract. As a result, the user's `pk` gets inserted into the list of `pk`s. The user needs to listen for an insertion event to be emitted with her `pk` and some `index`. After that, the user can start using waku-rln-relay.
The user also needs to read the state of the contract and download all the registered `pk`s. The user then creates a Merkle tree and its root out of the group `pk`s. The user can use that tree to compute her authentication path `auth_path`.
The estimated gas cost is 40k. This is for performing the insertion without locking any ether for the sake of slashing.
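
A registration call could look like the following sketch (using a web3.py v6-style API); the endpoint, contract address, ABI, `register` signature, and the deposit amount `X` are all placeholders, not the actual deployed interface:

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # placeholder endpoint

# Hypothetical ABI fragment; the real contract interface may differ.
abi = [{
    "name": "register",
    "type": "function",
    "stateMutability": "payable",
    "inputs": [{"name": "pubkey", "type": "uint256"}],
    "outputs": [],
}]
contract = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",  # placeholder
    abi=abi,
)

pk = 0x1234  # the user's RLN public key (illustrative value)
tx_hash = contract.functions.register(pk).transact({
    "from": w3.eth.accounts[0],
    "value": w3.to_wei(1, "ether"),  # the deposit X, assumed here to be 1 ETH
})
receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
```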
TODO One thing to keep in mind is that the state of the contract will expand as a function of the number of public keys registered in it. It isn't unlikely that we'll hit a limit here with regard to contract storage. Currently on Ethereum mainnet, there seems to be a 24 KB limit for contract creation. It isn't clear if this limit is the same for updating a contract. The contract size limit of 24 KB was introduced as an anti-DoS mechanism. This also relates to the storage rent discussion (which doesn't seem to have led anywhere at the time of this writing, but might change with things like stateless clients in Eth2). Things might look different on L2s etc. To be investigated further.
The user keeps listening to the state of the contract for the `pk` insertion and `pk` deletion events.
If the user goes offline, she can catch up with the current state of the group by looking at the current state of the contract (no need to listen to past events). A deleted `pk` is replaced with a zero, which indicates the deletion operation/event.
She reflects each perceived update in her local Merkle tree and recomputes the root and her `auth_path`. The full tree with depth 20 requires 67 MB of storage.
The user always persists the tree roots for the last `E` epochs (measured in number of blocks or seconds).
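
A minimal sketch of this synchronization step; the hash function, event format, and a toy depth stand in for the real ones (RLN uses a circuit-friendly hash such as Poseidon or MiMC, and the events come from the contract logs):

```python
import hashlib

DEPTH = 4                    # 20 in practice; kept small so the sketch runs fast
ZERO = b"\x00" * 32          # a deleted pk is replaced with zero

def H(left, right):
    # Stand-in for the circuit-friendly hash used by RLN.
    return hashlib.sha256(left + right).digest()

def root(leaves):
    level = list(leaves)
    while len(level) > 1:
        level = [H(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

leaves = [ZERO] * (2 ** DEPTH)

# Events as (kind, index, pk) tuples; in practice these are the contract's
# insertion/deletion logs.
events = [("insert", 0, b"\x01" * 32),
          ("insert", 1, b"\x02" * 32),
          ("delete", 0, None)]

for kind, index, pk in events:
    leaves[index] = pk if kind == "insert" else ZERO
    current_root = root(leaves)  # naive O(n); incremental updates need only DEPTH hashes
```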
The user:
1. Selects a message.
2. Measures the current epoch (i.e., the number of epochs passed since the Unix epoch).
3. Uses her local Merkle tree to obtain the root and her auth path (the Merkle tree gets constantly updated in the background during the synchronization-with-the-contract phase; think of it as a parallel process that keeps the tree updated based on the incoming events).
4. Computes the Shamir shares of her `sk` and the `nullifier`, and creates the zkSNARK proof.
5. Publishes the message with the proof and the other public inputs (`epoch`, `root`, Shamir shares, `nullifier`) to the waku-rln-relay pubsub topic.
No gas cost.
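
The epoch measurement above reduces to integer division of the current Unix time; a one-line sketch, with `M` as an illustrative value:

```python
import time

M = 10                           # epoch length in seconds (illustrative)
epoch = int(time.time()) // M    # number of epochs passed since the Unix epoch
```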
The user receives a message with the proof (via waku-relay) and:
1. Checks that the pubsub topic matches the waku-rln-relay pubsub topic.
2. Checks whether the tree root attached to the message matches any of the roots recorded within the last `E` epochs (this is a raw idea and needs more thought; there is no enforcement, it is just a parameter). If not, discards the message.
3. Otherwise, checks the `nullifier`; if there is a match, checks whether the messages are the same or not (by comparing the `share_x` components); if they are different, proceeds with slashing.
4. Otherwise, verifies the proof. If the proof is verified, relays the message; otherwise, discards it.
Note that we are assuming the verifier is honest; there is no on-chain verification or dispute mechanism. (The same is true for the on-chain case.)
If the prover is ahead of the verifier, the prover can attach the version of the tree to the message; the verifier can then realize she is behind and update herself.
No gas cost.
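
The receive-side logic can be summarized with the following skeleton; the message fields, the `verify_proof` callback, and the bookkeeping structures are assumptions about the wire format rather than the actual implementation:

```python
def on_message(msg, recent_roots, seen, verify_proof):
    """Skeleton of the verifier-side checks; `seen` maps
    (epoch, nullifier) -> share_x of an already-relayed message."""
    if msg.root not in recent_roots:           # roots of the last E epochs
        return "discard"
    key = (msg.epoch, msg.nullifier)
    if key in seen and seen[key] != msg.share_x:
        return "slash"                         # two distinct shares: sk is recoverable
    if not verify_proof(msg):
        return "discard"
    seen[key] = msg.share_x
    return "relay"
```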
The slasher reconstructs the spammer's `sk_spam` using the two distinct Shamir shares.
The slasher identifies the `index_spam` of the spammer's `pk` in the group (the index of the tree leaf holding the `pk`). It does so by comparing `H(sk_spam)` with all the leaves of the tree. Therefore, the entire list of `pk`s (tree leaves) must be available for this search to work.
The slasher sends a transaction containing `sk_spam` and `index_spam` to the contract, which in turn invokes a function corresponding to slashing (`pk` deletion). The deletion/slashing function checks whether `H(sk) = list[index]`. Accordingly, the fund of the deleted member is transferred to the transaction owner.
The estimated gas cost is 40k.
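
Since the shares are points on the degree-one polynomial `A(x) = sk + a1*x`, two distinct shares from the same epoch pin down `sk = A(0)` by Lagrange interpolation; a sketch, assuming the BN254 scalar field used by typical zkSNARK circuits:

```python
# BN254 scalar field modulus (an assumption about the circuit's field).
p = 21888242871839275222246405745257275088548364400416034343698204186575808495617

def reconstruct_sk(x1, y1, x2, y2):
    # Lagrange interpolation at x = 0 through the two share points.
    return (y1 * x2 - y2 * x1) * pow(x2 - x1, -1, p) % p

# Quick self-check with a toy polynomial A(x) = sk + a1*x.
sk, a1 = 7, 11
(x1, y1), (x2, y2) = [(x, (sk + a1 * x) % p) for x in (3, 5)]
assert reconstruct_sk(x1, y1, x2, y2) == sk
```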
The state of the smart contract contains:

- `RMT`: the rightmost branch of the tree (which includes the tree root as well).

The cost model:

- `update = hash * depth` (to recalculate the root)
- `update_batch = hash * (depth - b) + hash * (B - 1)` (where `B = 2^b` is the batch size)
- `inclusion_check = hash * depth` (the inclusion check proves that the deleted `pk` belongs to the tree)
| Method | Operation cost | Calldata cost | Estimate per pubkey |
|---|---|---|---|
| Single Deposit (Register) | `update` | `word` | ~406k |
| Batch Deposit | `update_batch` | `word * B` | ~23.02k |
| Single Withdraw (Delete) | `update + inclusion_check` | `word * depth` (depth is for the `auth_path`) | ~812.5k |
Estimates are for `depth = 20` (this table is derived from https://hackmd.io/JoxnlDq3RT6WhtA-KBxtYg?both#A-Pubkey-map; the values are updated for `d = 20` and `B = 128`).
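
As a rough sanity check on the batch figure: the single-deposit estimate implies about 20.3k gas per on-chain hash (406k for `depth = 20` hashes), and plugging that into `update_batch` lands near the ~23.02k per-pubkey value. The gas-per-hash constant here is an inference from the table, not a measured number:

```python
depth, B = 20, 128
b = B.bit_length() - 1               # log2(B) = 7, assuming B = 2**b
gas_per_hash = 406_000 / depth       # implied by the single-deposit estimate
batch_hashes = (depth - b) + (B - 1) # update_batch from the cost model above
print(gas_per_hash * batch_hashes / B)  # ~22.2k per pubkey, near ~23.02k
```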
In the description below, an epoch equals `M` seconds.
The user maintains her `auth_path` and the root. For a tree with depth `d = 20`, she requires:

- `auth_path`: 20 hash values
- `root`: 1 hash value

The user always caches the tree roots for the last `E` epochs.
While the tree roots for all the past epochs are available on chain, caching some portion of the epochs locally speeds up the message verification process; otherwise, the user has to read all the emitted roots from the contract for every incoming message with a mismatching root.
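
The resulting client-side storage is tiny; a sketch of the arithmetic, with an illustrative value for `E` and assuming 32-byte hashes:

```python
depth, E = 20, 100                    # E is illustrative; the spec leaves it open
auth_path_bytes = depth * 32          # 20 sibling hashes = 640 bytes
root_cache_bytes = (1 + E) * 32       # current root plus the last E epoch roots
print(auth_path_bytes + root_cache_bytes)  # ~3.9 KB for E = 100
```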
The user sends a transaction that contains her `pk` together with `X` ether to be locked in the contract. The `X` ether is in addition to the gas fee she has to pay for the transaction to be mined. That transaction calls the register function of the contract. That function recalculates the tree root after the inclusion of the `pk` in the tree. The registration function emits the insertion event with the inserted `pk`, its `index`, and the new `root'`.
The user needs to listen for an insertion event to be emitted with her `pk`, some `index`, and the `root`. After that, the user can start using waku-rln-relay.
The user computes her `auth_path` using the state of the contract, i.e., `RMT`, at the time of the insertion of her `pk`.
The estimated gas cost is ~406k. This is the cost of performing the insertion without locking any ether for the sake of slashing.
The user keeps listening to the state of the contract for the `pk` insertion and `pk` deletion events.
With each perceived update she updates her `auth_path` and stores the newly perceived `root`.
The `pk` insertion event emits the inserted `pk`, `index`, `auth_path`, and the new `root`. She recalculates her `auth_path` using the new `pk` and the new `pk`'s `auth_path` (no further information is needed).
The `pk` deletion event carries the deleted `pk_del`, `index_del`, `auth_path_del`, and the `root'`, i.e., the updated root. The user updates her `auth_path` using the `auth_path_del`.
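
This local `auth_path` update needs only the changed leaf's index, its new value, and its auth path: walking up the changed leaf's path, whenever one of its ancestors is exactly our sibling at some level, we refresh that sibling. A sketch, with the hash function and left/right ordering as assumptions; deletion is the same operation with the new leaf value set to zero:

```python
def update_my_path(i, my_path, j, j_path, new_leaf, depth, H):
    """Refresh leaf i's auth path after leaf j changed to new_leaf.
    my_path[l] / j_path[l] hold the sibling at level l (level 0 = leaves)."""
    node = new_leaf
    for l in range(depth):
        if (j >> l) == ((i >> l) ^ 1):
            # The subtree rooted at j's ancestor at level l is exactly
            # my sibling at that level, so its hash just changed.
            my_path[l] = node
        # Climb one level along j's path to the root.
        if (j >> l) & 1 == 0:
            node = H(node, j_path[l])
        else:
            node = H(j_path[l], node)
    return my_path, node   # node is now the new tree root
```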
Offline users have to look at the past events to update their auth paths. This might be aggregated by some event-aggregation service such as The Graph; otherwise, the query time/complexity will be a function of the offline time.
The user:
1. Selects a message.
2. Measures the current epoch (i.e., the number of epochs passed since the Unix epoch).
3. Uses her `auth_path` and the Merkle tree root to generate the proof. Note that the tree root and `auth_path` get constantly updated in the background during the synchronization-with-the-contract phase; think of it as a parallel process that keeps the tree updated based on the incoming events.
4. Computes the Shamir shares of her `sk` and the `nullifier`, and creates the zkSNARK proof.
5. Publishes the message with the proof and the other public inputs (`epoch`, `root`, Shamir shares, `nullifier`) to the waku-rln-relay pubsub topic.
No gas cost.
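
A sketch of the share and nullifier computation, assuming the RLN construction in which `a1 = H(sk, epoch)`, the share is the point of `A(x) = sk + a1*x` at `x = H(message)`, and `nullifier = H(a1)`; SHA-256 reduced into the field stands in for the circuit's Poseidon hash:

```python
import hashlib

p = 21888242871839275222246405745257275088548364400416034343698204186575808495617

def H(*args: int) -> int:
    # SHA-256 reduced into the field; a stand-in for Poseidon.
    data = b"".join(a.to_bytes(32, "big") for a in args)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % p

def make_share(sk: int, epoch: int, message: bytes):
    # Epoch-bound slope: two distinct messages in the same epoch yield
    # two points on the same line, which leaks sk (the slashing property).
    a1 = H(sk, epoch)
    x = int.from_bytes(hashlib.sha256(message).digest(), "big") % p
    y = (sk + a1 * x) % p             # the published Shamir share (x, y)
    nullifier = H(a1)
    return (x, y), nullifier
```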
The user receives a message with the proof (via waku-relay) and:
1. Checks that the pubsub topic matches the waku-rln-relay pubsub topic.
2. Checks whether the tree root attached to the message matches any of the roots recorded within the last `E` epochs (this is a raw idea and needs more thought). If not, discards the message.
3. Otherwise, checks the `nullifier`; if there is a match, checks whether the messages are the same or not (by comparing the `share_x` components); if they are different, proceeds with slashing.
4. Otherwise, verifies the proof. If the proof is verified, relays the message; otherwise, discards it.
Note that users do not, and MUST NOT, accept messages whose proofs use an old root; a spammer may attempt to use an old version of the tree in which her `pk` was still part of the group.
(A proposal) All the roots preceding a `pk` deletion event should be deleted.
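
A minimal sketch of the proposed bookkeeping: keep an acceptance window of recent roots and invalidate everything that precedes a deletion event. The class and method names are illustrative, not part of any specified API:

```python
from collections import deque

class RootWindow:
    """Keep the roots of the last E epochs; drop everything that
    precedes a pk deletion event (the proposal above)."""
    def __init__(self, E):
        self.roots = deque(maxlen=E)

    def on_new_root(self, root):
        self.roots.append(root)

    def on_deletion(self, post_deletion_root):
        # All roots preceding a pk deletion are invalidated.
        self.roots.clear()
        self.roots.append(post_deletion_root)

    def is_acceptable(self, root):
        return root in self.roots
```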
TODO There are some subtleties to consider here. A prover can either be ahead of a verifier's root or behind it. Both cases should have well-defined semantics in terms of how up to date you are with the chain assumptions, etc.
Example: 100 nodes, 10 light nodes that can only do batch queries of past events (e.g. due to an Infura rate limit), and 1 spammer. After the spammer is slashed, the tradeoff is between (a) the damage they can do to the system over some elastic/slack period vs. (b) the damage this does to the 10 light nodes in terms of requiring them to be up to date.
An argument could be made that relay nodes should just DO THEIR JOBS. But Infura might make you pay $$ :(
Isolate these as parameters and make the behaviour explicit; then we can just set some reasonable defaults for now.
No gas cost.
The slasher reconstructs the spammer's `sk_spam` using the two distinct Shamir shares.
Identifying the spammer's `index_spam` and `auth_path_spam`: the user needs to go through the history of events of the contract and identify the index of the spammer's `pk` in the group (the index of the tree leaf holding the `pk`). It does so by comparing `H(sk_spam)` with all the inserted `pk`s.
The slasher also needs to find the `auth_path_spam`. This can be done either (1) by querying a full node that persists the entire tree, or (2) by recalculating the tree from scratch locally.
The slasher sends a transaction containing the `sk_spam`, the `index_spam`, and the `auth_path_spam` to the contract, which in turn invokes a function corresponding to slashing (`pk` deletion). The deletion/slashing function checks the inclusion of the `pk_spam`. Accordingly, the fund of the deleted member is transferred to the transaction owner. The function also recalculates the root based on the submitted `auth_path_spam`. (Side note: we currently have an open issue regarding a race, i.e., where two users simultaneously slash the same user.)
The estimated gas cost is ~812.5k.
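
A sketch of the slasher-side search, assuming a hypothetical `inserted_pks` list replayed from the contract's insertion events:

```python
def find_spammer_index(sk_spam, inserted_pks, H):
    """Scan the insertion history for the leaf holding H(sk_spam).
    inserted_pks: list of (index, pk) pairs replayed from contract events."""
    target = H(sk_spam)
    for index, pk in inserted_pks:
        if pk == target:
            return index
    return None

# The auth_path_spam is then obtained from a full node or by rebuilding the
# tree locally from the replayed events, and submitted together with sk_spam
# and index_spam to the contract's slashing function.
```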