# Bridge design to a PoW chain based on random sampling
We want a bridge design that is light enough to deploy on Ethereum 1.x. It would be too expensive to put 1000 public keys on Ethereum, so we basically have two choices: verify all signatures inside a succinct proof, or verify only a few signatures. This design tries to make the latter cryptoeconomically secure. The ideal security to aim for is that attacking the bridge is as expensive as the smaller of the market caps of DOT and Ethereum. Since we can only use small deposits or slash the few validators whose signatures are revealed, any individual attack attempt is necessarily much cheaper. However, we can make an attack very expensive in expectation by ensuring that an attack succeeds with very low probability and that failed attempts still cost the attacker. A successful attack is then very expensive to carry out with high probability, which deters a rational attacker.
The aim is an interactive protocol between a prover, who has access to all the votes cast for an outcome, and a verifier, who wants to know whether that outcome is true. The verifier asks the prover for signatures from a randomly selected sample of the validators who are claimed to have voted for the outcome.
This means that the validators must sign something after they know it is already final, which is an extra signature/vote beyond just that for Byzantine agreement.
Our aim is efficiency: the interaction should require the minimal number of signatures that the verifier has to inquire about to be convinced of the outcome with a certain probability. The verifier (the chain, in the case of a bridge) needs to find out whether at least one honest validator voted for the outcome; since fewer than 1/3 of validators are dishonest, it suffices to show that at least 1/3 of validators voted for it.
Since we are working with the assumption that at least 2/3 of validators are honest and up to almost 1/3 might be dishonest, we can only expect 2/3 of validators to vote for something.
**If all honest validators vote for the outcome that the prover tries to convince the verifier of, this is enough.**
To show that a single honest validator voted for an outcome, it suffices to argue that over 1/2 of the claimed 2/3 of voters voted for the outcome. This holds because the dishonest voters number fewer than 1/2 of the claimed set. For example, assume we have 100 voters who are validators of a chain. The chain has agreed on outcome X: 67 honest voters voted for it and 33 dishonest voters abstained. The prover wants to convince the verifier that X has been agreed upon. If the verifier inquired about 34 of the 67 claimed signatures, it could be certain that at least one of them is from an honest voter, since there are at most 33 dishonest voters. But to be convinced up to an error probability of 1/1000, the verifier only needs to inquire about 10 signatures: each successful inquiry halves the probability that the claim that X was agreed upon is false, and $2^{-10} < 1/1000$.
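The arithmetic in this example can be checked exactly. The sketch below (the function name is hypothetical) computes the worst-case probability that 10 validators sampled without replacement from the 67 claimed signers all turn out to have a real signature, when only the 33 dishonest ones actually signed:

```python
from fractions import Fraction

def cheat_success_prob(claimed=67, real_sigs=33, samples=10):
    """Probability that every one of `samples` validators drawn uniformly
    without replacement from the `claimed` signer set can produce a real
    signature, when only `real_sigs` of them actually signed."""
    p = Fraction(1)
    for i in range(samples):
        p *= Fraction(real_sigs - i, claimed - i)
    return p

print(float(cheat_success_prob()))  # below 1/1000
```

Sampling without replacement does slightly better than the simple $2^{-10}$ bound, since each exposed slot makes the remaining pool worse for the cheater.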
We first describe this as a light client that uses an interactive protocol, and then discuss how to implement it on Ethereum.
## The interactive protocol
A prover wants to convince a light client that at least $1/3$ of validators signed a statement, by claiming that a specific set of at least $2/3$ of validators signed it. We assume that the light client already has a Merkle root $r_{val}$ of validator public keys.
1. The prover sends the light client the statement $S$, a bitfield $b$ of the validators claimed to have signed it (claiming that more than $2/3$ of validators signed $S$), and one signature $sig$ on $S$ from an arbitrary validator, together with a Merkle proof from $r_{val}$ of that validator's public key.
2. The light client verifies the backing signature and its proof, and asks for $k_{approval}$ validators chosen at random among those that $b$ claims signed $S$.
3. The prover sends the signatures $sig_i$, Merkle proofs of public keys from $r_{val}$ and Merkle proofs of signatures from $r_{sig}$ corresponding to the validators the light client asked for.
4. If all these signatures are also correct then the light client accepts the block.
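Under toy assumptions, the exchange in steps 1-4 might be sketched as follows. The "signature" here is just a hash recomputation, and the toy client holds key material directly instead of only $r_{val}$, so Merkle proofs are elided; all names are illustrative, not part of any real implementation:

```python
import hashlib
import secrets

K_APPROVAL = 10  # number of sampled signatures

def toy_sig(secret: bytes, statement: bytes) -> bytes:
    # stand-in for a real signature; "verification" is recomputation
    return hashlib.sha256(secret + statement).digest()

class Prover:
    def __init__(self, statement: bytes, sigs_by_index: dict):
        self.statement = statement
        self.sigs = sigs_by_index            # validator index -> signature
        self.bitfield = set(sigs_by_index)   # claimed signer set b

    def respond(self, challenge):
        # hand over the requested signatures (a missing one fails the check)
        return {i: self.sigs.get(i, b"") for i in challenge}

class LightClient:
    def __init__(self, validator_keys):
        # a real light client would only hold the Merkle root r_val of
        # public keys; the toy client keeps the keys to recheck toy_sig
        self.keys = validator_keys

    def challenge(self, bitfield):
        claimed = sorted(bitfield)
        assert 3 * len(claimed) > 2 * len(self.keys)  # claims > 2/3 signed
        return [secrets.choice(claimed) for _ in range(K_APPROVAL)]

    def accept(self, statement: bytes, responses: dict) -> bool:
        return all(sig == toy_sig(self.keys[i], statement)
                   for i, sig in responses.items())
```

With 100 validators of which 67 sign, an honest prover passes every challenge, while a prover whose bitfield overstates the signer set gets caught with probability at least $1 - 2^{-K_{APPROVAL}}$ per run.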
Analysis: if at least $2/3$ of validators are honest but no honest validator signed $S$, then at least $1/2$ of the validators the prover claimed signed $S$ did not. Each sampled validator therefore exposes the fraud with probability at least $1/2$, so the proof succeeds with probability at most $2^{-k_{approval}}$.
Furthermore, if signing an incorrect statement $S$ is slashable by at least $minsupport$, and the light client reports the initial claim, then at least $minsupport$ of stake can be slashed for an incorrect initial statement. Now if at least $2/3$ of validators are honest, the proof succeeds with probability at most $2^{-k_{approval}}$, so a successful attack has an expected cost of $2^{k_{approval}} \cdot minsupport$.
## Implementing this on a PoW Ethereum chain
In this case the light client is a smart contract. We can use a block hash as a source of randomness, although this can be manipulated.
To ensure that even an adversary with significant Ethereum hashpower still takes on an unknown risk when submitting backing signatures on an invalid statement to the light client, we use the block hash of the block exactly 100 blocks after the block in which the original claim transaction was included as the source of randomness.
Now, if we e.g. assume that over $2/3$ of the hashpower of Ethereum is owned by honest miners and that over $1/2$ of the claimed signers are honest validators who didn't sign $S$, then we can analyse the maximum probability that the 100th block after the first transaction was included has a blockhash that results in the test succeeding against any strategy by the adversarial miners. This will involve building a Markov chain model of the worst case and proving that the bad guys can't do better. A back of the envelope calculation gave me that this would be something like $7p/2$ chance of success vs $p$ for a random number.
Now we'd rather argue about rational miners than honest ones. In this case, producing a block with a hash that fails the test, which happens with probability $1-p$, would gain the miner some block reward $R$ if they released it. Withholding such blocks until one passes the test therefore costs $(1-p)/p$ block rewards in expectation. With $p=2^{-k_{approval}}$, this is $(2^{k_{approval}}-1)R$. With $R=5$ ETH and $k_{approval} = 25$, this would be 167,772,155 ETH, which is more than the 112,421,804 ETH currently in existence. Something like this would be secure enough for rational miners to be honest even if there was only one mining pool for Ethereum.
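The expected-cost figure above can be reproduced with exact arithmetic (a minimal sketch; the constants are those from the text):

```python
from fractions import Fraction

BLOCK_REWARD = 5                    # R, in ETH
k_approval = 25
p = Fraction(1, 2 ** k_approval)    # chance a random hash passes the test

# Expected block rewards forfeited by withholding failing blocks
# until one passes: (1 - p) / p = 2**k_approval - 1 attempts.
expected_cost_eth = (1 - p) / p * BLOCK_REWARD
print(expected_cost_eth)  # 167772155, i.e. (2**25 - 1) * 5
```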
### The protocol
1. First a transaction including the data as in 1. above:
*the statement $S$, a bitfield $b$ of the validators claimed to have signed it (claiming that more than $2/3$ of validators signed $S$), and one signature $sig$ on $S$ from an arbitrary validator, together with a Merkle proof from $r_{val}$ of that validator's public key*
is placed on the Ethereum chain. The smart contract validates the signature and Merkle proof from the $r_{val}$ stored on chain. If this passes, it records $S$, $b$, the block number $n$ where this transaction is included and maybe another id or counter $id$ for disambiguation.
2. Nothing happens until the block number is at least $n+k+1$. At this point, a designated[^designation-motivation] relayer (probably the same as sent the first transaction), can send a second transaction. The blockhash of block $n+k$ is used as a pseudorandom seed to generate $k_{approval}$ random validators from the $b$ validators who signed. The relayer generates a second transaction containing $S$, $id$, and these signatures.
3. The smart contract processes this transaction. It generates the pseudorandom validators to check from the blockhash of the $n+k$th block. It then checks whether these signatures were included, whether they are correct and whether the Merkle proofs from $r_{val}$ are correct. If so, it accepts $S$ as having happened on Polkadot.
Assuming the relayer used the correct blockhash and has all the signatures they claimed, this should succeed. The pseudorandomness is probably generated by repeatedly hashing the blockhash.
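The "repeatedly hashing the blockhash" idea might look like the sketch below (hypothetical function; a real contract would use keccak256 rather than SHA-256, and would want to handle the small modulo bias):

```python
import hashlib

def sample_validators(blockhash: bytes, bitfield, k_approval: int):
    """Derive k_approval validator indices from the seed blockhash by
    repeated hashing. Both the relayer and the smart contract run the
    same derivation, so they agree on which signatures are required."""
    claimed = sorted(bitfield)  # indices of validators that b claims signed
    seed, picks = blockhash, []
    for _ in range(k_approval):
        seed = hashlib.sha256(seed).digest()
        picks.append(claimed[int.from_bytes(seed, "big") % len(claimed)])
    return picks
```

Because the derivation is deterministic in the blockhash, the relayer can precompute the required signatures for the second transaction and the contract can recheck the selection on chain.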
### Relayer designation procedure
<!---(temporary working name for 1st & 2nd transactions: initialization & finalization transactions)--->
The second transaction (finalization transaction) is expensive compared to the first (initialization transaction). Thus, we need a mutual exclusion protocol that ensures that the finalization transaction is only submitted once by one of the protocol-abiding relayers.
<!--This can be solved by designating a single relayer only to submit this transaction, within some timeout period.-->
The most apparent choice of this relayer is the author of the first transaction. Since the bridge can be attacked by intentionally timing out on the finalization transaction, we require collateral to be locked by the designated relayer to secure the mutex lock for them. In this case, the designated relayer should already be chosen before the initialization transaction.
Since the light client will be unaware of the designation choice, it can nonetheless be blocked with illegitimate initialization transactions, since these are comparatively cheap. A possible solution is to have the light client store a bounded number of pending initialization transactions concurrently, but require each of them to lock collateral with the light client as well.
If we use the author of the initialization transaction, we still need to cater for the possibility that they -- intentionally or not -- time out on the second transaction.
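The designation and timeout logic above might be modelled as follows (a toy sketch; the class, constants, and collateral handling are illustrative, not a contract design):

```python
TIMEOUT_BLOCKS = 200  # blocks the designated relayer has to finalize
MAX_PENDING = 8       # bound on concurrently stored initialization claims

class DesignationMutex:
    """Toy model of the relayer mutex: the initializing relayer locks
    collateral and holds the exclusive right to finalize until a
    deadline; after the deadline anyone may finalize (and the locked
    collateral could be slashed for the timeout)."""
    def __init__(self):
        self.pending = {}  # claim id -> (relayer, deadline)

    def initialize(self, claim_id, relayer, block_now):
        if len(self.pending) >= MAX_PENDING:
            raise RuntimeError("pending claim limit reached")
        # in a real contract, `relayer` locks collateral here
        self.pending[claim_id] = (relayer, block_now + TIMEOUT_BLOCKS)

    def may_finalize(self, claim_id, sender, block_now):
        relayer, deadline = self.pending[claim_id]
        if block_now <= deadline:
            return sender == relayer  # mutex still held
        return True                   # timed out: open to anyone
```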
### Optimistic scheme
A protocol with competing initialization transactions for a given statement $S_n$ is only required whenever
1. the designated relayer for $S_n$ times out on the initialization transaction or submits a malicious statement $S'_n$
2. the designated relayer for $S_n$ submits a valid initialization transaction, but times out on the finalization transaction
Assuming that these costly situations are rare, we can take an optimistic approach: on the Polkadot side, we commit to the relayer for $S_n$ much earlier and include their identity in the statement $S_{n-k}$. Here $k$ is a tradeoff between, on the one hand, the minimum distance between Polkadot blocks that we relay and the number of backup relayers, and on the other, the range of statements for which we must pre-commit to designated relayers.
This would then allocate a mutex for $S_n$ for a block range within which the smart contract will only accept initialization transactions from the pre-elected relayer. If and once the relayer chosen in $S_{n-k}$ times out on the initialization transaction for $S_n$, the relayer elected in $S_{n-k+1}$ (for relaying statement $S_{n+1}$) attains the mutex for $S_n$ on the initialization transaction and submits this initialization transaction instead.
This scheme can be iterated for up to $k-1$ failures, at which point we must revert to a protocol with competing initialization transactions. As such, $k$ increases the number of backup relayers we can have to remain within the optimistic scheme, but thus also increases the impact a sequence of colluding designated relayers can have.
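The fallback order might be summarized in a small helper (hypothetical, under my reading of the scheme: after each timeout, the relayer pre-elected for the next statement takes over, and beyond $k-1$ failures we fall back to open competition):

```python
def designated_relayer(elected, n, k, failures):
    """Who holds the initialization mutex for statement S_n after
    `failures` consecutive timeouts. elected[m] is the relayer whose
    identity was committed in S_{m-k} for relaying S_m. Returns None
    once the optimistic scheme is exhausted and competing
    initialization transactions must be allowed."""
    if failures >= k:
        return None  # revert to a protocol with competing transactions
    return elected.get(n + failures)
```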
[^designation-motivation]: The motivation for having a designation procedure is that the second transaction will be very expensive, yet we will only refund one single relayer for sending it