# Introduction
### Definitions
###### **(Canonical) Bridge** (_noun_)
In the context of blockchains, a **bridge** or **canonical bridge** establishes a guarantee that:
1. A synthetic asset on the destination chain is backed $1:1$ by a native asset on the origin chain,
2. The synthetic asset can only be minted at a $1:1$ ratio by correct locking of the native asset in the origin chain, and
3. A locked native asset can only be redeemed by correct burning of the synthetic asset in the destination chain.
###### **Bridge** (_verb_)
In the context of blockchains, the act of **bridging** assets consists of exchanging a native asset in an origin chain for a synthetic asset in a destination chain, or vice versa. In order for bridging of an asset to be possible, we _require_ the existence of a bridge, which guarantees the validity of the synthetic asset.
###### **Bridge canonically** (_verb_)
The act of using the _canonical bridge_ for bridging assets.
###### **sBTC** (_noun_)
A synthetic BTC token on a destination chain secured by a canonical bridge.
###### Provable Program (_noun_)
A provable program is a program that can run inside a [zkVM](https://www.lita.foundation/blog/zero-knowledge-paradigm-zkvm), producing a cryptographic proof of its correct execution. This proof ensures computational integrity, allowing verification without re-executing the program.
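As a minimal, illustrative sketch (not part of the bridge design itself), a provable program written for a Rust-based zkVM such as SP1 might look like the following, where the I/O helpers read the prover's inputs and commit public outputs to the proof:
```rust
// Minimal SP1-style guest program: read an input, run some computation,
// and commit the result as a public value of the resulting proof.
#![no_main]
sp1_zkvm::entrypoint!(main);

pub fn main() {
    // Input provided by the prover.
    let n = sp1_zkvm::io::read::<u32>();

    // Arbitrary logic whose correct execution the proof attests to
    // (here, the n-th Fibonacci number).
    let (mut a, mut b) = (0u64, 1u64);
    for _ in 0..n {
        let c = a.wrapping_add(b);
        a = b;
        b = c;
    }

    // Public output: a verifier learns this value without re-executing.
    sp1_zkvm::io::commit(&a);
}
```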
###### Self-Recursive (_adjective_)
We define a provable program to be _self-recursive_ if:
1. It verifies a proof of its own execution, and
2. It proves some additional logic.
:::spoiler Self-Recursive Programs and zkVMs
In the context of zkVMs, we use a _verification key_ (sometimes called a _program ID_) as a commitment to the program being proven. In order to ensure that a program is self-recursive, we need to assert that the proof being verified _inside the program_ is _of the same program_, i.e., it has the same verification key.
A naive approach is to hardcode the verification key of the program inside the program itself, but doing so **changes the verification key**, making this approach equivalent to finding a hash collision. The only practical solution is to **commit to the verification key** of the program whose proof is being verified inside the program. We can guarantee self-recursion by asserting that this committed verification key matches that of the self-recursive program.
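A minimal sketch of this pattern, assuming SP1-style recursive proof verification (the `verify_sp1_proof` helper, its signature, and the public-values encoding are assumptions made for illustration):
```rust
#![no_main]
sp1_zkvm::entrypoint!(main);

use sha2::{Digest, Sha256};

pub fn main() {
    // The verification key of the inner proof is an *input*, not a
    // hardcoded constant (hardcoding it would change our own key).
    let inner_vkey = sp1_zkvm::io::read::<[u32; 8]>();
    let inner_public_values = sp1_zkvm::io::read::<Vec<u8>>();

    // Verify the inner proof against that key (illustrative helper).
    let pv_digest: [u8; 32] = Sha256::digest(&inner_public_values).into();
    sp1_zkvm::lib::verify::verify_sp1_proof(&inner_vkey, &pv_digest);

    // ... prove some other logic here ...

    // Commit the inner verification key as a public value. The outside
    // verifier then asserts that this committed key equals the key of
    // *this* program, which is what makes the program self-recursive.
    sp1_zkvm::io::commit(&inner_vkey);
}
```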
:::
# Peg Out Process
The Peg Out process refers to the act of burning an amount of synthetic Bitcoin, sBTC, on a destination chain, and unlocking an equal amount of BTC on Bitcoin.
A registered agent initiates a Peg Out request by specifying a Peg In ID that **belongs to them** and then burning the corresponding amount of synthetic Bitcoin on the destination chain. Once this burn is considered _finalized_ in the destination chain, the agent can generate a _proof of withdrawal_. Upon verification of this proof on Bitcoin via _BitSNARK_, the BTC is unlocked and the bridging process is complete.
### Smart Contract
#### Burning Tokens
The smart contract maintains a mapping that tracks the state of each performed Peg In. Note that, while sBTC is a fungible token, Peg Ins are not: each Peg In secures an arbitrary amount of BTC. An agent initiates a Peg Out by burning the sBTC amount corresponding to a given Peg In ID. This in turn nullifies the Peg In ID by changing its state from `Minted` to `Burned`, preventing additional sBTC from being burned against it.
Inside the [Peg Out Proof](#General-Outline-of-a-Peg-Out-Proof) verified on Bitcoin, we can check that the state of this Peg In ID was set to `Burned` and unlock the Bitcoin.
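As an illustration of this state machine, the sketch below uses Rust in place of the (unspecified) destination-chain contract language; the names `PegInState`, `BridgeContract`, and `burn_peg_in` are purely hypothetical:
```rust
use std::collections::HashMap;

/// Lifecycle of a single Peg In. Each Peg In backs a specific amount of BTC
/// behind a specific Taproot address, so Peg Ins are not fungible even
/// though sBTC itself is.
#[derive(Clone, Copy)]
enum PegInState {
    Minted { amount_sats: u64, owner: [u8; 20] },
    Burned,
}

struct BridgeContract {
    /// Peg In ID -> current state.
    peg_ins: HashMap<[u8; 32], PegInState>,
}

impl BridgeContract {
    /// Initiate a Peg Out: burn the sBTC behind `peg_in_id` and nullify the
    /// Peg In so no further sBTC can be burned against it.
    fn burn_peg_in(&mut self, caller: [u8; 20], peg_in_id: [u8; 32]) -> Result<u64, &'static str> {
        match self.peg_ins.get(&peg_in_id).copied() {
            Some(PegInState::Minted { amount_sats, owner }) if owner == caller => {
                // burn_sbtc(caller, amount_sats);  // burn the fungible token
                self.peg_ins.insert(peg_in_id, PegInState::Burned);
                Ok(amount_sats)
            }
            Some(PegInState::Minted { .. }) => Err("caller does not own this Peg In"),
            Some(PegInState::Burned) => Err("Peg In already burned"),
            None => Err("unknown Peg In ID"),
        }
    }
}
```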
### General Outline of a Peg Out Proof
A Peg Out proof does the following:
1) Verifies a self-recursive proof of the destination chain's consensus.
2) Asserts that the corresponding Peg Out transaction to unlock a given Peg In ID was included in a **finalized** block. Generally, this can be done by checking that a single storage slot changed, representing the burning of the sBTC associated with the Taproot address.
By designing consensus programs to be self-recursive, we can generate **a single proof** that verifies consensus of a destination chain **all the way back to some genesis state**. Such a proof is _updatable_: as the destination chain grows, we can generate a new proof that validates the previous proof and all newly generated blocks at the same time. Inside our self-recursive program, we first verify the previous proof, if one is provided, and start validating blocks from the last verified one. When no proof is provided, we start validating from the genesis state.
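A high-level sketch of such a self-recursive consensus program, under the same SP1-style assumptions as above (the recursive-verification helper, the public-values encoding, and the consensus types are illustrative placeholders; the storage-slot check of step 2 would then be performed against the finalized state this program outputs):
```rust
#![no_main]
sp1_zkvm::entrypoint!(main);

use serde::{Deserialize, Serialize};
use sha2::{Digest, Sha256};

// Placeholder consensus types; the real ones depend on the destination
// chain (e.g. beacon headers and sync committees for Ethereum).
#[derive(Serialize, Deserialize, Clone)]
struct ConsensusState { finalized_root: [u8; 32], height: u64 }
#[derive(Serialize, Deserialize, Clone)]
struct Block { bytes: Vec<u8> }

// Chain-specific transition: check signatures/finality, return the new state (stub).
fn apply_block(state: ConsensusState, _block: &Block) -> ConsensusState { state }

fn genesis_state() -> ConsensusState {
    ConsensusState { finalized_root: [0u8; 32], height: 0 }
}

pub fn main() {
    // Our own verification key, provided as input and re-committed below so
    // that the outer verifier can assert self-recursion.
    let self_vkey = sp1_zkvm::io::read::<[u32; 8]>();

    // Optional output of the previous proof, plus the new blocks to verify.
    let prev_state: Option<ConsensusState> = sp1_zkvm::io::read();
    let new_blocks: Vec<Block> = sp1_zkvm::io::read();

    let mut state = match prev_state {
        Some(s) => {
            // Verify the previous proof of this very program over the public
            // values (self_vkey, s). Helper name and encoding are assumptions.
            let prev_pv = bincode::serialize(&(self_vkey, &s)).unwrap();
            let digest: [u8; 32] = Sha256::digest(&prev_pv).into();
            sp1_zkvm::lib::verify::verify_sp1_proof(&self_vkey, &digest);
            s
        }
        // No previous proof: start validating from the genesis state.
        None => genesis_state(),
    };

    // Extend verification from the last verified block to the new tip.
    for block in &new_blocks {
        state = apply_block(state, block);
    }

    // Public values: the vkey (checked for self-recursion) and the new state.
    sp1_zkvm::io::commit(&self_vkey);
    sp1_zkvm::io::commit(&state);
}
```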
#### On Consensus Programs
For most blockchains, running a consensus node has significant hardware requirements. This means that generating a _proof_ that a consensus node executed correctly is simply infeasible. A [light client](https://a16zcrypto.com/posts/article/an-introduction-to-light-clients/) may be used instead to define a consensus program, although this raises some [security concerns](#Security-Considerations).
Consensus programs of destination networks without a concept of _finality_ are more vulnerable to **private network attacks**. An attacker can create valid blocks on a private, non-canonical chain that would deceive light clients while eventually being rejected by full nodes. This vulnerability is particularly pronounced in Proof-of-Stake networks due to their costless simulation property: valid blocks can be generated without expending computational work.
#### Public Values
The necessary public values of the Peg Out Proof are the following:
- The Peg In ID representing the Taproot address that holds the locked Bitcoin. This effectively acts as a nullifier for each [Peg Out Proof](#General-Outline-of-a-Peg-Out-Proof): the same proof cannot be used to unlock the BTC behind a different Taproot address.
- The **program verification key**, which functions as a cryptographic commitment to the program being verified. This allows us to directly use **SP1's Groth16 verifier**, and removes the need for a circuit-specific trusted setup for _every Taproot address_.
Note that both these values are known in advance.
:::spoiler How Public Values Work on zkVMs
In reality, due to how zkVMs are designed, the Groth16 proof being verified only ever deals with _two_ public inputs: the **verification key** of the program being proven and a **hash commitment** to all the public inputs specified in the program. In our case, this commitment would be the hash of the Peg In ID.
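Glossing over the exact encoding and field-element masking a real Groth16 wrapper applies, the two public inputs could be assembled roughly as follows (`groth16_public_inputs` is a hypothetical helper):
```rust
use sha2::{Digest, Sha256};

/// Flatten the Peg Out proof's public values into the two Groth16 public
/// inputs: the program's verification key (as a hash) and a hash commitment
/// to everything the program committed (here, just the Peg In ID).
fn groth16_public_inputs(vkey_hash: [u8; 32], peg_in_id: [u8; 32]) -> ([u8; 32], [u8; 32]) {
    let public_values_commitment: [u8; 32] = Sha256::digest(&peg_in_id).into();
    (vkey_hash, public_values_commitment)
}
```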
:::
### ZK Proof Verification on Bitcoin via _BitSNARK_
Upon successful verification of a [Proof of Withdrawal](#Proof-of-Withdrawal), the BTC is unlocked and the bridging process is finalized. Unlocking funds (via proof verification) can only be carried out by the original address that locked the funds. If the submitted proof is successfully challenged, the funds are permanently locked, to the detriment of the prover.
# Peer-to-Peer Bridging
A canonical bridge provides the full security guarantees required of a bridge, as described in our [Bridge definition](#Canonical-Bridge-noun). Our synthetic asset on the destination chain is therefore **valid**, and may either be canonically bridged back and forth, or exchanged peer-to-peer between the two networks.
Such peer-to-peer exchanges effectively remove the frictions and limitations of our canonical bridge design, enabling a much more seamless user experience while the synthetic asset still enjoys its full security guarantees. Such exchanges can be achieved via cross-chain orderbooks, [atomic swaps](https://github.com/distributed-lab/taprootized-atomic-swaps), and other similar strategies.
## Atomic Swaps
[Atomic swaps](https://github.com/distributed-lab/taprootized-atomic-swaps) achieve the same security guarantees and can help minimize overall bridging costs by moving most computation directly offchain. They can also add an additional layer of privacy. The main drawback is their interactive nature, which contrasts with the "set it and forget it" nature of most bridges.
# Case Study: Ethereum Light Client
With the [Altair hard fork](https://github.com/ethereum/annotated-spec/blob/master/altair/sync-protocol.md), the foundation for light clients was laid out by introducing the concept of a sync committee: a set of 512 validators that is randomly selected every sync committee period (256 epochs, or ~27 hours). The purpose of the sync committee is to allow light clients to keep track of the chain of beacon block headers: light clients can verify the sync committee with a Merkle branch from a block header that they already know about, and use the public keys in the sync committee to directly authenticate signatures of more recent blocks.
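A rough sketch of the corresponding light-client check that a consensus program has to encode is shown below; the types are simplified relative to the Altair spec, the 2/3 threshold mirrors the honest-supermajority assumption discussed later, and BLS verification is reduced to a stub:
```rust
const SYNC_COMMITTEE_SIZE: usize = 512;

struct BeaconHeader { root: [u8; 32] }
struct SyncCommittee { pubkeys: [[u8; 48]; SYNC_COMMITTEE_SIZE] }
struct SyncAggregate {
    participation_bits: [bool; SYNC_COMMITTEE_SIZE],
    aggregate_signature: [u8; 96],
}

// Stub: verify the aggregate BLS signature of the participating keys over
// the signed header root.
fn bls_verify(_pubkeys: &[[u8; 48]], _msg: &[u8; 32], _sig: &[u8; 96]) -> bool { true }

/// Authenticate a new beacon header using a sync committee the light client
/// already trusts (simplified from the Altair sync protocol).
fn verify_update(committee: &SyncCommittee, header: &BeaconHeader, agg: &SyncAggregate) -> bool {
    // Collect the public keys that actually signed.
    let participants: Vec<[u8; 48]> = agg
        .participation_bits
        .iter()
        .zip(committee.pubkeys.iter())
        .filter(|(signed, _)| **signed)
        .map(|(_, pk)| *pk)
        .collect();

    // Require a 2/3 supermajority of the committee to have signed.
    if participants.len() * 3 <= SYNC_COMMITTEE_SIZE * 2 {
        return false;
    }

    // Check the aggregate signature over the new header's root.
    bls_verify(&participants, &header.root, &agg.aggregate_signature)
}
```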
Currently, there is no disincentive against signing malicious Beacon block roots, an open attack vector for exploiting trust-minimized bridges like this one. Note that even if the entire sync committee could be fully slashed all the way down to 0 ETH, this would still cap the security level at `SYNC_COMMITTEE_SIZE * MAX_EFFECTIVE_BALANCE` = `512 * 32 ETH` = `16384 ETH`, or ~$33M at ~$2k/ETH. It is important to note that this is a developing area of research, and this security level is expected to improve in the future.
## Security Considerations
Due to the intrinsic limitations of _**any**_ BitVM-inspired bridge, our security assumptions are weaker for two main reasons:
1. Since validator signatures are not revealed by the proof (otherwise costs would be astronomically high), they are **not slashable**. Validators can, while still actively validating Ethereum, collude and _privately_ finalize a fraudulent block without suffering the slashing incurred from double voting.
2. Past validators that have exited the chain can collude and perform a **long-range attack**, by validating a fraudulent fork of the chain that would see the BTC unlocked.
Both of these vulnerabilities stem from the _costless simulation_ property of Proof-of-Stake chains. In practice, this means our security assumption is that **$2/3$ of the validators in every sync committee since our genesis state are honest for _eternity_**.
While this may seem like a very weak trust assumption, it is worth noting that the protocol would not be fully permissionless initially, but would instead be secured by a handful of **whitelisted agents**. As BitSNARK, ZK technology, and the space as a whole mature, the protocol will look to remove its _training wheels_ and minimize these trust assumptions. Nevertheless, this is still arguably a more trust-minimized alternative to the [2-out-of-3 multisig securing ~$10B worth of $WBTC](https://blockworks.co/news/bitgo-wbtc-custodial-changes).