# CDK Bridge Service
## Goals / non-goals
Design a service within the CDK client that allows realising the uLxLy functionality, including:
- Provide MerkleProof to perform the claims
- Making claims on behalf of the user (FKA auto-claim)
- Work with all the options the AggLayer offers:
    - FEP chains
    - PP chains
    - EVM and others
Non-goal: make any assumptions about potential protocol updates (and design accordingly towards those potential updates)
## Flow
```mermaid
sequenceDiagram
UI->>CDK A: bridge tx to CDK B
CDK A->>CDK A: index bridge tx
CDK A->>AggLayer: Settle batch / certificate
AggLayer-->>CDK A: L1 tx hash
CDK A->>CDK A: Add relation bridge : included in L1InfoTree index X
UI->>CDK A: Wen bridge ready?
CDK A-->>UI: L1InfoTree index X
UI->>CDK B: Get first GER injected that happened at or after L1InfoTree index X
CDK B-->>UI: GER Y
UI->>CDK A: Build proof for bridge using GER Y
CDK A-->>UI: Proof
UI->>CDK B: Claim(proof)
CDK B->>CDK B: send claim tx
CDK B-->>UI: tx hash
```
Important notes about this diagram:
- Both CDK A & B represent the combo of [execution client + CDK client] for simplicity. However, some interactions happen on the execution client (bridge tx and claim tx) while others happen on the CDK client
- L1 is not represented, and behaves differently
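To make the flow concrete, here is a minimal sketch (in Go, with a hypothetical `CDKClient` wrapper; method names mirror the `cdk_` endpoints defined in the API section below) of how a consumer would walk through it once the bridge tx has been sent on CDK A:
```go
package bridgeflow

import "fmt"

// CDKClient is a hypothetical wrapper around the CDK RPC of a single chain;
// method names mirror the cdk_ endpoints defined in the API section below.
type CDKClient interface {
	WenBridgeReady(bridgeIndex, networkID uint32) (l1InfoTreeIndex uint32, err error)
	GERWithIndex(l1InfoTreeIndex uint32) (ger [32]byte, err error)
	ClaimProof(bridgeIndex, networkID uint32, ger [32]byte) (proof [][32]byte, err error)
	SponsorClaim(claim Claim) (txHash [32]byte, err error)
}

// Claim is a placeholder for whatever the sponsored-claim endpoint ends up taking
// (bridge data + proof); the exact fields are not defined in this doc.
type Claim struct {
	BridgeIndex uint32
	NetworkID   uint32
	GER         [32]byte
	Proof       [][32]byte
}

// claimBridge walks the flow from the sequence diagram: the bridge tx was already
// sent on CDK A, and we want it claimed on CDK B.
func claimBridge(cdkA, cdkB CDKClient, bridgeIndex, networkID uint32) error {
	// 1. Wait until CDK A reports the bridge as settled (wen bridge ready?).
	index, err := cdkA.WenBridgeReady(bridgeIndex, networkID)
	if err != nil {
		return fmt.Errorf("bridge not ready yet: %w", err)
	}
	// 2. Get the first GER injected on CDK B at or after that L1 info tree index.
	ger, err := cdkB.GERWithIndex(index)
	if err != nil {
		return err
	}
	// 3. Build the merkle proof for the bridge on CDK A against that GER.
	proof, err := cdkA.ClaimProof(bridgeIndex, networkID, ger)
	if err != nil {
		return err
	}
	// 4. Ask CDK B to send the claim tx on behalf of the user.
	txHash, err := cdkB.SponsorClaim(Claim{BridgeIndex: bridgeIndex, NetworkID: networkID, GER: ger, Proof: proof})
	if err != nil {
		return err
	}
	fmt.Printf("claim tx sent: %x\n", txHash)
	return nil
}
```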
The settlement layer behaves very differently in FEP and PP mode, which complicates building the `bridge : included in L1InfoTree index X` relation. Also, a bridge originated on L1 behaves differently in this part. This forces the design to work differently in each case:
### L1
When bridges happen on L1, they can update the L1InfoTree in two ways:
- Directly: within the same bridge tx
- Batched: the update is queued and another tx can update the mainnet exit tree with many bridges at once
In both cases, all the information needed to build the `bridge : included in L1InfoTree index X` relation can be obtained from L1. TBD: who is going to sync this and how
### FEP
These chains use the relayer endpoint of the AggLayer, but essentially they update the L1 Info tree with a `VerifyBatches` call. From this call, it's possible to understand that all the bridges included from batch X to Y (the batches being verified) are included at index Z (the value of the index updated within the same tx). The problem is understanding which bridge txs are included in those batches: currently, `localbridgesync` is designed to be used by the `aggSender`, and this component is not used in FEP mode.
In order to support FEP mode, it's needed to:
- sync the `block : batch` relation: **needs to use the custom `zkevm_` endpoints**
- sync the `VerifyBatches` events to understand the `batch : included in L1InfoTree index X` relation, so we can build the `bridge : included in L1InfoTree index X` relation
This results in the creation of a new syncer: `batchsync`
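As a rough sketch of the data `batchsync` would persist and how the two relations compose (all names and the exact batch-range semantics are assumptions, not a final schema; the `block : batch` relation would come from the custom `zkevm_` endpoints):
```go
package batchsync

// Illustrative relations that batchsync would persist; field and type names
// are assumptions, not a final schema.

// BlockToBatch maps an L2 block to the batch that contains it
// (obtained through the custom zkevm_ endpoints).
type BlockToBatch struct {
	BlockNum uint64
	BatchNum uint64
}

// VerifiedBatches is built from a VerifyBatches call on L1: the batches in the
// range are settled, and the L1 info tree was updated to L1InfoTreeIndex
// within the same tx.
type VerifiedBatches struct {
	FromBatch       uint64
	ToBatch         uint64
	L1InfoTreeIndex uint32
}

// Bridge is a bridge event already indexed by bridgesync.
type Bridge struct {
	DepositCount uint32
	BlockNum     uint64
}

// relateBridges returns, for every bridge included in the verified batch range,
// the L1 info tree index at which it becomes claimable.
// Note: the (FromBatch, ToBatch] range semantics are an assumption; the doc
// only says "from batch X to Y".
func relateBridges(v VerifiedBatches, blockToBatch func(uint64) uint64, bridges []Bridge) map[uint32]uint32 {
	res := make(map[uint32]uint32)
	for _, b := range bridges {
		batch := blockToBatch(b.BlockNum)
		if batch > v.FromBatch && batch <= v.ToBatch {
			res[b.DepositCount] = v.L1InfoTreeIndex
		}
	}
	return res
}
```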
### PP
These chains use the certificate endpoint of the AggLayer. In this case, to build the `bridge : included in L1InfoTree index X` relation, it's needed to understand the `certificate : included in L1InfoTree index X` relation. For this to happen, the AggLayer needs to (at the very least) indicate the L1 tx hash of the tx sent to settle the certificate. There has already been a conversation on this topic [here](https://0xpolygon.slack.com/archives/C07CDMA1RT9/p1721995988495499?thread_ts=1721657063.814249&cid=C07CDMA1RT9)
That being said, this way of doing things doesn't fit really well into the architecture of a syncer; ideally, the syncer should be able to synchronize by just looking at L1. This is probably doable, but would require understanding how the "verify PP" L1 function works (what is included in the call data and what events are emitted)
Either way, we need to build a `certificatesync` that can fill the `bridge : included on certificate Y` and `certificate Y : included in L1InfoTree index X` relations
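A minimal sketch of the two relations `certificatesync` would maintain and how they compose (illustrative names and shapes, pending the research on the AggLayer API and the "verify PP" contract call):
```go
package certificatesync

// Relations certificatesync would maintain; names and shapes are assumptions
// pending research on the AggLayer API and the "verify PP" L1 contract call.

// BridgeToCertificate: bridge (identified by deposit count) -> certificate that settled it.
type BridgeToCertificate struct {
	DepositCount  uint32
	CertificateID [32]byte
}

// CertificateToIndex: certificate -> L1 info tree index updated by its settlement tx.
// Filling this requires at least the L1 tx hash of the settlement tx from the AggLayer,
// or parsing the "verify PP" call / events directly from L1.
type CertificateToIndex struct {
	CertificateID   [32]byte
	L1TxHash        [32]byte
	L1InfoTreeIndex uint32
}

// indexForBridge composes the two relations to answer
// "bridge : included in L1InfoTree index X" for PP chains.
func indexForBridge(depositCount uint32, b2c map[uint32][32]byte, c2i map[[32]byte]uint32) (uint32, bool) {
	cert, ok := b2c[depositCount]
	if !ok {
		return 0, false
	}
	idx, ok := c2i[cert]
	return idx, ok
}
```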
## Components
```mermaid
flowchart TB
C{"consumer"}
A(("API server"))
L2E(("L2 Execution\nRPC"))
L1S["l1infotreesync"]
LBS["bridgesync (local instance)"]
LBSM["bridgesync (mainnet instance)"]
BS["batchsync"]
CS["certificatesync"]
C --> A
A --> L1S & LBS & LBSM & BS & CS
A -- claim tx --> L2E
```
### Consumer (UI or any sort of service)
- The CDK team is not going to develop this :upside_down_face:, but will develop E2E tests that behave like it
- For it to work, it needs access to the [execution RPC, CDK RPC] of [CDK x, CDK y] (note that x or y could refer to L1; in this case, the other chain could give support for the CDK RPC part, as L1 obviously doesn't have a CDK client associated)
### API Server
Implements the endpoints (specified below in this doc). In order to implement them, it interacts with the different syncers + the L2 RPC to send the claims (auto-claim feature). Note that only one of [batchsync, certificatesync] will be used, depending on the nature of the CDK [FEP, PP]
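As an illustration (names are placeholders, not actual CDK config), the API server could wire the right syncer behind the index queries at startup:
```go
package apiserver

import "fmt"

// Mode of the CDK chain, which decides which syncer backs the
// "bridge : included in L1InfoTree index X" queries.
type Mode string

const (
	ModeFEP Mode = "FEP"
	ModePP  Mode = "PP"
)

// IndexResolver answers "at which L1 info tree index is this bridge included?".
// Both batchsync and certificatesync would implement it.
type IndexResolver interface {
	L1InfoTreeIndexForBridge(depositCount uint32) (uint32, error)
}

// newResolver wires the right syncer depending on the mode; names are illustrative.
func newResolver(mode Mode, batchSync, certificateSync IndexResolver) (IndexResolver, error) {
	switch mode {
	case ModeFEP:
		return batchSync, nil
	case ModePP:
		return certificateSync, nil
	default:
		return nil, fmt.Errorf("unknown mode: %s", mode)
	}
}
```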
### L1 Info Tree Sync
It interacts with L1 in order to:
- Sync the L1 info tree (already done)
- Generate merkle proofs (already done)
- Build the relation `bridge : included in L1InfoTree index X` for bridges originated on L1
- Sync the rollup exit tree (tree of local exit trees), persist, proofs
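Related to the last bullet, a small sketch of how the GER is derived from the two roots this syncer deals with, matching our understanding of the current uLxLy contracts (`GER = keccak256(mainnetExitRoot || rollupExitRoot)`); the rest of the tree and proof machinery is omitted:
```go
package l1infotreesync

import "golang.org/x/crypto/sha3"

// calculateGER derives the global exit root from the mainnet exit root and the
// rollup exit root (the root of the tree of local exit trees), as done by the
// uLxLy contracts to the best of our understanding:
// GER = keccak256(mainnetExitRoot || rollupExitRoot).
func calculateGER(mainnetExitRoot, rollupExitRoot [32]byte) [32]byte {
	h := sha3.NewLegacyKeccak256()
	h.Write(mainnetExitRoot[:])
	h.Write(rollupExitRoot[:])
	var ger [32]byte
	copy(ger[:], h.Sum(nil))
	return ger
}
```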
### Bridge Sync
It interacts with the L2 or L1 execution RPC in order to:
- Sync bridges and claims (already done, needed for `aggSender`). Needs to be modular as it's execution client specific (already done)
- Build the local exit tree
- Generate merkle proofs
### Batch Sync
It interacts with the L2 execution RPC and the L1 RPC in order to:
- Build the relation `bridge : included in L1InfoTree index X` for bridges originated on CDKs with FEPs
- Sync GERs that have been injected
Note that this requires the use of the `zkevm_` custom endpoints
### Certificate Sync
It interacts with the L2 execution RPC and the AggLayer RPC in order to:
- Build the relation `bridge : included in L1InfoTree index X` for bridges originated on CDKs with PPs
- Sync GERs that have been injected. Needs to be modular as it works differently on PP vs FEP, let alone anything non-EVM
Note: this should cover any non-EVM chain as well, since the execution client is already abstracted by other components (`aggoracle`, `localbridgesync`)
:warning: Needs more research on how the AggLayer API and the L1 contracts work
## API
List of endpoints needed to implement the flow presented above:
- `cdk_wenBridgeReady`
    - Inputs:
        - bridge index
        - networkId (redundant as it's contextual to the CDK, but seems like a good sanity check)
    - Outputs:
        - First L1 Info Tree index in which the bridge was included
- `cdk_GERWithIndex`
    - Inputs:
        - L1 Info Tree index
    - Outputs:
        - GER injected on the L2 that is linked to the given index or greater
- `cdk_claimProof`
    - Inputs:
        - bridge index
        - networkId
        - GER
    - Outputs:
        - Proof: merkle proof for the bridge towards the given GER
- `cdk_sponsorClaim`
    - Inputs:
        - claim
    - Outputs:
        - tx hash
    - Note: needs to be modular as it's execution client dependent (although it's the same with current FEP & PP chains, as they are both EVM-only today)

Other endpoints not represented in the flow, but most likely needed to make any service / UI that consumes this service feasible:
- `cdk_getClaims`
    - Inputs:
        - address
    - Outputs:
        - claims
- `cdk_getBridges`
    - Inputs:
        - address
    - Outputs:
        - bridges
- ...?
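A sketch of how these endpoints could look as a Go interface on the CDK client side (types are placeholders and the JSON-RPC transport is omitted):
```go
package api

import "github.com/ethereum/go-ethereum/common"

// Bridge, Claim and Proof are placeholders for the structures indexed by the
// syncers; their exact fields are out of scope here.
type (
	Bridge struct{}
	Claim  struct{}
	Proof  struct{}
)

// BridgeService mirrors the cdk_ endpoints listed above; method names map 1:1
// to the JSON-RPC methods.
type BridgeService interface {
	// cdk_wenBridgeReady: first L1 info tree index in which the bridge was included.
	WenBridgeReady(bridgeIndex, networkID uint32) (uint32, error)
	// cdk_GERWithIndex: GER injected on the L2 linked to the given index or greater.
	GERWithIndex(l1InfoTreeIndex uint32) (common.Hash, error)
	// cdk_claimProof: merkle proof for the bridge towards the given GER.
	ClaimProof(bridgeIndex, networkID uint32, ger common.Hash) (Proof, error)
	// cdk_sponsorClaim: send the claim tx on behalf of the user.
	SponsorClaim(claim Claim) (common.Hash, error)
	// cdk_getClaims / cdk_getBridges: activity per address, for UIs.
	GetClaims(addr common.Address) ([]Claim, error)
	GetBridges(addr common.Address) ([]Bridge, error)
}
```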
## TODOs
List of TODOs derived from all the content of this doc (more details of each point in the relevant parts):
- Local exit trees: build them using the `bridge` events, persist, build merkle proofs
- Sync and build the rollup exit tree (the tree of local exit trees) @ `l1infotreesync`
- Detect the relation `bridge : included in L1InfoTree index X` for L1 from `l1infotreesync` to enable claims that come from L1
- Develop the `batchsync`
- Develop the `certificatesync`
- Develop the `API server`
- Sync used GERs from `localbridgesync`
- E2E tests that also showcase how a UI would integrate the service
## PoC
Due to the modular nature of the system, we need to decide what we want to include in the first PoC, as there are many options. Any of the options would require building the API (and other "shared components", listed in the TODOs section). To answer the queries, the API will fetch data from different components behind the scenes. The components used will depend on the configuration of the CDK and the nature of the bridge:
### From L1 to CDK (either FEP or PP)
- Need the updates on `l1infotreesync`
### From CDK (FEP) to Y
- Need to develop `batchsync`
### From CDK (PP) to Y
- Need `aggSender` + `aggLayer` working (updating L1 info tree through PP)
- Need to develop `certificatesync`
---
## Sync the `L1 Bridge -> L1 Info tree index` relation
### Already done
#### l1infotreesync
Update L1 Info tree, get the L1 Info index, store the relation `l1 info tree index` -> `mainnet exit root` (already done)
#### bridgesync
Store bridge event and update exit tree: index (deposit count) -> mainnet exit root (already done)
### New syncer `l1bridge2infoindexsync`
In a loop:
1. Asks for last finalised block @ L1 client -> block X
2. Asks `l1infotreesync` for the greatest L1 info index until block X -> infoIndex Y
3. Checks the last linked `l1 info index -> bridge index` entry -> infoIndex Z
4. For each info index `i` in range(Z, Y):
    - ask `l1infotreesync` for the mainnet exit root of `i` -> mainnet exit root W
    - ask `bridgesync` for the bridge index that created root W (TODO: add `root -> index` lookup) -> bridge index / not found
    - if not found, break the loop as `bridgesync` is still catching up. Add a sanity check on its last processed block: if block >= X, panic
    - Store the relation `l1 info index -> bridge index` and vice versa
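A sketch of that loop in Go (interfaces and names are illustrative; the sanity check from step 4 is the only error handling beyond plain returns):
```go
package l1bridge2infoindexsync

import "log"

// Minimal views over the existing components; names are illustrative.
type L1Client interface {
	LastFinalizedBlock() (uint64, error)
}
type L1InfoTreeSync interface {
	LastIndexUntilBlock(block uint64) (uint32, error)
	MainnetExitRootByIndex(index uint32) ([32]byte, error)
}
type BridgeSync interface {
	// BridgeIndexByRoot needs the new `root -> index` lookup mentioned above.
	BridgeIndexByRoot(root [32]byte) (bridgeIndex uint32, found bool, err error)
	LastProcessedBlock() (uint64, error)
}
type Store interface {
	LastLinkedIndex() (uint32, error)
	Link(l1InfoIndex, bridgeIndex uint32) error // stores the relation both ways
}

func iteration(l1 L1Client, info L1InfoTreeSync, bridges BridgeSync, store Store) error {
	x, err := l1.LastFinalizedBlock() // 1. last finalised L1 block
	if err != nil {
		return err
	}
	y, err := info.LastIndexUntilBlock(x) // 2. greatest L1 info index up to that block
	if err != nil {
		return err
	}
	z, err := store.LastLinkedIndex() // 3. last index already linked
	if err != nil {
		return err
	}
	// 4. Link the missing indexes; treating range(Z, Y) as (Z, Y] is an assumption.
	for i := z + 1; i <= y; i++ {
		root, err := info.MainnetExitRootByIndex(i)
		if err != nil {
			return err
		}
		bridgeIndex, found, err := bridges.BridgeIndexByRoot(root)
		if err != nil {
			return err
		}
		if !found {
			// bridgesync is still catching up; sanity check before giving up.
			lastBlock, err := bridges.LastProcessedBlock()
			if err != nil {
				return err
			}
			if lastBlock >= x {
				log.Panicf("root %x not found but bridgesync is at block %d >= %d", root, lastBlock, x)
			}
			break
		}
		if err := store.Link(i, bridgeIndex); err != nil {
			return err
		}
	}
	return nil
}
```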