# Banana implications for CDKs
This doc describes the Banana (expected to be fork 10) features, with a focus on how they impact the CDK
## Rollback sequences
> SC ability to remove a sequence, to avoid sequenced data that could potentially stop the prover or make it impossible to generate a proof
How does this affect CDK?
- In the synchronization process: a new event will be emitted stating that some batches need to be dropped. Ideally, this event should be fetched separately from the "block by block" event fetching, so there is no need to download and execute batches that will be rejected later on (see the sketch after this list)
- Tooling to actually perform the rollback (SC interaction)
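A rough sketch of how the synchronizer side could look, assuming a dedicated handler for the new event (event and field names below are assumptions, not taken from the contracts):
```go!
import "context"

// RollbackSequencesEvent is a hypothetical shape for the new L1 event:
// every batch above LastBatchNum must be dropped locally.
type RollbackSequencesEvent struct {
	LastBatchNum uint64
}

// RollbackHandler sketch: rollbacks are processed in a dedicated pass, so
// the regular "block by block" loop never downloads and executes batches
// that are about to be rejected.
type RollbackHandler interface {
	ProcessRollback(ctx context.Context, ev RollbackSequencesEvent) error
}
```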
## Deterministic l1InfoRoot
> L1InfoRoots will be indexed and stored by the smart contracts, so the sequencer can choose which one should be used. Doing this enables:
> - The sequencer can know the expected `accInputHash` before virtualizing the state (useful for the next feature outlined in this doc)
> - In the future this will be useful for reducing the delay to start generating ZKPs, for the AggLayer, ...
How does this affect CDK?
The sequence-sender needs to choose an l1InfoRoot index before sending the sequence to L1
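A minimal sketch of what this could look like on the sequence-sender side, assuming the contracts expose the indexed roots through some getter (all names below are hypothetical):
```go!
import (
	"context"

	"github.com/ethereum/go-ethereum/common"
)

// L1InfoTreeReader is a hypothetical view over the l1InfoRoots indexed by
// the smart contracts.
type L1InfoTreeReader interface {
	// L1InfoRootAt returns the l1InfoRoot stored at the given index
	L1InfoRootAt(ctx context.Context, index uint32) (common.Hash, error)
}

// chooseL1InfoRootIndex picks the index the contract should use for the
// sequence; the simplest policy is to take the most recent indexed root.
func chooseL1InfoRootIndex(latestIndex uint32) uint32 {
	return latestIndex
}
```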
## AccInputHash sanity check
> The `sequenceBatches` call of the smart contract will replace `initialBatchNum` with `expectedFinalAccInputHash`. The idea is that the sequencer will send the `accInputHash` resulting from sequencing the sequence, and if it doesn't match the one built by the contract, the transaction will be reverted
How does this affect CDK?
The sequence-sender needs to compute the `accInputHash` and send it to L1
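For reference, a sketch of the per-batch accumulator in Go. The field set and `abi.encodePacked`-style packing mirror `calculateAccInputHash` from the Etrog contracts; Banana may adjust the exact fields, so treat this as an assumption:
```go!
import (
	"encoding/binary"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
)

// calcAccInputHash chains one batch into the accumulated input hash
// (keccak256 over the packed fields, Etrog-style).
func calcAccInputHash(
	oldAccInputHash common.Hash,
	batchHashData common.Hash, // keccak256 of the batch L2 data
	l1InfoRoot common.Hash,
	timestampLimit uint64,
	sequencer common.Address,
	forcedBlockHashL1 common.Hash,
) common.Hash {
	var ts [8]byte
	binary.BigEndian.PutUint64(ts[:], timestampLimit)
	return crypto.Keccak256Hash(
		oldAccInputHash.Bytes(),
		batchHashData.Bytes(),
		l1InfoRoot.Bytes(),
		ts[:],
		sequencer.Bytes(),
		forcedBlockHashL1.Bytes(),
	)
}
```
The sequence-sender would fold this over every batch in the sequence, starting from the last virtualized `accInputHash`, and send the final value as `expectedFinalAccInputHash`.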
## Bugs to avoid re-hashing
> Remove some "supported bugs" that were there to avoid re-hashing
> - fix Log issues (full-tracer)
> - logs != 32 bytes aligned
> - kept in fork.9 to avoid reorgs in all mainnet chains
> - clean [Erigon fixes](https://github.com/0xPolygonHermez/qa-protocol-kanban/issues/373)
How does this affect CDK?
Erigon and the Executor need to remove these "supported bugs" in this fork
## Validium consensus signature schema
The protocol is updated in Banana. [Before](https://github.com/0xPolygonHermez/zkevm-contracts/blob/main/contracts/v2/consensus/validium/PolygonValidiumEtrog.sol):
```solidity!
for (uint256 i = 0; i < batchesNum; i++) {
    // ...
    // Accumulate non forced transactions hash
    accumulatedNonForcedTransactionsHash = keccak256(
        abi.encodePacked(
            accumulatedNonForcedTransactionsHash,
            currentBatch.transactionsHash
        )
    );
    // ...
}
// ...
dataAvailabilityProtocol.verifyMessage(
    accumulatedNonForcedTransactionsHash,
    dataAvailabilityMessage
);
```
[Banana](https://github.com/0xPolygonHermez/zkevm-contracts/blob/feature/banana/contracts/v2/consensus/validium/PolygonValidiumEtrog.sol):
```solidity!
dataAvailabilityProtocol.verifyMessage(
    // expectedFinalAccInputHash needs to be calculated anyway, so no extra job
    expectedFinalAccInputHash,
    dataAvailabilityMessage
);
```
In Banana, the value that gets hashed is the `accInputHash`, which needs to be calculated by the contract anyway. This is NOT huge in terms of gas savings (~50 gas units per batch), although it adds up in the long run (for reference, Hermez zkEVM has sequenced over 2M batches in ~1 year)
### Implications
#### New Go interface
Old:
```go!
// SequenceSender is used to send provided sequence of batches
type SequenceSender interface {
	// PostSequence sends the sequence data to the data availability
	// backend, and returns the dataAvailabilityMessage
	// as expected by the contract
	PostSequence(ctx context.Context, batchesData [][]byte) ([]byte, error)
}
```
New:
```go!
// Batch holds the per-batch data the DAC needs to compute the accInputHash
type Batch struct {
	transactionsHash     common.Hash
	forcedGlobalExitRoot common.Hash
	forcedTimestamp      uint64
	forcedBlockHashL1    common.Hash
}

// SequenceSender is used to send provided sequence of batches
type SequenceSender interface {
	// PostSequence sends the sequence data to the data availability
	// backend, and returns the dataAvailabilityMessage
	// as expected by the pre-Banana contract
	PostSequence(ctx context.Context, batchesData [][]byte) ([]byte, error)
	// PostSequenceBanana sends the batch data plus the chosen l1InfoRoot
	// index, and returns the dataAvailabilityMessage as expected by the
	// Banana contract
	PostSequenceBanana(
		ctx context.Context,
		batches []Batch,
		indexL1InfoRoot uint32,
	) ([]byte, error)
}
```
#### DAC
- New endpoint to support the new signature schema (the data needed to calculate the accInputHash must be provided to the DAC nodes). Querying can still be done by batch data hash, although **this could be a nice opportunity to improve the service with a single query that returns all batches instead of one query per batch** (sketched below)
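A hedged sketch of what the batched query could look like (names are hypothetical, not the current DAC API):
```go!
import (
	"context"

	"github.com/ethereum/go-ethereum/common"
)

// DACClient sketch: one round trip for a whole sequence instead of one
// request per batch data hash.
type DACClient interface {
	// GetBatchesData returns the L2 data for every requested batch hash
	GetBatchesData(ctx context.Context, hashes []common.Hash) ([][]byte, error)
}
```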
#### 3rd party DA protocols
- Hard to tell, depends on the implementation
#### Backwards compatibility
- We need to start thinking of the DA as something that can evolve in a similar way to the ForkID: up to a certain batch things are done one way, and the protocol changes after that batch. This will complicate the logic of the synchronizer (see the sketch below)
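Illustrative only: the synchronizer would select the DA verification schema by batch number, much like ForkID handling (names and the switching rule below are assumptions):
```go!
// DASchema identifies how the DA message for a batch must be verified.
type DASchema int

const (
	DASchemaEtrogTxHashAccumulator DASchema = iota // pre-Banana: accumulated tx hashes
	DASchemaBananaAccInputHash                     // Banana: expectedFinalAccInputHash
)

// daSchemaForBatch picks the schema based on the first batch sequenced
// under Banana, analogous to a ForkID switch.
func daSchemaForBatch(batchNum, bananaFirstBatch uint64) DASchema {
	if batchNum >= bananaFirstBatch {
		return DASchemaBananaAccInputHash
	}
	return DASchemaEtrogTxHashAccumulator
}
```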