# CDK Questions
#### What do the below Prometheus metric outputs mean and why do some show zero values?
<details open>
<summary>Details</summary>
```
sequencer_sequences_sent_to_L1_count
synchronizer_get_trusted_batchNumber_time_sum
synchronizer_get_trusted_batchNumber_time_count
```

</details>
<details open>
<summary>Answer</summary>

These metrics are only incremented/changed under specific conditions (e.g. interaction with L1, syncing against a permissionless node, etc.), so a zero value is expected whenever those code paths never run.
In fact, [sequences_sent_to_L1_count](https://github.com/0xPolygonHermez/zkevm-node/commit/cd4c30c3c7cb376782fea3a19a10c89c9bf91888#diff-9420435b2f8e7ca901e1efd3c5f0dd6c7e267d8a90ae837c135fbd8f239ea5bfL139) was removed in recent tags. As for [get_trusted_batchnumber_time](https://github.com/0xPolygonHermez/zkevm-node/blob/develop/synchronizer/l2_sync/l2_sync_incaberry/sync_trusted_state.go#L91), it is probably just not being called: it seems related to syncing the trusted state from a permissionless node, so if your setup doesn't include a permissionless node (as with mine), an output of 0 is expected. This doesn't look like an issue with zkevm-node itself.
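As an illustration, here is a minimal sketch using `prometheus/client_golang` (not the node's own metrics wrapper): the counter is registered and scraped as 0 from startup, and only moves once the instrumented code path actually runs.

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// Hypothetical counter mirroring the metric name above; it is exported as 0
// until sendSequenceToL1 is actually called.
var sequencesSent = prometheus.NewCounter(prometheus.CounterOpts{
	Name: "sequencer_sequences_sent_to_L1_count",
	Help: "Number of sequences sent to L1.",
})

func sendSequenceToL1() {
	// ... build and send the sequence tx to L1 ...
	sequencesSent.Inc() // the only place the value changes
}

func main() {
	prometheus.MustRegister(sequencesSent)
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9091", nil))
}
```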
</details>
---
#### I deployed our testnet on Sepolia. When I test bridging funds back from L2 to L1, it's stuck at the “finalise” step forever
<details open>
<summary>Details</summary>

</details>
<details open>
<summary>Answer</summary>
Root cause:
* The WebSocket RPC URL in the DAC config was set incorrectly
* `etherman/etherman.go:45 error connecting to ws://zkevm-mock-l1-network:8546`
* As a result, the DAC service was down
* So the sequence sender could not reach the DAC
* `sequencesender/sequencesender.go:113 error getting signatures and addresses from the data committee: too many members failed to send their signature`
* So the sequence sender never sent anything
* So the L2 -> L1 bridge was stuck forever
Fix (after correcting the ws URL; a connectivity check is sketched below):
* Wipe the DAC and bridge DBs
* Restart the node
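To confirm the corrected endpoint is actually reachable before restarting, here is a minimal connectivity check sketched with go-ethereum's `ethclient` (the URL is the one from the log above; substitute your own DAC/etherman ws URL):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Use the ws URL from your DAC config here.
	client, err := ethclient.DialContext(ctx, "ws://zkevm-mock-l1-network:8546")
	if err != nil {
		log.Fatalf("cannot connect to the configured ws endpoint: %v", err)
	}
	chainID, err := client.ChainID(ctx)
	if err != nil {
		log.Fatalf("connected, but the RPC call failed: %v", err)
	}
	fmt.Println("ws endpoint reachable, chain id:", chainID)
}
```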
</details>
---
#### We want to forward our egress traffic (currently L1 RPC calls) from the CDK network via SquidProxy, but the URL format to call Squid is like "squid url:3128 L1_RPC_URL", which config.toml doesn't support.
<details open>
<summary>Details</summary>
</details>
<details open>
<summary>Answer</summary>
</details>
---
#### What is the reasoning behind the change in the FflonkVerifier constants?
<details open>
<summary>Details</summary>

</details>
<details open>
<summary>Answer</summary>
Changing those values basically means changing the underlying circuit. So every time there is an update to the circuits (for adding features like new opcodes (PUSH0) or new functionality (blob support, compression)), the circuit needs to change, and this is how it is reflected on the smart contract side.
It is analogous to the *hash* of the circuit.
</details>
---
#### DAC has started to return errors: error filtering events: end (5509754) < begin (5509761), bridging back from L2 -> L1 doesn't work anymore
<details open>
<summary>Details</summary>
Both L1 and L2 accounts are sufficiently funded. This error has not happened before and only appeared days after leaving the chain running.
The node logs only show one error:
```!
failed to send tx 0x819de4a58440df4d4fdb3d83c8b618c1dca0a61a409fdef2a22f91fde80f59b1 to network: INTERNAL_ERROR: nonce too low {"pid": 2664757, "version": "v0.0.3-hotfix6", "owner": "sequencer", "monitoredTxId": "sequence-from-12-to-14", "createdAt": "2024-03-05T13:47:57.025+0100", "from": "0xfE2F282259CE13fEf3836114205661a2321E047C", "to": "0x6Bff1a18147e04A808d07422c7C25B61F6Bb5727"}
```
The nonce is 25 now; there were several txs after the failed timestamp `2024-03-05T13:47:57`:
```!
{"Error":"CalculateEffectiveGasPrice#1: L1 gas price 0; CalculateEffectiveGasPricePercentage#1: effectiveGasPrice or gasPrice cannot be nil or zero; CalculateEffectiveGasPrice#2: L1 gas price 0","Enabled":false,"GasPrice":0,"BalanceOC":false,"Reprocess":false,"GasPriceOC":false,"L1GasPrice":0,"L2GasPrice":0,"Percentage":0,"ValueFinal":0,"ValueFirst":0,"ValueSecond":0,"GasUsedFirst":30054,"MaxDeviation":0,"GasUsedSecond":0,"FinalDeviation":0}
```
</details>
<details open>
<summary>Answer</summary>
In state_db.monitored_txs there was a stuck sequence; it is the same tx zkevm-node was complaining about with "nonce too low". I deleted this row and restarted zkevm-node. After that, all the batches that were stuck got processed.
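For reference, the cleanup can be sketched with pgx as below (a hedged sketch: the DSN is a placeholder, the monitored tx id is the one from the log above, and the schema/table names follow this zkevm-node version's state DB; double-check the row before deleting it):

```go
package main

import (
	"context"
	"log"

	"github.com/jackc/pgx/v4"
)

func main() {
	ctx := context.Background()
	// Placeholder DSN for the state DB.
	conn, err := pgx.Connect(ctx, "postgres://user:password@localhost:5432/state_db")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	// Remove the stuck sequence entry; after a restart the node rebuilds and resends it.
	tag, err := conn.Exec(ctx, `DELETE FROM state.monitored_txs WHERE id = $1`, "sequence-from-12-to-14")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("deleted %d row(s); now restart zkevm-node", tag.RowsAffected())
}
```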
</details>
---
#### I am on the Deploying contracts step. I made sure to follow all the instructions as they are specified, however, Hardhat is having trouble connecting/is not able to detect a network.
<details open>
<summary>Details</summary>
```!
root@zkevm-validium-ronald-poc-ora-iad-a00:/home/zkevm/cdk-validium/cdk-validium-contracts-0.0.2/deployment# npm run deploy:deployer:CDKValidium:sepolia
> cdk-validium-contracts@0.0.1 deploy:deployer:CDKValidium:sepolia
> npx hardhat run deployment/2_deployCDKValidiumDeployer.js --network sepolia
Error: could not detect network (event="noNetwork", code=NETWORK_ERROR, version=providers/5.7.2)
at Logger.makeError (/home/zkevm/cdk-validium/cdk-validium-contracts-0.0.2/node_modules/@ethersproject/logger/src.ts/index.ts:269:28)
at Logger.throwError (/home/zkevm/cdk-validium/cdk-validium-contracts-0.0.2/node_modules/@ethersproject/logger/src.ts/index.ts:281:20)
at EthersProviderWrapper.<anonymous> (/home/zkevm/cdk-validium/cdk-validium-contracts-0.0.2/node_modules/@ethersproject/providers/src.ts/json-rpc-provider.ts:483:23)
at step (/home/zkevm/cdk-validium/cdk-validium-contracts-0.0.2/node_modules/@ethersproject/providers/lib/json-rpc-provider.js:48:23)
at Object.throw (/home/zkevm/cdk-validium/cdk-validium-contracts-0.0.2/node_modules/@ethersproject/providers/lib/json-rpc-provider.js:29:53)
at rejected (/home/zkevm/cdk-validium/cdk-validium-contracts-0.0.2/node_modules/@ethersproject/providers/lib/json-rpc-provider.js:21:65)
at processTicksAndRejections (node:internal/process/task_queues:95:5) {
reason: 'could not detect network',
code: 'NETWORK_ERROR',
event: 'noNetwork'
}
```
</details>
<details open>
<summary>Answer</summary>
Configure [hardhat.config.ts](https://github.com/0xPolygonHermez/zkevm-contracts/blob/main/hardhat.config.ts#L187-L197) and make sure the `sepolia` network entry (RPC URL and accounts) is correct.
</details>
---
#### After attempting to deploy the contract, I get 'legacy pre-eip-155 transactions not supported'
<details open>
<summary>Details</summary>
```!
reason: 'legacy pre-eip-155 transactions not supported',
code: 'UNSUPPORTED_OPERATION',
error: ProviderError: only replay-protected (EIP-155) transactions allowed over RPC
at HttpProvider.request (/home/zkevm/cdk-validium/cdk-validium-contracts-0.0.2/node_modules/hardhat/src/internal/core/providers/http.ts:88:21)
at processTicksAndRejections (node:internal/process/task_queues:95:5)
at EthersProviderWrapper.send (/home/zkevm/cdk-validium/cdk-validium-contracts-0.0.2/node_modules/@nomiclabs/hardhat-ethers/src/internal/ethers-provider-wrapper.ts:13:20),
method: 'sendTransaction',
transaction: {
nonce: 0,
gasPrice: BigNumber { value: "100000000000" },
gasLimit: BigNumber { value: "1000000" },
to: null,
value: BigNumber { value: "0" },
```
</details>
<details open>
<summary>Answer</summary>
Double check the [CDK version compatibilities](https://docs.polygon.technology/cdk/version-matrix/?h=permissionless#cdk) and make sure the right contracts repo is being used for forkid7 onwards.
If the deployment was done using the [quickstart](https://docs.polygon.technology/cdk/get-started/quickstart-validium/), the Docker images for older versions most likely do not support the latest upgrades on Sepolia.
For example:
```!
4f7b53940fc5 hermeznetwork/cdk-validium-node:v0.0.3-RC2 "/bin/sh -c '/app/zk…" 4 hours ago Up 4 hours 0.0.0.0:8123->8123/tcp, :::8123->8123/tcp, 0.0.0.0:8133->8133/tcp, :::8133->8133/tcp, 0.0.0.0:9091->9091/tcp, :::9091
```
This is running on `v0.0.3-RC2` which will not have support for Sepolia post-Dencun.
</details>
---
#### We were trying to get the bridge-ui up today so we can bridge tokens to run loadtests as required.
<details open>
<summary>Details</summary>
From my understanding, `ETHEREUM_BRIDGE_CONTRACT_ADDRESS` should be the bridge address on L1 (Sepolia) and `POLYGON_ZK_EVM_BRIDGE_CONTRACT_ADDRESS` should be the rollup's bridge address. Is this a correct assumption?
</details>
<details open>
<summary>Answer</summary>
In the bridge-ui parameters, using the same address for both parameters seems to be the correct usage. Bridging between L1 and L2 in both directions seems to work.
</details>
---
#### L1 -> L2 bridge infinitely stuck on Bridge UI and never updated on L2.
<details open>
<summary>Details</summary>

```!
{"level":"info","ts":1710917796.9736583,"caller":"claimtxman/claimtxman.go:150","msg":"Mainnet exitroot 0xd41219dcf07555d999f2f290e64613986f54457bb94b0952d7dacb7d4674eb6f is updated","pid":7,"version":"v0.4.2"}
{"level":"info","ts":1710917796.973927,"caller":"claimtxman/claimtxman.go:158","msg":"Ignoring deposit: 3: dest_net: 12721, we are:1","pid":7,"version":"v0.4.2"}
```
</details>
<details open>
<summary>Answer</summary>
Fix the `network_id` variable to make sure it matches the deposit's destination network. The check that produces the "Ignoring deposit" log is shown below (a sketch for reading the on-chain network ID follows the snippet):
```!
if tm.l2NetworkID != deposit.DestinationNetwork {
	log.Infof("Ignoring deposit: %d: dest_net: %d, we are:%d", deposit.DepositCount, deposit.DestinationNetwork, tm.l2NetworkID)
	continue
}
```
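To verify which value is correct, here is a hedged sketch that reads the bridge contract's public `networkID` getter on L2 and prints it for comparison with the configured value (the RPC URL and bridge address are placeholders, and the minimal ABI assumes the standard PolygonZkEVMBridge interface):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"strings"

	"github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/accounts/abi"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

// Minimal ABI for the public networkID getter (assumes the standard
// PolygonZkEVMBridge contract, which declares `uint32 public networkID`).
const bridgeABI = `[{"inputs":[],"name":"networkID","outputs":[{"internalType":"uint32","name":"","type":"uint32"}],"stateMutability":"view","type":"function"}]`

func main() {
	ctx := context.Background()

	client, err := ethclient.Dial("http://localhost:8123") // placeholder L2 RPC URL
	if err != nil {
		log.Fatal(err)
	}
	bridgeAddr := common.HexToAddress("0x0000000000000000000000000000000000000000") // placeholder: your L2 bridge address

	parsed, err := abi.JSON(strings.NewReader(bridgeABI))
	if err != nil {
		log.Fatal(err)
	}
	data, err := parsed.Pack("networkID")
	if err != nil {
		log.Fatal(err)
	}
	out, err := client.CallContract(ctx, ethereum.CallMsg{To: &bridgeAddr, Data: data}, nil)
	if err != nil {
		log.Fatal(err)
	}
	values, err := parsed.Unpack("networkID", out)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("bridge networkID on chain: %d (compare with the network_id the claim tx manager is configured with)\n", values[0].(uint32))
}
```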
</details>
---
#### During bridging, Error: cannot estimate gas: transaction may fail or may require manual gas limit
<details open>
<summary>Details</summary>

```!
at Logger.makeError (/home/debian/zkevm-contracts/node_modules/@ethersproject/logger/src.t
s/index.ts:269:28)
at Logger.throwError (/home/debian/zkevm-contracts/node_modules/@ethersproject/logger/src.
ts/index.ts:281:20)
at /home/debian/zkevm-contracts/node_modules/@ethersproject/abstract-signer/src.ts/index.t
s:301:31
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async Promise.all (index 6) {
reason: 'cannot estimate gas; transaction may fail or may require manual gas limit',
code: 'UNPREDICTABLE_GAS_LIMIT',
error: ProviderError: contract creation code storage out of gas
at HttpProvider.request (/home/debian/zkevm-contracts/node_modules/hardhat/src/internal/
core/providers/http.ts:88:21)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at EthersProviderWrapper.send (/home/debian/zkevm-contracts/node_modules/@nomiclabs/hard
hat-ethers/src/internal/ethers-provider-wrapper.ts:13:20),
```
</details>
<details open>
<summary>Answer</summary>
The error doesn't seem to be with the RPC but with gas estimation (`contract creation code storage out of gas`). We faced this issue before while running `npm run deploy:testnet:ZkEVM:sepolia` and getting the same error; the way we fixed it was by sending more funds to the deployer address.
</details>
---
#### FATAL cmd/run.go:122 error getting forkIDs. Error: error getting forkIDs from db. Error: ERROR: relation "state.fork_id" does not exist (SQLSTATE 42P01)
<details open>
<summary>Details</summary>
The setup has currently been broken down to just Docker containers in order to debug it, but some relations are always missing in the database; to be exact:
```!
FATAL cmd/run.go:122 error getting forkIDs. Error: error getting forkIDs from db. Error: ERROR: relation "state.fork_id" does not exist (SQLSTATE 42P01)
```
Could you please help debug this? At which stage is fork_id added to the DB? Currently the setup consists of the DB, prover, and zkevm-node, and zkevm-node constantly fails because this relation is missing, but I couldn't find where it is created.
</details>
<details open>
<summary>Answer</summary>
</details>
---
#### Network Error: We cannot connect to the Ethereum node.
<details open>
<summary>Details</summary>

</details>
<details open>
<summary>Answer</summary>
It seems like the Bridge-UI service cannot reach the L1 and L2 RPCs. This is most likely due to one of the following:
* The L1 and L2 RPC endpoints not being set up properly
* The endpoints being set up properly but not reachable from the Bridge-UI service
* The Bridge UI image requiring these values as environment variables, which may not have been passed in properly
Could you ssh into the service and run:
* `env` to see if the variables are correctly assigned
* `netstat -natp` to check if the addresses/ports are connected?
</details>
---
#### Best practices to save Sequencer costs?
<details open>
<summary>Details</summary>
To save sequencer costs, what should I do? From my understanding:
Increase `BatchMaxDeltaTimestamp`; this controls the batch sequencing period.
Increase `L2BlockMaxDeltaTimestamp`; this results in fewer empty blocks, so blocks are more compact.
What about `MaxTxSizeForL1`, does this also have an effect? Because if a batch is larger than this size, will it also trigger sequencing a batch?
</details>
<details open>
<summary>Answer</summary>
Adjusting `BatchMaxDeltaTimestamp` controls the time after which a batch is closed even if it is not full.
`L2BlockMaxDeltaTimestamp` sets the maximum time before an L2 block is closed. An L2 block can be closed before `L2BlockMaxDeltaTimestamp` elapses if the batch that contains it must be closed, either because the batch runs out of resources (it gets full) or because the batch timeout is triggered (and it is not a multiple of the L2 block max delta timestamp). The sketch below illustrates how the two timeouts interact.
The `MaxTxSizeForL1` variable sets the maximum size a single transaction can have. I think the default is 128 KB (131072 bytes). The docs mention: "This field has non-trivial consequences: larger transactions than 128KB are significantly harder and more expensive to propagate; larger transactions also take more resources to validate whether they fit into the pool or not."
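Purely as an illustration of how the two timeouts interact (this is not the sequencer's actual code; the names and structure are simplified assumptions):

```go
package main

import (
	"fmt"
	"time"
)

// Simplified stand-in for the relevant finalizer settings.
type finalizerCfg struct {
	BatchMaxDeltaTimestamp   time.Duration // force-close a batch after this much time
	L2BlockMaxDeltaTimestamp time.Duration // force-close an L2 block after this much time
}

// shouldCloseBlock mirrors the rule described above: an L2 block closes when its
// own timeout expires, or earlier if the enclosing batch is full or times out.
func shouldCloseBlock(cfg finalizerCfg, blockOpenedAt, batchOpenedAt time.Time, batchFull bool, now time.Time) bool {
	switch {
	case batchFull:
		return true
	case now.Sub(batchOpenedAt) >= cfg.BatchMaxDeltaTimestamp:
		return true
	case now.Sub(blockOpenedAt) >= cfg.L2BlockMaxDeltaTimestamp:
		return true
	default:
		return false
	}
}

func main() {
	cfg := finalizerCfg{BatchMaxDeltaTimestamp: 20 * time.Second, L2BlockMaxDeltaTimestamp: 3 * time.Second}
	now := time.Now()
	// Block opened 2s ago, batch opened 19s ago: nothing expired yet.
	fmt.Println(shouldCloseBlock(cfg, now.Add(-2*time.Second), now.Add(-19*time.Second), false, now))
	// Block opened 2s ago, batch opened 21s ago: the batch timeout closes the block early.
	fmt.Println(shouldCloseBlock(cfg, now.Add(-2*time.Second), now.Add(-21*time.Second), false, now))
}
```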
</details>
---
#### How compatible is the Polygon CDK EVM with smart contracts that run on Ethereum?
<details open>
<summary>Details</summary>
</details>
<details open>
<summary>Answer</summary>
Not just smart contracts: most applications, tools, and infrastructure built on Ethereum can immediately port over to Polygon zkEVM, with limited to no changes needed.
With the latest CDK Etrog upgrade, the zkEVM is almost type 2, meaning fully EVM-equivalent according to Vitalik's article.
The Etrog upgrade comes with support for most of the EVM's precompiled contracts, leaving out only the rarely used RIPEMD-160 and blake2f.
Reference:
https://docs.polygon.technology/zkEVM/spec/evm-differences/
https://docs.polygon.technology/zkEVM/architecture/protocol/etrog-upgrade/
https://vitalik.eth.limo/general/2022/08/04/zkevm.html
</details>
---
#### RPC Provider throughput exceeded, and Data Availability configuration doesn't work on Forkid7
<details open>
<summary>Details</summary>
```!
sequencer-001 zkevm-node[634942]: {"level":"fatal","ts":1706494050.4076195,"caller":"cmd/run.go:127","msg":"error getting data availability protocol name: 429 Too Many Requests: {\"jsonrpc\":\"2.0\",\"id\":3,\"error\":{\"code\":429,\"message\":\"Your app has exceeded its compute units per second capacity. If you have retries enabled, you can safely ignore this message. If not, check out https://docs.alchemy.com/reference/throughput\"}}\n/opt/zkevm-node/log/log.go:142 github.com/0xPolygonHermez/zkevm-node/log.appendStackTraceMaybeArgs()\n/opt/zkevm-node/log/log.go:223 github.com/0xPolygonHermez/zkevm-node/log.Fatal()\n/opt/zkevm-node/cmd/run.go:127 main.start()\n/root/go/pkg/mod/github.com/urfave/cli/v2@v2.26.0/command.go:277 github.com/urfave/cli/v2.(*Command).Run()\n/root/go/pkg/mod/github.com/urfave/cli/v2@v2.26.0/command.go:270 github.com/urfave/cli/v2.(*Command).Run()\n/root/go/pkg/mod/github.com/urfave/cli/v2@v2.26.0/app.go:335 github.com/urfave/cli/v2.(*App).RunContext()\n/root/go/pkg/mod/github.com/urfave/cli/v2@v2.26.0/app.go:309 github.com/urfave/cli/v2.(*App).Run()\n/opt/zkevm-node/cmd/main.go:191 main.main()\n/opt/go-1.21.3/go/src/runtime/proc.go:267 runtime.main()\n","pid":634942,"version":"v0.0.3-159-ga03bdeb5","stacktrace":"main.start\n\t/opt/zkevm-node/cmd/run.go:127\ngithub.com/urfave/cli/v2.(*Command).Run\n\t/root/go/pkg/mod/github.com/urfave/cli/v2@v2.26.0/command.go:277\ngithub.com/urfave/cli/v2.(*Command).Run\n\t/root/go/pkg/mod/github.com/urfave/cli/v2@v2.26.0/command.go:270\ngithub.com/urfave/cli/v2.(*App).RunContext\n\t/root/go/pkg/mod/github.com/urfave/cli/v2@v2.26.0/app.go:335\ngithub.com/urfave/cli/v2.(*App).Run\n\t/root/go/pkg/mod/github.com/urfave/cli/v2@v2.26.0/app.go:309\nmain.main\n\t/opt/zkevm-node/cmd/main.go:191\nruntime.main\n\t/opt/go-1.21.3/go/src/runtime/proc.go:267"}
```
When checking the Alchemy logs, the `eth_call` used to get the data availability protocol name appears to succeed, but it is being rate limited and therefore fails as well.
```!
execution reverted is all coming from -> // ChainID is a free data retrieval call binding the contract method 0xadc879e9.
```
</details>
<details open>
<summary>Answer</summary>
On Forkid7, the contract scripts automatically set the dataAvailability protocol and set up the committee ***only if the deployer is the same as the admin address***.
The above error should be resolvable by setting the deployer/sequencer addresses to equal the admin address.
In the later forkids, a `test` flag has been added as a workaround for local deployments.
</details>
---
#### FATAL aggregator/aggregator.go:1155 failed to send batch verification
<details open>
<summary>Details</summary>
```!
2024-02-26T04:27:50.836Z FATAL aggregator/aggregator.go:1155 failed to send batch verification, TODO: review this fatal and define what to do in this case {"pid": 7, "version": "v0.5.11-RC15-17-gd410cc0a", "owner": "aggregator", "monitoredTxId": "proof-from-11383-to-11384"}
github.com/0xPolygonHermez/zkevm-node/aggregator.(*Aggregator).handleMonitoredTxResult
/src/aggregator/aggregator.go:1155
github.com/0xPolygonHermez/zkevm-node/aggregator.(*Aggregator).Start.func1
/src/aggregator/aggregator.go:116
github.com/0xPolygonHermez/zkevm-node/ethtxmanager.(*Client).ProcessPendingMonitoredTxs
/src/ethtxmanager/ethtxmanager.go:664
github.com/0xPolygonHermez/zkevm-node/aggregator.(*Aggregator).Start
/src/aggregator/aggregator.go:115
main.runAggregator
```
</details>
<details open>
<summary>Answer</summary>
The problem is that the node cannot send the tx to L1, so the recommendation is to check the last verified batch in your SC. Both cases below are also sketched in code after this list.
If it is batch 11382, you should fix the state DB so the verification is recomputed:
* Stop the aggregator and ethtxmanager
* In the state DB, remove the last entry with id = 'proof-from-11383-to-11384' from state.monitored_txs
* In the state DB, remove the content of state.proof
* Restart the aggregator and ethtxmanager
If it is batch 11384, the tx has already been sent and you only need to update the state:
* Stop the aggregator and ethtxmanager
* In the state DB, set status = 'done' for the entry with id = 'proof-from-11383-to-11384' in state.monitored_txs
* Restart the aggregator and ethtxmanager
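The same steps can be expressed as a hedged pgx sketch (stop the aggregator and ethtxmanager first; the DSN is a placeholder and the schema/table names follow the state DB layout referenced above):

```go
package main

import (
	"context"
	"log"

	"github.com/jackc/pgx/v4"
)

func main() {
	ctx := context.Background()
	conn, err := pgx.Connect(ctx, "postgres://user:password@localhost:5432/state_db") // placeholder DSN
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	const txID = "proof-from-11383-to-11384"
	lastVerifiedIs11382 := true // set according to what the SC reports

	if lastVerifiedIs11382 {
		// Case 1: verification never landed on L1 -> drop the monitored tx and the cached proofs so they are recomputed.
		if _, err := conn.Exec(ctx, `DELETE FROM state.monitored_txs WHERE id = $1`, txID); err != nil {
			log.Fatal(err)
		}
		if _, err := conn.Exec(ctx, `DELETE FROM state.proof`); err != nil {
			log.Fatal(err)
		}
	} else {
		// Case 2: the tx is already on L1 -> just mark the monitored tx as done.
		if _, err := conn.Exec(ctx, `UPDATE state.monitored_txs SET status = 'done' WHERE id = $1`, txID); err != nil {
			log.Fatal(err)
		}
	}
	log.Println("done; restart the aggregator and ethtxmanager")
}
```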
</details>
---
#### When could the (executor) `ResourceExhausted` error be thrown?
<details open>
<summary>Details</summary>

</details>
<details open>
<summary>Answer</summary>
The `ResourceExhausted` error is thrown in the component that is calling the executor.
It happens because the executor's resources are not enough to keep up with the requests. Therefore, having dedicated executors for critical components makes sense, or, even better, a load-balanced setup with multiple RPC/executor pairs (individual endpoints may still report the same `ResourceExhausted` error, but this likely won't halt the network because the other endpoints keep running).
Testing different values and adjusting `maxExecutorThreads` in the executor config based on the hardware setup may also be helpful. A rough memory budget is:
```!
worst_case_memory * maxExecutorThreads + dbMTCacheSize + dbProgramCacheSize + baseline_mem_usage
worst_case_memory: the amount of memory that can be used by a single request
maxExecutorThreads: the configuration in your json file for how many threads the executor can use before hitting some kind of resource exhaustion limit
dbMTCacheSize: a configurable cache parameter
dbProgramCacheSize: a configurable cache parameter
baseline_mem_usage: essentially how much memory is used by the executor when it starts and hasn't received any traffic yet. the memory will never go below this amount
```
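As a worked example with purely hypothetical numbers (the real figures depend on your hardware and executor configuration):

```go
package main

import "fmt"

func main() {
	// All values below are assumptions for illustration only.
	const (
		worstCaseMemoryGB    = 2.0 // memory a single request can use in the worst case
		maxExecutorThreads   = 16  // from the executor JSON config
		dbMTCacheSizeGB      = 8.0 // configurable cache
		dbProgramCacheSizeGB = 1.0 // configurable cache
		baselineMemUsageGB   = 4.0 // memory used by an idle executor
	)
	total := worstCaseMemoryGB*maxExecutorThreads + dbMTCacheSizeGB + dbProgramCacheSizeGB + baselineMemUsageGB
	fmt.Printf("rough upper bound: %.0f GB of RAM\n", total) // 2*16 + 8 + 1 + 4 = 45 GB
}
```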
</details>
---
#### We are sending an eth_call to a non-view function with an empty from address. This function also checks that tx.origin must be address(0) to prevent execution. What is the reason for changing the from arg to 0x1111...111 if it's not set or is empty?
<details open>
<summary>Details</summary>
```!
func (args *TxArgs) ToTransaction(ctx context.Context, st StateInterface, maxCumulativeGasUsed uint64, root common.Hash, defaultSenderAddress common.Address, dbTx pgx.Tx) (common.Address, *types.Transaction, error) {
	sender := defaultSenderAddress
	nonce := uint64(0)
	if args.From != nil && *args.From != state.ZeroAddress {
		sender = *args.From
```
</details>
<details open>
<summary>Answer</summary>
You can check in the following [issue](https://github.com/0xPolygonHermez/zkevm-node/issues/2869) the difference in behaviour between mainnet and zkEVM for an unsigned transaction (eth_call).
The reason behind using 0x1111...111 is that when we check ecrecover in the ROM, we assume the resulting address is != 0; otherwise, the signature is invalid.
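For completeness, a minimal go-ethereum sketch (not the node's own code) showing the `from` field being set explicitly in the call, since leaving it empty is what triggers the default-sender substitution; the endpoint, addresses and calldata are placeholders:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	client, err := ethclient.Dial("http://localhost:8123") // placeholder L2 RPC URL
	if err != nil {
		log.Fatal(err)
	}
	to := common.HexToAddress("0x0000000000000000000000000000000000001234") // placeholder contract address
	msg := ethereum.CallMsg{
		// Explicit sender: if From is left empty, the node substitutes the default
		// sender address (0x1111...111) before running the call.
		From: common.HexToAddress("0x000000000000000000000000000000000000dEaD"),
		To:   &to,
		Data: nil, // put the ABI-encoded calldata for the non-view function here
	}
	out, err := client.CallContract(context.Background(), msg, nil) // nil = latest block
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("return data: %x\n", out)
}
```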
</details>
---