# Goerli TX DOS post-mortem

First reported at 8:40 CET by @jcksie and Jakub from Status on the ETH R&D Discord, and subsequently confirmed by Pari from EF DevOps: CPU usage suddenly spiked at 1am CEST on all geth nodes on Goerli. Disk writes and reads increased significantly, as did ingress and egress, on all geth nodes.

Unfortunately the geth team does not have Goerli nodes within our own Grafana instance, so we had to rely on EF DevOps to look into the issue. The EF DevOps dashboards do not expose all of the geth-specific metrics, since they are based on InfluxDB rather than Prometheus.

I had a hunch that the cause was txpool churn. @ding_wanning and I worked on a PR to mitigate some DOS issues during the Protocol Fellowship: https://github.com/ethereum/go-ethereum/pull/26648
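Without Grafana access, one crude way to watch txpool churn on a node is to poll geth's `txpool_status` JSON-RPC method and log the pending/queued counts over time; large swings between samples suggest heavy churn. Below is a minimal sketch, assuming a local node with HTTP-RPC enabled on port 8545 (the endpoint URL and polling interval are illustrative, not from the incident setup):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// txpool_status returns hex-quantity pending/queued transaction counts.
type statusResult struct {
	Result struct {
		Pending string `json:"pending"`
		Queued  string `json:"queued"`
	} `json:"result"`
}

func poolStatus(url string) (pending, queued uint64, err error) {
	req := []byte(`{"jsonrpc":"2.0","method":"txpool_status","params":[],"id":1}`)
	resp, err := http.Post(url, "application/json", bytes.NewReader(req))
	if err != nil {
		return 0, 0, err
	}
	defer resp.Body.Close()
	var out statusResult
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return 0, 0, err
	}
	// Strip the "0x" prefix and parse the hex quantities.
	pending, err = strconv.ParseUint(out.Result.Pending[2:], 16, 64)
	if err != nil {
		return 0, 0, err
	}
	queued, err = strconv.ParseUint(out.Result.Queued[2:], 16, 64)
	return pending, queued, err
}

func main() {
	// Sample every 5 seconds; rapid oscillation in pending/queued
	// between samples is a rough indicator of txpool churn.
	for {
		p, q, err := poolStatus("http://localhost:8545")
		if err != nil {
			fmt.Println("rpc error:", err)
		} else {
			fmt.Printf("%s pending=%d queued=%d\n",
				time.Now().Format(time.RFC3339), p, q)
		}
		time.Sleep(5 * time.Second)
	}
}
```

This only shows pool size, not why entries are being evicted and replaced, so it complements rather than replaces the geth-specific metrics mentioned above.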