---
tags: eth2devs
description: Notes from the regular proof of stake [Eth2] implementers call
image: https://benjaminion.xyz/f/favicon-96x96.png
---

# PoS Implementers’ Call #68 - 2021-07-15

[Quick contemporaneous notes by Ben Edgington; fka "Eth2 Implementers' Call"]

Agenda: https://github.com/ethereum/eth2.0-pm/issues/226
Livestream: https://youtu.be/-Bzq4s8Lr5E

## Altair Devnet 1

[Pari] Launched the devnet two hours ago. Forked to Altair at epoch 10. Finalising, but down from 100% to 80% participation [looks like one client didn't make it through the fork - it's Prysm, see below].

## Client updates

**Prysm**

Mostly optimising Altair, sync committee receiver side. Have an issue on the devnet - not propagating blocks to peers. Still figuring it out. On the Phase 0 side, enhancing the slasher by backfilling attestations. Done with the Eth2 API implementation.

**Grandine**

Release with small fixes and optimisations. Trying to run forks - this work has suggested an interesting discussion around the Merge [see [below](#Multiple-runtimes-for-forks)].

**Lighthouse**

Optimising Altair: added a one-pass method for balances and indices. Changed some metrics due to Altair changes around attestations. Lots of upgrades to networking, and refactoring. Will be included in the v1.5.0 release; this will be a big change. A few new features.

**Nimbus**

Focus has been on Altair. Very few missing features; validation is not entirely complete. Need to work more on light client sync. Weak subjectivity sync partially implemented, but no back-fill yet. Finished testing with other clients' beacon node/validator combinations (Nimbus + LH etc.).

**Teku**

Finishing up Altair. Doing profiling and tidying of APIs. Implemented a tool to migrate archive nodes from RocksDB to LevelDB.

**Lodestar**

Updated docs. Stress testing the node against validators. Optimising gossip handlers and signature domains. Participated in Devnet 1, and it went well. Minted the first block! Progress on the REST-based light client.
## Altair

### Release and Testing

Lots of test coverage improvements in Beta 1, out yesterday. Clients are passing the enhanced testing. A few more test cases to come yet.

Move from Alpha to Beta: all clients had participated in devnets and given feedback on the Alphas.

The sync aggregator selection constant was set too low at 4; changed to 16 in Beta 1 [thanks to Jim McD]. Could upgrade Devnet 1 on the fly to test this.

### Planning

The last week of July was planned for upgrading Pyrmont if all went well on devnets. One client is not quite there yet - what does the meeting think about the timing of this? More devnets needed?

[AdrianS] Main concern is that we haven't yet seen sync committees working _very_ well. We believe we know the issues: Prysm currently offline; too low a selector for aggregators. But it would be good to see it near 100% working before committing.

Proposal to update nodes on Devnet 1 next Monday/Tuesday. [It's not a consensus issue, so doesn't require a fork.] Decide towards the end of the week whether we need another devnet. Then decide at the next meeting in two weeks if/how to fork Pyrmont.

### Other

Lighthouse plan to drop Devnet 0 nodes soon. Other clients are planning this or have already done it. No issue.

## Research Updates

None.

## Some Merge discussion points

See [here](https://github.com/ethereum/eth2.0-pm/issues/226#issuecomment-880393878). The dedicated Merge call was cancelled, but there were a few points for discussion. [Worth listening to if you are interested; I didn't catch the whole discussion.]

> Replace `mixHash` field with `random` and expose it via `DIFFICULTY` opcode

Mixhash is not currently exposed in the EVM. [Some discussion I missed due to claiming a POAP :stuck_out_tongue_winking_eye:]

Setting `DIFFICULTY` to 0 or 1 rather than the RANDAO value reduces the header size by 31 bytes, which is nice. What we replace `DIFFICULTY` with might hide bugs where execution clients are still following the heaviest-chain PoW fork choice rule somewhere.

Mikhail to draft a spec for discussion.
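[For illustration, here is a rough sketch of how the proposal could behave. The spec was still to be drafted at the time of the call, so the types and names below (`ExecutionBlock`, `opcode_difficulty`, `is_post_merge`) are made up for this example.]

```python
# Illustrative sketch only: one reading of the proposal above. Post-merge,
# the header's mixHash slot carries the beacon chain's RANDAO value, and
# the DIFFICULTY opcode returns that value instead of the block difficulty.
from dataclasses import dataclass

@dataclass
class ExecutionBlock:
    difficulty: int   # could be fixed to 0 or 1 post-merge, per the discussion
    mix_hash: bytes   # repurposed to carry the RANDAO value post-merge

def opcode_difficulty(block: ExecutionBlock, is_post_merge: bool) -> int:
    """What the DIFFICULTY opcode would push onto the stack."""
    if is_post_merge:
        # Expose the randomness via the existing opcode.
        return int.from_bytes(block.mix_hash, "big")
    return block.difficulty

# Pre-merge: the opcode behaves as today.
pow_block = ExecutionBlock(difficulty=12_345, mix_hash=b"\x00" * 32)
assert opcode_difficulty(pow_block, is_post_merge=False) == 12_345

# Post-merge: the opcode returns the RANDAO value carried in mixHash.
pos_block = ExecutionBlock(difficulty=0, mix_hash=b"\x01" + b"\x00" * 31)
assert opcode_difficulty(pos_block, is_post_merge=True) == 2 ** 248
```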
> WS checkpoint between the Merge fork and terminal PoW block

The Merge is two steps: (1) the Merge fork, which enables the logic to embed the execution payload; (2) the actual Merge event, when the total difficulty reaches the transition point. There will be about a week between these events. What if a weak subjectivity checkpoint occurs between them? Fresh nodes that begin with a WS checkpoint in this period will need to get the transition total difficulty from somewhere. Either we need to provide the TD to the syncing node, or prevent them from doing this.

Note that we don't really have standards yet around serving WS checkpoints. Simplest would be to avoid using any checkpoint in that week. This conversation can continue over the next months, and we need to be sure to improve WS checkpoint provision in general.

Q. Could we cancel the Merge during that week - will clients be shipped with this capability built in? A week is quite short; it might be a good idea. Another issue is having a manual override for the terminal proof of work block in case the TD never reaches the planned value. Setting the transition TD low or high at runtime could serve both purposes with the same logic.

> enforce `terminal_pow_block.parent_block.total_difficulty < transition_total_difficulty`

There are rules about what the terminal PoW block must look like. Currently it must have TD > transition TD. Enforcing this extra condition might make it safer, and provide less variance. But it could have execution layer reversion issues in the case that the beacon chain is not live for a couple of epochs, which means selecting a later block. The consequences, though, are no worse than post-merge behaviour if the beacon chain loses liveness.

Note that the execution side does not know the transition TD; it is driven by the consensus layer. We might need to communicate the TTD across to the execution client.

### Multiple runtimes for forks

Grandine is experimenting with running multiple forks concurrently.
This could be used for the Merge. If there is an incident during the Merge, we could have two runtimes: one Altair client that runs straight through, and one Merge client that forks. If successful, use social consensus to stop building on Altair. If not successful, keep on building on Altair. This reduces the attraction for adversaries to mount a coordinated attack.

Most clients don't run like this, and it might be complex to implement. It would double resource requirements (CPU, data, bandwidth) for stakers.

"Failure" of a fork is hard to codify. It would more likely look like emergency fixes than an abort. There is something to be said for having plan B look more like "make plan A succeed" than dividing our effort. Failover code is messy and hard to write.

A fuller write-up of the idea would be useful. Grandine plans to try this approach for Altair, and will share findings from this.

## Spec discussion

None.

## Open discussion

Prysm seems to be back on Devnet 1, and sync committees are already looking much better :tada:

* * *

# Chat highlights

From Micah Zoltu to Everyone: 03:34 PM
: Will clients ship with the ability to "cancel" the merge during that 1 week window? Or will devs need to hack out a "fix" during that 1 week plus get it delivered to everyone?

From Micah Zoltu to Everyone: 03:39 PM
: Is total difficulty an actual consensus rule? I know it kind of is as a derivative value, but clients don't actually come to consensus on it, so it is *possible* that not all clients (because of bugs) agree what total difficulty is...

From Adrian Sutton to Everyone: 03:45 PM
: @Micah - no, total difficulty is not included in consensus - only the difficulty for each block. But total difficulty is then just the sum of each difficulty, so it's reasonably difficult to stuff up, and if you did get it wrong currently it would still lead to weird forking issues. Plus it’s the kind of thing devs would flush out pretty quickly.
From Micah Zoltu to Everyone: 03:46 PM
: Might be good to verify that all clients actually agree on total difficulty. And also all track it! It is possible that not all clients actually track total difficulty, since it isn't actually required I don't think.

From Adrian Sutton to Everyone: 03:46 PM
: They definitely all track it - you can’t follow the right fork without knowing it and can’t participate in devp2p.

From Micah Zoltu to Everyone: 03:46 PM
: Hmm, it is communicated over devp2p?

From Adrian Sutton to Everyone: 03:47 PM
: In fact devp2p probably also verifies that they get the same total difficulty, since you should disconnect any client that claims a total difficulty and can’t back it up with actual blocks.

From Alex Stokes to Everyone: 03:47 PM
: p sure it is in the devp2p handshake

From Micah Zoltu to Everyone: 03:47 PM
: Hmm, OK. I didn't realize it was part of devp2p.

From Adrian Sutton to Everyone: 03:47 PM
: Yes, devp2p includes total difficulty with one of the block gossip messages I think, and definitely in the handshake.

From Alex Stokes to Everyone: 03:47 PM
: (for the eth subproto)

From Micah Zoltu to Everyone: 03:48 PM
: 👍
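[Editor's note: the total-difficulty bookkeeping discussed in the chat, and the terminal-block condition from the Merge discussion above, can be sketched as follows. This is an illustrative toy, not client code; the helper names and the TTD value are made up.]

```python
# Illustrative toy only: total difficulty is the running sum of each
# block's difficulty, and the terminal PoW block is the first block whose
# TD reaches the transition TD while its parent's TD is still below it.

TRANSITION_TOTAL_DIFFICULTY = 100  # made-up value for the example

def total_difficulties(difficulties):
    """Cumulative sum of per-block difficulties, genesis first."""
    total, tds = 0, []
    for d in difficulties:
        total += d
        tds.append(total)
    return tds

def is_terminal(td, parent_td):
    """Terminal-block check, including the extra parent condition discussed above."""
    return td >= TRANSITION_TOTAL_DIFFICULTY and parent_td < TRANSITION_TOTAL_DIFFICULTY

tds = total_difficulties([30, 30, 30, 30])
assert tds == [30, 60, 90, 120]
assert is_terminal(tds[3], tds[2])      # the fourth block crosses the TTD
assert not is_terminal(tds[2], tds[1])  # the third block does not
```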