[Quick contemporaneous notes by Ben Edgington; fka "Eth2 Implementers' Call"]
Agenda: https://github.com/ethereum/pm/issues/667
Livestream: https://youtu.be/KFc1sWYlVZ4
Last week's All Core Devs call recap (of consensus stuff). Danny understands that we will keep withdrawals separately specified from EIP-4844, and slated for Capella. If they do end up getting implemented together, we would not combine the specs, but would just stagger the upgrade. This was confirmed to be the common understanding on this call.
Running two testnets (no EIP-4844): one post-merge, one pre-merge (to facilitate Prysm). Many participants… (see recording for the details). Full withdrawals working fine.
Nimbus not yet participating, the consensus layer withdrawals implementation is in progress.
BLS credentials change operation is being tested with ethdo and a subset of the clients. Gossip of change messages is not yet tested, but is about to be.
Plan to move to a longer-lived single testnet when Nethermind and Besu are ready, and Prysm can start from a post-merge genesis. (I.e. soon.)
Link.
[Alex] The bound is set very small, following feedback. No objections to moving forward with this. More withdrawals test cases coming to cover edge cases - by end of next week.
[I probably garbled some of the below…]
[SeanA] Issue to discuss around syncing blobs. When finality is delayed beyond the prune depth, it might be complex to validate blob availability. [Danny] The link between finalisation and data availability is not clear - availability should be checked regardless. [Proto] To clarify: if the chain has not finalised for 18 days (the pruning period), there is a portion of the blob data that we no longer have and that is not finalised. [Sean] Peers might lie about what the finalised epoch is, so we might need to reprocess part of the chain. [Jacek] We never trust; always verify. [Sean] Not feasible during an optimistic sync. Concern that this issue could introduce complexity. [Proto] There are a couple of edge cases, but we could simplify the spec to say: just verify the last 18 days, and verify finality as we currently do. [Sean] Needs more thought. [Terence] The spec says we cannot import a block without its blob - are we thinking of changing this? [Proto] For the most recent 18 days' worth, no change; beyond that, change it under non-finality.
Action: discuss further on the PR
Could we store blobs until last finality if it is earlier? [Micah] Doesn't this lead to unbounded state growth? [Jacek] No, because the inactivity leak would lead the chain to finalise within a month or so.
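The retention rule discussed above can be sketched as follows. `MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS` is the pruning-window constant from the EIP-4844 consensus specs (4096 epochs, roughly 18 days); the helper name and exact shape are illustrative, not spec text.

```python
MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS = 4096  # ~18 days at 32 slots/epoch, 12 s/slot

def must_retain_blobs(block_epoch: int, current_epoch: int, finalized_epoch: int) -> bool:
    """Keep blobs inside the pruning window, extended back to the last
    finalized epoch when finality lags behind the window (illustrative sketch)."""
    window_start = max(0, current_epoch - MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS)
    return block_epoch >= min(window_start, finalized_epoch)
```

Per Jacek's point, extending retention back to finality is still bounded: the inactivity leak forces the chain to finalise within roughly a month.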
Trusted setup: second audit, with Sigma Prime, is being completed. Moving into the public contribution period in the New Year. Starting to ramp up communication/education. If you have a community you want to reach, contact Trent or Carl. Aim is for it to be the largest "summoning ceremony" in crypto: 8-10k people.
For those with an interest in cryptography there will be a grants round for independent implementations. Maybe a day or two's work.
Public contribution period will be about 2 months.
[Jess - Coinbase] Aligned on the social/content side. Working on call to action in their product.
[Carl] Don't hesitate to ask if you have any questions or concerns about the process.
[Age] Attnets revamp. Add a single deterministic subnet to each node, and remove random subnets. Planning to implement in Lighthouse. What is the general view?
Will run on testnets first, but these have very different node distributions, and may require more subscriptions per node.
[Jacek] Most interesting thing is that it changes sync committee subnets: currently getting onto a sync committee mesh is a problem as they are sparse. Hope that this does not make other committee subnets too sparse.
[Danny] Goal is to make sure that all nodes contribute, while reducing bandwidth overall. Hard to validate a priori.
[Age] Will post some statistics. Nodes currently subscribed to 64 subnets would be reduced to 1. But often new peers cannot connect to these nodes anyway as they are full, so they might have limited usefulness. [Danny] With the new scheme, the connectivity scales with the network size rather than the distribution of validators. [Age] It should be a strict improvement.
[Jacek] Would like to see this added to the spec (as a matter of principle), even if it is backward compatible. Also note that it is all still based on honesty: nodes can lie about their subnet subscriptions. Would be good to validate if possible: but difficult to do.
Action: Age to post stats on this.
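The exact mapping for the revamped attnets scheme was still under discussion at the time, but the shape of the proposal is a deterministic subnet derived from the node's ID, rotated slowly over time. A minimal sketch, with an illustrative rotation period (not a final spec value):

```python
ATTESTATION_SUBNET_COUNT = 64
EPOCHS_PER_SUBNET_ROTATION = 256  # illustrative rotation period, not a spec constant

def deterministic_subnet(node_id: int, epoch: int) -> int:
    """One deterministic attestation subnet per node, derived from the node ID
    and rotated over time. Illustrative sketch of the proposal, not the
    actual formula under discussion."""
    return (node_id + epoch // EPOCHS_PER_SUBNET_ROTATION) % ATTESTATION_SUBNET_COUNT
```

Because the assignment is a pure function of the node ID, peers can compute each other's expected subnet, which addresses Jacek's point that subscriptions are otherwise taken on trust.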
[Age] Many peers are behind NAT. Discv5 hole punching needs some extra fields in the ENR. Lighthouse is working on this. Happy to work with other teams on this to test interop. [Jacek] Libp2p is doing AutoNAT 2.0 - how does this relate? [Age] That is entirely TCP based, and needs a relayer. We can omit the relayer as we have discovery. There are PRs for this on the Discv5 spec.
[Age] IPv6 support: need IPv6 compatible boot nodes. Considering enabling v6 on the Lighthouse bootnodes. This will allow for a future upgrade in LH clients. Any issues? Are we concerned about a network split between v4 and v6 nodes? LH plans to run only dual-stack for this reason, but it opens up the possibility of a partition more generally.
[Micah] Can we make it hard for people to fall into IPv6 only?
[Micah] Related to the previous point, it would be neat if nodes could use IPv6 to establish a NAT hole for IPv4. [Age] Should be possible.
Action: Age will put up an issue for comments.
[Mikhail] Previously proposed two endpoints for checkpoints: one for state providers, one for trust providers.
The Checkpointz tool now provides everything we need for state provision in a more sophisticated way. And it can do trust provision as well. Some concern that Checkpointz becomes the only provider of this data. There are a couple of other proposals in flight that mitigate that risk.
Therefore, requesting to deprecate this proposal. No objections on the call. Will close it by end of the week if nobody raises an objection.
[Oisin - Obol] Aggregation duty for distributed validator technology. Currently aggregation duty is decided by hashing a signature over the slot. This is not MPC-friendly. Proposal is to add a couple of API endpoints to make this work with distributed validators.
Action: teams to review, and decide on next call whether to implement.
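The selection rule Oisin refers to is the consensus spec's `is_aggregator` check, which hashes the validator's slot signature; since the signature only exists after signing, a distributed validator cannot easily run this inside MPC. A minimal sketch (committee size passed in directly for simplicity):

```python
import hashlib

TARGET_AGGREGATORS_PER_COMMITTEE = 16  # consensus spec constant

def is_aggregator(committee_size: int, slot_signature: bytes) -> bool:
    # A validator is an aggregator iff the first 8 bytes of the hash of its
    # slot signature, read little-endian, are zero modulo
    # (committee_size // TARGET_AGGREGATORS_PER_COMMITTEE).
    modulo = max(1, committee_size // TARGET_AGGREGATORS_PER_COMMITTEE)
    digest = hashlib.sha256(slot_signature).digest()
    return int.from_bytes(digest[:8], "little") % modulo == 0
```

With committees of 16 or fewer the modulo is 1, so every member aggregates; larger committees select roughly 16 aggregators at random.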
[Mikhail] Two main questions:
Do we need an engine_getCapabilities method? Jacek's optimistic strategy of falling back on error seems reasonable. There is no reason to support EngineAPI versions two or more generations old. Will not introduce this method if it's not needed. Action: consensus client devs - give feedback on your EngineAPI implementations as to whether this is needed.
Structure of EngineAPI spec documents. Two approaches:
a. New spec for each fork, or
b. Split by functionality.
See Mikhail's examples of each. Pros and cons exist for both.
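The fall-back-on-error strategy from the first question could look roughly like this, assuming the standard JSON-RPC "method not found" error code (-32601); the `send` transport is a hypothetical stub, not a real client API:

```python
METHOD_NOT_FOUND = -32601  # standard JSON-RPC error code

def new_payload_with_fallback(send, payload):
    """Try the newest engine_newPayload version first; on 'method not found',
    fall back to the previous one. `send(method, params)` is a hypothetical
    JSON-RPC transport returning the parsed response object."""
    for method in ("engine_newPayloadV2", "engine_newPayloadV1"):
        resp = send(method, [payload])
        if resp.get("error", {}).get("code") == METHOD_NOT_FOUND:
            continue  # EL predates this version; try the older method
        return method, resp
    raise RuntimeError("no supported engine_newPayload version")
```

Since only the current and previous fork's methods need support, the fallback list stays two entries long, which is what makes the dedicated capabilities method arguably unnecessary.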
[Micah] clientdiversity.org has wildly divergent measures of client numbers. [Danny] One is crawler-derived, the other from block data. The latter should be a fairly accurate measure of validator proportions; the former, of node count. The crawler data looks strange.
[Jacek] Heads up for a Nimbus release coming with a separate validator client component. Will be keeping the consolidated VC version around as well (with long-term support). [Saulius] Grandine is working on something similar - does the Nimbus VC use the Beacon API, or an internal API? [Jacek] The internal VC uses a private non-JSON version, with a couple of shortcuts. Can mix-and-match VC/BN combos via the JSON API.
Next call 2 weeks from today.
Clients must support listening on at least one of IPv4 or IPv6. Clients that do not have support for listening on IPv4 SHOULD be cognizant of the potential disadvantages in terms of Internet-wide routability/support. Clients MAY choose to listen only on IPv6, but MUST be capable of dialing both IPv4 and IPv6 addresses.
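As a concrete illustration of the "must dial both" requirement: resolving with an unspecified address family yields candidate addresses for whichever of IPv4/IPv6 the destination offers, and a dialer can try each in turn. A minimal sketch using the standard library, not actual client code:

```python
import socket

def dial(host: str, port: int, timeout: float = 5.0) -> socket.socket:
    """Resolve `host` for both address families (AF_UNSPEC) and try each
    candidate address in order until one connects."""
    last_err = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
        host, port, socket.AF_UNSPEC, socket.SOCK_STREAM
    ):
        try:
            s = socket.socket(family, socktype, proto)
            s.settimeout(timeout)
            s.connect(addr)
            return s
        except OSError as err:
            last_err = err
    raise last_err or OSError("no addresses to dial")
```

A dual-stack node would pair this with a listener bound to both families, which is why LH plans to run dual-stack only and avoid a v4/v6 partition.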