[Quick contemporaneous notes by Ben Edgington; fka "Eth2 Implementers' Call"]
Agenda: https://github.com/ethereum/pm/issues/660
Livestream: https://youtu.be/IK1jNCQz5yk
Summary document. (Discussion brought forward from later in the agenda.)
The `getStateRandao` endpoint from the pre-release Beacon API (Teku has it already, Prysm has a pre-release, Lighthouse on unstable). See here.
[AlexS] We changed the withdrawals to remove the queue. The spec now says that, in the worst case, we sweep the entire validator set. We may wish to bound this. [Danny] This adds a little complexity for probably small benefit, so I am weakly against. [Lion] Agreed - it feels like it might be needed, but the need has not yet been demonstrated in testing. [Danny] It's not a classic DoS vector. [Alex] If the validator set were double the size, would scanning the whole thing be an issue? [Lion] The risk is having to do a full sweep per block rather than per epoch.
[Me] How much extra complexity is involved? [Alex] It's non-trivial if we want to ensure it treats withdrawals fairly.
[Danny] "I've kind of changed my mind in this discussion." [Alex] "I was weakly for, but am now weakly against."
Do we care about fairness? If not, then it is very easy. [Danny] Yes: if the sweep is bounded we must advance the pointer, or else there could be edge cases that make some validators unwithdrawable.
Action Alex to keep working on the PR.
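The fairness point above can be illustrated with a small sketch (hypothetical code, not the actual spec change Alex is working on): a bounded sweep must advance its start pointer past the whole scanned range each block, whether or not any withdrawals were found, so every validator is eventually reached.

```python
# Hypothetical sketch of a bounded withdrawals sweep - not spec code.
MAX_VALIDATORS_SWEPT_PER_BLOCK = 4  # assumed bound, for illustration only

def bounded_sweep(validators, withdrawable, start_index):
    """Scan at most MAX_VALIDATORS_SWEPT_PER_BLOCK validators starting at
    start_index (wrapping around), collecting those eligible for withdrawal.
    Returns (withdrawal_indices, next_start_index)."""
    n = len(validators)
    withdrawals = []
    for i in range(MAX_VALIDATORS_SWEPT_PER_BLOCK):
        index = (start_index + i) % n
        if withdrawable(validators[index]):
            withdrawals.append(index)
    # Advance the pointer past the swept range regardless of hits, so that
    # no validator can be skipped forever (the fairness requirement).
    next_start = (start_index + MAX_VALIDATORS_SWEPT_PER_BLOCK) % n
    return withdrawals, next_start
```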
We previously agreed not to sign the blobs, but that makes them mutable in transit. So, should we sign them after all, or do full blob verification to avoid this? See this issue for discussion and benchmarks.
Full blob verification looks feasible; PR here. For the signature-verification alternative, a flat hash would be much faster than SSZ.
Some inconclusive discussion on how blob verification scales…
[Proto in chat] Since we need to verify the blobs at some point, this is better - adding signatures adds complexity.
Need to investigate the benchmark numbers more. Decision: We will go ahead with full blob verification on the testnets for now and make sure there is no problem in practice. Implementing this is not a blocker for standing up testnets.
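As a rough illustration of the decision (stand-in code only: the real protocol uses KZG commitments, which a plain hash cannot replace cryptographically), full blob verification means recomputing each blob's commitment from the received data, so any in-transit mutation is caught without adding signatures:

```python
import hashlib

# Stand-in "commitment": a hash plays the role of the KZG commitment here,
# purely to show the shape of verify-content-against-commitment.
def commit(blob: bytes) -> bytes:
    return hashlib.sha256(blob).digest()

def verify_blobs(blobs, commitments) -> bool:
    # Full blob verification: recompute each commitment from the received
    # blob data. A mutated blob no longer matches its commitment, so no
    # separate signature over the blobs is needed to detect tampering.
    return len(blobs) == len(commitments) and all(
        commit(b) == c for b, c in zip(blobs, commitments)
    )
```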
See here.
Two main parts:
- a `getCapabilities` method;
- method statuses: `final`, `deprecated`, `experimental`.
…[Lightclient] Overloading the use of hardforks makes sense for implementers, but is not a great way to understand the EngineAPI as a whole. This is already a pain point with the consensus specs. [But the Annotated Spec solves this!]
[Mikhail] An advantage of tying the structure to hard forks is that once implemented, the files can be immutable.
[Danny] Who is the EngineAPI spec for? Almost entirely for client devs, since it is effectively internal. [LC] A changelist to index into the files would help.
[Danny] We have discussed moving to a more functional description like the Beacon APIs, but still using Markdown. [Mikhail] MD is definitely easier to update.
[Jacek] The `getCapabilities` API - this kind of thing is not always terribly useful, as the info gets stale when the exec client is upgraded. Standardising error codes for deprecated and removed methods may be more useful, and consensus clients can fall back accordingly. [Mikhail] That could also work - it just looks ugly! It also multiplies the number of calls, as otherwise we cannot detect when the exec client has been upgraded to support new methods.
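Jacek's error-code alternative can be sketched as follows. The Engine API method names are real, but the transport layer and helper are invented for illustration; -32601 is the standard JSON-RPC "method not found" code. Note how probing the newer version can cost an extra round trip, which is Mikhail's point about multiplying calls.

```python
METHOD_NOT_FOUND = -32601  # standard JSON-RPC error code

class RpcError(Exception):
    def __init__(self, code):
        self.code = code

def new_payload(send, payload):
    """Hypothetical consensus-client helper: try the newest method version
    first, and fall back when the exec client reports it unsupported,
    instead of querying a getCapabilities method up front."""
    try:
        return send("engine_newPayloadV2", payload)
    except RpcError as e:
        if e.code != METHOD_NOT_FOUND:
            raise
        # Exec client not yet upgraded: retry with the older version.
        return send("engine_newPayloadV1", payload)
```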
[AndrewA] Strict versioning may be better, since data types change at forks. Do we need v2 to be backward compatible with v1? [Jacek] Backwards compatibility is nice in general, and has proved useful around beacon chain upgrades. [Mikhail] Strong view that we should not support more than, say, two forks with one version. Backwards compatibility for one fork would be OK.
Action: Discuss the following on the issue:
- `getCapabilities` API vs error codes

This is not particularly blocking for Shanghai. Comments in the next week, for discussion either at ACD next week or here in two weeks.
None.