potuz (@potuz)

Joined on Dec 11, 2020

  • A one-sentence, no-nonsense FAQ for why we should include EIP-7732 in Fusaka.
    What does ePBS mean? Think of the acronym as "enshrined payload/block separation" and you will start to think about it differently.
    Pipelining benefits
    Is the complexity worth it? Yes: all of the complexity lies in the separation of the consensus and execution blocks, with all the benefits explained in this FAQ. The auction mechanism is the trivial part and can be swapped out at any time. Take a look at my Devcon talk explaining this in detail.
    What timing benefits do we get? Instead of 2 seconds to validate the execution and the data availability of the block, we get 9 to 12 seconds.
    What bandwidth benefits do we get? Instead of having to broadcast the consensus block, the execution block, and all blobs to the whole network in under 2 seconds, validators only need to broadcast the consensus block in that short window. This means peak bandwidth requirements are much smaller, while the average bandwidth stays the same, properly distributed over the entire slot.
    I am a home staker, why do I care? It means better attestations and fewer missed proposals.
    I am a builder, why do I care? If you are a vertically integrated builder, nothing changes. If you instead auction through a relay, you get better latency and thus more time to build the block, up to the last millisecond.
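The bandwidth point above can be illustrated with a back-of-the-envelope calculation. The payload size below is a made-up placeholder; only the 2-second vs whole-slot comparison comes from the FAQ.

```go
// Hedged illustration of the peak-bandwidth claim: today the execution
// payload and blobs must propagate in ~2 s, while under EIP-7732 the same
// data can spread over the whole 12 s slot. payloadMB is hypothetical.
package main

import "fmt"

func main() {
	const payloadMB = 2.0 // hypothetical execution payload + blobs size
	const slotSeconds = 12.0

	peakToday := payloadMB / 2.0        // everything must go out in ~2 s
	peakEPBS := payloadMB / slotSeconds // spread across the whole slot
	fmt.Printf("today: %.2f MB/s, ePBS: %.2f MB/s\n", peakToday, peakEPBS)
}
```

The total bytes moved per slot are unchanged; only the peak rate drops, which is exactly the "average stays the same" point above.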
  • Bulleted list; normal text is handwavy, italics is the user-facing translation. The same points are repeated many times through different optics.
    - Validators have 9 seconds to validate blocks instead of 2: you can validate the block on a toaster.
    - Validators have 9 seconds to distribute blobs instead of 2: your internet connection may be in the middle of the desert.
    - You are a home staker self-building: you have 6 seconds to produce KZG proofs, and PeerDAS does not need to worry about joint blob building.
    - Your attestations will miss the head less often: fewer penalties, more money.
    - You are a proposer: you don't need to trust anyone to outsource your block.
    - You are a builder: you do not need to deal with any intermediary to reach the proposer.
    - You are a builder: the proposer will request bids from you directly, so you do not need to worry about cancellations.
    - You are a builder: the proposer will request bids from you directly, so there is no extra latency from sending to a relay.
  • We explain in detail the different components that make up an Ethereum validator's reward. We describe common situations that make stakers miss some of these rewards, and show how to diagnose the (typically uncommon) scenarios in which there is an actual problem with the software and action is required.
    1. Introduction
    A common concern of node operators is "Is my node performing well?", or "Is my node performing at least as well as the average?". The essence of the question is really "Should I change anything in my setup or not?". It turns out that these questions do not have an easy answer. There have been many attempts to create a single score measuring a validator's performance, but they all fail, for different reasons, to answer the basic questions above. Instead of tracking a single indicator (or a ton of useless ones, and worrying about small variance in any of them), validators should analyze why their performance took a hit: did I miss an attestation due to a late block, a reorg, or something else out of my control? Did I miss a proposal because my EL was stressed, or was my block reorged because the network received it late? In this document we help node operators identify warnings and their causes, and make informed decisions on when to tweak their setup. To do this, we first describe, in some technical detail, the different components of a validator's reward. We then analyze each of them separately, as they are susceptible to different external factors. Finally, we describe how to exploit existing monitoring tools (Grafana, logs, beacon explorers, etc.) to diagnose causes for drops in rewards.
    The organization of this post is as follows. In section 2 we describe the statistically expected variance in income among perfectly working validators. In section 3 we describe all the components that make up attestation rewards, how much to expect in different scenarios of attestation inclusion, and how to diagnose a missing attestation. In section 4 we describe the analogous situation for proposals. In section 5 we briefly describe sync committee rewards. In section 6 we cover transaction tips. In section 7 we mention some performance-monitoring tools available in Prysm, and finally in section 8 we mention how to get support from the Prysm team.
    2. Reward variance is evil
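As a taste of the variance point in section 2, here is a hedged sketch of how spread out proposal counts are even among perfectly working validators. The validator count and the Binomial model are illustrative assumptions, not figures from the post.

```go
// With N active validators, each slot's proposer is (roughly) uniform, so a
// validator's proposal count over k slots is Binomial(k, 1/N). The standard
// deviation is comparable to the mean, i.e. huge relative variance.
// N below is a hypothetical placeholder, not the current mainnet count.
package main

import (
	"fmt"
	"math"
)

func main() {
	N := 500000.0                          // hypothetical active validator count
	slotsPerYear := 365.25 * 24 * 3600 / 12 // one slot every 12 seconds

	p := 1 / N
	mean := slotsPerYear * p
	std := math.Sqrt(slotsPerYear * p * (1 - p))
	fmt.Printf("expected proposals/year: %.2f, std dev: %.2f\n", mean, std)
}
```

Even a perfectly healthy validator can land one standard deviation below the mean, which is why a single income score is a poor health indicator.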
  • Basic definitions and notation
    We will use $S$ for slots, $B$ for blocks, $E$ for epochs. Oftentimes we will write $S$ for the block at slot $S$, abusing notation when no confusion can arise. For example, a forkchoice diagram of the form
    ```graphviz
    digraph {
      rankdir=RL
      10 -> 9;
      11 -> 10;
      12 -> 11;
      13 -> 11;
      13 -> 12 [style=invis];
    }
    ```
  • Oftentimes we want to make statements of the form "if x% of validators are honest, property P holds", or equivalently, "under the supposition that an attacker holds less than y% of the total stake, statement Q is true". Examples are: "With a proposer boost of 40%, an attacker with more than 60% of the stake can arbitrarily reorg the block right before the one he is assigned to propose", or "If 90% of the validators are honest, and we have seen all attestations and all blocks in the last two epochs, then the head of the chain cannot be reorged", etc. These statements become much harder to make in the context of PBS, where trust assumptions on validators may become trust assumptions on builders, a much smaller set.
    CL data vs EL data
    In a context where most validators defer their execution payload, data blobs, or any future non-pure-consensus part of the block to an external builder, the builder may have the final decision on certain aspects of blocks, so whatever trust assumptions we make on the honesty of validators may cease to hold. In other cases the builder has zero or very little impact. For example, consider a trust assumption of the form "x% of validators are not willing to arbitrarily fork blocks". The validity of this statement is the same with or without PBS: builders have very little bearing on the LMD weight of a given chain, since validators alone decide their attestation targets.
  • The following reorg was seen on Goerli. Unfortunately this graph is ordered by slot number (left to right) and not by timestamps. The light blue epoch on the left is 155221, the other light blue epoch on the right is 155223, and the white one in the middle is (you guessed it) 155222. When epoch 155221 ends, the last block the chain has seen is 4967101 (slot 29 in that epoch; the top branch is based on that block). This block does not have enough votes to justify epoch 155221, so the chain advances keeping the justified epoch at 155220. The chain advances on the top branch, everything normal, all clients participating, until right after slot 4967121. After that block is imported, block 4967102 appears (4 minutes late). This is the last block in epoch 155221, in slot 30, since the block in the next slot was never proposed (or seen). Block 4967102 has enough votes to justify 155221, and this splits the view of the chain: those that have deployed the withholding fix (only Prysm) will not see this block as head and continue to build on top of the canonical chain; those that haven't deployed it yet will immediately reorg to this block. Lodestar proposes block 155224 on top of that block (the first block in the bottom branch), and from there on Lighthouse, Teku and Nimbus follow.
  • Build Nethermind
    ```shell
    git clone --recursive -b feature/shanghai-eip-4895-withdrawals https://github.com/NethermindEth/nethermind.git
    cd nethermind/src/Nethermind
    dotnet build Nethermind.sln -c release
    export NETHERMIND=$(pwd)/Nethermind.Runner/bin/Release/net6.0
    ```
    Build Prysm
    ```shell
    git clone --recursive -b capella https://github.com/prysmaticlabs/prysm.git
    cd prysm
    bazel build //cmd/beacon-chain
    ```
  • Work in progress.
    Abstract
    We study a modification to Ethereum's forkchoice algorithm by which we allow any tip descending from the last justified checkpoint to become the head of the chain. We show that accountable safety and plausible liveness continue to hold. We also show that no attestation deadlocks are possible and that none of the currently known FFG-induced reorgs are exploitable.
    Introduction
    This document grew out of discussions with M. Khalinin and the teachings of F. Damato. It is the outcome of trying to understand a proposal by F. Damato (hereby called Francesco's hack) in the hope that it is equivalent to a formulation we discussed with M. Khalinin. The document is laid out as a story instead of a research article. We start by giving some basic definitions and recalling the basic attacks in Preliminaries; we then establish some desired properties of forkchoice in Wish List and introduce the main proposed change, removing the block filtering, in Remove Justification Filtering. After that we show that this change has a series of unintended consequences that need to be dealt with in order to maintain the desired properties of forkchoice. We describe the discrepancy between the store's point of view and the head state's point of view with regard to justification in Store and State Separation; this in turn leads to a redefinition of the notion of a supermajority link, by allowing multiple sources towards justifying a new checkpoint. We then show that accountable safety still holds in this context, and describe an implementation problem that arises with plausible liveness. In Process Justification we describe a possible implementation mechanism to correctly track validators' votes towards multiple-source supermajority links, and in Weak Liveness we describe the probabilistic nature of plausible liveness in this setup. We end this document by showing that this forkchoice algorithm is not susceptible to any of the known attacks.
  • The mechanism is the same as in our design doc. I'm summarizing implementation details here.
    - At threshold x (think 4 s) we decide that any block arriving after this point is subject to being reorged.
    - At threshold y (think 10 s) we evaluate our chances.
    - Final decisions are always made at second 0.
    Changes to Forkchoice
    - Forkchoice will emit a message whenever the justified checkpoint has changed.
    - Forkchoice will have a new setter `func (f *Forkchoice) SetJustifiedBalances([]uint64)`
  • Withholding attacks on Goerli: here's a short summary of the main points. All the attacks I've analyzed are vanilla withholding attacks: at the start of epoch N, epoch N-1 has not been justified; a late block from epoch N-1 triggers an epoch-long reorg during epoch N; the chain advances on the attacker's branch until the next epoch transition into N+1, and during on_tick the chain reorgs back to the canonical chain due to pulled tips. The attacks are occurring more often than our predictions given the size of the stake of the faulty Lighthouse node. One possible cause contributing to this last point: during epoch N, honest validators on the attacker's branch and on the canonical branch vote differently towards the justification of N. Those on the attacker's branch vote source: N-1 --> target: N; those on the canonical branch vote source: N-2 --> target: N. This makes it difficult to justify N, and therefore it is more likely that the attack can be set up again during the next epoch.
  • The following is a design document allowing a beacon node to track more detailed information about a set of validators' performance than what can currently be extracted from a BeaconState object.
    The problem: one of the main tools to debug p2p/forkchoice/performance problems is attestation/proposal timing information. With the Altair fork we lost easy access to the attestation inclusion distance, as the PendingAttestation object is no longer needed. Stakers currently need to rely on a centralized entity (like an explorer) to obtain this information, or they need to parse the beacon blocks themselves. Currently, to obtain information about validator performance, the validator client needs to ask the beacon node to fetch it from objects the node already has, or to compute it if it can only be obtained by processing those objects. Since performance information may be required on a per-epoch basis, for multiple validators, this ends up being an unnecessary burden on the beacon node. I believe that in order to alleviate this problem it is unavoidable to break (albeit minimally) the separation of duties between validators and beacons. I propose two extra beacon node CLI flags, paralleling what is already supported by Lighthouse: --track-validator-auto and --track-validator-indices. The second flag would take a list of validator indices for which to track performance-related information. The first flag would track this information for every validating key connected via RPC. Through these flags, the beacon node will know which validators' performance parameters a validator client is interested in.
    A naïve solution: first iteration
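A minimal sketch of the per-validator tracking structure such flags could feed, assuming an explicit index list as --track-validator-indices would provide. All type and field names here are hypothetical illustrations, not Prysm's actual design.

```go
// Toy tracking store: per-epoch performance is recorded only for validators
// the operator asked to track, so untracked validators add no memory cost.
package main

import "fmt"

// EpochPerformance records what the beacon node observed for one validator
// in one epoch (hypothetical fields for illustration).
type EpochPerformance struct {
	AttestationIncluded bool
	InclusionDistance   uint64 // slots between attestation slot and inclusion slot
}

// Tracker keeps per-epoch performance for an explicit set of validator indices.
type Tracker struct {
	tracked map[uint64]map[uint64]EpochPerformance // index -> epoch -> perf
}

func NewTracker(indices []uint64) *Tracker {
	t := &Tracker{tracked: make(map[uint64]map[uint64]EpochPerformance)}
	for _, i := range indices {
		t.tracked[i] = make(map[uint64]EpochPerformance)
	}
	return t
}

// Record stores performance only for tracked validators, silently ignoring the rest.
func (t *Tracker) Record(index, epoch uint64, p EpochPerformance) {
	if m, ok := t.tracked[index]; ok {
		m[epoch] = p
	}
}

// Get returns the recorded performance, if any, for a tracked validator.
func (t *Tracker) Get(index, epoch uint64) (EpochPerformance, bool) {
	m, ok := t.tracked[index]
	if !ok {
		return EpochPerformance{}, false
	}
	p, ok := m[epoch]
	return p, ok
}

func main() {
	t := NewTracker([]uint64{7, 42})
	t.Record(42, 100, EpochPerformance{AttestationIncluded: true, InclusionDistance: 1})
	t.Record(9, 100, EpochPerformance{AttestationIncluded: true}) // untracked: dropped
	p, ok := t.Get(42, 100)
	fmt.Println(ok, p.InclusionDistance)
}
```

With --track-validator-auto, the index set would instead grow dynamically as validator clients connect via RPC.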
  • Currently, when Prysm receives an attestation for which it hasn't seen the target block, the attestation is put in a pending attestation queue. This queue is emptied twice per slot, at even intervals. The purpose of the queue handler is two-fold: when called to process the pending attestations, it requests the blocks that we haven't seen, and it validates the attestations for the blocks that we have seen. In the current design, for most blocks the queue is empty, so we do not need to process pending attestations nor request pending blocks.
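The two-fold queue handler described above can be sketched as follows. Names and structure are hypothetical simplifications, not Prysm's actual code.

```go
// Toy pending-attestation queue: attestations whose target block is unknown
// wait until the block is imported; the periodic Process call releases the
// ones whose blocks arrived and reports the block roots still missing.
package main

import "fmt"

type Attestation struct {
	TargetRoot string
}

type PendingQueue struct {
	pending map[string][]Attestation // target root -> waiting attestations
	seen    map[string]bool          // blocks we have already imported
}

func NewPendingQueue() *PendingQueue {
	return &PendingQueue{pending: map[string][]Attestation{}, seen: map[string]bool{}}
}

// Add queues the attestation if its target block is unknown; returns true if queued.
func (q *PendingQueue) Add(a Attestation) bool {
	if q.seen[a.TargetRoot] {
		return false // target known: validate immediately, no queueing needed
	}
	q.pending[a.TargetRoot] = append(q.pending[a.TargetRoot], a)
	return true
}

// Process is the handler called at fixed intervals (twice per slot in the
// text): it releases attestations whose blocks have arrived, and returns the
// roots still missing, which the caller would request from peers.
func (q *PendingQueue) Process(imported []string) (ready []Attestation, missing []string) {
	for _, root := range imported {
		q.seen[root] = true
	}
	for root, atts := range q.pending {
		if q.seen[root] {
			ready = append(ready, atts...)
			delete(q.pending, root)
		} else {
			missing = append(missing, root)
		}
	}
	return ready, missing
}

func main() {
	q := NewPendingQueue()
	q.Add(Attestation{TargetRoot: "0xabc"})
	ready, missing := q.Process([]string{"0xabc"})
	fmt.Println(len(ready), len(missing))
}
```

In the common case the queue is empty, so Process does no work, matching the observation above.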
  • We collect some measurements made on the Pyrmont and mainnet ETH 2.0 networks related to delays in block production/propagation at epoch transitions.
    The problem: validators assigned to attest to the first slot of an epoch are 20% likely to vote incorrectly on it. In each epoch a validator has a 1/32 chance of attesting to this slot, and a bad vote penalizes the validator for both head and target in this case. This accounts for at least (assuming perfect inclusion distance and a good vote on source)
    $$ \frac{1}{32}\cdot \frac{1}{5} = 0.625\% $$
    of the validator's rewards.
    The measured data:
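The fraction above is quick to verify: a 1/32 chance of drawing the first slot of an epoch, times the 1/5 (20%) incorrect-vote rate.

```go
// Quick arithmetic check of the reward fraction quoted above.
package main

import "fmt"

func main() {
	fraction := (1.0 / 32.0) * (1.0 / 5.0)
	fmt.Printf("%.3f%%\n", fraction*100) // prints "0.625%"
}
```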