Prysmatic Labs

@prysmaticlabs

eth2 development team

Public team

Joined on Feb 2, 2021

  • A fundamental property of blockchains is the notion of "finality," which roughly means that after a certain period of time, transactions included in the canonical chain become extremely difficult or practically impossible to revert. Eth2 has an "explicit" mechanism of chain finality enshrined within the protocol, which differs from the "probabilistic" finality found in proof-of-work chains such as Ethereum or Bitcoin today. In proof of work, consensus is fundamentally a global race that a lucky miner wins by being the first to produce a valid block: the result of finding a solution to a computationally difficult problem. As such, block times are probabilistic. The more blocks are added to the chain, the harder it is to revert, as each represents the cumulative electrical and computational power required to create it. Given these real, physical guarantees that would prevent an attacker from reverting the Bitcoin and Ethereum blockchains today, we can consider transactions included on-chain past a certain amount of time as "finalized." Ethereum proof of stake, however, does not rely on probabilistic finality. Instead, it enshrines finality in the protocol: if more than 2/3 of validators have voted correctly on the chain head for a long enough period of time, everything before a specific checkpoint is considered finalized. Finality is explicit, and nodes that follow the protocol cannot revert a finalized checkpoint, regardless of consensus weight. How Finality Works in ETH2: ETH2 is a synchronous protocol that operates on a "checkpoint" mechanism for bookkeeping. Essentially, a set of validators is assigned to either produce blocks or vote on blocks in a window of 32 slots. Each slot is a period of 12 seconds. 
Those 32 slots constitute an epoch. In an epoch, there are 32 assigned block proposers, and everyone else is known as an attester, assigned to vote on proposed blocks once per epoch. There is only one block proposer assigned per slot, but many "attesters" can be assigned per slot. For example, Alice is selected as the block proposer for slot 4 in the epoch, and Bob, James, Charlie, and Susan are assigned as attesters at slot 4, meaning they should be voting on Alice's produced block as canonical.
  • logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
    slog.SetDefault(logger)
    slog.Info("message")
    slog.Error("message")
    slog.Debug("message")
    slog.Warn("message")
    Contextual logging: Instead of logrus.WithFields, contextual logs can be added in a key=>value style. A problem is that this is not type-safe and can break with unbalanced keys.
  • This guide will set you up to perform BOLD challenges on Ethereum Sepolia.
    Sepolia endpoint (keep internal):
    export SEPOLIA_ENDPOINT=https://sepolia-geth.arbitrum.io/bold8c7987d041065bf04de03d19ba50
    Honest validator priv key (keep internal):
    export HONEST=ee3c0bf39d962a78dba87aee083cae443cabc814f93677f302cbabde844237db
    Evil validator priv key (keep internal):
  • Generalized History Commitment Backend
    Background: Today in BOLD, challenges can have subchallenges, and we have a total of 3 levels. We do this because at each challenge level, validators have to commit to N state roots of a block's history, which can be very expensive to compute. For example, computing 2048 machine hashes for the execution of a block and collecting them takes around 5 minutes on a modern MacBook Pro. The way BOLD interfaces with a Nitro backend is via an interface called an L2StateProvider, which gives BOLD Merkleized commitments to an L2 history at a specific message number and, optionally, at specific opcode ranges or opcodes within that message. For example: if we want to get a history commitment at message number 1 to 2, for big step range 100 to 101, and for opcodes 0 to 1024 in that range, we would call:
    SmallStepCommitmentUpTo(
        ctx context.Context,
  • If you knew what you know now, and you were starting a new CL from scratch, what structures would you use? - Potuz
    Answers by @rauljordan
    Language of Choice: Would I still use Go if I were starting a client by myself? No – I would 100% rewrite everything in Rust for a variety of reasons:
    No gc: In Prysm, latency is critical, memory is critical, and managing the lifecycle of a complex application is currently out of our control, as gc can only be tuned so much in Go.
    Memory-safety first: a paradigm that forces us to think about ownership while giving us full control.
    Extreme performance: Rust is fast, and can be much faster than Go. The runtime is not in the way, and gc does not exist. Further, we can write even more performant Rust code if we want to abandon some of its guardrails. That is, we can still use unsafe Rust in key paths, but compared to unsafe Go, that flavor of Rust can still be sound (read more about this here).
  • Make invalid states unrepresentable - someone
    Some problems: We have redundant nil checks everywhere. We forget to populate fields of structs when doing raw initialization.
    What can we do? Leverage the compiler more: let the compiler do the heavy lifting.
  • Before
    ChalManager->Ethereum: create edge `E`
    ChainWatcher->Ethereum: listen for edge creation event for `E`
    Ethereum->ChainWatcher: `E` creation event sent back
    ChainWatcher->Nitro: Ask if `E` is honest
    Nitro->ChainWatcher: `E` is honest
    ChalManager->ChainWatcher: Can `E`, an honest edge, be confirmed?
    ChainWatcher->ChalManager: Yes, `E` is honest and has 3 days to confirmation
    After
  • graph LR
    Genesis-->A
    A-->B
    B-->C
    C-->D
    D-->E
    D-->F
  • Lighthouse stores forkchoice to disk, and during block import, it will revert atomically to the last version on disk in case there was a problem:
    pub struct PersistedForkChoiceStore {
        pub balances_cache: BalancesCacheV8,
        pub time: Slot,
        pub finalized_checkpoint: Checkpoint,
        pub justified_checkpoint: Checkpoint,
        pub justified_balances: Vec<u64>,
        pub best_justified_checkpoint: Checkpoint,
        pub unrealized_justified_checkpoint: Checkpoint,
  • Goals: Now that we have 215 open issues and over 60 PRs, many still in draft form, it's worth doing a quick refresh of what we want to prioritize and which ones we should close. There are some interesting open issues from a while back and PR ideas that have not yet materialized.
    Old PRs: If a PR is obsolete or no longer necessary, mark it as checked and Raul can help close it.
    For @Nishant
    Drafts
    [ ] Process slots in missed places
  • Introduction: One of our current OKRs is to implement Beacon API endpoints using an HTTP server. This will remove a lot of complexity, because currently we have to transform HTTP requests/responses, which conform to the OpenAPI specification, into gRPC-compliant structures. This results in complex middleware code that we have to maintain. To make this task easier, it would be nice to be able to automatically generate server code from the Beacon API specification. I spent some time searching for the right tool and unfortunately couldn't find anything suitable.
    Tool overview
    https://github.com/go-swagger/go-swagger
    go-swagger does not support OpenAPI 3.0. It provides a flatten utility, which I hoped to use before applying another tool, but unfortunately flattening a 3.0 spec does not produce a valid format.
    https://github.com/deepmap/oapi-codegen
  • Background: In Prysm, we have two main kinds of tests: e2e and unit tests. For the most part, unit tests have a very complex setup, especially in critical files such as the state transition, forkchoice, and the sync service. Even these complex examples tend to mock a lot of the blockchain's functionality by populating the database themselves or using mock versions of services for simplicity. Most recently, we had a situation where we had a runtime bug in one of the caches used during block processing here, and the resolution was pretty simple but almost impossible to test with our current testing suite. This triggered a discussion about why it was hard to test this code and whether we could do anything about it. It turns out we don't have test harnesses powerful enough to give us a meaningful test aside from our end-to-end suite, and that immediately triggered some concerns. This document designs a new test harness for testing chain functionality in Prysm with minimal mocking, and helps us test different behaviors of block processing that are hard to set up outside of e2e. Existing blockchain test harnesses: When testing blockchain behavior, we care about the following situations:
  • This documents the necessary steps to run the Prysm beacon client and validator client for MEV extraction. It's important to keep in mind that these are prototypes, so expect breakages and bugs. The instructions will likely change later, and we'll update them along the way. With that said, we are months away from the merge, and now is a crucial time to begin testing MEV as an end-to-end product from validator to beacon node to mev-boost to relayer and to a builder.
    Optional readings on the why, what, and how:
    https://writings.flashbots.net/writings/beginners-guide-mevboost/
    https://writings.flashbots.net/writings/why-run-mevboost/
    lightclient - Extracting MEV After the Merge
    Builder API spec
    Prysm images: To build from source, please use the latest develop branch.
  • Background: In EIP-4844, BlobsSidecar was introduced to accept "blobs" of data to be persisted in the beacon node for a period of time. These blobs bring rollup fees down by an order of magnitude and enable Ethereum to remain competitive without sacrificing decentralization. Blobs are pruned after ~1 month, long enough for all actors of an L2 to retrieve them. The blobs are persisted in beacon nodes, not in the execution engine. Alongside these blobs, BeaconBlockBody contains a new field, blob_kzg_commitments. It's important that these KZG commitments match the blob contents. Note that on the CL, the blobs are only referenced in the BeaconBlockBody, not encoded in it. Instead of embedding the full contents, the contents are propagated separately in the BlobsSidecar above. The CL must do these three things correctly:
    Beacon chain: process updated beacon blocks and ensure blobs are available
    P2P network: gossip and sync updated beacon block types and new blobs sidecars
    Honest validator: produce beacon blocks with blobs, publish the blobs sidecars
    In this doc, we explore the spec dependencies between BlobsSidecar and BeaconBlockBody, which are loosely coupled, and the implementation complexity that results from this loose coupling.
  • Background: Terence first notifies us of an interesting bug he observed with the --enable-only-blinded-beacon-blocks feature flag.
    Prater node, v2.1.4.rc1
    No EL client
    Post bellatrix, but before the merge
    {"error":"could not fetch execution block with txs by hash 0x0000000000000000000000000000000000000000000000000000000000000000: timeout from http.Client","message":"Could not get reconstruct full bellatrix block from blinded body","prefix":"sync","severity":"ERROR"}
    hash=0 does not mean an empty payload. We check via two means: TTD (which we don't have, due to the EL being unavailable) and an empty payload (which we don't have, due to the block being blinded).
  • Config merged
    JWT merged
    Testing plan
    Checkpoint sync
    Migration
  • This document is experimental and subject to change. It describes the changes being made to existing flags to support the Builder API and custom block builders such as MEV.
    Summary of changes:
    New suggested fee recipient builder flag
    Name change for the fee-recipient-config-file and fee-recipient-config-url flags to validator-config-file and validator-config-url
    Update fee-recipient-config to include an optional gas limit and a builder-fee-recipient-override field
    Include YAML parsing for validator-config-file
  • This document is a temporary location to talk about how to use the fee recipient feature on the Prysm Kiln Testnet. Fee Recipient is the feature that allows users to receive gas fees when proposing blocks for their specific validators post merge.
    Fee Recipient Config based on the Teku Implementation: a JSON file for mapping validator public keys to eth addresses. This will allow you to map your validators to corresponding eth addresses or generally cover the remaining keys with a default.
    {
      "proposer_config": {
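A complete config of the shape described above might look like the following; the public key and addresses here are placeholder values, and field names follow the Teku-style proposer-config format the document references:

```json
{
  "proposer_config": {
    "0xa057816155ad77931185101128655c0191bd0214c201ca48ed887f6c4c6adf334070efcd75140eada5ac83a92506dd7a": {
      "fee_recipient": "0x50155530FCE8a85ec7055A5F8b2bE214B3DaeFd3"
    }
  },
  "default_config": {
    "fee_recipient": "0x6e35733c5af9B61374A128e6F85f553aF09ff89A"
  }
}
```

Keys listed under "proposer_config" get their specific address; any remaining validator keys fall back to "default_config".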