Ethen Pociask

@epociask

Blockchain engineering at EigenLabs.

Joined on Mar 6, 2024

  • After managing a fork of Arbitrum for the past six months, there were many gotchas and "aha" moments that I wish had clicked earlier to spare me an overwhelming feeling of stupidity. One of those is chain version management. Here I'll try to demystify the different core components that are versioned and how they interplay. At a high level, there are four interwoven versioned entities: Nitro software (the L2 backend, or node, software), Nitro contracts (the parent-chain system contracts), the ArbOS version (the core L2 execution engine), and the consensus version (a versioned hash used to refer to a specific arbitrator machine artifact). A rough sketch of how these fit together follows below.
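To keep the moving parts straight, here is a minimal sketch that groups the four versioned entities into one compatibility tuple; the type, field names, and values are illustrative assumptions, not Nitro's actual types.

```go
package main

import "fmt"

// ChainVersions is an illustrative grouping (not Nitro's actual types) of the
// four independently versioned entities that must remain mutually compatible.
type ChainVersions struct {
	NitroSoftware    string // L2 node software release, e.g. a git tag
	NitroContracts   string // parent-chain system contract release
	ArbOSVersion     uint64 // core L2 execution engine version
	ConsensusVersion string // versioned hash identifying a specific arbitrator machine artifact
}

func main() {
	// Hypothetical values, purely for illustration.
	v := ChainVersions{
		NitroSoftware:    "v3.x",
		NitroContracts:   "v2.x",
		ArbOSVersion:     32,
		ConsensusVersion: "0xabc...",
	}
	fmt.Printf("%+v\n", v)
}
```

Thinking of the four as a single tuple makes the compatibility question explicit: upgrading any one field generally constrains which values the other three may take.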
  • [WIP] Arbitrum Validation Server The Arbitrum validation server is a JSON-RPC server used for executing the arbitrator, or prover VM, software. It interoperates via uni-directional communication from an Arbitrum validator (either BoLD or legacy). Under the hood, the Go validation server makes FFI calls into the Arbitrator's Rust functions via cgo to interact with stateful prover machines. Three machine types of the arbitrator are supported, including JIT, which just-in-time compiles WAVM interpreter instructions to machine code in real time, and Standard, which uses the custom interpreter for executing WAVM. A hypothetical sketch of the request shape appears below.
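As a rough illustration of the control flow, the sketch below shows a hypothetical request shape and machine-type dispatch. The enum, struct fields, and values are assumptions for illustration only and are not the validation server's real JSON-RPC API.

```go
package main

import "fmt"

// MachineKind is an illustrative enum for the arbitrator machine types the
// validation server can drive; the real server's names and wire format differ.
type MachineKind int

const (
	MachineJIT      MachineKind = iota // compiles WAVM instructions to native code on the fly
	MachineStandard                    // runs WAVM through the custom interpreter
)

// ValidationRequest is a hypothetical shape for what a validator might send
// over JSON-RPC; the server would then drive a stateful prover machine via cgo/FFI.
type ValidationRequest struct {
	Machine        MachineKind
	WasmModuleRoot string // identifies the consensus artifact the machine is built from
	Input          []byte // serialized validation inputs (messages, preimages, ...)
}

func describe(k MachineKind) string {
	switch k {
	case MachineJIT:
		return "JIT: compile WAVM to machine code in real time"
	case MachineStandard:
		return "Standard: execute WAVM with the custom interpreter"
	default:
		return "unknown machine kind"
	}
}

func main() {
	req := ValidationRequest{Machine: MachineJIT, WasmModuleRoot: "0x...", Input: nil}
	fmt.Println(describe(req.Machine))
}
```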
  • The Arbitrum batch poster is responsible for aggregating and confirming L2 messages constructed by the sequencer. There are many optimizations, encoding schemas, and security countermeasures implemented to ensure that the component can always run reliably, assuming a securely expressed node configuration. Technical overview (last updated to reflect ...): batches are built using the following general sequence: fetch the latest batch position from the SequencerInbox (or pre-provided metadata) and establish the current L1 bounds; start building a batch from the provided on-chain position if unconfirmed messages exist in the sequencer feed; then sequence all messages from latest_confirmed --> latest_unconfirmed into a buildingBatch until notified to do otherwise. A simplified sketch of this loop follows below.
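A simplified, self-contained sketch of that loop is shown below. The Message and Batch types, the size limit, and the helper names are stand-ins for illustration, not Nitro's actual batch poster code.

```go
package main

import "fmt"

// Message stands in for a sequencer feed message; Batch for the batch being built.
type Message struct {
	Seq  uint64
	Data []byte
}

type Batch struct {
	StartPos uint64
	Msgs     []Message
	size     int
	maxSize  int
}

// add appends a message unless it would exceed the batch size limit,
// mirroring the "sequence until notified to do otherwise" step.
func (b *Batch) add(m Message) bool {
	if b.size+len(m.Data) > b.maxSize {
		return false // batch is full; caller should post it and start a new one
	}
	b.Msgs = append(b.Msgs, m)
	b.size += len(m.Data)
	return true
}

// buildBatch sketches the general sequence described above: start from the
// confirmed on-chain position and sequence unconfirmed feed messages into a
// building batch.
func buildBatch(onChainPos uint64, feed []Message, maxSize int) Batch {
	b := Batch{StartPos: onChainPos, maxSize: maxSize}
	for _, m := range feed {
		if m.Seq < onChainPos { // already confirmed on L1, skip
			continue
		}
		if !b.add(m) {
			break
		}
	}
	return b
}

func main() {
	feed := []Message{{Seq: 10, Data: []byte("a")}, {Seq: 11, Data: []byte("bb")}}
	b := buildBatch(10, feed, 1024)
	fmt.Printf("built batch starting at %d with %d messages\n", b.StartPos, len(b.Msgs))
}
```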
  • Purpose: design subsystem logic for rolling over to other DA providers in the event that EigenDA is deemed unavailable, and evaluate system tradeoffs to optimize for a best-fit solution. Considerations (security): each DA layer has its own MaxBatchSize that it can support; exceeding it risks a liveness failure, since the DA layer could constantly reject a batch if the max batch size isn't reset accordingly. Each DA also has its own confirmation latency t (i.e., the time for a batch to be confirmed on the DA). Simply, max_throughput = DA.MaxBatchSize / t. When falling back, max_throughput could decrease, creating backpressure on the existing unsafe sequencer backlog. With high enough backpressure or a growing sync head delta (i.e., # of messages in backlog - # of messages confirmed), a reorg could occur between the unsafe and confirmed rollup chain heads. A small numeric sketch of this relation follows below.
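To make the throughput relation concrete, here is a small sketch that plugs numbers into max_throughput = MaxBatchSize / t. The batch sizes, confirmation latencies, and production rate are hypothetical values chosen purely for illustration.

```go
package main

import "fmt"

// maxThroughput applies the relation above: max_throughput = MaxBatchSize / t,
// where t is the DA layer's confirmation latency.
func maxThroughput(maxBatchSizeBytes, confirmationLatencySecs float64) float64 {
	return maxBatchSizeBytes / confirmationLatencySecs
}

func main() {
	// Hypothetical numbers: a primary DA with a large batch limit and a
	// fallback DA with a smaller limit and shorter confirmation latency.
	primary := maxThroughput(16_000_000, 60) // ~266,667 bytes/sec
	fallback := maxThroughput(128_000, 12)   // ~10,667 bytes/sec

	fmt.Printf("primary: %.0f B/s, fallback: %.0f B/s\n", primary, fallback)

	// If the sequencer produces messages faster than the fallback DA can
	// confirm them, the backlog (sync head delta) grows.
	produceRate := 50_000.0 // bytes/sec of new L2 messages (illustrative)
	if produceRate > fallback {
		fmt.Println("fallback cannot keep up: backpressure builds on the unsafe backlog")
	}
}
```

The point of the arithmetic is that a fallback DA's lower MaxBatchSize can shrink max_throughput below the sequencer's production rate, which is exactly the backpressure and sync-head-delta risk described above.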