
Arbitrum version management

After managing a fork of Arbitrum for the past six months, I've hit plenty of gotchas and "aha" moments that I wish had clicked earlier; it would've spared an overwhelming feeling of stupidity. One of those things is chain version management. Here I'll try to demystify the different core components that are versioned and how they interplay.

High level

At a high level, there are four interwoven versioned entities:

  • Nitro software - the L2 backend or node software
  • Nitro contracts - the parent chain system contracts
  • ArbOS version - the version of the core L2 execution engine
  • Consensus version - a versioned hash used to refer to a specific arbitrator machine artifact


The summary table below captures the key categorical attributes:

| component | upgrade enactor | domain | programmatic scheduling |
| --- | --- | --- | --- |
| nitro | node operators | L2 | no |
| nitro-contracts | upgrade executor owner | L1 | yes |
| consensus artifact | node operators && upgrade executor owner | L1, L2 | no |
| ArbOS | node operators && L2 chain owner | L2 | yes |

–
where:

  • node operators - the parties operating or running roles using the L2 node software (e.g., sequencer, validator, verifier)
  • upgrade executor owner - the permissioned onchain owner of the UpgradeExecutor contract
  • L2 chain owner - the L2 address granted system admin rights, i.e., the ability to execute arbitrary sensitive functionality on the ArbOwner precompile

There's a giant threat matrix that could be drawn here outlining different attack scenarios between these roles - sadly that's out of scope for this analysis :(

Nitro software

The nitro software is versioned through GitHub releases (e.g.). Releases can be cut intermittently and don't always have to correspond with the other versioned components, but they often do! ArbOS lives as part of the nitro software, and change sets can include other components irrespective of L2 execution logic (e.g., sequencer P2P, validation server changes, updates to the AnyTrust aggregation server).

Single binary

Under the hood, Arbitrum's node software is just one giant binary that can be configured in different ways to perform different role-based operations across the L2 domain. Where this gets interesting is that microservice architectures become possible: you can, e.g., run batch posting and L2 sequencing as separate services that inter-communicate. But it's also possible to have a single service or instance perform sequencing, batch posting, and validation all at once!

–
How microservice architectures can be achieved and how the services interplay might be worth researching as well
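
To make the single-binary idea concrete, here's a minimal Go sketch of the pattern - one process, role behavior toggled by flags. The flag names are hypothetical, not nitro's actual config surface:

```go
package main

import (
	"flag"
	"log"
)

func main() {
	// Hypothetical role toggles; the real binary exposes these through
	// its own (much larger) configuration system.
	sequencer := flag.Bool("sequencer", false, "produce L2 blocks")
	batchPoster := flag.Bool("batch-poster", false, "post batches to the parent chain")
	validator := flag.Bool("validator", false, "validate assertions and participate in disputes")
	flag.Parse()

	// One process can enable any combination of roles: a monolithic
	// deployment turns everything on; a microservice-style deployment
	// runs one role per instance and lets them inter-communicate.
	if *sequencer {
		log.Println("starting sequencer loop")
	}
	if *batchPoster {
		log.Println("starting batch poster")
	}
	if *validator {
		log.Println("starting validator")
	}
}
```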

Build types

Nitro builds are delineated between "production" and "dev". The main difference is that production builds use pre-embedded consensus artifacts fetched via a download-machine.sh script during the image build, whereas development builds can reference the latest consensus artifact built within the same image; i.e.:

[snippet: nitro/Dockerfile]

These script calls download the release artifacts corresponding to the relevant (version, WAVM root) pair and persist them into a /target subdirectory of the image, which production validators reference when doing stateless validation or block disputes.

Development builds differ in one key way: they generate a local, unversioned consensus artifact, which is also persisted into the image's file system.
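
As a rough sketch of what the production artifact fetching boils down to (the release URL layout and /target/machines convention here are assumptions, not the script's exact behavior):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"path/filepath"
)

// fetchArtifact downloads one consensus release artifact for a given
// (consensus version, module root) pair and persists it under /target,
// mirroring in spirit what download-machine.sh does during the image build.
func fetchArtifact(version, moduleRoot, name string) error {
	url := fmt.Sprintf("https://github.com/OffchainLabs/nitro/releases/download/%s/%s", version, name)
	dir := filepath.Join("/target/machines", moduleRoot)
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("fetch %s: %s", url, resp.Status)
	}
	out, err := os.Create(filepath.Join(dir, name))
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, resp.Body)
	return err
}

func main() {
	// Illustrative version tag and placeholder module root.
	for _, name := range []string{"machine.wavm.br", "replay.wasm"} {
		if err := fetchArtifact("consensus-v32", "0xMODULE_ROOT", name); err != nil {
			panic(err)
		}
	}
}
```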

Nitro contracts

Nitro contracts refer to the parent chain system contracts which guard:

  • Bridging funds into the L2 and accrediting L2 -> L1 withdrawals
  • Permissioning of sensitive roles (e.g., batch poster)
  • Assertion chain representation and proving

Contract versions are managed via GitHub releases (e.g.), with each version typically requiring an upgrade. Upgrades are generally performed using a contract action and are only actionable by the owner of the UpgradeExecutor contract.

In some instances, contract upgrades are a silent event that the nitro software can be oblivious to (e.g., v2.1.3, which adds post-Pectra hardfork support).

In other instances, contract upgrades can introduce forward-breaking changes that require the L2 to be implicitly aware of contract versioning.

For example, for BoLD activation, the node periodically polls the RollupUserLogic contract on the parent chain to see whether a ChallengeGracePeriodBlocks field exists in storage. If so, it assumes BoLD validation mode:

[snippet: nitro/staker/multi_protocol/multi_protocol_staker.go]
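
A hedged Go sketch of the detection idea - probing the rollup for a BoLD-only getter and falling back to legacy mode if the call fails. The real staker code is more involved, so treat the selector and the probing mechanism as a simplification:

```go
package main

import (
	"context"
	"fmt"

	"github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/ethclient"
)

// isBoldRollup probes the rollup contract for a BoLD-only getter; if the
// eth_call succeeds, assume the contracts are running BoLD.
func isBoldRollup(ctx context.Context, client *ethclient.Client, rollup common.Address) bool {
	// 4-byte selector of the getter, computed at runtime.
	sel := crypto.Keccak256([]byte("challengeGracePeriodBlocks()"))[:4]
	_, err := client.CallContract(ctx, ethereum.CallMsg{To: &rollup, Data: sel}, nil)
	return err == nil // getter present => assume BoLD validation mode
}

func main() {
	client, err := ethclient.Dial("https://eth.example.org") // hypothetical endpoint
	if err != nil {
		panic(err)
	}
	rollup := common.HexToAddress("0x0000000000000000000000000000000000000000") // placeholder
	fmt.Println("BoLD mode:", isBoldRollup(context.Background(), client, rollup))
}
```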

ArbOS version

ArbOS is the L2 execution engine and uses versioning to switch between different control flow logics. It's best thought of as analogous to a hard fork, since upgrades require all nodes to run the latest execution logic.

One example of ArbOS version usage is overriding geth's internal point-in-time chain checks to instead determine hard fork activation based on the version!

e.g.:

[snippet: go-ethereum/params/config.go]
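
The pattern looks roughly like this; the version constants are assumptions for illustration, and the real checks live in go-ethereum/params/config.go:

```go
package main

import "fmt"

// ArbOS versions at which the corresponding Ethereum hard fork rules are
// assumed to activate; illustrative values, not the authoritative mapping.
const (
	arbosVersionShanghai uint64 = 11
	arbosVersionCancun   uint64 = 20
)

// Instead of geth's usual block-number/timestamp checks, fork activation
// is decided purely from the ArbOS version carried in chain state.
func isShanghai(arbosVersion uint64) bool { return arbosVersion >= arbosVersionShanghai }
func isCancun(arbosVersion uint64) bool   { return arbosVersion >= arbosVersionCancun }

func main() {
	fmt.Println(isShanghai(11), isCancun(11)) // true false
}
```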

Outside of EVM execution, it's also used for managing things like L2 pricing model expressions, 4844 blob activation, Stylus execution, etc. This version is only known in the L2 (child chain) domain and lives as a field of the ArbOSState config, which is stored as an entry in the global L2 StateDB. Within the ArbOSState config there is an ArbitrumChainParams type, which is used as a global source of truth for L2 system-level access and feature management.

[snippet: go-ethereum/params/config_arbitrum.go]
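
A trimmed-down sketch of the shapes involved - the field sets here are illustrative, not the exhaustive real definitions:

```go
package main

import "fmt"

// Illustrative subset of ArbitrumChainParams: the L2's source of truth
// for system-level access and feature management.
type ArbitrumChainParams struct {
	EnableArbOS               bool
	AllowDebugPrecompiles     bool
	DataAvailabilityCommittee bool // true for AnyTrust chains
	InitialArbOSVersion       uint64
	InitialChainOwner         [20]byte
}

// The live ArbOS version is not a compile-time constant: it's a field of
// the ArbOSState, stored as an entry in the global L2 StateDB, so every
// node reads the same value from state. Sketch of the relevant fields:
type ArbOSState struct {
	ArbosVersion     uint64
	UpgradeVersion   uint64 // scheduled target version
	UpgradeTimestamp uint64 // L2 timestamp at which the upgrade activates
}

func main() {
	fmt.Println(ArbitrumChainParams{EnableArbOS: true, InitialArbOSVersion: 32})
	fmt.Println(ArbOSState{ArbosVersion: 32})
}
```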

Updating the ArbOS version requires invoking a special L2 system transaction which interacts with the ArbOwner precompile to mutate the ArbOSState. Updates are only permissible via the ChainOwner role and can be preemptively scheduled to execute at a specific L2 timestamp.
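
For illustration, here's roughly what the scheduling call looks like from the owner's side - hand-packing calldata for scheduleArbOSUpgrade(uint64,uint64) on the ArbOwner precompile. The signature and address are believed correct, but verify against the precompile reference before relying on this:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/accounts/abi"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
)

// scheduleUpgradeCalldata builds the calldata the chain owner would submit
// as an L2 transaction; calls from non-owners revert.
func scheduleUpgradeCalldata(newVersion, activationTimestamp uint64) []byte {
	sel := crypto.Keccak256([]byte("scheduleArbOSUpgrade(uint64,uint64)"))[:4]
	uint64Ty, _ := abi.NewType("uint64", "", nil)
	args := abi.Arguments{{Type: uint64Ty}, {Type: uint64Ty}}
	packed, _ := args.Pack(newVersion, activationTimestamp)
	return append(sel, packed...)
}

func main() {
	arbOwner := common.HexToAddress("0x0000000000000000000000000000000000000070")
	data := scheduleUpgradeCalldata(32, 1735689600) // illustrative values
	fmt.Printf("to: %s, calldata: 0x%x\n", arbOwner, data)
}
```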

Consensus artifacts

WAVM module roots version the replay artifacts used in fraud proofs. Each consensus version includes:

  • machine.wavm.br - the compiled arbitrator binary
  • replay.wasm - the stateless L2 execution script compiled to WASM for proving

The module root is a concept known only to validators; more specifically, it's known to the:

  • Assertion chain - RBlock (claim node) hashes are computed using the module root for versioning
  • Validation pipeline - each module root is maintained by the validation server and mapped through to different machine artifacts (see the sketch below)
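
Conceptually, the validation server's side of this is just a mapping from module root to on-disk machine artifacts - a hypothetical sketch, reusing the /target/machines layout described earlier:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// machinePaths points at the on-disk artifacts for one consensus version.
type machinePaths struct {
	Machine string // machine.wavm.br
	Replay  string // replay.wasm
}

// machinesByRoot maps each known wasm module root to its artifacts.
var machinesByRoot = map[string]machinePaths{}

func registerMachine(moduleRoot string) {
	base := filepath.Join("/target/machines", moduleRoot)
	machinesByRoot[moduleRoot] = machinePaths{
		Machine: filepath.Join(base, "machine.wavm.br"),
		Replay:  filepath.Join(base, "replay.wasm"),
	}
}

func main() {
	registerMachine("0xMODULE_ROOT") // placeholder root
	fmt.Println(machinesByRoot)
}
```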

–

Typically a module root maps 1:1 with an ArbOS version, but it doesn't have to. For example, minor version increments can be made which upgrade the machine logic irrespective of ArbOS.

[snippet: nitro/Dockerfile]

Module roots are validated during container builds to remove a trust assumption on the release initiator, who could otherwise forge arbitrary module roots irrespective of the actual one:

[snippet: nitro/Dockerfile]
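
A minimal sketch of such a build-time check, assuming a module-root.txt recorded next to the downloaded artifacts; the real Dockerfile check is stronger in that it derives the root from the machine itself rather than trusting a recorded value:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// verifyModuleRoot fails the build if the root recorded next to the
// downloaded artifacts doesn't match the root the release claims.
func verifyModuleRoot(machineDir, expectedRoot string) error {
	raw, err := os.ReadFile(filepath.Join(machineDir, "module-root.txt"))
	if err != nil {
		return err
	}
	if got := strings.TrimSpace(string(raw)); got != expectedRoot {
		return fmt.Errorf("module root mismatch: got %s, want %s", got, expectedRoot)
	}
	return nil
}

func main() {
	if err := verifyModuleRoot("/target/machines/0xMODULE_ROOT", os.Getenv("EXPECTED_MODULE_ROOT")); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1) // abort the image build
	}
}
```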

Publishing and upgrading consensus versions

Upgrading a consensus version requires a few steps that span both the L1 and L2 domains:

  1. Cutting a new GitHub release which contains references to a new machine.wavm.br and replay.wasm (e.g.)
  2. Updating the nitro Dockerfile to install the latest artifacts for production software builds
  3. (sometimes) Upgrading the parent chain's ChallengeManager to use a new OneStepProver - typically reusing the existing one is sufficient, but there are circumstances where an upgrade is mandatory (e.g., a new arbitrator VM opcode is introduced)
  4. Calling the setWasmModuleRoot(newWavmRoot) function on the RollupAdmin contract to force L2 validators to start using the new root (sketched below)
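
For illustration, step 4's calldata can be built by hand like so (the signature matches the rollup admin interface as I understand it; the root value is a placeholder):

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
)

// setWasmModuleRootCalldata builds the admin call that the upgrade
// executor owner would relay to the rollup admin contract.
func setWasmModuleRootCalldata(newRoot common.Hash) []byte {
	sel := crypto.Keccak256([]byte("setWasmModuleRoot(bytes32)"))[:4]
	return append(sel, newRoot.Bytes()...) // bytes32 packs as-is
}

func main() {
	newRoot := common.HexToHash("0x01") // placeholder WAVM module root
	fmt.Printf("calldata: 0x%x\n", setWasmModuleRootCalldata(newRoot))
}
```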

Cross-version proving

It's worth noting that after an upgrade there will be unconfirmed L2 state claims in the assertion chain that correspond to the older consensus version. Typically it's expected that the new one step prover (newOSP) will be a superset of, or a vulnerability patch to, the prior one (prevOSP) - meaning that using newOSP to one step prove claims that were initially mapped to prevOSP is sufficient.

However, in certain circumstances prevOSP could have logical subsections that aren't contained within newOSP. The pre-BoLD challenge manager maintains a condOSP setting which allows prior one step prover artifacts to be referenced when one step proving those prior claims; i.e.:

[snippet: nitro-contracts/src/challenge/ChallengeManager.sol]
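
The selection logic boils down to something like this hedged Go sketch (names illustrative; the authoritative implementation is the Solidity linked above):

```go
package main

import "fmt"

// ospSelector picks the one step prover based on the module root the
// disputed claim was created under, mirroring condOSP in spirit.
type ospSelector struct {
	osp      string // current prover (newOSP)
	condOsp  string // prior prover kept around for old claims (prevOSP)
	condRoot string // module root that still maps to condOsp
}

func (s ospSelector) proverFor(claimModuleRoot string) string {
	if s.condOsp != "" && claimModuleRoot == s.condRoot {
		return s.condOsp
	}
	return s.osp
}

func main() {
	sel := ospSelector{osp: "newOSP", condOsp: "prevOSP", condRoot: "old-root"}
	fmt.Println(sel.proverFor("old-root")) // prevOSP
	fmt.Println(sel.proverFor("new-root")) // newOSP
}
```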