Beacon chain pyspecs are a non-trivial piece of software - in order to comprehend them, one should decompose them into smaller parts - an important design principle in software engineering called separation of concerns (modularization). However, the best way to do such decomposition may depend on the purpose.
Currently, pyspecs are split into phases, and each phase is split into md files related to high-level concerns like core functionality, fork choice, validator, p2p-interface, etc. Each file contains a number of Python definitions (constants, classes and methods) along with some prose (a method can be seen as a module too).
Such decomposition may work well for pyspecs developers and implementers; however, it is not enough from a testing or formal methods perspective.
Currently, Formal Method developments (Modeling and Verifying Gasper in Coq, Formalization of the Beacon Chain Phase 0 Specification in K, Eth2.0 spec in Dafny) do not work with pyspecs directly. There are several reasons. First, there is a lack of formal methods, tools and semantics for Python. Second, Formal Methods are hard and labor-intensive, so Formal Method developments focus on parts of pyspecs and/or simplified models (of such parts). But even if these problems were overcome, the way pyspecs are currently structured still poses a problem for the application of formal methods.
As pyspecs are a moving target, such dissociation of pyspecs and formal developments poses a problem from a development process point of view, as it creates a cyclic dependency: bugs can be found during formal verification, which will require pyspecs code changes, which in turn require corresponding updates of Formal Method developments. Moreover, some steps here are far from trivial and can take a significant amount of time to perform, resulting in a risk of multiple iterations and unpredictable delays.
Software Quality is paramount for blockchains - once deployed, code can be very difficult to modify. At the same time, open-source code is available to attackers. So it's hardly possible to avoid and/or ignore such risks.
For the purpose of this post, we define a module as an independent, interchangeable piece of software which contains everything necessary to execute an aspect or a group of related aspects of the desired functionality.
The definition is adapted from here. We allow a group of related aspects in order to (explicitly) support hierarchical modules.
As already noted, pyspecs have some structure related to modularization. The specs are split into phases. Each phase is split into a number of files, each corresponding to a particular high-level concern (like core concepts, fork choice, validator, etc).
Additionally, there are supporting libraries, which define aspects like BLS signatures, SSZ (data structures, encoding, decoding) and hashing (hash, hash_tree_root). Such libraries have clear interfaces and are easy to interchange with other implementations, e.g. to support testing.
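To illustrate, here is a minimal sketch of how such a library boundary can be swapped for testing. The names (`Hasher`, `sha256_hasher`, `trivial_hasher`, `compute_id`) are hypothetical and do not mirror the actual pyspec helpers:

```python
# Hypothetical sketch of an interchangeable supporting library boundary.
from hashlib import sha256
from typing import Callable

Hasher = Callable[[bytes], bytes]

def sha256_hasher(data: bytes) -> bytes:
    """Production-like implementation backed by SHA-256."""
    return sha256(data).digest()

def trivial_hasher(data: bytes) -> bytes:
    """Cheap stand-in for tests that don't care about real hashes."""
    return data[:32].ljust(32, b"\x00")

def compute_id(hasher: Hasher, payload: bytes) -> bytes:
    # Spec code depends only on the `Hasher` interface, so either
    # implementation can be plugged in without touching spec logic.
    return hasher(payload)
```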
Here is a rough list of aspects/concerns which are good candidates to constitute a module, from a Formal Verification perspective:
- supporting
- Resource consumption
- Mid level
- High level
Formal Methods typically require finer-grained modules. A method in production code often comprises several aspects/concerns intertwined in the method body.
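As a made-up illustration, the sketch below intertwines validation, balance accounting and bookkeeping in a single body; from a Formal Verification perspective these would ideally be separately specifiable modules:

```python
# Illustrative only: a made-up state-transition helper that intertwines
# several concerns (validation, balance accounting, bookkeeping).
from dataclasses import dataclass, field
from typing import List

@dataclass
class State:
    balances: List[int] = field(default_factory=list)
    processed: int = 0

def process_transfer(state: State, sender: int, recipient: int, amount: int) -> None:
    # Concern 1: validation
    if sender >= len(state.balances) or recipient >= len(state.balances):
        raise IndexError("unknown participant")
    if state.balances[sender] < amount:
        raise ValueError("insufficient balance")
    # Concern 2: balance accounting (destructive update of shared state)
    state.balances[sender] -= amount
    state.balances[recipient] += amount
    # Concern 3: bookkeeping/statistics
    state.processed += 1
```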
From a Formal Methods perspective, there are several problems:
In traditional Software Engineering, an objective is often to reduce the effort to develop code, which favors passing information in an implicit form. One example is destructive updates. A destructive update can create a dependency between modules if there is aliasing, i.e. both modules can point to the same location (and at least one of them updates it). Formal Verification languages and frameworks have means to express such dependencies (e.g. dynamic frames, separation logic), but in regular languages they are often implicit. In addition, impure code is often more efficient from a resource consumption (CPU, memory) point of view.
Other examples are exceptions, locking, etc.
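A minimal, contrived sketch of the first example above - two "modules" aliasing the same list, so that a destructive update in one becomes an implicit dependency for the other:

```python
# Contrived sketch: an implicit dependency created by aliasing.
from typing import List

def module_a(validators: List[int]) -> None:
    # Destructive update: mutates the list in place.
    validators.append(42)

def module_b(validators: List[int]) -> int:
    # Silently depends on whatever module_a did to the shared list.
    return len(validators)

shared: List[int] = [1, 2, 3]
module_a(shared)                # mutates `shared`
assert module_b(shared) == 4    # observes the mutation via aliasing
```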
Without a mechanized proof checker, it's quite easy to break abstraction principles. Often engineers are not aware of the "rules". And at other times it can be non-trivial to prove (either formally or informally) that some rule actually holds.
It can be difficult to deal with such problems in the presence of a mechanized proof checker, which is unable to "think" like a human.
"Leaky" abstraction can arise when destructive updates are used. Let's consider an interface, which allows mutation, e.g. MutableSequence
. Let's consider now two implementations of the interface, one which makes a copy of an object when it's added to a collection and one, which doesn't. The behavior of two implementations is different from aliasing point of view. In the context of Formal Methods, it will be tricky to write a specification, which allows both implementation and valuable in Formal Verification Context (i.e. excluding trivial options, like an empty specification).
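A rough sketch of the two implementations (simplified to a minimal `add` operation rather than the full MutableSequence protocol, with made-up class names):

```python
# Simplified sketch: two implementations satisfying the same functional
# interface but differing in aliasing behaviour.
import copy
from typing import Any, List

class SharingCollection:
    def __init__(self) -> None:
        self.items: List[Any] = []
    def add(self, obj: Any) -> None:
        self.items.append(obj)                 # stores the object itself (aliasing)

class CopyingCollection:
    def __init__(self) -> None:
        self.items: List[Any] = []
    def add(self, obj: Any) -> None:
        self.items.append(copy.deepcopy(obj))  # stores a private copy

obj = {"slot": 1}
sharing, copying = SharingCollection(), CopyingCollection()
sharing.add(obj)
copying.add(obj)
obj["slot"] = 2                          # later destructive update by the caller
assert sharing.items[0]["slot"] == 2     # visible through the alias
assert copying.items[0]["slot"] == 1     # invisible to the copying variant
```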
Pyspecs developers deliberately avoid following the OO paradigm. From a Formal Verification perspective this is actually not bad, since Object-Oriented programming is known to be a problem in a formal verification context, at least when destructive updates are present.
However, one problem with the Object-Oriented paradigm is implementation inheritance, while interface inheritance is generally considered a good concept. This is especially true in a Formal Verification context. Basically, with implementation inheritance there is a higher risk of violating the subtyping principle.
Note that currently pyspecs themselves use inheritance only to define data structures. However, the way phase definitions are combined can be seen as code inheritance too, though not explicitly expressed using Object-Oriented constructs.
From a Formal Verification point of view, interface inheritance which conforms to the subtyping principle is required. Basically, Software Engineers can afford a departure from strict subtyping, while it can be much more difficult to convince a mechanical proof checker that a method behaves correctly.
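In Python terms, interface inheritance can be sketched with abstract base classes; implementations inherit only a contract, not code. The names below (`ForkChoice`, `get_head`) are used loosely for illustration and are not meant to mirror the actual pyspec definitions:

```python
# Illustrative sketch: interface inheritance via an abstract base class.
from abc import ABC, abstractmethod
from typing import Dict

class ForkChoice(ABC):
    """Implementations inherit a contract (and its specification), not code."""

    @abstractmethod
    def get_head(self, weights: Dict[bytes, int]) -> bytes:
        """Contract: return a block root with maximal weight."""

class SimpleForkChoice(ForkChoice):
    def get_head(self, weights: Dict[bytes, int]) -> bytes:
        return max(weights, key=lambda root: weights[root])

class LoopingForkChoice(ForkChoice):
    def get_head(self, weights: Dict[bytes, int]) -> bytes:
        # A different implementation; behavioural subtyping requires it to
        # satisfy the same contract, so a proof stated against the interface
        # constrains this class as well.
        best_root, best_weight = None, None
        for root, weight in weights.items():
            if best_weight is None or weight > best_weight:
                best_root, best_weight = root, weight
        return best_root
```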
One way to deal with the problem (that traditional software engineers and formal method practitioners have different requirements for how specs should be decomposed) is to have two versions of the specs and synchronize them. We assume that, at least at the moment (and for some time to come), the traditional software engineering perspective is more important, so we'll call the specs developed with Formal Verification in mind "shadow" specs.
The main question is: how do we efficiently synchronize such specs?
We assume that from a Formal Verification point of view, specs should be finer-grained, but in general they refine and detail the "Traditional Software" point of view. I.e. the "Traditional Software" spec can in principle be obtained by coarsening the "Formal Verification" specs, e.g. by inlining some calls or definitions.
After inlining, the specs can still differ due to details of the concrete inlining procedure: for example, variable and definition names can differ, auxiliary variables can be introduced, and complex expressions can be split into simpler ones and vice versa.
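A toy illustration of such coarsening, with both variants made up for this post: a finer-grained "shadow" version that keeps a helper definition separate, and a coarse version where the helper is inlined and an auxiliary variable is renamed:

```python
# Toy illustration of coarsening by inlining (both variants are made up).
SLOTS_PER_EPOCH = 32

# Finer-grained "shadow" spec: the epoch computation is a separate,
# individually specifiable definition.
def compute_epoch(slot: int) -> int:
    return slot // SLOTS_PER_EPOCH

def is_epoch_start_fine(slot: int) -> bool:
    epoch = compute_epoch(slot)
    return slot == epoch * SLOTS_PER_EPOCH

# Coarser "traditional" spec: the helper is inlined and the auxiliary
# variable is renamed; both versions compute the same result.
def is_epoch_start_coarse(slot: int) -> bool:
    e = slot // SLOTS_PER_EPOCH
    return slot == e * SLOTS_PER_EPOCH

assert all(is_epoch_start_fine(s) == is_epoch_start_coarse(s) for s in range(100))
```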
In many cases, such specs can be proved equivalent with relatively simple means, e.g. symbolic execution which performs global value numbering.
As we mentioned above, some Formal Method developments deal with simplified models of parts of the specs, e.g. Gasper, which combines LMD-GHOST and Casper FFG but ignores balance calculations.
In some cases, one can strip pyspecs of details which are irrelevant to a particular formal development (e.g. code which deals with balances).
Such a stripped-down version of the spec can then be an input to a Formal Method development. For example, if spec changes do not affect the stripped-down version, then the corresponding F/M development doesn't need to be updated.
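As a made-up sketch of such stripping: a spec-like function that records attestations and updates balances could be reduced, for a fork-choice-only formal development, to a variant with the balance code removed:

```python
# Made-up sketch of stripping details irrelevant to a particular
# formal development (here: balance calculations).
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class State:
    latest_votes: Dict[int, bytes] = field(default_factory=dict)
    balances: List[int] = field(default_factory=list)

def on_attestation_full(state: State, validator: int, root: bytes, reward: int) -> None:
    state.latest_votes[validator] = root
    state.balances[validator] += reward   # irrelevant to a fork-choice-only model

def on_attestation_stripped(state: State, validator: int, root: bytes) -> None:
    # Same function with the balance handling stripped; this is the version
    # that would be fed to the fork-choice formal development.
    state.latest_votes[validator] = root
```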