# Formal Engineering for Blockchains, part 1

In this write-up, we discuss the theoretical foundations of [Onotole](https://github.com/ericsson49/research/tree/master/onotole) - a transpiler for the subset of Python used for [beacon chain specs](https://github.com/ethereum/eth2.0-specs/tree/dev/specs) development. It has turned out to be an effective tool in practice: several problems in the py-specs have been found with it, and it is regularly used to translate the py-spec to an [implementation in Kotlin](https://github.com/ericsson49/research/tree/master/beacon_kotlin_generated), which is the base for a [Phase1 simulator](https://github.com/txrx-research/teku/tree/phase1/phase1/src/main/kotlin/tech/pegasys/teku/phase1/simulation) developed by @mkalinin, which, in its turn, is used in Eth1-Eth2 merger research.

Before extending Onotole to support translation to other languages, we want to formulate (and clarify) the principles it's based on, which can be useful in a more general context: formal engineering of blockchain protocols. The Onotole tool is a (simple) example of the four principles we will discuss:
- it's a tool, built using a **meta-programming** approach
- for a Python subset used for the beacon chain specs development (**tailoring formal methods to a problem domain**)
- which is used to translate the py-specs (**formal spec engineering**) to a statically typed language
- so that a **light(er)-weight formal method** (static type checking) can be employed

Our main goal is to develop a tool for formal engineering of the beacon chain protocol, but the ideas are applicable to blockchain protocols in general. So, in this write-up we first describe a high-level overview and will discuss the beacon chain protocol specifics in follow-up posts.

## Intro

Formal Software Verification is often perceived as a Holy Grail of computer science/software validation. While testing can only prove the presence of a bug, verification can prove the absence of (certain classes of) them. For this reason, Formal Verification is highly desired (or even required) in projects demanding higher levels of software quality.

Despite tremendous progress both in theory and practice, applying formal verification to industrial projects still remains a difficult problem. One reason is that traditional software development, as well as the tools supporting it, mostly ignores formal verification specifics. Considering the wider problem of applying formal methods (formal verification being a particular case), the only notable exception is static type checking, which can be seen as a lightweight formal method.

Thus, one way to pursue is to adapt existing software engineering approaches, so that incorporating formal methods and enjoying their benefits becomes an easier endeavor. This won't happen without problems: there will be natural "resistance" from traditional software engineering practitioners. One problem is that incorporating formal methods restricts the ways code can be written, as some of them can be much more difficult to formally reason about. Another problem is the additional burden associated with formal methods: annotations and theorem proving. We believe that the contradiction can be resolved in the longer term: indeed, serious software engineers do care about software quality and seek efficient ways to improve it.
So, the changes required to introduce formal methods into the development process - or to simplify further formal reasoning - can be readily accepted, if they bring significant overall benefits. The main problem can thus be formulated as reducing the costs associated with formal methods, which is very difficult to achieve in general. However, exploiting the specifics of a problem domain can reduce such costs significantly. We therefore want to confine our discussion to the domain of blockchain protocol engineering. Actually, our immediate goal is to focus on the beacon chain protocol engineering problems, but the ideas are applicable to a wider class of systems. For simplicity, we call it 'blockchain protocols', though distributed ledgers or BFT protocols might be a better term.

We discuss several principles which can help to introduce formal methods into a traditional software engineering process in a not-so-intrusive manner. This is not the only possible approach; for example, a formal-methods-first approach is possible. However, we believe that in the case of beacon chain development, a gradual introduction is required. So, we discuss an approach which gives higher priority to the traditional s/w development process, but aims to employ formal methods to find bugs and reduce engineering costs. In this write-up, we concentrate on a high-level description, applicable to blockchains in general, while in the following write-ups we will discuss beacon chain protocol specifics.

The principles are:
- start from lighter-weight formal methods: full-blown verification can be resource-consuming, while lighter-weight methods can bring benefits at lesser costs (though the benefits are lesser too).
- exploit problem domain specifics: there are several areas notoriously difficult for formal methods, but usual for s/w engineers, like destructive updates, object-oriented constructs, pointers, concurrency, bounded-precision arithmetic, etc. If a domain of interest doesn't involve some of them, or if they can be avoided at reasonable cost, then formal reasoning can become much easier.
- engineer the formal specification: a formal specification is the central concept in formal methods. However, the traditional understanding of formality is not formal enough. Clarifying the semantics of ambiguous places can unlock benefits.
- employ meta-programming: formal methods introduce additional views and representations. Manual synchronization of different views can become a problem. Meta-programming comes to the rescue.

# Formal Method benefits

Formal methods can assist in bug hunting; however, there are other benefits. A more general statement would be working with properties of a specification and/or its implementations, and with code transformations. We discuss this in more detail in this section.

Software quality is not the only benefit of formal method application (while it's arguably the central one). There are many useful properties that can be formally established or mechanically inferred, which can be important for improving software beyond bug hunting, e.g. they may justify certain optimizations.

An in-between case is code refactoring/re-organization - it's easy to introduce a bug during the process, which can be a serious reason to avoid it. However, refactoring is very important from a software engineering point of view: our understanding of problems evolves as we develop tools for solving them, so early decisions regularly cease to be adequate.
Another fruitful direction is mechanical code generation, which can drastically reduce the amount of human labor involved, e.g. in a multi-language context. While it's rather related to meta-programming, again, it's easy to introduce a bug here. So, formal methods can (and should) come to the rescue. Generated software can be accompanied by proof term generation, resulting in certified derivations. Even incomplete formalization (e.g. clarifying language semantics) can be quite helpful here in reducing the chances of introducing a bug.

## Reasoning about spec properties

Perhaps the main purpose of formal methods in s/w engineering is to prove that certain properties of interest hold for a piece of software (e.g. a specification, an implementation, a particular algorithm, etc). One example is high-level protocol properties, like [liveness](https://en.wikipedia.org/wiki/Liveness), [safety](https://en.wikipedia.org/wiki/Safety_property), invariants, etc. A safety property can be seen as an invariant that always holds (in any possible trace) or, alternatively, that is never violated ("something bad will never happen"). A liveness property can be seen as a property that eventually holds ("something good will eventually occur"). Both types of properties are important to evaluate software and protocols from a high-level perspective. However, they can also be useful during implementation, as they can justify or reject certain optimizations.

One example is [distributed garbage collection](https://en.wikipedia.org/wiki/Distributed_garbage_collection). Protocol implementations should keep some state in memory, which can be referenced by future messages. If such state grows unboundedly, the implementation can crash or malfunction at some point, as memory resources are constrained. There can be conditions which indicate that a particular piece of data won't ever be needed, so the space it occupies can be safely reclaimed.

Another example is using fixed-width integers. In theory, some counters can grow unboundedly, so arbitrary-precision integers should be used; however, fixed-width integers can be faster and occupy less space. A proof that some bounds will never be exceeded helps to choose an appropriate fixed-width data type.

There are also security considerations, like DoS-attack prevention. E.g. an attacker can try to exhaust a node's resources using carefully constructed messages. So, knowing resource consumption under worst-case conditions is important to clarify whether a protocol implementation can withstand DoS attacks.

## Equality proofs

The same desired functionality can be expressed in various forms, which suit different goals. For example, a reference implementation should be readable and understandable by human users, rather than optimal from a performance point of view. Another example is that an imperative-style implementation, which uses destructive updates, will differ from a declarative-style implementation. One more example is code refactoring: e.g. code can be reorganized to express component boundaries explicitly, or re-written to avoid certain language constructs (or to employ them). Some forms can be easier to reason about. There can also be multiple implementations of the same original code, obtained via a compilation process or various code transformations.

So, it's a completely normal situation in software engineering when there are multiple variants of the same piece of code. Obviously, the variants are likely to (accidentally) disagree on certain inputs. Formal methods, i.e. software verification, can be used to prove that the variants agree on every possible input, as the sketch below illustrates.
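For instance, an imperative and a declarative variant of the same computation should be extensionally equal. A minimal Python sketch of the situation - the `Validator` shape and the function names are ours, invented for illustration, not taken from the actual spec:

```python
# Two variants of the same functionality; the data shape is a toy
# stand-in, loosely inspired by beacon-chain-like records.
from dataclasses import dataclass
from itertools import product

@dataclass
class Validator:
    effective_balance: int
    slashed: bool

def total_balance_imperative(validators: list[Validator]) -> int:
    """Imperative variant: explicit loop and accumulator."""
    total = 0
    for v in validators:
        if not v.slashed:
            total += v.effective_balance
    return total

def total_balance_declarative(validators: list[Validator]) -> int:
    """Declarative variant: a single generator expression."""
    return sum(v.effective_balance for v in validators if not v.slashed)

# Testing can only sample a finite subset of inputs; an equality proof
# would cover *all* of them - that is exactly the gap formal methods close.
for balances in product([0, 1, 2**64 - 1], repeat=2):
    for flags in product([False, True], repeat=2):
        vs = [Validator(b, s) for b, s in zip(balances, flags)]
        assert total_balance_imperative(vs) == total_balance_declarative(vs)
```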
An important use case, in the context of the beacon chain spec, is optimized spec implementations, like (links??)

## Code generation

Sometimes, a correct-by-construction approach can be an easier way to ensure that desired properties hold. If one needs implementations in different languages, it might be easier to generate them from a common codebase (e.g. a reference implementation). That can be beneficial if the codebase is updated often, or when there are heavily optimized versions of the code.

## Test generation

Verifying that different implementations correspond to the specification can be resource-hungry, especially if there are many implementation-specific optimizations and variations in implementation style. We do not hope that it will ever be achieved. Testing seems to be the only reasonable approach here, at the moment. However, developing tests with good coverage can be difficult. Automatic test generation, based on formal models, can come to the rescue.

# Formal method challenges

We briefly overview s/w concepts which are difficult for formal methods, and how they relate to blockchain protocol specifics. The concepts are popular in traditional s/w engineering, as they typically improve the performance of the resulting code and/or the performance of software engineers, i.e. such code may be easier to read/write. In the longer term, the situation can be different, though - for example, when code is to be ported to new hardware, like multi-/many-core CPUs, etc.

## Bounded-precision arithmetic

One problem area is floating-point calculations. We assume that blockchain protocols do not use floating-point calculations explicitly; when they are needed, fixed-point or rational numbers can be used. Fixed-width integer arithmetic is also problematic for formal methods, as an overflow or underflow is possible. Arbitrary-precision integers can be used; however, fixed-width integers require fewer CPU resources and occupy less memory. The last factor is critical, as there is a lot of data generated in a popular blockchain system, so data compression is critical. Thus, dealing with overflow and underflow is inevitable, or at least quite difficult to avoid.

## Destructive updates

Destructive updates create implicit dependencies between modules, which should somehow be expressed in a formal description. There are several ways to do that, e.g. destructive updates can be translated to a non-destructive form (returning an updated copy of an object or heap). Another variant is to track updates explicitly. The problem with these approaches is that they can easily make the resulting descriptions difficult to read and to reason about. For this reason, it can be much easier to reason about pure functions.

Immutability is quite natural in blockchain protocols, as by their nature they should resist changes after a consensus is reached. Forgery and unauthorized modification by an adversary should be prevented too. Unintended mutation (e.g. due to a bug in an implementation) can break consensus as well. For this reason, a significant portion of data structures tend to be immutable. Blockchains themselves can be viewed as persistent (distributed and replicated) data structures, where immutability is achieved using cryptographic tools. However, many operations, e.g. state updates, are much easier to express using mutable updates, as the sketch below shows.
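Here are the two styles side by side, as a minimal Python sketch - the `State` shape and the helper names are invented for illustration and are deliberately much simpler than the actual beacon state:

```python
# Destructive vs. non-destructive update styles; `State` is a toy
# stand-in, not the actual beacon state.
from dataclasses import dataclass, replace

@dataclass
class State:
    slot: int
    balances: list[int]

def increase_balance_destructive(state: State, index: int, delta: int) -> None:
    """Imperative style: mutates the state in place. Easy to write and
    efficient, but the heap effect must be modeled for formal reasoning."""
    state.balances[index] += delta

def increase_balance_pure(state: State, index: int, delta: int) -> State:
    """Non-destructive style: returns an updated copy, leaving the input
    intact, so ordinary equational reasoning on values applies."""
    new_balances = list(state.balances)
    new_balances[index] += delta
    return replace(state, balances=new_balances)
```

Mechanically translating the first form into the second is one way a transpiler can make imperative spec code amenable to formal reasoning.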
## Pointer arithmetic

We assume that pointer arithmetic and low-level tricks like unsafe casting are not required. They may be needed and/or useful for certain low-level optimizations (e.g. compression or cryptography primitives); however, we ignore the topic, as it's far from blockchain specifics. Therefore, the memory model can be simplified significantly.

## Aliasing

[Aliasing](https://en.wikipedia.org/wiki/Aliasing_(computing)) arises when a memory location can be accessed via different variables. When destructive updates are possible, that complicates reasoning, as the same symbolic expression may result in different values, if an update happened in between evaluations (see the sketch after this section). However, such problems can often be prevented by avoiding dangerous aliasing.

One example is [Rust](https://www.rust-lang.org/)'s [ownership type system with borrowing](https://doc.rust-lang.org/1.8.0/book/ownership.html). In Rust, each name owns the value assigned to it, so if the value is assigned to another name, then the compiler either makes a copy, transfers ownership (a move), or reports an error. To avoid excessive copying and moving, a value can be borrowed by taking a reference to it. The Rust type checker makes sure that, within a given scope, either a single mutable reference exists or several immutable ones, but not both. That means the static type checker guarantees the absence of dangerous aliasing, where a destructive update could invalidate an expression involving an alias to the updated location.

The approach is quite expressive, while allowing destructive updates in a safe way. It's a very natural fit for the beacon chain protocol, according to our experience. We therefore propose to use the approach as the base when defining the memory model semantics. With the ownership type system, a mutator is either the (exclusive) owner of a value or borrows it via an exclusive reference, which means updates are easy to track, and such code can relatively easily be transformed to a non-destructive form, to simplify formal reasoning about it.
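The hazard itself is easy to demonstrate in Python, where aliasing is unrestricted - a toy snippet of ours, not spec code:

```python
# Unrestricted aliasing: the same list is reachable via two names, so a
# destructive update through one alias silently invalidates a fact
# previously established through the other.
balances = [32, 32, 32]
snapshot = balances           # an alias, NOT a copy
total_before = sum(snapshot)  # 96

balances[0] += 1              # destructive update via the other alias

# The "same" symbolic expression now evaluates to a different value:
assert sum(snapshot) == 97    # not 96 - snapshot observed the mutation
assert total_before == 96
```

Under a Rust-style discipline, holding the immutable `snapshot` reference would statically forbid the mutable access that performs the update, so the two evaluations could not disagree.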
## Object-oriented features

Some object-oriented features can be difficult for formal methods when destructive updates are possible. For example, there can be a subclass whose code is unknown at the time of verification. Unknown subclasses thus have to be replaced with an abstract specification, which can be tricky to express. We believe such OO features should be avoided in applications aiming at a high level of quality - at least until formal methods able to deal with them become mature.

# (Formal) Specification definitions

The specification plays the central role in the story - it's the main communication medium between the Specs Developers, Specs Implementers and Specs Verifiers. We assume a bi-directional communication among the roles: Specs Developers express their view in the form of beacon chain specs, while Specs Implementers and Verifiers communicate back the problems they have found. We want to clarify what a specification means and which varieties of a specification are possible (among those suitable for beacon chain protocol specification needs).

## Specification definition

We take as a starting point a specification definition from [here](https://www.astm.org/FormStyle_for_ASTM_STDS.html#definitions):

> specification, n— an explicit set of requirements to be satisfied by a material, product, system, or service.

In the case of a blockchain protocol specification it will be:

> beacon chain protocol specification — an explicit set of requirements to be satisfied by a blockchain protocol implementation.

## Specification forms

There are different kinds of blockchain specs consumers: ordinary human beings, software engineers and computers. We therefore believe three forms of specs are necessary:
- prose - a human-readable description
- code in a (popular) programming language, preferably an executable spec - a more concrete and less ambiguous form, the best way to communicate with software engineers
- a formal specification - to allow mechanized reasoning and transformations (and the best way to communicate with formal method practitioners)

These roughly correspond to the three kinds of users. Actually, it's hardly possible to describe the complete specification in a single form. For example, a prose form is ambiguous, but still necessary to communicate a general idea to people. Code is great for Specs Implementers; however, specification development is a process, and there will hardly ever be a complete executable specification. So, prose is needed as a source of information to clarify missing parts. A formal specification is difficult to develop and is likely to be done for the most important parts only. Multiple forms are also great for cross-validation: while in an ideal world a single source of information would be best (as there is less place for contradictions), in practice, 'Errare humanum est', so different forms help to reveal problems.

## 'Declarative' specification vs reference implementation

It's often presumed that a specification should be declarative rather than imperative, i.e. describe 'what should be done' rather than 'how it should be done'; the latter is presumed to be an 'implementation'. However, sometimes the best way to express a specification may be in the form of a 'reference' implementation, especially if it's executable, i.e. can be executed (== the necessary runtime libraries and execution environment are provided too).

Speaking more formally, a declarative specification defines a set of possible implementations, while a reference implementation defines the set by presenting a member. It thus might be more difficult to express acceptable implementation variations (e.g. allowed optimizations). However, it's still possible; for example, several options for some functions can be provided, including non-deterministic ones. In practice, executable code is provided for parts of specs only, so such a form is not very restrictive by itself. However, formal methods often need an explicit declarative specification to deal with various admissible implementations. One way to allow execution along with declarativity is to write a function (or functions) that checks whether a result is okay or not, as sketched below.
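A minimal Python sketch of the idea, using a hypothetical sorting spec (the names are ours): the reference implementation defines correctness by exhibiting one member of the set, while the declarative checker accepts *any* correct answer, leaving implementations free to differ.

```python
# A declarative specification as a checker vs. a reference implementation.
# Toy example (sorting); names are illustrative, not from the spec.
from collections import Counter

def sort_reference(xs: list[int]) -> list[int]:
    """Reference implementation: defines the set of acceptable
    implementations by presenting one member."""
    return sorted(xs)

def is_valid_sort(inp: list[int], out: list[int]) -> bool:
    """Declarative specification: accepts *any* implementation whose
    output is an ordered rearrangement of the input."""
    ordered = all(a <= b for a, b in zip(out, out[1:]))
    same_elements = Counter(inp) == Counter(out)
    return ordered and same_elements

# Any implementation - the reference one, or an optimized sort in another
# language - is acceptable exactly when the checker agrees:
assert is_valid_sort([3, 1, 2], sort_reference([3, 1, 2]))
```

The checker is itself executable, so it can double as a test oracle while remaining declarative about 'what' rather than 'how'.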
## Formal specification

We define a formal specification as a specification expressed using a [formal logic](https://en.wikipedia.org/wiki/Logic#Formal_logic), which is a variation of the formal specification definition from [here](https://en.wikipedia.org/wiki/Specification_(technical_standard)#Formal_specification):

> A formal specification is a mathematical description of software or hardware that may be used to develop an implementation.

Typically, a formal specification constructs a predicate, which defines the set of programs for which the predicate holds true, i.e. $\{p \mid p \in \mathcal P \land S(p)\}$, where $\mathcal P$ is the set of all programs. An implementation is any program belonging to the set. However, in practice the situation is more complicated, as programs are expressed using different languages, which often differ from the specification language. So, programs in the implementation languages should somehow be translated to the specification language. This is typically done by assigning a formal semantics, so that programs expressed in the implementation language can be formally transformed to the target (specification) language.

In our case, we mostly concentrate on the implementation language which is used to express the reference implementation (of a blockchain protocol). So, if one defines a formal semantics for the implementation language (a statically typed subset of Python with additional restrictions), targeting some formal specification language, then a formal specification can be constructed. We do not fix a particular specification language here; rather, we assume that there can be different languages, depending on a particular goal. While the setup can be somewhat confusing from a formal verification point of view, we consider this to be *formal engineering*: by developing mechanized transpilers, one can obtain formal specifications for different formal frameworks, languages and eco-systems. The same holds for transpilers targeting implementation languages, which can actually be used as formal method tools, since statically typed languages come with type checkers, which can be quite expressive.

[Part 2](https://hackmd.io/mq7a6mOPTYacVux7fssL-Q)