François THIRE

@p-cUv0l5RNaDKBCowZ0IzA

Joined on Sep 21, 2021

  • Prerequisites
    Network: the DAL is currently active on:
      - Dailynet (doc): reset every day
      - Weeklynet (doc): reset every Wednesday
    Account: provide a funded account. On the documentation page for each network above, there is a faucet you can use. For example, this week's faucet for Weeklynet is here.
  • In this section, we explain the data availability problem, present well-known solutions to it, and give an overview of the data availability solutions proposed by various blockchains. In some contexts, solutions for data availability are referred to as sharding solutions.
    The Data Availability Problem
    One way to increase the scalability of a blockchain is to let its users post the content of their operations off-chain. This makes it possible to go beyond the bandwidth limit of the L1, which cannot be very high. But if the data are posted off-chain, how can we reach a consensus that those data are available without requiring all the L1 nodes to download them? This is the data availability problem. More precisely, the data availability problem is to ensure that some data have been published at some given point in time, generally the time at which a block proposal containing commitments to those data is made. The name "data availability" can be a bit misleading: ensuring data availability does not ensure that the data can be retrieved after the point of publication.
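The sampling idea underlying most data availability designs can be illustrated with a small sketch (the chunking scheme, function names, and parameters below are illustrative, not the Tezos implementation): verifiers each fetch a few random chunks, so a publisher withholding part of the data is caught with high probability.

```python
import random

def split_into_chunks(data: bytes, n_chunks: int) -> list[bytes]:
    """Split a blob into roughly n_chunks equal pieces."""
    size = max(1, -(-len(data) // n_chunks))  # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

def sample_availability(chunks, available: set[int], k: int, rng) -> bool:
    """Query k random chunk indices; declare the data available
    only if every sampled chunk can be retrieved."""
    indices = rng.sample(range(len(chunks)), k)
    return all(i in available for i in indices)

rng = random.Random(42)
chunks = split_into_chunks(b"some rollup operations" * 100, 64)

# An honest publisher serves every chunk: sampling always succeeds.
assert sample_availability(chunks, set(range(len(chunks))), 16, rng)

# A withholding publisher hides half the chunks: the chance that all
# 16 samples miss the withheld half is below (1/2)^16, i.e. negligible.
half = set(range(len(chunks) // 2))
print(sample_availability(chunks, half, 16, rng))
```

Real designs combine this sampling with erasure coding so that any sufficiently large subset of chunks reconstructs the data; the sketch only shows the detection side.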
  • Various roles need to run the DAL node, each requiring a specific configuration based on its responsibilities. These roles are:
      - Baker
      - Bootstrap Node
      - Slot Producer
      - Slot Consumer
    In this document, we explain the responsibilities of each role and guide you on how to configure the DAL node appropriately for each specific role.
    Basic commands
  • Don't fear the shrinking
    François Thiré, Software Engineer at Nomadic Labs (working on the Tezos blockchain)
    Context
      - Writing property-based tests gives more confidence than unit tests
      - This was applied successfully to many components of the Tezos blockchain and detected many bugs (poke Julien Debon & Clément Hurlin)
      - We are using QCheck2 (thanks Julien :pray:) and Tezt
    How this journey started
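The actual tests mentioned above are written in OCaml with QCheck2; as a hand-rolled Python stand-in (all names here are illustrative), the core of property-based testing fits in a few lines: generate random inputs, check a property on each, and report the first counter-example.

```python
import random

def check_property(prop, gen, runs=200, seed=0):
    """Minimal property-based check: run `prop` on many generated
    inputs and return the first counter-example, or None if none found."""
    rng = random.Random(seed)
    for _ in range(runs):
        x = gen(rng)
        if not prop(x):
            return x  # counter-example found
    return None

# Random lists of small integers, up to 20 elements.
gen_list = lambda rng: [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]

# A true property: reversing a list twice yields the original list.
assert check_property(lambda l: list(reversed(list(reversed(l)))) == l, gen_list) is None

# A wrong property ("every list is sorted") is caught with a concrete witness.
bad = check_property(lambda l: l == sorted(l), gen_list)
assert bad is not None and bad != sorted(bad)
```

Libraries like QCheck2 add what this sketch lacks: shrinking of counter-examples to minimal ones (the "shrinking" of the title), combinator-based generators, and integration with test runners such as Tezt.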
  • :warning: THIS DOCUMENT IS A DRAFT. Everything is subject to change. :warning: :warning: THIS DOCUMENT MAY NOT BE UP TO DATE. :warning:
    Overview
    The DAL node is the main component of the Data-Availability Layer (DAL). The objectives of the DAL node can be split in two:
      - a P2P protocol for communication
      - an API to communicate with the various users
  • The Mumbai protocol, which introduced the mechanism of smart rollups, marked the inception of Layer-2 solutions in the Tezos blockchain landscape. Layer 2 is fundamentally designed to enhance the scalability of blockchain systems by increasing transactional throughput, often measured in TPS, or transactions per second. In an earlier announcement, we unveiled the potential to achieve a staggering 1 million TPS via these smart rollups. However, facilitating such elevated throughput requires moving the content of operations off-chain. Indeed, if we consider every transaction of the
  • The ambitious goal of achieving 1 million transactions per second (TPS) for Tezos through smart rollups requires an effective data-availability solution. Presently, two prominent solutions exist:
      - Data-Availability Committee
      - Data-Availability Layer
    The primary question we address in this document is: how can we achieve low latency (under a second) while employing a data-availability solution? The prevalent solution currently is to rely on a sequencer. A sequencer is an entity that determines the order in which operations must be executed.
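How a sequencer delivers low latency can be sketched with a toy model (the class, its fields, and the batching policy are hypothetical, not the actual design): the sequencer assigns a total order to operations the moment they arrive, and only later publishes the ordered batch through the data-availability solution.

```python
from dataclasses import dataclass, field

@dataclass
class Sequencer:
    """Toy sequencer: assigns a total order to incoming operations and
    periodically emits an ordered batch to be published off-chain."""
    next_seq: int = 0
    pending: list = field(default_factory=list)

    def submit(self, op: str) -> int:
        """Order an operation immediately. Returning the sequence number
        right away is what gives sub-second latency, before any
        data-availability publication has happened."""
        seq = self.next_seq
        self.next_seq += 1
        self.pending.append((seq, op))
        return seq

    def flush_batch(self):
        """Hand the ordered batch over for off-chain publication."""
        batch, self.pending = self.pending, []
        return batch

s = Sequencer()
s.submit("transfer A->B")
s.submit("transfer B->C")
print(s.flush_batch())  # [(0, 'transfer A->B'), (1, 'transfer B->C')]
```

The trade-off the document hints at: users trust the sequencer's ordering before the data-availability layer has attested the batch, which is why sequencer designs need an escape hatch or attestation step.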
  • Overview
    State of the art
      - The data availability problem
      - Solutions for the Data Availability Problem
      - Approaches of various blockchains
    DAL design
      - Overview
      - DAL/P2P design
      - Topological constraints
  • Use case
    For Layer 2:
      - Smart rollups
      - Validity rollups (aka zk-rollups)
    A decentralised DB
      - Bandwidth: ~10 MB/s
    Scalability of blockchains
      - Computation
  • The Data-Availability Layer (DAL) is a decentralised database for Tezos. The use cases for such a database are the rollup solutions provided on Tezos: smart rollups and Epoxy. Such a decentralised database would allow offloading the current Layer 1, which contains rollup operations. Using such a decentralised database, it is possible to reach 1 million TPS in a decentralised way. The current alternative is the DAC (Data-Availability Committee) project, which also provides a storage solution, but one backed only by a committee. Posting data off-chain can jeopardise the refutability of rollups. Indeed, the refutability of rollups relies crucially on the fact that any honest player can get those data. When those data are on Layer 1, it is sufficient to track Layer 1. When data are posted on the DAL, Layer 1 should be able to attest whether those data were indeed published on the DAL. This is what is called data availability.
    :::info
    Data availability only guarantees that the data were published at least once. It says nothing with respect to the availability of the data over time.
  • For the P2P protocol, having a DAL committee that changes at every level could create performance issues due to connections and disconnections. A way to overcome this is to draw the DAL committee every n levels, where n is a constant that remains to be defined. A period of n blocks will be called an epoch in this document.
    :::info
    Likely, we would want the number of blocks in an epoch to divide the number of blocks in a cycle.
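A minimal sketch of the epoch arithmetic, with hypothetical numbers (neither n nor the cycle length is fixed by the document):

```python
def committee_draw_levels(first_level: int, n: int, cycle_length: int):
    """Levels at which the DAL committee is (re)drawn, assuming one
    draw per epoch of n blocks. Enforces the divisibility constraint
    mentioned above so that epochs align with cycle boundaries."""
    if cycle_length % n != 0:
        raise ValueError("epoch length should divide the cycle length")
    return [first_level + k * n for k in range(cycle_length // n)]

# With a (hypothetical) cycle of 8192 blocks and epochs of 512 blocks,
# the committee is drawn 16 times per cycle.
levels = committee_draw_levels(0, 512, 8192)
assert len(levels) == 16 and levels[1] - levels[0] == 512
```

The divisibility constraint keeps every epoch inside a single cycle, so a committee draw never straddles the stake snapshot taken at a cycle boundary.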
  • The DAL committee for a level $l$ decides which attestor must attest the availability of a shard $s$ for all slot indices $s_i$.
    :::info
    The desired property for this committee is that the number of shards assigned to an attestor should be proportional to its stake.
    :::
    If an attestor is assigned at least one shard, it should be able to attest whether the data is available or not. To do so, either we reuse the current attestation operation (formerly known as endorsement), or we use a new one.
    :::warning
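The stake-proportionality property can be illustrated with a simple weighted-sampling sketch (the sampling scheme, the stake figures, and the shard count are illustrative, not the actual committee-selection algorithm):

```python
import random

def assign_shards(stakes: dict[str, int], n_shards: int, seed: int):
    """Assign each shard index to an attestor drawn with probability
    proportional to its stake. A deterministic seed stands in for the
    on-chain randomness that would make the draw reproducible."""
    rng = random.Random(seed)
    attestors = list(stakes)
    weights = [stakes[a] for a in attestors]
    return {s: rng.choices(attestors, weights=weights)[0]
            for s in range(n_shards)}

stakes = {"alice": 6000, "bob": 3000, "carol": 1000}
assignment = assign_shards(stakes, 2048, seed=1)
counts = {a: sum(1 for v in assignment.values() if v == a) for a in stakes}
# In expectation alice gets ~60% of the shards, bob ~30%, carol ~10%.
print(counts)
```

With enough shards, each attestor's share of assignments concentrates around its share of the stake, which is exactly the property the committee needs.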
  • For various reasons (demos, tests, ...), we may want to run scenarios spawning nodes on hundreds of machines. This is something that can be done neither in the CI nor on a personal computer. Instead, we may want to deploy machines running nodes in the cloud, connect them, and run the test on those machines. This was already done in the past via Kubernetes. However, several issues were raised:
      - Reproducibility was hard to achieve
      - Developers are not familiar with the technology, so it was very hard to write and deploy scenarios
      - With DevOps, it was hard to write a scenario and get the observables useful to the developer
      - It could not be maintained easily
    For more than two years now, the Tezt framework has been used successfully to write integration tests for Tezos. Tezt also contains many high-level functions that help to write integration tests, which are harder to write in bash or any scripting language using the Tezos API. Since core developers of Tezos are familiar with OCaml, and Tezt tests are written in OCaml, maintaining those scenarios is easier. Moreover, even if tests are not run every time, putting the code into the CI ensures that the code still compiles. Finally, Tezt contains a Runner module that allows writing integration tests where nodes are run not on the local machine but on remote machines via an SSH connection.
  • [toc]
    References
      - KR 2.1/DAL
      - KR 2.3/P2P
    P2P design
    Objectives
  • Specification of an infinite inbox with SOL/EOL
    Given $\mathbb{M}$, a set that aims to represent messages, we define $\mathbb{M}^{\dagger}$ as: $$\mathbb{M}^{\dagger} = \mathbb{M} \sqcup \mathrm{SOL} \sqcup \mathrm{EOL}$$ where $\sqcup$ denotes the disjoint union. An inbox is defined as a function $\mathcal{I} : \mathbb{N} \times \mathbb{N} \to \mathbb{M}^{\dagger}$ such that: $\mathcal{I}(l,i) = \mathrm{EOL} \Leftrightarrow \forall j, j < i, \exists m', \mathcal{I}(l,j) = m'$
  • The current implementation 6239 of EOL is not good because it adds a lot of code to the trusted code base, and it is really hard to ensure its correctness. This document aims to explain the current difficulties and also elaborates a high-level specification for the EOL feature that would lead to an easier implementation. This new specification should also simplify other parts related to the inbox.
    The problem
    Context
    Each rollup maintains an inbox. This inbox is implemented by the protocol via a skip list, which is a sparse data structure representing the L2 messages posted onto the L1. (The protocol only keeps the head of this skip list while the rollup node maintains the whole skip list. This makes the skip list off-chain.) A cell of the skip list is a list of messages posted at some level (i.e. L1 block), plus some metadata. If there is no message for a particular level, there is no cell. Consequently, proving the absence of messages at some level $l$ requires proving the existence of messages at some levels $l_1$ and $l_2$ such that:
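The interval argument at the end of the snippet can be made concrete with a small sketch (the `cells` dictionary is an illustrative stand-in, not the protocol's skip list): absence of a message at level $l$ follows from exhibiting two populated levels $l_1 < l < l_2$ with no populated level strictly between them.

```python
def proves_absence(cells: dict[int, list[str]], l: int, l1: int, l2: int) -> bool:
    """Check the interval argument: level l carries no message if two
    populated levels l1 < l < l2 are adjacent, i.e. no populated level
    lies strictly between them. `cells` maps each populated level to
    its messages (the sparse part of the skip list)."""
    adjacent = all(not (l1 < lv < l2) for lv in cells)
    return l1 in cells and l2 in cells and l1 < l < l2 and adjacent

cells = {10: ["m1"], 14: ["m2", "m3"]}   # levels 11..13 have no cell
assert proves_absence(cells, 12, 10, 14)
assert not proves_absence(cells, 12, 10, 15)  # 15 is not a populated level
```

In the real refutation game the two cells are exhibited with Merkle proofs against the on-chain head rather than by scanning a dictionary, but the logical condition being proved is the same.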
  • Considering the new DAC proposition, we can rethink a bit the current design of the DAL for the interaction with SCORU. The aim is to:
      - Simplify the L1 refutation game
      - Give more freedom to the kernel to import DAL data
      - Rely on features that are desired for SCORU
      - Implement a mechanism similar to DAC, enabling code sharing with DAC if the design is validated
    The design ensures the following invariant
  • Purpose: to agree on what should be implemented for a first PoC.
    Various remarks:
      - Long shard assignments should not change security
      - The scheme $\alpha=16, r=1$ may not be enough for the P2P?
      - There is nothing easier to implement than sampling that gives the same properties
  • This document aims to give a specification of the TORUs.
    :::warning
    This document is not complete. For the moment, we mainly focus on the commitment, commitment_finalisation and rejection operations.
    :::
    We try to give both an informal and a formal semantics for operations. All the formal parts are hidden by default so as not to disturb the reading of this document.
    :::spoiler Notations used throughout this document (mainly used for the formal part of this document)
  • In STTFA, the fact that there is some subtyping and impredicativity makes proofs a priori incompatible with Agda. Hence the question: can we get rid of the subtyping and impredicativity in those proofs? Mainly, the problem of cumulativity (or subtyping) as it is in STTFA is a matter of code duplication. For impredicativity, it is a bit different: an impredicative proof cannot always be turned into a predicative proof, and at the time of writing there is no static check to know whether an impredicative proof can be turned into a predicative one (note that I do not really define impredicative/predicative in this setting). Going from STTFA into Agda requires some tricks on the universes. We know that Universo can already be used to check whether an impredicative proof can be turned into a predicative one. Given that Universo already exists, the question is whether we could avoid the code duplication and let Universo do that for us by using universe polymorphism. This would have two benefits:
      - Agda already knows universe polymorphism, so it would be a lighter translation than code duplication, and it would also make the library more idiomatic
      - This would allow us to reuse Universo as a tool and would spare us from writing new machinery