Robert Habermeier

@Xo-wxO7bQkKidH1LrqACsw

Joined on Mar 2, 2018

  • 07-03 Here's a futuristic idea for a merkle tree. We commit to an unordered binary merkle tree, where items are inserted into the tree based on a deterministic ordering. Leaf node preimages provide a kind of run-length encoding for proving nonexistence of keys. Leaf hashes are H(key ++ value hash ++ distance), where distance is the next key (in lexicographic order) minus the key. Deleted leaf nodes are overwritten so that they form a linked list of all free positions available to be overwritten.
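A minimal sketch of the leaf scheme described above. The key width, struct layout, and helper names are assumptions for illustration; a real implementation would use a cryptographic hash over the preimage bytes.

```rust
/// Keys are treated as fixed-width big-endian integers, so "next key minus
/// key" is well-defined under lexicographic ordering. (Assumed width: u64.)
type Key = u64;

struct Leaf {
    key: Key,
    value_hash: [u8; 32],
    /// Distance to the next present key; every key strictly between
    /// `key` and `key + distance` is provably absent.
    distance: Key,
}

impl Leaf {
    /// The preimage bytes of H(key ++ value hash ++ distance).
    /// (A stand-in: the actual hash function is not specified here.)
    fn preimage(&self) -> Vec<u8> {
        let mut out = Vec::with_capacity(8 + 32 + 8);
        out.extend_from_slice(&self.key.to_be_bytes());
        out.extend_from_slice(&self.value_hash);
        out.extend_from_slice(&self.distance.to_be_bytes());
        out
    }

    /// A Merkle proof of this leaf doubles as a nonexistence proof for any
    /// key falling strictly between this key and the next one.
    fn proves_absence_of(&self, absent: Key) -> bool {
        absent > self.key && absent - self.key < self.distance
    }
}
```

A single leaf proof thus answers both "is k present?" and "is k absent?" without revealing any other keys.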
  • Hash-Table: Triangular Probing. We have chosen triangular probing as our preferred hash-table algorithm over 3-ary cuckoo and Robin Hood hashing. We wanted the following properties in our algorithm:
    - Minimal probes required for present and absent values.
    - Minimal writes on insertions and deletions.
    - Graceful performance degradation up to at least a load factor of 0.8.
    With metadata bits in memory, triangular probing fulfills all three. It degrades gracefully beyond a 0.9 load factor: even there, with 4 or more meta-bits, it averages fewer than 2 probes per existing-key query and fewer than 1 probe per absent-key query.
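For reference, triangular probing visits slots at triangular-number offsets from the initial hash (0, 1, 3, 6, 10, ...). A minimal sketch of the probe sequence, assuming a power-of-two table size (the function name is illustrative, not from the note):

```rust
// Triangular probe sequence: the i-th probe lands at hash + i*(i+1)/2.
// In a power-of-two table this sequence visits every slot exactly once,
// which plain quadratic probing does not guarantee.
fn probe_sequence(hash: usize, capacity: usize) -> impl Iterator<Item = usize> {
    debug_assert!(capacity.is_power_of_two());
    (0..capacity).scan(0usize, move |offset, i| {
        // `offset` accumulates 0, 1, 3, 6, 10, ... (triangular numbers).
        *offset = offset.wrapping_add(i);
        Some(hash.wrapping_add(*offset) & (capacity - 1))
    })
}
```

The full-coverage property is what lets lookups terminate reliably even at very high load factors.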
  • Benchmark: fio with a 50 GB file, randrw workload, 4K block size, direct I/O, iodepth 32.
    System: Corsair MP600 PRO LPX 4TB, Linux 6.1.0-17amd64, AMD Ryzen 9 7950X 16-Core Processor.
    Filesystem | IO Interface | Read
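A plausible reconstruction of the fio job from the parameters above; the job name, filename, and ioengine are assumptions (an async engine is implied by iodepth=32), not taken from the note.

```ini
; Hypothetical fio job matching the stated parameters.
[randrw-50g]
filename=testfile        ; assumed name
size=50g
rw=randrw
bs=4k
direct=1
iodepth=32
ioengine=io_uring        ; assumed; libaio would also fit iodepth=32
```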
  • Polkadot Block Authorship has too much enshrined coordination logic: analysis. The point of this write-up is to argue that the majority of the logic which determines the contents of the next Polkadot block should be expunged from the core protocol. The largest issue is that the protocol prescribes what the validators should do in sourcing work packages rather than describing the desired end result: a block containing reports of valid, or at least available, work packages, each with an accompanying bond. The random sorting of validators into groups to pre-validate work packages is just one of many approaches which may be used to deliver new block contents to the Polkadot network. It is unlikely to be the best of all possible approaches. Polkadot's core protocol should be altered to make the construction of new blocks a concern of the free market. Let's compare the status quo[^1] to a hypothetical approach. In this alternative, any account with enough tokens can provide the security bond for work packages, and the protocol provides no guidance on where the validator building a block ought to source its work packages.
  • Project Description: Polkadot Containers. Background (Kernel/Userland): https://hackmd.io/uncbykXKTX2h8xde-zGNWw. I have the underlying goal of decoupling Polkadot core utilization from the advancement of chains. Cores effectively provide two resources: data and computation. Both are secure under a standard blockchain crypto-economic model. To address data, we have created https://github.com/thrumdev/blobs as a data availability interface which will, in the future, scale to an arbitrary number of cores. To address computation, we could define Containers.
  • Next-generation Scheduling aims to enable parathreads and to unify parachains and parathreads by allowing chains to acquire 'burst' capacity in the form of additional temporary Cores. Conceptually, a parathread will be a parachain with zero 'reserved' Cores which acquires temporary Cores on-demand. Parachains can have one or more reserved Cores and can also acquire additional Cores on-demand. This enables much more efficient blockspace and security markets for Polkadot and a simpler path to onboard into the ecosystem.
    Motivation: The amount of time that validation of a parablock may take is bounded by how long we are willing to let relay-chain finality take. Due to the architecture of Polkadot, it is possible for sequential blocks to be validated in parallel. Furthermore, the number of Cores managed by the system should always be scaled to the maximum amount of backing, availability, messaging, and approval work the network can support. The network may choose to allocate fixed amounts of cores to long-term slots via parachain auctions and leave the rest as 'burst' capacity. Cores set aside for long-term slots but not claimed via slot auctions should also be usable as 'burst' capacity. The aim is to make these resources, via the abstraction of Cores, accessible to all parachains through architectural changes that allow one or multiple Cores to be claimed by any parachain via slot auction (long-term) or 'burst' auction (short-term).
    Requirements:
    - Allow paras to occupy multiple availability cores at the same time, and allow dependencies between blocks to be expressed between cores.
    - When a block times out due to lack of availability, its dependents should be erased from the cores they occupy. When a dependent core occupant is vacated due to its dependency, the temporary core claim should be refunded. (This may introduce a DoS vector and bears further refinement.)
    - Temporary core claims should be possible to acquire either by user accounts or by the parachain's account itself. This is a soft requirement: it would also be acceptable for claims to be acquirable only by user accounts, as long as they had a reliable way of being reimbursed by a parachain or smart contract.
    - Parachains should be able to acquire temporary core claims in a way that fits their collator selection strategy, like Aura or SASSAFRAS.
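The timeout-and-vacate requirement can be sketched as a toy state transition. This is not Polkadot runtime code; the types and function names are invented for illustration, and only the refund rule from the requirements is modeled.

```rust
use std::collections::HashMap;

type CoreIndex = u32;

struct Occupant {
    para_id: u32,
    /// Core whose candidate this block builds on, if any.
    depends_on: Option<CoreIndex>,
    /// True if the core was acquired via a temporary ('burst') claim.
    temporary_claim: bool,
}

/// Vacate the timed-out core and, transitively, every core whose occupant
/// depends on a vacated core. Returns the para ids owed a refund: per the
/// requirement, only dependents vacated *because of* a dependency are
/// refunded, not the timed-out block itself.
fn vacate_with_dependents(
    cores: &mut HashMap<CoreIndex, Occupant>,
    timed_out: CoreIndex,
) -> Vec<u32> {
    let mut refunds = Vec::new();
    let mut to_vacate = vec![timed_out];
    while let Some(core) = to_vacate.pop() {
        if let Some(occ) = cores.remove(&core) {
            if core != timed_out && occ.temporary_claim {
                refunds.push(occ.para_id);
            }
            // Queue every remaining core whose occupant built on this one.
            let dependents: Vec<CoreIndex> = cores
                .iter()
                .filter(|(_, o)| o.depends_on == Some(core))
                .map(|(&c, _)| c)
                .collect();
            to_vacate.extend(dependents);
        }
    }
    refunds
}
```

Unrelated cores are untouched, which is why the DoS concern above is scoped to dependency chains a para creates for itself.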