Louis Thibault

@lthibault

Joined on Feb 3, 2023

  • How indexed queries to a node's routing table are performed remotely, and how the results are efficiently streamed.

    **Preliminaries: Cluster Membership and the Routing Table**

    Wetware nodes form a cluster in which peers maintain a mutual awareness of each other. This mutual awareness is provided by the `cluster.RoutingTable` interface. The routing table maintains an indexed set of `routing.Record`s, each of which is periodically broadcast by a peer as a form of heartbeat and contains routing information and metadata about its originator. For convenience, here is `routing.Record`:

    ```go
    // Record is an entry in the routing table.
    type Record interface {
        // Server ID is a globally unique identifier for
    ```
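As a rough illustration of the kind of indexed, streamed lookup the note describes, here is a minimal Go sketch. The `RoutingTable`, `Iterator` and `LookupHost` names are hypothetical stand-ins for illustration, not Wetware's actual API.

```go
package main

import "fmt"

// Record is a trimmed-down stand-in for routing.Record.
type Record struct {
	Server string // globally unique server ID
	Host   string // hostname reported in the heartbeat
}

// Iterator streams matching records one at a time, so a remote
// caller never needs the whole result set in memory at once.
type Iterator func() (Record, bool)

// RoutingTable is a hypothetical indexed view of the cluster.
type RoutingTable struct {
	byHost map[string][]Record // index: hostname -> records
}

// LookupHost returns an iterator over records matching the host index.
func (rt *RoutingTable) LookupHost(host string) Iterator {
	recs := rt.byHost[host]
	i := 0
	return func() (Record, bool) {
		if i >= len(recs) {
			return Record{}, false
		}
		r := recs[i]
		i++
		return r, true
	}
}

func main() {
	rt := &RoutingTable{byHost: map[string][]Record{
		"node-1": {{Server: "abc123", Host: "node-1"}},
	}}
	for next := rt.LookupHost("node-1"); ; {
		r, ok := next()
		if !ok {
			break
		}
		fmt.Println(r.Server, r.Host)
	}
}
```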
  • Core components
    - [x] Cap'n Proto
    - [x] Libp2p
    - [x] Process management
      - [ ] wasmer
      - [ ] Aurae
    - [x] Routing table (handled by both foca and memberlist)
  • This memo summarizes my preliminary work on supporting WebAssembly-based pre-compiled contracts in Flashbots' SUAVE implementation. Two separate features are provided: (Demo 1) demonstrates the ability to compile an existing contract (MEV-Share's ExtractHint) to WASM and execute it (see commit be61c5), and (Demo 2) demonstrates the ability of compiled WASM code to call a method on suave.ConfidentialStoreBackend (see commit 9fdf66). I review each of these features below, and include a short analysis of current capabilities, limitations, design choices and future work. These features should be considered fit for demonstration purposes only, and are not production-ready.

    **Demo 1: Compile ExtractHint to WASM and call in unit test**

    At a high level, there are two parts to this demo:
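For orientation, here is a minimal sketch of loading and invoking WASM-compiled contract code from Go. It uses wazero as a stand-in runtime; the `extract_hint` export and its calling convention are assumptions for illustration, not the memo's actual harness.

```go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/tetratelabs/wazero"
)

func main() {
	ctx := context.Background()

	// Load the contract compiled to WASM.
	wasmBytes, err := os.ReadFile("extract_hint.wasm")
	if err != nil {
		panic(err)
	}

	r := wazero.NewRuntime(ctx)
	defer r.Close(ctx)

	mod, err := r.Instantiate(ctx, wasmBytes)
	if err != nil {
		panic(err)
	}

	// Call the exported function. A real input would be an ABI-encoded
	// bundle written into guest memory; a bare integer keeps the
	// sketch self-contained.
	fn := mod.ExportedFunction("extract_hint")
	if fn == nil {
		panic("extract_hint not exported")
	}
	results, err := fn.Call(ctx, 42)
	if err != nil {
		panic(err)
	}
	fmt.Println("result:", results)
}
```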
  • Decentralized Cloud architecture for Web3. [Matrix] [Godoc Reference]

    **What is Wetware?**

    Wetware is a portable environment for writing secure, scalable and performant distributed applications. It aspires to be Web3's answer to Cloud Computing, providing a thin cluster abstraction over physical and virtual hardware, and supplementing this with standardized services like pub-sub, blob storage, process management and synchronization primitives.

    In keeping with the Web3 ethos, Wetware is built with distribution and decentralization in mind. It sports a fully peer-to-peer architecture, which avoids single points of failure and allows it to scale effortlessly. It makes judicious use of distributed ledger technology to host security-critical functions like access control away from centralized platforms like Amazon Web Services (AWS), which represent unacceptable risks to system integrity, sovereignty and the founding principles of the Open Web.

    Crucially, Wetware resists the urge to overcommit to emerging technologies, reserving these for a carefully-considered selection of use cases. In other words, we believe that the way forward is through reasoned, prudent and judicious engineering, not hype. In practice, this means we adopt blockchain technologies cautiously, and only after exhausting alternatives. Wetware does not include a blockchain in its core protocol, for example, and never will. Where blockchains are employed, we confine them to the upper layers of the protocol, where they are but one of many pluggable implementations for supporting services like identity management, access control, audit logging and inter-cluster routing.
  • **Overview**

    Two-step project:

    1. Implement a distributed, replicated log data-structure on top of Wetware.
    2. Implement a web-crawling framework on top of the replicated log.

    Rationale: the goal of Wetware is to facilitate the development of distributed systems, so it makes sense to start by showing that we can easily reproduce a widely used data-structure out of ww primitives. From there, we show that it's straightforward to build a non-trivial application in the usual way.

    **System Architecture**

    The system comprises a set of worker peers, each of which consumes URLs from a replicated queue, fetches the resource, and appends the resource to a separate, replicated "results" queue (see the sketch below). From this description, we identify XXX core components:
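To make the architecture concrete, here is a minimal Go sketch of the worker loop described above. The `Queue` interface and the in-memory `memQueue` are hypothetical stand-ins for the replicated queue, not an actual ww API.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// Queue is a hypothetical stand-in for the replicated queue; a real
// implementation would replicate entries across the cluster.
type Queue interface {
	Pop() (string, bool) // next item, or false when empty
	Push(item string)
}

// crawl runs the worker loop: consume a URL, fetch the resource, and
// append its body to the results queue.
func crawl(urls, results Queue) error {
	for {
		u, ok := urls.Pop()
		if !ok {
			return nil // queue drained
		}
		resp, err := http.Get(u)
		if err != nil {
			return err
		}
		body, err := io.ReadAll(resp.Body)
		resp.Body.Close()
		if err != nil {
			return err
		}
		results.Push(string(body))
		fmt.Println("fetched", u, len(body), "bytes")
	}
}

// memQueue is an in-memory Queue used only to make the sketch runnable.
type memQueue struct{ items []string }

func (q *memQueue) Pop() (string, bool) {
	if len(q.items) == 0 {
		return "", false
	}
	item := q.items[0]
	q.items = q.items[1:]
	return item, true
}

func (q *memQueue) Push(item string) { q.items = append(q.items, item) }

func main() {
	urls := &memQueue{items: []string{"https://example.com"}}
	results := &memQueue{}
	if err := crawl(urls, results); err != nil {
		panic(err)
	}
}
```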
  • To mitigate the entire class of attacks exemplified by Bonnie, there is a simple fix: the Relay does not deliver the payload until it directly witnesses x% of attestations for its signed header.

    - In an eclipse scenario, the attacker ends up waiting forever.
    - In practice, the proposer is incentivized to help deliver attestations to the Relay, because doing so speeds up delivery of the payload.

    Paper structure: explain the above.
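To make the gating rule concrete, here is a minimal Go sketch. The `Relay` type, the committee size and the 50% threshold are illustrative assumptions, not a specification.

```go
package main

import "fmt"

// Relay withholds the payload until it has directly witnessed
// enough attestations for the signed header it committed to.
type Relay struct {
	committee int             // number of expected attesters
	threshold float64         // fraction of attestations required, e.g. 0.5
	witnessed map[string]bool // attester ID -> seen
}

// Witness records an attestation observed directly by the relay.
func (r *Relay) Witness(attester string) { r.witnessed[attester] = true }

// ShouldDeliver reports whether the payload may be released.
func (r *Relay) ShouldDeliver() bool {
	return float64(len(r.witnessed)) >= r.threshold*float64(r.committee)
}

func main() {
	r := &Relay{committee: 4, threshold: 0.5, witnessed: map[string]bool{}}
	r.Witness("a")
	fmt.Println(r.ShouldDeliver()) // false: 1/4 below the 50% threshold
	r.Witness("b")
	fmt.Println(r.ShouldDeliver()) // true: 2/4 meets the 50% threshold
}
```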
  • This repository demonstrates how to use Wetware to write distributed systems. It contains a simple application that schedules jobs for execution in a distributed worker pool. A single scheduler node exposes a REST API for scheduling "tasks", and distributes them pseudorandomly to workers via Cap'n Proto RPC. The scheduler discovers worker nodes using Wetware's zero-conf clustering API, and can adapt to workers joining and leaving the network dynamically.

    **Running it locally**

    Installation. Clone the repository:

    ```sh
    git clone https://github.com/evan-schott/ww-scheduler.git
    ```

    First run. Build the single application binary:

    ```sh
    cd ww-scheduler && go build -o ww-sched cmd/scheduler/main.go
    ```

    Start the gateway node:

    ```sh
    ./ww-sched --gateway
    ```
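For intuition, here is a tiny Go sketch of the pseudorandom task distribution described above. The `Worker` type and `Assign` helper are hypothetical, not the repository's actual API; the real scheduler dispatches over Cap'n Proto RPC.

```go
package main

import (
	"fmt"
	"math/rand"
)

// Worker is a hypothetical handle to a discovered worker node; in the
// real application this would wrap a Cap'n Proto RPC client.
type Worker struct{ ID string }

// Assign picks a worker pseudorandomly. Because the worker set comes
// from cluster discovery, it can grow and shrink between calls.
func Assign(task string, workers []Worker) (Worker, error) {
	if len(workers) == 0 {
		return Worker{}, fmt.Errorf("no workers available for %q", task)
	}
	return workers[rand.Intn(len(workers))], nil
}

func main() {
	workers := []Worker{{ID: "w1"}, {ID: "w2"}, {ID: "w3"}}
	w, err := Assign("task-42", workers)
	if err != nil {
		panic(err)
	}
	fmt.Println("assigned to", w.ID)
}
```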
  • NOTE: as per #79, we are first pursuing a single-producer/single-consumer (SPSC) channel implementation, which we will extend to an MPMC channel in a later effort. Therefore, this note assumes exactly one sender and one concurrent receiver.

    **What is sync.Cond?**

    Reference #1 says it best:

    > If I had to summarize sync.Cond in one sentence: a channel says "here's a specific event that occurred;" a sync.Cond says "something happened, idk, you go figure it out." Thus, sync.Cond is more vague, which means it's also more flexible. It also hints at how you should use it: it's kind of like a chan struct{} paired with a sync.Mutex: after you receive from the channel, you lock the mutex, then go investigate what happened.
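As a concrete companion to that description, here is a minimal runnable sketch of the canonical sync.Cond pattern: the receiver holds the mutex and re-checks its condition in a loop after every wakeup, so a signal that arrives before the wait is never lost.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		mu    sync.Mutex
		cond  = sync.NewCond(&mu)
		queue []int
		done  sync.WaitGroup
	)

	done.Add(1)
	go func() { // single consumer
		defer done.Done()
		mu.Lock()
		defer mu.Unlock()
		for len(queue) == 0 { // "go figure out what happened"
			cond.Wait() // atomically unlocks mu, sleeps, relocks on wakeup
		}
		fmt.Println("received:", queue[0])
	}()

	// single producer
	mu.Lock()
	queue = append(queue, 42)
	mu.Unlock()
	cond.Signal() // "something happened"

	done.Wait()
}
```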
  • Two relevant kinds of system calls for Wetware:

    1. "Classic" syscalls to the OS
    2. Syscalls to the cluster

    **Cluster Syscalls**

    Cluster syscalls are handled through Wetware's Host capability.

    **Outstanding Questions**

    How do we pass the Host capability into the WASM process? (One possible mechanism is sketched below.)
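One common way to place a host-side capability within reach of a WASM guest is to expose it through host functions, so the capability itself never crosses the WASM boundary. Below is a minimal sketch using wazero; the `ww` module name and `host_call` export are illustrative assumptions, not Wetware's actual design.

```go
package main

import (
	"context"
	"fmt"

	"github.com/tetratelabs/wazero"
	"github.com/tetratelabs/wazero/api"
)

func main() {
	ctx := context.Background()
	r := wazero.NewRuntime(ctx)
	defer r.Close(ctx)

	// Register a host module. The guest imports "ww"."host_call", and
	// the closure below could hold a reference to the Host capability,
	// dispatching cluster syscalls on the guest's behalf.
	_, err := r.NewHostModuleBuilder("ww").
		NewFunctionBuilder().
		WithFunc(func(ctx context.Context, m api.Module, ptr, n uint32) {
			// Read the guest's request out of linear memory; a real
			// handler would decode it and invoke the Host capability.
			buf, _ := m.Memory().Read(ptr, n)
			fmt.Printf("cluster syscall: %q\n", buf)
		}).
		Export("host_call").
		Instantiate(ctx)
	if err != nil {
		panic(err)
	}

	// A guest module compiled against the "ww" import would now be
	// instantiated with r.Instantiate(ctx, wasmBytes) and could invoke
	// host_call as an ordinary function.
}
```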