# Computational Data Markets

## Abstract

Problems this addresses:

* Decentralized blockchain RPC (aka a decentralized Infura).
* Decentralized high-speed CDNs / BitTorrent.

## Prototype

- Take eth-verifiable-rpc: https://github.com/liamzebedee/eth-verifiable-rpc
- Generate a STARK proof of each result.
- Set up a two-way payment channel between nodes.
- Route this through a relayer who measures bandwidth.

## Mechanisms

* **Proof-for-payment**. Nodes run software atop a ZK-provable runtime (Cairo, RISC Zero). RPC and network communication occur over this substrate. When a client requests data from a server, the request's computational expense is well-defined (in gas units), and the server evaluates it. The server produces a proof of correct computation. The proof unlocks a deposit placed by the client, thus completing a payment.
* **Proof-of-bandwidth**. Bandwidth is defined as the ratio of data transferred to time elapsed. Proof-of-bandwidth is a scheme involving three parties: the client, the server, and the relayer. The relayer is a third-party adjudicator. The client commits to their request/challenge ahead of time, and then sends the request. The server does some work and replies, sending the reply back through the relayer. By receiving the preimage, the relayer can claim their fee, which also claims the server's fee. The bandwidth of this request is also revealed in this game.
* **2-of-2 scorched-earth escrow**. https://ethresear.ch/t/list-of-primitives-useful-for-using-cryptoeconomics-driven-internet-social-media-applications/3198

## Marketplace

Each node runs a service. For example, nodes can run the `geth` program as an Ethereum RPC service. To use this service as a client, the following flow occurs:

1. Client connects to a directory server to look up servers.
2. Client finds the server nodes for this program, and connects to their relay.
3. Client creates an on-chain payment channel to the server (see contract below).
4. Client sends the **request** to the relayer.
5. The relayer commits the client request on-chain.
6. The relayer begins the VDF timer.
7. The request is relayed to the **server**.
8. The server processes the request and computes the response.
9. The server generates a ZK-STARK proof of processing the request.
10. The server sends the response (output and proof) to the relayer.
11. The relayer stops the VDF timer upon receiving this.
12. The relayer relays the output to the client.
13. The relayer posts the proof and VDF time on-chain, closing the service channel and unlocking payment.
14. The relayer and server are paid.

Diagrammatically:

1. Client -> `DirectoryServer.LookupRelaysFor("Ethereum RPC")`
2. Client <- [,] <- DirectoryServer
3. Client -> 0.1 eth -> `ServiceChannel(`
4. Relayer -> `ServiceChannel(`
5. Client -> `Request { input: eth_call(DAI.balanceOf(0x)) }` -> Relayer
6. Relayer -> `Request { input: eth_call(DAI.balanceOf(0x)) }` -> Server
7. Server -> `Response { output: 0x, proof: 0x }` -> Relayer
8. `Relayer.stopVDFTimer()`
9. Relayer -> `Response { output: 0x, proof: 0x }` -> Client
10. Relayer -> `ServiceChannel(`
11. `ServiceChannel` -> 0.001 eth -> Relayer
12. `ServiceChannel` -> 0.099 eth -> Server

### On-chain game

#### Naive model - measuring bandwidth of a server, fixed price

```py
class ServiceChannel:
    # Input to the program.
    # e.g. for geth, it's a JSON-RPC message for `eth_call`.
    input = 0x

    def start(_input):
        client.deposit(0.2 ether)
        input = _input

    def end(proof, vdf_time_proof):
        verify(proof)
        verify(proof.program == program)
        verify(proof.input == input)
        verify(vdf_time_proof)

        # Elapsed time as attested by the relayer's VDF clock.
        elapsed = vdf_time_proof.ticks
        data = proof.size + proof.output.size
        bandwidth = data / elapsed

        # Rewards.
        relayer.transfer(1%)
        server.transfer(99%)

        # Update stats on leaderboard.
        avg_bandwidth = (avg_bandwidth + bandwidth) / 2
```

#### Sophisticated model - measuring bandwidth of a server, automatic market
Based on the naive model, we can learn important characteristics of services:

1. Average speed, in terms of bandwidth (which incorporates latency).
2. Number of requests per period.

These map nicely onto the concepts of "supply" and "demand", where our market is for a software service:

1. Supply - the number of nodes running the service.
2. Demand - the number of users paying for computation.

We can design an automatic market maker based on these statistics. What does this enable?

- Efficient price discovery for the market, which fuels efficient resource allocation - how many RPC servers do we need to fulfill demand? Likewise, "surge pricing"-like mechanisms to incentivise new servers to fulfill demand.
- Incentives to compete on bandwidth - i.e. servers that fill requests faster earn more.

```py
contract ServiceMarket:
    price(service, server) = 30% * quality_of_service_weight + 70% * demand_weight
```

## Practicality

Factors which affect this:

* Security of verifiable clocks.
  * VDF clocks are new.
  * It is possible to construct even more granular clocks.
* Proof size and proving time.
  * SNARKs are ~300 bytes and take ~30s to prove.
  * STARKs are ~2 kB and take ~3s to prove.
  * Distributed SNARK proving is coming soon.
* Throughput/cost of public blockchains.
  * L2s are more than efficient enough for this.
* The constraint of a third party for routing (the relayer) in the client-server model.
  * Every web service incorporates a load balancer which performs this function.

## References

[2] https://www.utupub.fi/bitstream/handle/10024/154495/ProofOfLatency_final_fixed.pdf

## Appendix

### Proof-of-bandwidth

This can be implemented by a simple commit-reveal incentive mechanism:

1. The sender commits to the message hash ahead of time, along with a reward.
2. The sender sends the message to the receiver.
3. The receiver gets the message, reveals the preimage, and claims its reward.

The incentive mechanism: a node is incentivised to report the message it received, in order to claim the reward.
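The commit-reveal steps above can be sketched minimally in Python, with an in-memory dict standing in for on-chain escrow state (the `commit`/`claim` names and the escrow structure are illustrative assumptions, not a real contract interface):

```py
import hashlib

# Illustrative in-memory stand-in for on-chain escrow state.
escrow = {}  # commitment (message hash) -> locked reward

def commit(message: bytes, reward: int) -> bytes:
    """Step 1: sender commits to the message hash, locking a reward."""
    commitment = hashlib.sha256(message).digest()
    escrow[commitment] = reward
    return commitment

def claim(preimage: bytes) -> int:
    """Step 3: receiver reveals the preimage and claims the reward."""
    commitment = hashlib.sha256(preimage).digest()
    # Raises KeyError if no matching commitment exists (or it was already claimed).
    return escrow.pop(commitment)

# Step 2 - actually sending the message - happens off-chain.
commit(b"response payload", reward=100)
assert claim(b"response payload") == 100
```

Note that a second `claim` with the same preimage fails, which is what makes the reward a one-shot incentive to report the received message.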
A byproduct is that we can measure the latency.

What substrate do we run this rewards mechanism on? If we used a blockchain, it would be on the order of ~1s (Ethereum L2) or ~800ms (Solana). This isn't granular enough. There are two ideas:

1. What if the message size was matched against the tick rate of the verifiable clock? i.e. the tick rate is 1s, and the message takes 1s to transmit over an x KB/s link.
2. What if we found a more granular verifiable clock?

How big is the message, and does that change the latency? Yes. We have to take into account the MTU (maximum transmission unit) of the internet. A 2 MB message will have different latency than a 2 KB message. Why? The message is chunked to fit within the MTU of the network. More chunking increases the probability of a chunk being lost, which increases the latency.

If we use a verifiable clock, we can measure the latency in terms of ticks. Let's imagine we use VDFs for [proof of latency](https://www.utupub.fi/bitstream/handle/10024/154495/ProofOfLatency_final_fixed.pdf) [2]. The VDF takes 1s to compute, and the message takes 1s to transmit. The blocktime provides an "anchor" to the time at which the node claims the reward. Subtracting the tick count of the VDF from the tick count of the blocktime gives us the latency. Alternatively, we can use two VDFs and construct a 'synthetic' notion of latency in pure VDF ticks.

### Lightweight state sync on global messages sent/received

A verifiable state machine is akin to a clock, because the state transition $S_{n+1} = F(S_n, T)$, applied for state $S$, transition function $F$ and transaction $T$, naturally gives a monotonically increasing time $n$.

The naive solution, which is used in Axon, is to gossip all of the messages we send/receive from other nodes to a central aggregator. This has $O(N^2)$ cost, since every send/receive is gossipped twice. It is a fully online protocol.
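The state-machine-as-clock relation $S_{n+1} = F(S_n, T)$ can be sketched with a hash chain standing in for the transition function (a toy model under the assumption $F(S_n, T) = H(S_n \| T)$; a real implementation would use a VDF or a ZK-provable $F$):

```py
import hashlib

class StateClock:
    """Toy verifiable state machine: S_{n+1} = F(S_n, T).

    Each applied transaction advances the monotonic time n.
    """
    def __init__(self):
        self.state = b"\x00" * 32  # genesis state S_0
        self.n = 0                 # monotonically increasing clock

    def apply(self, tx: bytes):
        # Stand-in transition function: F(S_n, T) = H(S_n || T).
        self.state = hashlib.sha256(self.state + tx).digest()
        self.n += 1

clock = StateClock()
clock.apply(b"msg sent: ping")
clock.apply(b"msg recv: pong")
assert clock.n == 2  # two transitions => the clock has advanced two ticks
```

Because each state commits to the entire history of transactions, revealing the chain of `tx` values lets anyone replay and verify the claimed time `n`.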
An alternative: each node runs their own VDF clock, mixing the current "hashtime" with the hash of every message they send and receive. This creates a commitment to networking, though nodes can equivocate and commit to two conflicting views of network history. To counteract this, nodes mix their clocks together.

The algorithm is: maintain an accumulator of messages sent/received, commit to it regularly with a VDF-based proof-of-work, and then intermix the causality with other nodes' clocks. To do this, each node broadcasts its latest clock commitment at a regular interval (100ms). The gossip is attested to by other nodes, according to something like Narwhal's atomic broadcast.

Using each node's commitment to messages networked, as well as the inherent causality graph constructed by intermixing a per-node VDF, other nodes' clocks, and the latest block hash of different public blockchains, we can gain some view of the ordering of events and their timings in a P2P network. Using this data, we can verify networking work done by nodes - latency, bandwidth, etc. - by requesting the logs (revealing leaves in their accumulator), fully asynchronously. Meaning the verification doesn't have to be online, only the commitment to it.

### Input-addressable computation

There are two types of routing on the internet:

1. **Location-based routing**. I send requests to a specific IP address, to get any response. e.g. HTTP.
2. **Content-addressable routing**. I send messages to any IP address, to get a specific response about a piece of content. e.g. BitTorrent, IPFS.

Content-addressability comes from the address being the hash of the content you're requesting. A ZK proof is like a generalised hash. Instead of `H(lolcat.jpg)`, you can also have `H(google("the oldest meme around", current_clock))` and route based on that.
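The generalisation from content addresses to input addresses can be sketched as follows (the function names and the address encoding are illustrative assumptions):

```py
import hashlib
import json

def content_address(content: bytes) -> str:
    # Content-addressable routing: the address is H(content).
    return hashlib.sha256(content).hexdigest()

def input_address(program: str, inputs: list) -> str:
    # Input-addressable routing: the address commits to a program
    # and its inputs, i.e. H(program || inputs). Any node that can
    # produce a proof of running this computation can answer the request.
    preimage = json.dumps({"program": program, "inputs": inputs}).encode()
    return hashlib.sha256(preimage).hexdigest()

addr1 = content_address(b"lolcat.jpg bytes")
addr2 = input_address("google", ["the oldest meme around", "current_clock"])
assert len(addr1) == len(addr2) == 64  # both live in the same address space
```

Both kinds of address live in the same hash space, so the same routing substrate (e.g. a DHT) can serve static content and computation results alike.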