
Enabling Efficient Caching of Accounts and Contracts in Eth1.x and Eth2 using Referrers

“All problems in computer science can be solved by another layer of indirection, but that will usually create another problem.” David John Wheeler (1927 - 2004)

Contributors: David Hyland-Wood, Raghavendra Ramesh and Horacio Mijail Anton Quiles (Team X, PegaSys, ConsenSys)

Problem Statement

Piper Merriam, Trinity Team Lead at the Ethereum Foundation, informed Team X on 15 April 2020 that the initial implementation of Stateless clients on Ethereum 1.x is not anticipated to include full support for the Ethereum JSON RPC API. This is because Stateless clients will not have access to sufficient information to answer queries from users. Instead, initial Stateless clients are to merely track the head of the blockchain and be able to validate new blocks as they arrive.

This document suggests the addition of Referrer nodes to Eth1.x networks and perhaps Eth2 shards. The purpose of a referrer node is to suggest the most relevant information Stateless clients should cache, and how they may acquire uncached information as required to answer user queries. If implemented, this solution may be able to extend the capabilities of Stateless clients in earlier incarnations than currently planned.

Literature Review

The Eth1.x community created a survey of proposals to reduce block witness size, in which caching of recent block witnesses was considered and eventually rejected on complexity grounds. The team determined, “caches may be at the networking-layer until we become desperate for consensus witness size reductions”, leaving room for a networking-layer caching proposal to assist Stateless clients.

There are several existing technologies beyond blockchains that have developed useful patterns to address caching in distributed systems. Bittorrent, for example, uses a redirection service known as a tracker to keep track of which nodes in a distributed network hold relevant information. Trackers are then used to dynamically point requesters to nodes to spread network traffic. Persistent URLs (PURLs) provide a similar function in a Web context by redirecting clients to the current location of a Web resource. An Ethereum Referrer node borrows some of the concepts previously implemented by trackers and PURLs.

ID Query Redirection

The operation of Bittorrent’s Tracker is summarised in Figure 1.


Figure 1. Summary of Bittorrent Tracker Actions

A Bittorrent user acquires a Bittorrent file out-of-band, such as via a Web search engine pointing to a Web page containing the file, an ftp archive, or passed from person-to-person via email (shown in step 1 of Figure 1). A Bittorrent file is very small and contains little information beyond a unique identifier for a Bittorrent resource. A Bittorrent file does not contain the content of the resource.

A user’s client software uses a Bittorrent file to ask a Tracker for the location of the content associated with the unique identifier. The Tracker responds with a redirection to a location holding the content. In the case of Figure 1, User A is directed to a seed node in an online Bittorrent network (shown in steps 2 and 3).

Since Bittorrent is a peer-to-peer network, User A may act as a source for the content once it holds it. User B may be directed to acquire the content from User A via a later Tracker redirection (shown in steps 4 and 5).
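The tracker pattern described above can be sketched as a minimal lookup service. All class, method and peer names here are illustrative; this is not a real Bittorrent tracker implementation.

```python
# Minimal sketch of tracker-style redirection. Names are illustrative,
# not part of any real Bittorrent implementation.
class Tracker:
    def __init__(self):
        # Maps a content identifier to the set of peers known to hold it.
        self._holders = {}

    def announce(self, content_id, peer):
        # A peer informs the tracker that it now holds the content.
        self._holders.setdefault(content_id, set()).add(peer)

    def locate(self, content_id):
        # Redirect a requester to any peer holding the content (None if unknown).
        peers = self._holders.get(content_id)
        return next(iter(peers)) if peers else None

tracker = Tracker()
tracker.announce("info-hash-abc", "seed-node-1")   # seed registers the content
assert tracker.locate("info-hash-abc") == "seed-node-1"
# Once User A holds the content, it announces itself and becomes a source too.
tracker.announce("info-hash-abc", "user-a")
```

Note that the tracker stores only identifier-to-peer mappings, never the content itself, which keeps it cheap to operate.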

Redirection may be similarly used in an Ethereum 1.x environment or an Ethereum 2 shard to assist Stateless clients to resolve JSON RPC requests via Referrer nodes. Figure 2 summarises that concept.


Figure 2. Using a Referrer Node to Optimise Stateless Client Caching

In Figure 2, an Ethereum user acquires a unique identifier for an account, contract or other state out-of-band. Identifiers may be provided to a user as the result of a transaction submission, found on the Web or passed person-to-person as with Bittorrent identifiers (shown in step 1 of Figure 2).

The user would make a JSON RPC request to a Stateless client and receive an immediate response if the request can be satisfied with information contained within the client’s cache (shown in steps 2 and 3).

If the information necessary to satisfy the user’s request is not contained within the client’s cache, the Stateless client makes a request for the information from one of many possible Referrers in a Referrer network (shown in step 4).

Referrer Node Functions

The following explains the different functions of a Referrer node.

As an important side effect of their referral operations, Referrers learn that the account, contract or other state associated with a requested identifier is of interest to some user on the network. The ability to track such information as other requests are made over time is the central purpose of the Referrer network. The Referrer passes that information to other Referrers to maintain a consistent state across the Referrer network.

The Referrer redirects the Stateless client to a seed node (e.g. a full or archive node) that contains the required information (shown in step 5).

The Stateless client now has the ability to resolve the user’s request, but the process has been inefficient and slow. The Stateless client must choose whether to cache the information received (shown in step 6). The choice is not obvious: if the Stateless client caches all information received, it might as well be a full node. Fortunately, the Referrer network is able to provide the hints necessary for Stateless clients to make good caching decisions independently.

Note that any state related to an identifier would need to be validated (e.g. by Merkle proof) by a node. It is not the purpose of a Referrer to provide such a proof.
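The end-to-end flow of steps 2 through 6 can be sketched as follows. All classes and the `refer`/`fetch` interfaces are hypothetical, invented for illustration; they are not part of any real client or Referrer API.

```python
# Hedged sketch of the Figure 2 flow (steps 2-6). All names are hypothetical.
class SeedNode:
    def __init__(self, state):
        self.state = state                # state held by this seed

    def fetch(self, identifier):
        # A real client would also validate the result, e.g. via Merkle proof.
        return self.state[identifier]

class Referrer:
    def __init__(self, seed, hotness):
        self.seed = seed
        self.hotness = hotness            # identifier -> relative access frequency

    def refer(self, identifier):
        # Step 5: redirect to a seed, plus a caching hint for step 6.
        return self.seed, self.hotness.get(identifier, 0.0)

class StatelessClient:
    def __init__(self, referrer, cache_threshold=0.5):
        self.referrer = referrer
        self.cache = {}
        self.cache_threshold = cache_threshold

    def query(self, identifier):
        if identifier in self.cache:                      # steps 2-3: cache hit
            return self.cache[identifier]
        seed, hotness = self.referrer.refer(identifier)   # step 4: ask Referrer
        value = seed.fetch(identifier)                    # step 5: fetch from seed
        if hotness >= self.cache_threshold:               # step 6: hint-driven cache
            self.cache[identifier] = value
        return value
```

In this sketch a “busy” identifier (hotness above the threshold) is cached after its first fetch, while a cold one is re-fetched on every request, so the client caches selectively rather than accumulating full-node state.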

Referrer State Consistency

We say that the Referrer network is consistent when the metrics used for caching decisions evaluate to the same values on all Referrer nodes. Note that Referrers update their metrics based on client requests as well as on information from other Referrer nodes. A consistent Referrer network requires a fast synchronisation protocol among the Referrer nodes. Moreover, the latency of this synchronisation must be orders of magnitude less than that of world state synchronisation. Otherwise, by the time the Referrer nodes synchronise, the world state could have changed, necessitating a rerun of the Referrer synchronisation protocol and ending in a vicious cycle that crashes the network.

An alternative is to maintain weak consistency of the Referrer network. Here, a Referrer node runs the synchronisation protocol for a part of the world state only when that state has been accessed more than a threshold number of times by clients querying this Referrer node. We gain a faster Referrer network by trading off strict consistency; the cost is a delay in turnaround time for requests on the parts of the world state that are no longer strictly consistent.
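The weak-consistency rule can be sketched as a simple threshold trigger. The class name, method names and threshold value are all illustrative.

```python
# Sketch of the weak-consistency rule: synchronise metrics for a part of the
# world state only once clients have accessed it more than a threshold number
# of times. Threshold and names are illustrative assumptions.
from collections import Counter

class WeaklyConsistentReferrer:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.access_counts = Counter()    # state part -> client access count
        self.synced_parts = set()

    def record_access(self, state_part):
        self.access_counts[state_part] += 1
        # Trigger synchronisation only for sufficiently busy state parts.
        if self.access_counts[state_part] > self.threshold:
            self._synchronise(state_part)

    def _synchronise(self, state_part):
        # Placeholder for the inter-Referrer synchronisation protocol.
        self.synced_parts.add(state_part)
```

Rarely accessed state never triggers synchronisation, which is exactly where the delayed turnaround mentioned above would be felt.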

A further alternative is not to have any form of consistency at all.

Only a good set of simulations and experiments can determine the threshold for weak consistency, as well as the impact of losing any form of consistency.

Stateless Client Caching

Stateless clients will generally wish to cache “busy” accounts, contracts or other state that are most frequently used. Further, they will wish to make caching decisions based on their own knowledge of their local resource availability, such as RAM and disk availability.

Stateless clients can receive information on the relative frequency of identifier access either in the headers of a Referrer response or via a separate API in a manner to be separately determined.

We envision two axes for caching on the stateless client side.

  1. Read Caches and Write Buffers. The ‘busy’ parts of contract state (or, in general, the world state) that are read do not necessarily match the parts that are written. Hence it is best for a Referrer node to maintain separate metrics for reads and writes. This separation can flow through to stateless client caches as well. By keeping two caches, one for frequently read parts of the state and another for frequently written parts, we can apply different algorithms, at different time intervals, for pulling updated data into the read cache and pushing write-cache contents to the network.
  2. User-specific state vs. world state. The grand vision of statelessness is to let users connect to the main network with resources as modest as the computational power, memory and storage of a mobile phone. There are thus two classes of clients: those that serve only a single user, and those actively validating on the main network. The first class needs to cache the parts of the world state accessed by the user it serves; the second needs the parts of the world state that are “hot” on the network. The metrics for measuring “hotness” therefore differ: one is user-specific and the other concerns the world state. Because Referrers by construction must cater to any stateless client, they necessarily rely on world-state metrics only.
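The first axis, separate read and write caches, can be sketched as follows. The capacities and eviction/flush policies chosen here (LRU reads, batch-flushed writes) are illustrative assumptions, not part of the proposal.

```python
# Sketch of axis 1: a read cache (LRU eviction) kept separate from a write
# buffer flushed on its own schedule. Policies are illustrative assumptions.
from collections import OrderedDict

class SplitCache:
    def __init__(self, read_capacity=2):
        self.read_capacity = read_capacity
        self.read_cache = OrderedDict()   # frequently *read* parts of the state
        self.write_buffer = {}            # frequently *written* parts, pending push

    def on_read(self, key, value):
        # Least-recently-used policy: newest entries live at the end.
        self.read_cache[key] = value
        self.read_cache.move_to_end(key)
        if len(self.read_cache) > self.read_capacity:
            self.read_cache.popitem(last=False)

    def on_write(self, key, value):
        self.write_buffer[key] = value

    def flush_writes(self):
        # Push buffered writes to the network, then start a fresh buffer.
        flushed, self.write_buffer = self.write_buffer, {}
        return flushed
```

Because the two structures are independent, the read cache can be refreshed and the write buffer flushed on entirely different schedules, as the list item above suggests.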

Referrers in Push Model of Network Synchronisation

Typically there are two kinds of nodes in a stateless network, apart from Referrers: seeders and leechers. Seeders maintain large parts of the state; many maintain the full state. Leechers request state from the seeders and cache the relevant parts locally. Stateless clients fall into the category of leechers. In order to maintain a consistent world state, seeders need to synchronise with other seeders. At a very high level, two kinds of synchronisation can happen in such a network.

  1. Pull Model: A peer requests a part of the world state from another peer.
  2. Push Model: A seeder keeps broadcasting parts of the state. An example of this is Merry-Go-Round sync that uses schedules to broadcast different parts of the state stored in a consistent seeder.

Hitherto, the role of Referrers in a pull model has been described. Referrers can play a vital role in push models too, by introducing some ‘smarts’ into a push model’s schedules. The bandwidth reserved for each part of the state to be pushed, as well as the order in which parts are pushed, can be determined by the metrics resident on Referrer nodes. When seeders push frequently used data to the parts of the network where it is most accessed, synchronisation happens faster than under a dumb schedule. A fast sync for a seeder implies a faster turnaround time for all the leechers depending on that seeder, resulting in an agile network.
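A metric-driven push schedule of this kind can be sketched as follows: parts of the state are pushed in descending order of their access metrics, each with a bandwidth share proportional to its hotness. The function name and all numbers are illustrative.

```python
# Sketch of a Referrer-informed push schedule: parts of the state are pushed
# in descending order of hotness, each with a proportional bandwidth share.
# This replaces a fixed round-robin ("dumb") schedule. Numbers are illustrative.
def push_schedule(metrics, total_bandwidth):
    """metrics: state part -> access count. Returns (part, bandwidth) pairs."""
    total = sum(metrics.values()) or 1
    ordered = sorted(metrics.items(), key=lambda kv: kv[1], reverse=True)
    return [(part, total_bandwidth * count / total) for part, count in ordered]

schedule = push_schedule({"trie-A": 60, "trie-B": 30, "trie-C": 10}, 100.0)
# The hottest part is scheduled first with the largest bandwidth share.
```

A seeder following such a schedule spends most of its broadcast capacity on the state the network actually wants, rather than cycling through all parts uniformly.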

Proposals

Three options are presented for Referrers to track the relative frequency of identifier usage. Option A is a naïve frequency-tracking technique; Option B improves a Stateless client’s ability to request a range of the most frequently used identifiers by using locality-preserving hashing; and Option C applies machine learning to the same metrics.

Option A. Naive frequency tracking

Referrers may keep an internal database of account, contract and other state identifiers inclusive of a frequency count of the number of transactions mentioning each identifier. Referrers may provide a frequency-ordered list of identifiers upon request.
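Option A amounts to little more than a counter over the identifiers mentioned by each observed transaction, as in this sketch. The interface is hypothetical; in particular, how a Referrer learns of transactions is the open question discussed next.

```python
# Naive frequency tracking (Option A) sketched as a counter over the
# identifiers mentioned by each observed transaction. Hypothetical interface.
from collections import Counter

class FrequencyReferrer:
    def __init__(self):
        self.counts = Counter()

    def observe_transaction(self, identifiers):
        # Count every account/contract identifier the transaction touches.
        self.counts.update(identifiers)

    def top_identifiers(self, n):
        # Frequency-ordered list of the n busiest identifiers.
        return [ident for ident, _ in self.counts.most_common(n)]
```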

The most obvious way for a Referrer to obtain both a list of identifiers and their associated uses is for a Referrer to either be part of a full node or for a standalone Referrer to query a full node periodically for such information. The latter approach obviously entails an extension to the JSON RPC API to enable such a query to be performed.

Option B. Optimal frequency tracking via Locality-preserving hashing

Referrers could potentially optimise their own internal data structures using locality-preserving hashing. Identifier hashes (or alternatively their values) would be constructed using axes based upon (e.g.):

  • Time since last access
  • Number of times accessed
  • Geographical location of the nearest node
  • Ethereum Account ID / Ethereum Address

The advantage of such an approach is that Referrers may easily identify a simple range of identifiers with high frequencies of access. Since both the time since last access and the number of times accessed are used, the most frequently and most recently used identifiers would naturally sort to the top. Clients would then be able to request a number of identifiers in frequency order, and Referrers would be able to efficiently return exactly the number requested.
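The sort-to-the-top behaviour can be illustrated with keys built from two of the axes above, staleness (time since last access) and access count, so that a single range scan returns exactly the requested number of hot identifiers. This simple tuple key only illustrates the idea; it is not a true locality-preserving hash.

```python
# Illustration of Option B's range-query behaviour: keys built from staleness
# and access count cluster the hottest identifiers at one end of the key space.
# A simple tuple key stands in for a real locality-preserving hash.
def sort_key(last_access_time, access_count, now):
    # Smaller key = hotter: accessed recently AND accessed often.
    return (now - last_access_time, -access_count)

def hot_range(entries, now, n):
    """entries: (identifier, last_access_time, access_count) triples.
    Returns the n hottest identifiers via a single ordered scan."""
    ranked = sorted(entries, key=lambda e: sort_key(e[1], e[2], now))
    return [ident for ident, _, _ in ranked[:n]]
```

In a real implementation the ordering would be maintained by the index itself, so returning the top n is a range read rather than a full sort.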

Option C. Machine Learning

The idea is to gain insights into the MainNet dataset using machine learning algorithms. The candidate features could be those listed in Option B. A model can then be built from the machine learning results and supplemented with expert knowledge about the network. When a stateless client queries a Referrer, the Referrer executes the model to find a state provider, to decide whether or not the client should cache the data, and to decide whether or not to run Referrer synchronisation. Additional models can be built to assist with seeder scheduling and with identifying state hotspots, which can be used as starting points for a ‘Merry-Go-Round’ sync cycle.
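One of the model’s decisions, whether the client should cache the data, could look like the toy linear model below over the Option B features. The weights and function name are placeholders; a real model would be trained on MainNet access data.

```python
# Toy sketch of Option C: a linear model over Option B features answering
# "should this data be cached on the client?". Weights are placeholders,
# not trained values.
def should_cache(time_since_access, times_accessed,
                 weights=(-0.01, 0.05), bias=0.0):
    # Recently and frequently accessed state scores higher.
    score = bias + weights[0] * time_since_access + weights[1] * times_accessed
    return score > 0.0
```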

Economic Incentives to Operate a Referrer

Operating a Referrer should be relatively simple and inexpensive compared with other types of Ethereum network components. Nevertheless, some form of economic incentive is probably necessary for operators to create and sustain Referrer operations.

Nodes are currently incentivised to answer JSON RPC requests in some operators’ networks (e.g. Infura). It may be possible for Referrers to share in the incentives for any JSON RPC request that they assist in satisfying.

Additionally, Referrers may be incentivised via gas when responding to requests by full or archive nodes for information that would lead to cache optimisation. Such nodes may find the ability to optimise their cache sufficiently valuable to undergo such costs.

We believe much more thought needs to go into economic incentives for this proposal to be effective.

Some open questions / thoughts

This section is a holding space for some questions that need to be thought through:

  • What state would be cached? Would this be whole contracts or just contract tiles? Would it be the entire state for a contract, or just parts of the state of a contract? (Thing to be investigated: what is the biggest single contract trie?)
  • EF operates all of the Boot nodes for free. There is no economic incentivisation. Could a similar model work for Referrer nodes, where EF and other organisations operate them for the common good?
  • There are many copies of the same contract in the Ethereum system. For example, there are hundreds of thousands of ERC20 contracts deployed on Ethereum MainNet. Perhaps this knowledge could be integrated into this proposal such that contract code is identified by its message digest, and the request would then come down to “which node has code that matches this message digest?”. Not part of this proposal, but perhaps widely used common contracts (ERC20 and ENS, to name but two) could come hard-coded into Ethereum clients.