
Fitting predictive models to runtime

While memory usage of a program on a distributed network can be objectively quantified, its runtime depends on the hardware configuration of each computer. Our end goal is to standardize metering of runtime for all machines in the network. For that reason, we will use models that predict runtimes on an "average network node".

The context of this research is metering resource usage in smart contract platforms. A transaction on such a platform generally consists of repetitions of

  • opcodes to be executed by the underlying VM,
  • or calls to functions that are implemented externally, but can be called within the VM.

Their runtimes differ from call to call and from machine to machine. However, by fitting predictive models in a testing environment that represents an average network node, we can determine how much time these operations should take. Using these models, everyone can agree on a single runtime, despite measuring different values individually.

Moreover, we realize that we have a general computer science problem at hand, where there is a need to predict the runtime of a program with a certain degree of confidence. For that reason, we keep our nomenclature abstract, and consider programs consisting of certain operations, defined as:

  • Program: A process that accepts an input, performs some computation, and produces an output on a computer. We assume a sequential model of computing, and that we can determine the beginning and end of execution of each program in time.
  • Operation: Any part of a program whose runtime can be approximated reliably by a simple model, and which does not require any deeper[1] metering.

Such a model can be constant, linear, or of arbitrary computational complexity, depending on whether the related operation accepts inputs or its execution depends on external factors. To assign these models and their respective parameters, we benchmark the operations in a testing environment. A requirement is that the actual runtime observed in production will not surpass the value predicted by the model, with a certain degree of confidence, e.g. 95%.

Here is an overview of the methodology we use for assigning a model to a single operation:

  1. For non-constant functions, a set of inputs is either generated or sampled from already deployed transactions/programs. They are run in the testing environment, and the resulting runtime for each input is recorded. We try to take a large enough sample of the population (of operation calls), so that we can safely assume that it represents the whole population.
  2. A specific model is chosen by analyzing the operation. This is simply a function of the factors that affect the operation's runtime, e.g. size of the input, or length of the blockchain[2]. If there are no such factors, we can assume a constant function.
  3. Then we try to fit the model to the data in an optimal way. A "worst case" fit is where all points lie under the curve, and a "best case" fit is where all points lie above the curve. See best, worst and average case time complexities. We generally target a certain degree of confidence \(D\), which is defined as the ratio of the number of points below the fitted curve to the number of all points.
  4. This way, if the assumption in (1) holds, we can guarantee that runtime will not exceed the value given by the model \(D\%\) of the time.

If we underestimated runtime, the specified round length may not be enough to execute all transactions in a block, hurting consensus and the UX. Similarly, if we overestimated runtime, it could cause validators to be unnecessarily idle between rounds, which is inefficient. For that reason, we try to aim for the parameters that are "just right".

Below is a graph that demonstrates this for SHA1, which is implemented in pure Python. Since SHA1's time complexity is roughly \(O(n)\) in terms of input size, we choose the runtime model as \(f(x)=mx+n\). We then generate random sequences of bytes with sizes uniformly distributed between 100 B and 30 kB, measure the runtime for each input, and use least squares regression to find the parameters which yield \(D=98\%\).
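As a rough sketch of how such a benchmark can be set up (here `hashlib` stands in for the pure-Python implementation, and NumPy's `polyfit` for the least squares step; the sample count is illustrative, and the shift needed to actually reach \(D=98\%\) is the subject of the algorithms below):

```python
# Rough sketch of the SHA1 benchmark described above.
import hashlib
import os
import time

import numpy as np

rng = np.random.default_rng(0)
sizes = rng.integers(100, 30_000, size=500)        # input sizes between 100 B and 30 kB
runtimes = []
for size in sizes:
    data = os.urandom(int(size))                   # random byte sequence
    start = time.perf_counter()
    hashlib.sha1(data).digest()
    runtimes.append(time.perf_counter() - start)

# Ordinary least squares for the linear model f(x) = m*x + n.
m, n = np.polyfit(sizes, runtimes, deg=1)
```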

Algorithms for targeting a certain degree of confidence

A generic least squares regression only minimizes the squares of the differences of the data points from the fitted curve, and does not offer any way to enforce that the curve satisfies certain criteria. Our criterion reads: the ratio of the number of points below the curve to the number of all points should be equal to some degree of confidence \(D_\text{target}\). In order to enforce it, we have to use least squares in tandem with other methods.

Let \(\boldsymbol{X}\) be the finite \(n\)-tuple of input vectors and \(T:\boldsymbol{X}\to \mathbb{R}_{\geq 0}\) be a function that maps the input vectors to the runtimes observed in the benchmarks. Then, given a nonlinear model \(f(\boldsymbol{p}, \boldsymbol{x})\), which predicts runtime given model parameters \(\boldsymbol{p}\) and input vectors \(\boldsymbol{x}\), the resulting degree of confidence is calculated as

\[ D(\boldsymbol{p}) = \frac{\lvert\{\boldsymbol{x} \in \boldsymbol{X} \mid f(\boldsymbol{p}, \boldsymbol{x}) \geq T(\boldsymbol{x}) \}\rvert}{\lvert\boldsymbol{X}\rvert} \]

We denote by \(\boldsymbol{P}\) the set of all possible configurations of parameters.
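For reference, \(D(\boldsymbol{p})\) is straightforward to compute from benchmark data. The helper below is a minimal sketch; `model`, `params`, `X` and `T` are illustrative names.

```python
# Fraction of inputs whose observed runtime does not exceed the model's
# prediction, i.e. D(p) as defined above.
import numpy as np

def degree_of_confidence(model, params, X, T):
    """model(params, x) -> predicted runtime; X: inputs; T: observed runtimes."""
    predictions = np.array([model(params, x) for x in X])
    return np.mean(predictions >= np.asarray(T))

# e.g. for the linear SHA1 model:
# degree_of_confidence(lambda p, x: p[0] * x + p[1], (m, n), sizes, runtimes)
```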

Since we assume a nonlinear model, we also use a nonlinear least squares algorithm. The algorithm minimizes the L2 norm of a residual vector \(\lVert\boldsymbol{r}\rVert\) which is element-wise defined as

\[ r_i = f(\boldsymbol{p}, \boldsymbol{x}_i)-T(\boldsymbol{x}_i) \]

for \(i=1,\dots,n\).

Algorithm 1

The simplest solution to enforce a desired degree of confidence is to

  1. solve the nonlinear least squares problem for a model \(\bar{f}\), and
  2. add a constant term \(c\) to the model, \(f(\boldsymbol{p}, \boldsymbol{x}) = \bar{f}(\boldsymbol{p}, \boldsymbol{x}) + c\), such that \(D(\boldsymbol{p})\approx D_\text{target}\).

The constant term \(c\) can simply be found by keeping \(\boldsymbol{p}\) fixed, sorting the differences \(T(\boldsymbol{x}) - \bar{f}(\boldsymbol{p}, \boldsymbol{x})\) for all \(\boldsymbol{x}\in\boldsymbol{X}\) in ascending order, and taking \(c\) as the element at the index \(\text{ceil}(D_\text{target}\times |\boldsymbol{X}|)\).

In simpler terms, we move the surface defined by \(\bar{f}\) up or down, until the ratio of points below it matches the ratio we want.
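A minimal sketch of this procedure, assuming SciPy's nonlinear least squares solver and a model that is vectorized over the inputs; `fit_with_offset` and its defaults are illustrative names, not part of the original proof of concept.

```python
# Algorithm 1: plain nonlinear least squares, then shift by a constant c.
import numpy as np
from scipy.optimize import least_squares

def fit_with_offset(model, p0, X, T, d_target=0.98):
    X, T = np.asarray(X, dtype=float), np.asarray(T, dtype=float)

    # Step 1: ordinary nonlinear least squares fit of the base model.
    base = least_squares(lambda p: model(p, X) - T, p0)

    # Step 2: shift the surface by c so that a fraction d_target of the
    # observed runtimes lies below it.
    diffs = np.sort(T - model(base.x, X))          # required offsets, ascending
    index = min(int(np.ceil(d_target * len(diffs))), len(diffs) - 1)
    return base.x, diffs[index]

# e.g. for the linear model:
# params, c = fit_with_offset(lambda p, x: p[0] * x + p[1], [1e-6, 0.0], sizes, runtimes)
```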

Algorithm 2

Another algorithm I came up with works by modifying the residual vector. In order to target \(D_\text{target}\), we separate \(\boldsymbol{r}\) into its positive and negative parts \(\boldsymbol{r}^+\) and \(\boldsymbol{r}^-\), such that \(\boldsymbol{r} = \boldsymbol{r}^+ - \boldsymbol{r}^-\). Then, the new residual vector is defined as

\[\tilde{\boldsymbol{r}}(\alpha) = \boldsymbol{r}^+ - \alpha \boldsymbol{r}^-\]

where \(\alpha\) is a constant. We denote by

\[ \boldsymbol{p}^\ast(\alpha) = \mathop{\text{argmin}}_{\boldsymbol{p}\in \boldsymbol{P}} \lVert \tilde{\boldsymbol{r}}(\alpha) \rVert \]

a function \(\mathbb{R}\to \boldsymbol{P}\) which yields the parameters that minimize the modified residual vector, given a certain \(\alpha\).

Then the goal is to find an \(\alpha^\ast\) such that the difference from \(D_\text{target}\) is minimized:

\[ \alpha^\ast = \mathop{\text{argmin}}_{\alpha\in \mathbb{R}} |D(\boldsymbol{p}^\ast(\alpha))-D_\text{target}| \]

After solving this secondary optimization problem, we finally obtain the parameters \(\bar{\boldsymbol{p}} = \boldsymbol{p}^\ast(\alpha^\ast)\) which give us the desired degree of confidence. In practice, we achieve this by nesting the call to the nonlinear least squares solver (which solves for the parameters) inside another solver (which solves for \(\alpha\)). The outer solver can run any algorithm that minimizes nonlinear functions of a single variable. A proof of concept can be found in this GitHub repository.
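The sketch below illustrates this nesting, assuming SciPy for both solvers. The function name, the bounds on \(\alpha\), and the choice of a bounded scalar minimizer for the outer problem are illustrative; the bounds assume the target confidence lies above that of the plain least squares fit.

```python
# Algorithm 2: inner solver fits the parameters for a given alpha by
# minimizing the modified residuals; the outer solver searches for the
# alpha whose fit hits the target degree of confidence.
import numpy as np
from scipy.optimize import least_squares, minimize_scalar

def fit_with_confidence(model, p0, X, T, d_target=0.98):
    X, T = np.asarray(X, dtype=float), np.asarray(T, dtype=float)

    def modified_residuals(p, alpha):
        r = model(p, X) - T
        # r = r+ - r-: scale the negative part (underestimates) by alpha.
        return np.where(r >= 0, r, alpha * r)

    def inner_fit(alpha):
        return least_squares(modified_residuals, p0, args=(alpha,)).x

    def confidence_gap(alpha):
        p = inner_fit(alpha)
        return abs(np.mean(model(p, X) >= T) - d_target)

    # Outer solver: any one-dimensional minimizer will do.
    alpha_star = minimize_scalar(confidence_gap, bounds=(1.0, 1e4),
                                 method="bounded").x
    return inner_fit(alpha_star)
```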

We already demonstrated this above for a linear model. To demonstrate it for a nonlinear model, we choose selection sort. Its best, average and worst case time complexities are all \(O(n^2)\), where \(n\) is the size of the array. Therefore, we choose the model \(f(x) = ax^2+bx+c\). The input set consists of shuffled arrays whose lengths are drawn uniformly between 10 and 1000. Targeting \(D=98\%\), we obtain the following fit:

TBD
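A sketch of this benchmark and fit, reusing `fit_with_confidence` from the Algorithm 2 sketch above; the sample count and initial parameters are illustrative.

```python
# Selection sort benchmark fitted with the quadratic model f(x) = a*x**2 + b*x + c.
import time

import numpy as np

def selection_sort(a):
    for i in range(len(a)):
        j = min(range(i, len(a)), key=a.__getitem__)
        a[i], a[j] = a[j], a[i]

rng = np.random.default_rng(1)
lengths = rng.integers(10, 1000, size=300)
runtimes = []
for n in lengths:
    arr = list(rng.permutation(int(n)))            # shuffled array of length n
    start = time.perf_counter()
    selection_sort(arr)
    runtimes.append(time.perf_counter() - start)

def quadratic(p, x):
    return p[0] * x**2 + p[1] * x + p[2]

params = fit_with_confidence(quadratic, [1e-8, 1e-6, 0.0], lengths, runtimes, d_target=0.98)
```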


  1. "Deeper" meaning the metering of the lower-level instructions that comprise the operation. ↩︎

  2. See The Economics of Smart Contracts. ↩︎
