While memory usage of a program on a distributed network can be objectively quantified, its runtime depends on the hardware configuration of each computer. Our end goal is to standardize metering of runtime for all machines in the network. For that reason, we will use models that predict runtimes on an "average network node".
The context of this research is metering resource usage in smart contract platforms. A transaction on such a platform generally consists of repetitions of a predefined set of operations.
Their runtimes differ from call to call and from machine to machine. However, by fitting predictive models in a testing environment that represents an average network node, we can determine how much time these operations should take. Using these models, everyone can agree on a single runtime, despite measuring different values individually.
Moreover, we realize that we have a general computer science problem at hand: predicting the runtime of a program with a certain degree of confidence. For that reason, we keep our nomenclature abstract and consider programs consisting of certain operations, defined as:
Such a model can be constant, linear, or of arbitrary computational complexity, depending on whether the related operation accepts inputs or its execution depends on external factors. To assign these models and their respective parameters, we benchmark the operations in a testing environment. A requirement is that, with a certain degree of confidence (e.g. 95%), the actual runtime observed in production does not surpass the value predicted by the model.
Here is an overview of the methodology we use for assigning a model to a single operation:
If we underestimate runtime, the specified round length may not be enough to execute all transactions in a block, hurting consensus and the user experience. Conversely, if we overestimate runtime, validators may sit unnecessarily idle between rounds, which is inefficient. For that reason, we aim for parameters that are "just right".
Below is a graph that demonstrates this for SHA1, implemented in pure Python. Since SHA1's time complexity is roughly \(O(n)\) in the input size, we choose the runtime model \(f(x)=mx+n\). We then generate random byte sequences with sizes uniformly distributed between 100 B and 30 kB, measure the runtime for each input, and use least squares regression to find the parameters which yield \(D=98\%\).
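As a rough sketch of how such a benchmark can be produced, the snippet below times SHA1 on random inputs and performs an ordinary linear least squares fit. It uses `hashlib.sha1` as a stand-in for the pure-Python implementation, the sample count is an arbitrary choice, and the confidence-targeting step is covered in the following sections.

```python
import hashlib  # stand-in for the pure-Python SHA1 implementation actually benchmarked
import os
import random
import time

import numpy as np

def benchmark_sha1(n_samples=500, min_size=100, max_size=30_000):
    """Measure SHA1 runtimes for random inputs of uniformly distributed size."""
    sizes, runtimes = [], []
    for _ in range(n_samples):
        size = random.randint(min_size, max_size)
        data = os.urandom(size)
        start = time.perf_counter()
        hashlib.sha1(data).digest()
        runtimes.append(time.perf_counter() - start)
        sizes.append(size)
    return np.array(sizes), np.array(runtimes)

sizes, runtimes = benchmark_sha1()
# Ordinary least squares fit of f(x) = m*x + n, without any confidence targeting yet.
m, n = np.polyfit(sizes, runtimes, deg=1)
print(f"m = {m:.3e} s/byte, n = {n:.3e} s")
```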
A generic least squares regression only minimizes the squares of the differences of the data points from the fitted curve, and does not offer any way to enforce that the curve satisfies certain criteria. Our criterion reads: the ratio of the number of points below the curve to the number of all points should be equal to some degree of confidence \(D_\text{target}\). In order to enforce it, we have to use least squares in tandem with other methods.
Let \(\boldsymbol{X}\) be the finite \(n\)-tuple of input vectors and \(T:\boldsymbol{X}\to \mathbb{R}_{\geq 0}\) be a function that maps the input vectors to the runtimes observed in the benchmarks. Then, given a nonlinear model \(f(\boldsymbol{p}, \boldsymbol{x})\), which predicts runtime given model parameters \(\boldsymbol{p}\) and input vectors \(\boldsymbol{x}\), the resulting degree of confidence is calculated as
\[ D(\boldsymbol{p}) = \frac{\lvert\{\boldsymbol{x} \in \boldsymbol{X} \mid f(\boldsymbol{p}, \boldsymbol{x}) \geq T(\boldsymbol{x}) \}\rvert}{\lvert\boldsymbol{X}\rvert} \]
We denote by \(\boldsymbol{P}\) the set of all possible parameter configurations.
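For concreteness, here is a small helper, assuming the benchmarked inputs and runtimes are stored as NumPy arrays, that evaluates \(D(\boldsymbol{p})\) for a given model:

```python
import numpy as np

def degree_of_confidence(model, params, X, T):
    """Fraction of benchmarked inputs whose measured runtime T(x) does not
    exceed the model prediction f(p, x), i.e. D(p) as defined above."""
    predictions = model(params, X)  # f(p, x) for every x in X
    return np.count_nonzero(predictions >= T) / len(T)
```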
Since we assume a nonlinear model, we also use a nonlinear least squares algorithm. The algorithm minimizes the L2 norm of a residual vector \(\lVert\boldsymbol{r}\rVert\) which is element-wise defined as
\[ r_i = f(\boldsymbol{p}, \boldsymbol{x}_i)-T(\boldsymbol{x}_i) \]
for \(i=1,\dots,n\).
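This maps directly onto an off-the-shelf solver. The sketch below assumes SciPy's `scipy.optimize.least_squares` and substitutes synthetic data for real benchmark measurements:

```python
import numpy as np
from scipy.optimize import least_squares

def model(p, x):
    """Quadratic runtime model f(p, x) = a*x^2 + b*x + c for a scalar input size x."""
    return p[0] * x**2 + p[1] * x + p[2]

def residuals(p, X, T):
    """Residual vector with elements r_i = f(p, x_i) - T(x_i)."""
    return model(p, X) - T

# Synthetic stand-in for benchmark data: X holds input sizes, T measured runtimes.
rng = np.random.default_rng(0)
X = rng.uniform(10, 1000, size=200)
T = 2e-9 * X**2 + 1e-6 * X + rng.normal(0.0, 2e-6, size=X.size)

fit = least_squares(residuals, x0=np.zeros(3), args=(X, T))
print(fit.x)  # parameters minimizing ||r||, with no confidence targeting yet
```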
The simplest solution to enforce a desired degree of confidence is to fit the model with an ordinary least squares pass and then shift it upwards by a constant term \(c\), i.e. to use \(\bar{f}(\boldsymbol{p}, \boldsymbol{x}) = f(\boldsymbol{p}, \boldsymbol{x}) + c\) as the final model.
The constant term \(c\) can simply be found by keeping \(\boldsymbol{p}\) fixed, computing the differences \(T(\boldsymbol{x}) - f(\boldsymbol{p}, \boldsymbol{x})\) for all \(\boldsymbol{x}\in\boldsymbol{X}\), sorting them in ascending order, and retrieving the element at the index \(\lceil D_\text{target}\times |\boldsymbol{X}|\rceil\).
In simpler terms, we move the surface defined by \(\bar{f}\) up or down until the fraction of points below it matches the degree of confidence we want.
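A minimal sketch of this constant-shift step, assuming the predictions of the fitted model and the measured runtimes are available as NumPy arrays:

```python
import math

import numpy as np

def shift_for_confidence(predictions, runtimes, d_target=0.98):
    """Constant c that, added to the fitted model, places roughly a d_target
    fraction of the measured runtimes at or below the shifted predictions."""
    diffs = np.sort(runtimes - predictions)  # T(x) - f(p, x), ascending
    idx = min(math.ceil(d_target * len(diffs)), len(diffs) - 1)
    return diffs[idx]
```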
Another algorithm we came up with works by modifying the residual vector. In order to target \(D_\text{target}\), we separate \(\boldsymbol{r}\) into its positive and negative parts \(\boldsymbol{r}^+\) and \(\boldsymbol{r}^-\), such that \(\boldsymbol{r} = \boldsymbol{r}^+ - \boldsymbol{r}^-\). Then, the new residual vector is defined as
\[\tilde{\boldsymbol{r}}(\alpha) = \boldsymbol{r}^+ - \alpha \boldsymbol{r}^-\]
where \(\alpha\) is a constant that controls how heavily underestimations are penalized relative to overestimations. We denote by
\[ \boldsymbol{p}^\ast(\alpha) = \mathop{\text{argmin}}_{\boldsymbol{p}\in \boldsymbol{P}} \lVert \tilde{\boldsymbol{r}}(\alpha) \rVert \]
a function \(\mathbb{R}\to \boldsymbol{P}\) which yields the parameters that minimize the modified residual vector, given a certain \(\alpha\).
Then the goal is to find an \(\alpha^\ast\) such that the difference from \(D_\text{target}\) is minimized:
\[ \alpha^\ast = \mathop{\text{argmin}}_{\alpha\in \mathbb{R}} |D(\boldsymbol{p}^\ast(\alpha))-D_\text{target}| \]
After solving this secondary optimization problem, we finally obtain the parameters \(\bar{\boldsymbol{p}} = \boldsymbol{p}^\ast(\alpha^\ast)\) which give us the desired degree of confidence. In practice, we achieve this by nesting the call to the nonlinear least squares solver, which solves for the parameters, inside another solver, which solves for \(\alpha\). The outer solver can run any algorithm that minimizes a nonlinear function of a single variable. A proof of concept can be found in this GitHub repository.
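The sketch below illustrates this nesting. It is not the linked proof of concept: it assumes SciPy's `least_squares` as the inner solver and `minimize_scalar` as the outer single-variable minimizer, and the search range for \(\alpha\) is an arbitrary choice.

```python
import numpy as np
from scipy.optimize import least_squares, minimize_scalar

def fit_with_confidence(model, p0, X, T, d_target=0.98):
    """Fit the model so that roughly a d_target fraction of the measured
    runtimes T lies at or below the predictions f(p, x)."""

    def weighted_residuals(p, alpha):
        r = model(p, X) - T
        # r = r+ - r-: overestimations keep weight 1, underestimations are scaled by alpha.
        return np.where(r >= 0, r, alpha * r)

    def confidence_gap(alpha):
        # Inner solver: parameters minimizing the modified residual for this alpha.
        p_star = least_squares(weighted_residuals, x0=p0, args=(alpha,)).x
        d = np.count_nonzero(model(p_star, X) >= T) / len(T)
        return abs(d - d_target)

    # Outer solver: single-variable minimization over alpha (search range chosen arbitrarily).
    alpha_star = minimize_scalar(confidence_gap, bounds=(1e-3, 100.0), method="bounded").x
    return least_squares(weighted_residuals, x0=p0, args=(alpha_star,)).x
```

With the SHA1 data sketched earlier, this could be called as `fit_with_confidence(lambda p, x: p[0] * x + p[1], np.zeros(2), sizes, runtimes)`.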
We already demonstrated this above for a linear model. To demonstrate it for a nonlinear model, we choose selection sort. Its best, average and worst case time complexities are all \(O(n^2)\), where \(n\) is the size of the array. Therefore, we choose the model \(f(x) = ax^2+bx+c\). The input set consists of shuffled arrays whose lengths are drawn uniformly between 10 and 1000. Targeting \(D=98\%\), we obtain the following fit:
TBD
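The benchmark data for this experiment can be collected much like in the SHA1 case. The sketch below uses a textbook selection sort and an arbitrary sample count; its output can be fed to the fitting routine sketched earlier together with the quadratic model.

```python
import random
import time

import numpy as np

def selection_sort(arr):
    """Textbook O(n^2) selection sort, used only as the benchmark subject."""
    for i in range(len(arr)):
        min_idx = min(range(i, len(arr)), key=arr.__getitem__)
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr

def benchmark_selection_sort(n_samples=300, min_len=10, max_len=1000):
    """Measure selection sort runtimes for shuffled arrays of uniformly distributed length."""
    lengths, runtimes = [], []
    for _ in range(n_samples):
        n = random.randint(min_len, max_len)
        arr = list(range(n))
        random.shuffle(arr)
        start = time.perf_counter()
        selection_sort(arr)
        runtimes.append(time.perf_counter() - start)
        lengths.append(n)
    return np.array(lengths), np.array(runtimes)

lengths, runtimes = benchmark_selection_sort()
```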
Deeper meaning the metering of the lower-level instructions that comprise the operation. ↩︎