# Goal-oriented minting
### Introduction
In these notes we present a goal-oriented, adaptive minting mechanism. In particular, the core idea behind this mechanism is to
1. propose a set of observable, measurable (on-chain) quantities of interest, commonly known as Key Performance Indicators (KPIs), together with time-dependent target values for these metrics, and
2. adjust the minting rate according to how close (or far) these network parameters are from their targets, in such a way that a *positive-sum game* is created. That is, so that token recipients benefit whenever the network benefits.
We formalize these ideas in the following section.
### Formalized Mathematical Model
**Purely goal-driven.** Suppose we have $N$ different KPIs. Let $\Theta \subset \mathbb{R}^N$ be the set of all possible states of our KPIs, let $\theta_t \in \Theta$ represent a specific state of these indicators at any given time $t$, and let $\theta_t^*\in\Theta$ be the vector of their target values at time $t$. We assume that to each KPI $i$ there corresponds a (relative) level of importance $w_i\geq0$ with $\sum_{i=1}^N w_i=1$. This gives us a way of *favoring* one KPI over another (or of weighting them all equally, with $w_i=1/N$ for all $i=1,\dots,N$).
Furthermore, for any $i=1,2,\dots,N$, let $\delta_i:\mathbb{R}^2\to[0,1]$ denote an arbitrary measure of *distance* between $\theta_{i,t}$ and $\theta^*_{i,t}$ at time $t$. Notice that we use the term “distance” in a very loose way here. Each $\delta_i$ is non-increasing in $z=\theta_{i,t}-\theta_{i,t}^*$ (so that the distance shrinks as the KPI approaches or exceeds its target) and has the property that $\delta_i(\theta_{i,t}, \theta^*_{i,t})=0$ for all $\theta_{i,t}\geq \theta^*_{i,t}$. Furthermore, we define the $N$-dimensional distance $\delta:\Theta^2\to[0,1]$ as
$$
\begin{align}
\delta(\theta_t,\theta^*_t):=\sum_{i=1}^N w_i\delta_i( \theta_{i,t}, \theta^*_{i,t})
\end{align}
$$
Notice then that under our formulation each KPI has its own distance function $\delta_i$ and (time-dependent) target value $\theta^*_{i,t}$.
Lastly, let $\rho_m,\rho_M\in \mathbb{R}_{\geq 0}$ denote the minimum (possibly 0) and maximum minting rates at any moment in time. These lower and upper bounds provide some *safety rails* so that (i) *at least some* tokens are minted when things are not going well and (ii) we don’t over-mint when things are going well. For any fixed $t$, we can then define the *instantaneous minting rate* as:
$$
\begin{align}\rho(\theta_t,\theta_t^*) = \rho_m + \left[\rho_M - \rho_m\right] \cdot\left( 1 - \delta(\theta_t, \theta_t^*) \right)\end{align}
$$
Notice then that, as the network hits its KPIs (i.e., $\delta\approx 0$), we have $\rho\approx \rho_M$. Conversely, when the network is lagging far behind its targets ($\delta\approx 1$), $\rho\approx\rho_m$.
The formulation above thus induces a (simple) goal-adaptive minting mechanism: if tokens are minted at every epoch $\tau$ (e.g., every $M\geq1$ blocks), then the network mints $\rho(\theta_\tau,\theta_\tau^*)$ tokens each epoch until it runs out of tokens to distribute (if at all).
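To make this concrete, below is a minimal Python sketch of the purely goal-driven rate. The KPI values, weights, bounds, and the piecewise-linear per-KPI distance are illustrative assumptions, not part of the model above.

```python
def linear_distance(theta, target):
    """Illustrative per-KPI distance: 0 once the KPI meets its target,
    growing linearly (capped at 1) as the KPI falls below it. Assumes target > 0."""
    if theta >= target:
        return 0.0
    return min(1.0, (target - theta) / target)

def aggregate_distance(thetas, targets, weights):
    """Weighted N-dimensional distance, Eq. (1)."""
    return sum(w * linear_distance(th, tg)
               for th, tg, w in zip(thetas, targets, weights))

def minting_rate(thetas, targets, weights, rho_min, rho_max):
    """Instantaneous minting rate, Eq. (2)."""
    d = aggregate_distance(thetas, targets, weights)
    return rho_min + (rho_max - rho_min) * (1.0 - d)

# Example: two KPIs (say, stake and TVL), equal weights; one KPI at 80% of target.
thetas  = [0.8e6, 2.0e6]   # observed KPI values at the current epoch
targets = [1.0e6, 2.0e6]   # target values at the current epoch
weights = [0.5, 0.5]
print(minting_rate(thetas, targets, weights, rho_min=100.0, rho_max=1000.0))  # 910.0
```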
**Goal-and-time.** The previous formulation depends only on how close the network is to achieving its goals. In general, one can make this adaptive minting rate a function of time as well; this is desirable if one wants to explicitly account for, e.g., adoption timelines. To do so, let us introduce a sigmoid decay function $\sigma:\mathbb{R}_{\geq 0}\times \Theta^2\to[0,1]$ defined by:
$$
\begin{align}\sigma(t, \theta_t,\theta_t^*) = \left(1-\frac{1}{1 + e^{-\kappa(1- \delta) (t-\tau(1-\delta))}}\right)\sigma_0 \end{align}
$$
where we write $\delta=\delta(\theta_t,\theta_t^*)$. Here, $\sigma_0$ is a normalization factor, and $\kappa,\tau\in \mathbb{R}_+$ are the shape and decay parameters of the decay function. Intuitively, if the network is taking longer to "take off", the effective decay horizon (how long minting is subsidized for) shrinks.

Finally, we can combine Eqs (2) and (3) to define the overall token distribution rate as:
$$
\begin{align}r(t, \theta_t,\theta_t^*) := \rho(\theta_t,\theta_t^*) \cdot \sigma(t, \theta_t,\theta^*_t)\end{align}
$$
This function, $r$, encapsulates the dynamic, responsive nature of our token distribution model, adjusting both the level and the decay of distribution based on real-time performance against set targets.
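Continuing the sketch above, a possible implementation of the decay factor and the combined rate (Eqs. (3) and (4)) could look as follows; the parameter values for $\kappa$, $\tau$, and $\sigma_0$ are purely illustrative assumptions.

```python
import math

def sigmoid_decay(t, d, kappa=0.05, tau=365.0, sigma0=1.0):
    """Sigmoid decay factor of Eq. (3); d is the aggregate distance delta(theta_t, theta_t*)."""
    x = kappa * (1.0 - d) * (t - tau * (1.0 - d))
    return sigma0 * (1.0 - 1.0 / (1.0 + math.exp(-x)))

def distribution_rate(t, d, rho_min, rho_max):
    """Overall token distribution rate of Eq. (4): rho * sigma."""
    rho = rho_min + (rho_max - rho_min) * (1.0 - d)
    return rho * sigmoid_decay(t, d)

# Example: aggregate distance d = 0.1 (as in the sketch above), evaluated at a few epochs.
for t in (0, 100, 300, 500):
    print(t, round(distribution_rate(t, d=0.1, rho_min=100.0, rho_max=1000.0), 2))
```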
### Example Metrics
These metrics are specific to each protocol and depend on its core KPI. They can be, e.g., the amount at stake (e.g., Ethereum), onboarded data (e.g., Filecoin), TVL, etc.
Notice that these goals can also be given in terms of rates (data growth rate, etc.).
**A cautionary tale:**
It is rather optimistic to assume that the goals set at a (pre-product) protocol design stage will hold as true goals indefinitely without need for change. Furthermore, once these metrics are hardcoded it becomes *very tricky* to change them, usually requiring some community consensus and a network upgrade. An example of this is Filecoin, which uses a goal-oriented mechanism for its block rewards. Their KPI is onboarded data at time $t$. As a target they chose a function called *baseline power*, a curve that grows exponentially. While this level of growth is attainable at the earlier stages of the protocol, such data onboarding cannot continue forever, which means that eventually the target becomes unattainable (indeed, several results in the literature suggest that technology adoption curves are sigmoid). This leaves their minting rate *stuck* at a relatively underperforming level, even though onboarded data has remained relatively stable (*Network QA Power* below). This is shown in the figure below.

**Figure:** Filecoin’s historical storage power (blue and green) vs. its goal (*baseline power*, light blue). Source: [Starboard Ventures](https://dashboard.starboard.ventures/capacity-services).
**Take-home message**: goals need to be adaptive (i.e., they can’t be “set and forget”); otherwise they might lead to unwanted behaviors.
**Goal adaptability.**
In our model, goal adaptability could be implemented in two main ways:
1. **Flexible KPI Reassessment**: The KPIs themselves can be reassessed periodically to ensure they are still relevant. This might require governance mechanisms that allow for adjusting the weightings or even introducing new metrics if the old ones no longer capture the state of the network adequately.
2. **Dynamic Target Values**: Rather than fixing the target values of KPIs at the outset, we propose a dynamic approach where these targets evolve based on historical data, market trends, and the network’s overall health. This ensures that the minting mechanism remains responsive to changing conditions. For example, the target values could follow non-linear trajectories such as sigmoid or logarithmic functions to account for adoption saturation or technical plateaus.
One way of doing so is to combine a 1559-like adaptivity mechanism for $\theta^*_{i,t}$ with an [AIMD](https://en.wikipedia.org/wiki/Additive_increase/multiplicative_decrease) mechanism (a TCP congestion-control algorithm), as in the update rule below (see also the sketch after it):
$$
\begin{align}
\theta^*_{i,t+1} &= \theta^*_{i,t} \cdot \left(1 + \eta_t \cdot \frac{\theta_{i,t} - \theta^*_{i,t}}{\theta^*_{i,t}}\right) \\
\eta_{t+1} &= \begin{cases} \eta_t + \Delta_\eta & \text{if } \theta_{i,t} > \theta^*_{i,t} \\ \eta_t \cdot \gamma & \text{if } \theta_{i,t} \leq \theta^*_{i,t}\end{cases}
\end{align}
$$
where $\Delta_\eta > 0$ and $\gamma \in (0,1)$ are parameters controlling the speed of adaptation in the AIMD mechanism. Notice that this scheme also needs to be instantiated with a minimum value as well as an initial condition.
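As a rough sketch, the 1559-like/AIMD target update above could be implemented along these lines; the initial values, the floor `eta_min`, and the chosen $\Delta_\eta$ and $\gamma$ are illustrative assumptions.

```python
def update_target(theta, target, eta, delta_eta=0.01, gamma=0.5, eta_min=0.001):
    """One step of the 1559-like target update with AIMD-adjusted speed eta.

    Returns the new target and the new adaptation speed."""
    # 1559-like multiplicative adjustment of the target towards the observed KPI.
    new_target = target * (1.0 + eta * (theta - target) / target)
    # AIMD: additive increase of eta when the KPI exceeds the target,
    # multiplicative decrease (floored at eta_min) otherwise.
    if theta > target:
        new_eta = eta + delta_eta
    else:
        new_eta = max(eta_min, eta * gamma)
    return new_target, new_eta

# Example trajectory: the target chases a KPI that stays at 1.2e6.
target, eta = 1.0e6, 0.05
for _ in range(5):
    target, eta = update_target(1.2e6, target, eta)
    print(round(target), round(eta, 3))
```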

**Remark:** this idea has been applied by [Reijsbergen et al. (2022)](https://ieeexplore.ieee.org/document/9680496) in the context of an adaptive EIP-1559 transaction fee mechanism.