# Acxyn GGV

### Gross Game Value

![](https://i.imgur.com/rQ7qEZw.png)

The **Gross Game Value (GGV)** score will initially be made up of four dynamic metrics which, when summed, provide Acxyn with a clear indication of the value of a game.

$$ x_1 = \text{GMV} = \text{Number of transactions} \times \text{Avg. order value} $$

$$ x_2 = \text{CVF} = \text{Total Value Locked} \,/\, (\text{NVT ratio} \times \text{DAA}) $$

$$ x_3 = \text{DM} = \text{Sum of the developer metrics} \,/\, \text{Number of metrics} $$

$$ x_4 = \text{GM} = \text{Sum of the game metrics} \,/\, \text{Number of metrics} $$

With $a_i$ the weight of each metric's contribution to the GGV:

$$ \text{GGV} = y = \frac{a_1 x_1 + a_2 x_2 + a_3 x_3 + a_4 x_4}{4} $$

Each valuation metric is specifically chosen to indicate and correlate with the lifecycle of a company / game.

1. **Gross Merchandise Value (GMV)**
    * GMV is the total value of merchandise sold over a given period of time through a C2C exchange. It is ideally suited to valuing e-commerce stores and is applicable to Web3, as a majority of games monetise through royalties and marketplace fees on business models built around player-to-player transactions.
2. **Crypto Valuation Framework (CVF)**
    * Total Value Locked (TVL)
        * TVL is an important indicator of the popularity (or otherwise) of a lending or DeFi swapping platform, and of the magnitude of attention and interaction it draws in terms of active users and monthly transactions.
    * NVT Ratio (Network Value-to-Transactions Ratio)
        * NVT = network value / daily transaction volume. NVT is a valuation ratio that compares the network value (i.e. the market cap) to the network's daily on-chain transaction volume.
    * Daily active addresses/users (DAA)
        * Similar to daily active users (DAU) for software and apps, DAA provides information about the number of users on a network, which can reveal trends and complement other indicators such as NVT and on-chain transaction volume.
    * Crypto J-Curve
        ![](https://hackmd.io/_uploads/SJQky9Vtj.png)
        * Token price can be broken down into two components whose contributions evolve over time: "current utility value" (CUV), which represents value driven by utility and usage today, and "discounted expected utility value" (DEUV), which represents value driven by investment speculation. As a project develops, CUV and DEUV take turns driving the token price while the project, and the market's perception of it, stabilises and matures. When a token is first launched, DEUV dominates: holders are excited about the tech and expect future price appreciation. When enthusiasm wanes amid inevitable technical roadblocks, the price declines and is driven more by CUV from technical users and early adopters. As the team overcomes challenges, CUV quietly grows while the token becomes more widely adopted. DEUV then catches up as speculation and excitement follow developer interest. Ultimately, in the steady state, CUV should drive the token price.
    * Token inflation vs. $ACX token inflation
        * Leveraging the [Fisher Equation](https://corporatefinanceinstitute.com/resources/knowledge/economics/fisher-equation/#:~:text=%CF%80%20%E2%80%93%20the%20inflation%20rate), we can calculate the inflation of one protocol against Acxyn's token inflation: $$ (1 + i) = (1 + r)(1 + \pi) $$ where $i$ is the nominal token inflation rate, $r$ is the real interest rate, and $\pi$ is the Acxyn Protocol inflation rate. Rearranged: $$ r = \frac{1 + i}{1 + \pi} - 1 $$ Token emissions (inflation) are generally used as a go-to-market and user-acquisition strategy. By understanding a protocol's inflation relative to ACX token inflation, we can deduce the current stage of the game and better analyse the residual cohort value (RCV) of game economies. A minimal numeric sketch of this calculation follows.
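As a minimal sketch of the rearranged Fisher Equation (the rates below are hypothetical, expressed as decimals per period):

```python
def real_inflation_rate(nominal_rate: float, acxyn_rate: float) -> float:
    """Rearranged Fisher Equation: r = (1 + i) / (1 + pi) - 1.

    nominal_rate: i, the protocol's nominal token inflation rate
    acxyn_rate:   pi, the Acxyn Protocol inflation rate
    """
    return (1 + nominal_rate) / (1 + acxyn_rate) - 1

# Hypothetical example: a game token inflating at 12% per year,
# measured against 5% Acxyn Protocol inflation.
r = real_inflation_rate(nominal_rate=0.12, acxyn_rate=0.05)
print(f"real rate relative to ACX: {r:.4f}")  # ~0.0667, i.e. about 6.7%
```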
3. **Game Metrics or Attention Economy (GM)**
    These include metrics such as:
    * Avg. player retention
    * Avg. player count
    * Avg. player growth
    * Avg. play time
    * Avg. ad revenue
    * Avg. revenue per player (ARPU)
4. **Developer Metrics (DM)**
    Activity metrics include:
    * Avg. development activity
    * Avg. no. of developers
    * No. of remote configurations
    * No. of campaigns
    * No. of A/B tests

    Financial metrics include:
    * DCF
    * Burn rate
    * Entropy / chaos metric

Uniquely, Acxyn will have the capacity to analyse developer activity on a game in real time. Comparing this data with the historical developer activity of more successful game development studios will allow us to establish a baseline indicating whether developers are positively contributing to their business or stagnating. It will also allow Acxyn to nudge developers to pinpoint, more actively and precisely, where the largest opportunity costs lie when optimising their games.

## How to value the Data with GGV

The GGV of each game and dev team will act as a credit score by which we justify the duration of each warrant contract, the premiums for put protections and futures, and the burning of Acxyn tokens to mint Acxyn stablecoins that teams can use on secondary markets and in-protocol to buy back more Acxyn. We will create this score based on the following equation:

$$ w_t = (1-\lambda)\,DV_t^* + \lambda\frac{\Delta DV_t^*}{D_{t-1}^*} $$

In this equation:

* The $*$ operator indicates a moving average over some time period (our choice)
* $t$ represents the current time
* $DV$ is **data volume**
* $D$ is **data worth**
* $\lambda$ represents a weight

We can understand this equation by breaking it into two parts:

* **Current data activity.** This is the $DV_t^*$ term, averaging the data volume over our agreed-upon time period. It represents the data provided to the investor and gives the **current position** of the project.
* **Trend of data efficiency.** This is the $\frac{\Delta DV_t^*}{D_{t-1}^*}$ term. It gives the change in the (moving average of) data activity between this timestep and the previous one, divided by the average worth of the data the project has given. It is a **trend** for how the project is changing relative to the data it is producing; we can think of it as analogous to **velocity**.
* **Weight.** The $\lambda$ parameter encodes **our view** of how much we care about **current data** vs. **trend**:
    * When $\lambda$ is close to 1, we will want more data from projects showing rapid growth in their relative returns, even if they are small.
    * When $\lambda$ is close to 0, we will give more funds to projects that have high data volume, even if they aren't showing much growth.

We will determine these weights as we come to understand the value of the data, and adjust them on the fly as game development and gameplay change. A minimal sketch of the weighting follows.
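A minimal sketch of this weighting, assuming the moving averages are simple rolling means and that `data_volume` and `data_worth` are hypothetical per-period series from a game's telemetry:

```python
import numpy as np

def ggv_data_weight(data_volume, data_worth, lam=0.5, window=4):
    """Sketch of w_t = (1 - lambda) * DV_t* + lambda * (delta DV_t*) / D_{t-1}*.

    data_volume: per-period data volume series (DV)
    data_worth:  per-period data worth series (D)
    lam:         lambda, balancing current activity against trend
    window:      moving-average window (the '*' operator)
    """
    dv = np.asarray(data_volume, dtype=float)
    d = np.asarray(data_worth, dtype=float)

    def moving_avg(series, t):
        return series[max(0, t - window + 1):t + 1].mean()

    t = len(dv) - 1                            # current timestep
    dv_now = moving_avg(dv, t)                 # DV_t*
    dv_prev = moving_avg(dv, t - 1)            # DV_{t-1}*
    d_prev = moving_avg(d, t - 1)              # D_{t-1}*

    current = (1 - lam) * dv_now               # current data activity
    trend = lam * (dv_now - dv_prev) / d_prev  # data-efficiency trend
    return current + trend

# Hypothetical series: data volume growing, data worth roughly flat.
w = ggv_data_weight([10, 12, 15, 19, 24], [5, 5, 6, 6, 6], lam=0.5)
print(round(w, 3))  # a higher lam would reward the growth trend more
```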
## The GGV is a PID controller

![](https://i.imgur.com/VMr1IDH.png)

Think of the GGV as the master control of the Acxyn system. Everything that happens in Acxyn, from DeFi options to data determination, follows from the results of the GGV.

The distinguishing feature of the PID controller is its ability to use the three control terms, **proportional**, **integral** and **derivative**, to apply accurate and optimal control to the controller output. The block diagram above shows how these terms are generated and applied. A PID controller continuously calculates an error value $e(t)$ as the difference between a desired setpoint $\text{SP} = r(t)$ and a measured process variable $\text{PV} = y(t)$:

$$ e(t) = r(t) - y(t) $$

and applies a correction based on proportional, integral, and derivative terms. The controller attempts to minimise the error over time by adjusting a control variable $u(t)$, such as the opening of a control valve, to a new value determined by a weighted sum of the control terms.

**In this model:**

* Term **P** is proportional to the current value of the SP − PV error $e(t)$. For example, if the error is large, the control output will be proportionately large via the gain factor $K_\text{p}$. Using proportional control alone leaves an error between the setpoint and the process value, because the controller requires an error in order to generate the proportional output response. In steady-state process conditions an equilibrium is reached, with a steady SP − PV "offset".
* Term **I** accounts for past values of the SP − PV error and integrates them over time to produce the I term. For example, if there is a residual SP − PV error after the application of proportional control, the integral term seeks to eliminate it by adding a control effect due to the historic cumulative value of the error. When the error is eliminated, the integral term ceases to grow. The proportional effect diminishes as the error decreases, but this is compensated for by the growing integral effect.
* Term **D** is a best estimate of the future trend of the SP − PV error, based on its current rate of change. It is sometimes called "anticipatory control", as it effectively seeks to reduce the effect of the SP − PV error by exerting a control influence generated by the rate of error change. The more rapid the change, the greater the controlling or damping effect.

**Control action** – The mathematical model and practical loop above both use a direct control action for all the terms, meaning an increasing positive error results in an increasing positive control output correction. The system is called reverse acting if negative corrective action must be applied instead: if the valve in the flow loop were 100–0% valve opening for 0–100% control output, the controller action would have to be reversed. Some process control schemes and final control elements require this reverse action. An example is a valve for cooling water, where the fail-safe mode in the case of signal loss is 100% valve opening; 0% controller output must therefore cause 100% valve opening. To make the steady-state offset described under term P concrete, a small simulation sketch follows.
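The sketch below simulates a hypothetical first-order process (not an Acxyn component) to show that P-only control settles with a residual SP − PV offset, while adding the I term drives the error to zero:

```python
def simulate(kp, ki, setpoint=1.0, steps=500, dt=0.05):
    """Simulate a first-order process dy/dt = -y + u under P(I) control."""
    y, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - y            # e(t) = SP - PV
        integral += error * dt          # accumulated error for the I term
        u = kp * error + ki * integral  # control output
        y += (-y + u) * dt              # process dynamics
    return y

print(simulate(kp=4.0, ki=0.0))  # P only: settles near 0.8, offset remains
print(simulate(kp=4.0, ki=2.0))  # PI: settles near 1.0, offset eliminated
```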
## What this means for Acxyn

This is how Acxyn will control and decide not just the value of the data, but also what to do with that data in the system: how it affects the GGV score of each game dev team, and what can be made available to them in DeFi instruments and rewards.

The proportional control reflects the current value of a team's data. It also engages the valve to create a greater response in reaction to the GGV function.

The integral control accounts for past values of the data from each game dev team, including how long the team has been supplying data. After the proportional response, it seeks to eliminate data regarded as non-usable or non-determinate; once that is eliminated, this component stops growing and stops affecting the GGV score, so the proportional control ceases as well. The age of the data continues to be tracked, however, as part of the integral effect.

The derivative control is an estimate of the future trend of each game dev team's data, based on its current rate of change over the proportional and integral history. It acts as an anticipatory control: the more radical the changes in the data gathered by the previous controls, the more it damps their effect. The result is a more controlled collection of data for the GGV.

## How to tune the system

The balance of these effects is achieved by loop tuning to produce the optimal control function. The tuning constants are shown below as "K" and must be derived for each control application, as they depend on the response characteristics of the complete loop external to the controller: the behavior of the measuring sensor, the final control element (such as a control valve), any control signal delays, and the process itself. Approximate values of the constants can usually be entered initially based on the type of application, but they are normally refined, or tuned, by "bumping" the process in practice, i.e. introducing a setpoint change and observing the system response.

**Mathematical form**

The overall control function is

$$ u(t) = K_\text{p} e(t) + K_\text{i} \int_0^t e(\tau)\,\mathrm{d}\tau + K_\text{d} \frac{\mathrm{d}e(t)}{\mathrm{d}t}, $$

where $K_\text{p}$, $K_\text{i}$, and $K_\text{d}$, all non-negative, denote the coefficients for the proportional, integral, and derivative terms respectively (sometimes denoted P, I, and D). In the standard form of the equation, $K_\text{i}$ and $K_\text{d}$ are replaced by $K_\text{p}/T_\text{i}$ and $K_\text{p}T_\text{d}$ respectively. The advantage of this is that $T_\text{i}$ and $T_\text{d}$ have an understandable physical meaning, as they represent an integration time and a derivative time respectively. $K_\text{p}T_\text{d}$ is the time constant with which the controller will attempt to approach the setpoint, while $K_\text{p}/T_\text{i}$ determines how long the controller will tolerate the output being consistently above or below the setpoint. A discrete-time sketch of this control function follows.
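As a minimal discrete-time sketch of the control function above, using the standard form with hypothetical gains (an illustration, not Acxyn's production controller):

```python
class PID:
    """Discrete-time PID controller in the standard form described above:
    u(t) = Kp * (e(t) + (1/Ti) * integral(e) + Td * de/dt)."""

    def __init__(self, kp: float, ti: float, td: float, dt: float):
        self.kp, self.ti, self.td, self.dt = kp, ti, td, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint: float, measured: float) -> float:
        error = setpoint - measured        # e(t) = SP - PV
        self.integral += error * self.dt   # running integral of e
        derivative = 0.0
        if self.prev_error is not None:
            derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * (error + self.integral / self.ti + self.td * derivative)

# Hypothetical loop: nudge a GGV-like process variable toward its setpoint.
pid = PID(kp=2.0, ti=20.0, td=1.0, dt=1.0)
pv = 0.0
for step in range(10):
    u = pid.update(setpoint=100.0, measured=pv)
    pv += 0.1 * u                          # toy process response
    print(f"step {step}: u={u:6.1f}  pv={pv:5.1f}")
```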
Acxyn will therefore concentrate on loop tuning: adjusting the control parameters (proportional band/gain, integral gain/reset, derivative gain/rate) to the optimum values for the desired control response. Stability (no unbounded oscillation) is a basic requirement, but beyond that different systems have different behavior, different applications have different requirements, and requirements may conflict with one another. Acxyn will always be looking for new ways to better its response to its system and data collection, in an effort to keep perfecting the GGV score of each individual game dev team.

## Proof of Efficiency

The PoE consensus mechanism is designed to solve some of the challenges of decentralised and permissionless validators in L2 zk-rollups. It defines a two-step model that enables:

* Permissionless sequencers as benefited participants in the protocol, and as a source of scalability for the network.
* A data availability model fully compatible with Volition (zk-rollup and Validium) schemas, which could enable different tiers of service for users.
* The calculation of a "virtual" state from data availability and a "final" state based on validity proofs. This architecture can save decentralised zk-rollups considerable cost by settling validity proofs at a frequency based on different criteria, rather than as the only way to confirm transactions.
* Space for permissionless aggregators as the agents performing the specialised task of cryptographic proof generation, which is expected to be costly for zkEVM protocols. The model for managing their incentives and returns is very simple and straightforward.
* Native protection against L2 network problems such as attacks from malicious actors or technical failures of selected validators.
* An incentives model to maximise the finality performance of the network.

## The PID Controller and managing Acxyn Index with AI and creation of XYN

Acxyn will use AI to predict the future liquidity needs of the index and adjust the PID parameters accordingly in real time. This can be accomplished by training a machine learning model on historical trading data to identify patterns and trends in the liquidity behavior of the Acxyn Index. The AI system then feeds these predictions into the PID controller to optimise its output and ensure that the index maintains sufficient liquidity without overshooting or undershooting its target. This approach can improve the responsiveness and accuracy of the PID controller, leading to better management of liquidity within the Acxyn Index.

The P (proportional) component of the controller responds to incoming liquidity of any magnitude or velocity.

The I (integral) component monitors the accumulated error between the set and actual liquidity levels over time. In managing liquidity inside an index, the integral component watches for sustained deviations from the target liquidity levels, whether positive (surplus liquidity) or negative (insufficient liquidity). By continuously accumulating the error between the set and actual liquidity levels, the I component can gradually adjust the PID output to bring the system back into balance, even during longer-term disruptions to the liquidity landscape. This helps prevent sudden shortfalls or excesses in liquidity that could impact the performance of the index.

The D (derivative) component monitors the rate of change of the error signal between the set and actual liquidity levels. In managing liquidity inside an index, the derivative component watches for sudden changes in liquidity levels, positive or negative, and adjusts the PID output accordingly. For example, if the index experiences sudden inflows of liquidity that rapidly push it above its target level, the derivative component detects the rapid increase in the error signal and helps to quickly reduce the controller output to maintain stable liquidity levels. Similarly, if liquidity levels experience sudden outflows, the derivative component detects the rapid decrease in the error signal and helps prevent overshooting or undershooting the target. In essence, the derivative component damps sudden changes or oscillations in liquidity levels, keeping the system stable and responsive to changing market conditions. A sketch of this AI-assisted tuning follows.
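As a hedged sketch of the idea, with a naive trend extrapolation standing in for the trained model and a made-up gain-scheduling heuristic (reusing the `PID` sketch from the tuning section):

```python
def forecast_liquidity(history, horizon=1):
    """Stand-in for a trained ML model: naive linear trend extrapolation."""
    if len(history) < 2:
        return history[-1]
    trend = history[-1] - history[-2]
    return history[-1] + trend * horizon

def schedule_gains(predicted, target, base_kp=2.0, base_ti=20.0, base_td=1.0):
    """Adjust PID parameters from the predicted liquidity deviation.

    Heuristic (an assumption, not a published Acxyn rule): the larger the
    predicted deviation from target, the more aggressive the controller.
    """
    deviation = abs(predicted - target) / target
    kp = base_kp * (1 + deviation)  # respond harder to big forecast gaps
    ti = base_ti / (1 + deviation)  # integrate faster when drift is expected
    td = base_td                    # leave derivative damping unchanged
    return kp, ti, td

# Hypothetical usage: liquidity draining from the index.
history = [100.0, 98.0, 94.0, 88.0]
predicted = forecast_liquidity(history)
kp, ti, td = schedule_gains(predicted, target=100.0)
print(f"predicted={predicted:.1f}  kp={kp:.2f}  ti={ti:.1f}  td={td:.2f}")
```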
## SiRE

![](https://i.imgur.com/Mb5w6Vj.png)

Investment professionals rely on extrapolating company revenue into the future (i.e. revenue forecasting) to approximate the valuation of scaleups (private companies in a high-growth stage) and inform their investment decisions. This task is manual and empirical, leaving forecast quality heavily dependent on the investment professional's experience and insight. Furthermore, financial data on scaleups is typically proprietary, costly and scarce, ruling out the wide adoption of data-driven approaches. To this end, we propose a simulation-informed revenue extrapolation (SiRE) algorithm that generates fine-grained long-term revenue predictions on small datasets and short time-series. SiRE models the revenue dynamics as a linear dynamical system (LDS), which is solved using the EM algorithm. The main innovation lies in how the noisy revenue measurements are obtained during training and inference. SiRE works for scaleups operating in various sectors and provides confidence estimates. Quantitative experiments on two practical tasks show that SiRE surpasses the baseline methods by a large margin, and performance remains high when SiRE extrapolates from short time-series and predicts long-term. The performance-efficiency balance and result explainability of SiRE have also been validated empirically. Evaluated from the perspective of investment professionals, SiRE can precisely locate the scaleups with great potential return in 2 to 5 years, and qualitative inspection illustrates further advantageous attributes of the SiRE revenue forecasts. A much-simplified sketch of the underlying state-space idea follows.
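SiRE fits the LDS with EM on simulation-informed noisy measurements; as a much-reduced illustration of the state-space idea only (a local linear trend model with hand-set noise parameters and a plain Kalman filter, not the SiRE algorithm), a revenue extrapolation with a confidence band might look like:

```python
import numpy as np

# Local linear trend LDS: state x = [level, trend]
# x_t = A x_{t-1} + process noise (Q); revenue y_t = H x_t + obs noise (R)
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = np.eye(2) * 0.1    # hand-set process noise (SiRE learns these via EM)
R = np.array([[1.0]])  # hand-set observation noise

def kalman_forecast(y, horizon):
    """Filter a short revenue series, then extrapolate with confidence."""
    x = np.array([y[0], 0.0])  # initial level, zero trend
    P = np.eye(2)
    for obs in y:
        # Predict one step ahead
        x = A @ x
        P = A @ P @ A.T + Q
        # Update with the observed revenue
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([obs]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
    # Extrapolate beyond the observed series
    preds = []
    for _ in range(horizon):
        x = A @ x
        P = A @ P @ A.T + Q
        var = (H @ P @ H.T + R)[0, 0]
        preds.append((x[0], 1.96 * np.sqrt(var)))  # mean, ~95% band
    return preds

# Hypothetical quarterly revenue (in $M) for a scaleup
revenue = [1.2, 1.5, 2.0, 2.6, 3.3]
for i, (mean, band) in enumerate(kalman_forecast(revenue, horizon=4), 1):
    print(f"quarter +{i}: {mean:.2f} ± {band:.2f}")
```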