# Briefing: The Avalanche Economic Research Project **Prepared for:** Shawn (Ygg) Anderson **Meeting:** Friday, January 16, 2026 with Eric (Avalanche Foundation Economist) **Reading Time:** ~12 minutes --- ## Part I: The Players and the Stakes You're walking into a collaboration that's been building for months. Let me orient you to the room you'll be entering. **Eric** is an economist at the Avalanche Foundation based in Hong Kong. He holds real decision-making power over the future of AVAX tokenomics—a network with roughly eighteen billion dollars in market capitalization. Hash describes him as "exceptionally professional" and "good at math." His background is in formal economic optimization: continuous-time asset pricing, stochastic calculus, equilibrium theory. He thinks in Greek letters and maximization problems. When he looks at a system, he asks: "What is the agent optimizing? What are the first-order conditions? Where is the fixed point?" **The Bonding Curve Research Group (BCRG)** is you, Hash, Jeff Emmett, and Jessica Zartler—an independent research collective focused on blockchain economics. Your strength lies in *modeling*: translating economic intuition into simulation code, running scenarios, stress-testing assumptions. Where Eric writes equations on a whiteboard, you build cadCAD models that evolve over thousands of timesteps. Your differential specification has 53 state variables tracking everything from gas prices to governance proposals. **The dynamic** that Hash has identified is comfortable: Eric is good at math, you are good at modeling. These are complementary, not competitive. Eric provides the theoretical rigor—the "why this equation"—while BCRG provides the computational muscle—the "what happens when we run it." His equilibrium framework tells you where prices *should* clear; your behavioral model tells you how agents *actually* decide to stake. Together, you form a complete picture. 
**The stakes are real.** Eric wants to understand the system deeply enough to make recommendations that will shape Avalanche's next decade. The $150,000 grant you're pursuing would fund six months of rigorous simulation work—backtesting against historical data, Monte Carlo stress testing, parameter optimization. But more than the grant, this is about credibility. Hash and Jeff have been deflecting hard questions with "that's a Shawn question" for weeks. Now it's time to deliver.

---

## Part II: Avalanche Foundations

Before we dive into the economics, you need to understand the machine you're modeling.

### The Architecture

Avalanche is a Layer 1 blockchain with a unique multi-chain architecture. At its heart sits the **Primary Network**, which hosts three built-in chains: the Exchange Chain (X-Chain) for asset transfers, the Contract Chain (C-Chain) for smart contracts, and the Platform Chain (P-Chain) for staking and subnet coordination. But Avalanche's real innovation is **L1s** (formerly called subnets)—application-specific blockchains that can define their own rules while still benefiting from Avalanche's consensus infrastructure.

The network runs on **Avalanche Consensus**, a novel protocol that achieves finality in under two seconds by having validators repeatedly sample each other's opinions until confidence reaches a threshold. Unlike proof-of-work chains, there's no mining—security comes from staked capital.

### The Token

AVAX is the native token with a hard cap of **720 million** tokens. At genesis, 360 million were minted. Today, roughly 429 million circulate, with about 212 million (roughly 49% of circulating supply) currently staked. New AVAX enters circulation through staking rewards; AVAX leaves circulation when transaction fees are **burned** (permanently destroyed). This creates a fundamental tension: staking rewards are inflationary, but fee burning is deflationary. The network's long-term sustainability depends on fee activity eventually exceeding issuance.
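To make the issuance-versus-burn tension concrete, here is a minimal toy sketch, not the BCRG model: a hypothetical issuance schedule that tapers toward the 720M cap, set against a hypothetical fee burn that grows with L1 adoption. Every number below is illustrative, not calibrated.

```python
# Toy sketch (NOT the BCRG model) of the issuance-vs-burn tension.
# The issuance schedule and burn growth rate are hypothetical round numbers.

MAX_SUPPLY = 720e6  # hard cap, from the text

def net_supply_change(issuance: float, fees_burned: float) -> float:
    """Positive = net inflation, negative = net deflation."""
    return issuance - fees_burned

supply = 429e6  # approximate circulating AVAX today, from the text
history = []
for year in range(10):
    issuance = 20e6 * (1 - supply / MAX_SUPPLY)  # hypothetical: issuance shrinks near the cap
    burn = 4e6 * (1.3 ** year)                   # hypothetical: burn grows with L1 adoption
    supply += net_supply_change(issuance, burn)
    history.append(supply)
```

Under these toy numbers, burn overtakes issuance within a few years and the net supply change flips from inflationary to deflationary, which is exactly the crossover the section describes.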
### Staking Mechanics **Validators** run nodes that participate in consensus. They must stake a minimum of 2,000 AVAX (maximum 3 million) for periods between 14 days and 1 year. Longer stakes earn higher rewards—a mechanism called "time-weighted rewards" that incentivizes commitment. **Delegators** don't run nodes but can stake their AVAX to validators, earning a share of rewards minus the validator's commission (minimum 2%, typically 2-20%). Delegation has a minimum of 25 AVAX. The reward formula is elegant but complex: ``` Reward = (MAX_SUPPLY - current_supply) × (staked / total) × (duration / 365) × consumption_rate(duration) ``` Two critical implications: (1) rewards decrease as supply approaches the cap, and (2) rewards scale with how much is staked and for how long. This creates feedback loops. **Crucially, Avalanche does not use slashing.** If your validator misbehaves, you don't lose your stake—you just don't earn rewards. This makes staking "safe" compared to networks like Ethereum where capital can be confiscated. ### The Avalanche 9000 Upgrade In December 2024, Avalanche deployed its biggest upgrade ever: **Avalanche 9000**. The centerpiece was **ACP-77**, which fundamentally changed how L1s pay for security. Previously, running an L1 required validators to stake 2,000 AVAX on the Primary Network—a large upfront capital commitment. ACP-77 replaced this with a **continuous fee model**: L1 validators now pay a small subscription fee (~1.33 AVAX/month) to the P-Chain. This reduced L1 launch costs by 99.9% and decoupled L1 validators from Primary Network validation. Why does this matter for economics? Because **L1 fees are burned**, creating a new source of deflationary pressure. The more L1s that launch and thrive, the more AVAX gets burned. L1 adoption becomes the primary value creation engine. --- ## Part III: Eric's Mathematical World To collaborate with Eric effectively, you need to understand how he thinks. 
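Before unpacking his framework, it helps to have the Part II reward formula in runnable form. This is a hedged sketch: `consumption_rate` here is a hypothetical linear ramp standing in for the real on-chain curve, and `staked / total` is read as an individual stake over total stake. Both are my simplifications, not the protocol's exact definitions.

```python
MAX_SUPPLY = 720_000_000  # AVAX hard cap, from the text

def consumption_rate(duration_days: float) -> float:
    """Hypothetical stand-in: ramps from 0.9 at 14 days to 1.0 at 365 days."""
    return 0.9 + 0.1 * (duration_days - 14) / (365 - 14)

def staking_reward(current_supply: float, stake: float,
                   total_staked: float, duration_days: float) -> float:
    """Reward = (MAX_SUPPLY - supply) x (stake/total) x (duration/365) x consumption_rate."""
    return ((MAX_SUPPLY - current_supply)
            * (stake / total_staked)
            * (duration_days / 365)
            * consumption_rate(duration_days))
```

Both stated implications fall out directly: the `(MAX_SUPPLY - current_supply)` factor shrinks rewards as supply approaches the cap, and the duration terms make a one-year stake pay more than a two-week one.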
His approach is rooted in **neoclassical optimization theory**—the same framework used to analyze stock markets, interest rates, and monetary policy. ### Eric's Three Images — Decoded Eric shared three equation screenshots that reveal his entire framework. Let's walk through each one so you can reference them fluently in conversation. --- #### IMAGE 1: "Agent Problem" **What you see:** A slide titled "Agent Problem" with a bullet point stating that agents are "trading-off" between three things: convenience utility, convenience loss, and staking rewards. **What it means:** This is the **foundation** of Eric's entire model. He's defining what agents care about—their objective function. Every economic model starts with "what are people trying to maximize?" | Trade-off | What It Is | Economic Intuition | |-----------|------------|-------------------| | **Convenience utility** | The utility flow from holding *liquid* tokens | You can use them: swap on a DEX, provide liquidity, react to opportunities, pay for things | | **Convenience loss** | The disutility flow from *locking* tokens in staking | Your capital is frozen, you can't react to market moves, you face illiquidity risk | | **Staking rewards** | The APR earned from staking | Direct financial incentive—the protocol pays you to secure the network | **The optimization:** Agents choose how to split their holdings between liquid (x_t) and staked (θ_t) to maximize utility. This creates a continuous-time optimization problem where agents weigh the benefit of flexibility (convenience) against the benefit of yield (rewards). **Why it matters to you:** Eric is building from first principles. He's not assuming behavior—he's deriving it from rational utility maximization. Your tanh-based behavioral functions are an *approximation* of what emerges from this optimization. 
When he asks "why tanh?", the deep answer is: "It's a reduced-form approximation of the bounded response that emerges from agents solving this trade-off under constraints." --- #### IMAGE 2: "Solution" **What you see:** A slide titled "Solution" with several bullet points about first-order conditions, representative agents, fixed-point equilibrium, and a price equation at the bottom. **What it means:** This is **where the math happens**. Eric solved his optimization problem and derived the equilibrium. **The key points in the image:** 1. **"First order conditions for holding amount x_t and θ_t from agent's optimization problem. Functions of reward rate r_t and expected appreciation μ_t."** Translation: By taking derivatives of the utility function and setting them to zero, Eric derived how much agents want to hold liquid vs. staked. Their choices depend on the reward rate and expected price changes. 2. **"With homogeneous agents, individual staking ratio θ_t equals aggregate staking ratio Θ_t"** Translation: The **representative agent assumption**. If everyone is identical, we can study one agent and know the whole economy. Individual θ = aggregate Θ. This simplifies the math enormously. 3. **"Equilibrium staking ratio Θ*_t and reward rate r*_t are thus determined by a fixed-point problem."** Translation: Θ affects r (more staking dilutes rewards), and r affects Θ (higher rewards attract more staking). They determine each other simultaneously. The equilibrium is where these circular dependencies resolve—a **fixed point**. 4. **"Price dynamics is determined by the market clearing condition"** followed by the equation: $$P_t = \frac{S_t}{Q_t} \int_0^1 x^*_{i,t} \, di = \frac{(1-\Theta_t) S_t A_t}{Q^*_t} \left( \frac{1-\alpha}{r^f - \mu_t - \Gamma_t} \right)^\alpha$$ Translation: This is **THE PRICE EQUATION**. Price equals supply divided by demand, adjusted for equilibrium holdings. 
**Decoding the price equation:**

| Symbol | Meaning | Where It Comes From |
|--------|---------|---------------------|
| P_t | Token price | **This is what we're solving for** |
| (1-Θ_t) | Liquid (unstaked) fraction | **BCRG's behavioral model provides this** |
| S_t | Total token supply | Protocol parameter (known) |
| A_t | Demand factor | Exogenous or endogenous (captures demand originating outside the staking system) |
| Q*_t | Aggregate quantity | Market clearing condition |
| r^f | Risk-free rate | External benchmark (e.g., US Treasuries) |
| μ_t | Expected price appreciation | State variable—what people expect |
| Γ_t | Risk/discount factor | Model parameter |
| α | Elasticity parameter | Calibrated from data |

**The critical insight:** The price depends on **(1-Θ_t)**—the liquid fraction. This is **THE BRIDGE** between Eric's model and yours. He needs Θ_t as an input; you compute how Θ_t evolves over time.

**Why it matters to you:** When Eric asks "how do we calculate s [staking]?", he's asking: "How do I get Θ_t to plug into my price equation?" Your answer: "Our behavioral response functions (the tanh-based flows) determine dΘ/dt. Integrate over time to get Θ_t."

---

#### IMAGE 3: "Demand Shock"

**What you see:** A slide titled "Demand Shock" with a stochastic differential equation and two bullet points about time-varying parameters.

**The equation:**

$$\frac{dS_t}{S_t} = \mu^S dt + \sigma^S dZ^S_t$$

**What it means:** This is **Geometric Brownian Motion (GBM)**—the standard model for asset prices in finance. It's the same math behind the Black-Scholes options pricing formula. (One caution: S_t on this slide denotes the demand level, not the token supply S_t from the price equation; the same letter is doing double duty.)
**Breaking it down:** | Component | Meaning | Intuition | |-----------|---------|-----------| | dS_t / S_t | Percentage change in demand | How much demand grows or shrinks | | μ^S dt | Drift term | Average growth rate over time | | σ^S dZ^S_t | Diffusion term | Random shocks (Brownian motion) | | dZ^S_t | Wiener process increment | Pure randomness—unpredictable noise | **The bullet points:** - "μ^S and σ^S can be time-varying" - "Can depend on aggregate endogenous or exogenous state variables" Translation: The drift (μ) and volatility (σ) aren't constants—they can change based on what's happening in the system. This allows the model to capture: - **Bull/bear markets**: High μ during bull runs, negative μ during crashes - **Volatility regimes**: σ spikes during uncertainty, calms during stability - **Feedback effects**: If μ or σ depend on Θ_t or P_t, you get endogenous dynamics **Why it matters to you:** Eric is acknowledging that crypto markets are **stochastic**—you can't predict them perfectly. His model incorporates randomness via GBM. This is why BCRG's Monte Carlo simulations (running thousands of scenarios with different random paths) are valuable: they explore how the system behaves across many possible demand realizations, not just one. --- ### Putting the Three Images Together Eric's framework forms a complete story: 1. **Image 1 (Agent Problem)**: Define what agents optimize → utility over liquidity, staking, rewards 2. **Image 2 (Solution)**: Solve the optimization → derive equilibrium Θ* and price P_t 3. **Image 3 (Demand Shock)**: Add uncertainty → demand follows stochastic GBM **The gap Eric needs you to fill:** His model takes Θ_t as given and derives P_t. But **how does Θ_t evolve?** That's where your behavioral dynamics come in. You compute dΘ/dt based on APR differentials; he plugs the resulting Θ_t into his price equation. --- ### What Eric Wants Eric's questions cut to the core of model validity: 1. 
**The Math Question**: What *exactly* are your equations? (He wants to see them.) 2. **The Data Question**: How do you calibrate parameters? (He wants empirical grounding.) 3. **The Validation Question**: How do you know your assumptions are correct? (He wants falsifiable predictions.) He's not challenging you—he's doing proper due diligence. An economist's job is to distinguish assumptions that *matter* from those that don't, and to stress-test conclusions against alternative specifications. This is exactly the rigor the grant work would provide. --- ## Part IV: BCRG's Modeling World Your strength is different from Eric's, and that's the point. ### The Differential Specification Over months of work, BCRG built a comprehensive **differential specification**—a system of coupled equations describing how all the state variables evolve. There are 53 state variables tracking staking, supply, fees, L1s, and governance. The specification maps every flow: staking inflows, unstaking outflows, fee burning, L1 creation, validator entry and exit. The document at `content/milestone3/Differential_Specification.md` is designated **THE SOLE AUTHORITY** on the mathematical model. Everything else—simulation code, analysis reports, meeting notes—derives from this specification. ### Behavioral Response Functions Here's where BCRG's approach diverges from classical optimization. Instead of deriving behavior from utility maximization, you model it directly using **bounded sigmoid functions**: ``` staking_inflow = STAKING_SENSITIVITY × circulating_supply × max(0, tanh(staking_apr - OPPORTUNITY_COST)) ``` The tanh function is key. It maps any input to a value between -1 and +1, creating **bounded rationality**. Agents don't respond infinitely to incentives—their reaction saturates at extremes. When APR is far above opportunity cost, everyone who wants to stake already has. When it's far below, everyone who wants to leave already left. **Why tanh specifically?** Several reasons: 1. 
**Bounded output**: Prevents unrealistic infinite responses
2. **Smooth and differentiable**: Important for stability analysis
3. **S-curve shape**: Weak response to small differentials, strong response to large ones
4. **Symmetric**: Treats positive and negative deviations equally

Could you use logistic functions, arctangent, or piecewise linear? Yes. The choice of tanh is somewhat arbitrary among bounded functions—what matters is the boundedness property itself. Future work would calibrate the exact functional form against data.

### cadCAD and Simulation

**cadCAD** (complex adaptive dynamics Computer-Aided Design) is a Python library for building economic simulations. You define state variables, policies (how decisions are made), and state update functions (how variables change). Then you run the simulation forward in time, optionally with Monte Carlo runs and sweeps across parameter ranges.

The BCRG model has been refactored into a clean architecture:

- `model/logic.py`: The behavioral policies including elastic restaking
- `model/state.py`: Initial conditions
- `model/params.py`: Parameters to sweep
- `scripts/run_basic_model.py`: Execute the full simulation
- `scripts/scenario_comparison.py`: Compare scenarios (e.g., high vs. low issuance)

### The Math Catalog

BCRG has documented a **menu of alternative specifications** (see `docs/math_catalog.md`):

**Emission Schedules:**
- Time-weighted rewards (current)
- Bitcoin-style halving
- Burn-replacement targeting

**Staking Behavior:**
- Sigmoid sensitivity (current: tanh-based)
- Elastic power law
- PID controller with history dependence

**Price Dynamics:**
- Linear supply/demand
- Exponential inventory
- Stochastic GBM (matches Eric's approach)

This modularity means you can swap components and test how robust conclusions are to specification choices—exactly what Eric wants.
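The behavioral response functions above, together with the dΘ/dt integration that feeds Eric's price equation, can be condensed into one self-contained discrete-time sketch. This is a stripped-down stand-in for the cadCAD model, not the model itself: the `apr` dilution rule is a toy invented for illustration, and the parameter values are the provisional estimates quoted elsewhere in this briefing.

```python
import math

# Stripped-down sketch (NOT the cadCAD model) of the tanh behavioral flows
# plus an Euler integration of dTheta/dt. The apr() dilution rule is a toy
# invented for illustration; parameter values are provisional estimates.

STAKING_SENSITIVITY = 0.1   # provisional estimate
OPPORTUNITY_COST = 0.05     # 5% external benchmark (e.g., Treasuries)

def staking_inflow(theta: float, apr_now: float) -> float:
    """Liquid -> staked flow; zero whenever APR is below opportunity cost."""
    circulating = 1.0 - theta  # everything measured as fractions of supply
    return STAKING_SENSITIVITY * circulating * max(0.0, math.tanh(apr_now - OPPORTUNITY_COST))

def unstaking_outflow(theta: float, apr_now: float) -> float:
    """Staked -> liquid flow; zero whenever APR is above opportunity cost."""
    return STAKING_SENSITIVITY * theta * max(0.0, math.tanh(OPPORTUNITY_COST - apr_now))

def apr(theta: float, issuance_rate: float = 0.03) -> float:
    """Toy dilution rule: a fixed issuance budget spread across all stakers."""
    return issuance_rate / max(theta, 1e-9)

# Integrate dTheta/dt = inflow - outflow forward in time
theta = 0.30
trajectory = [theta]
for _ in range(200):
    a = apr(theta)
    theta += staking_inflow(theta, a) - unstaking_outflow(theta, a)
    trajectory.append(theta)
```

Under this toy dilution rule the fixed point is Θ* = issuance_rate / OPPORTUNITY_COST = 0.6: starting below it, APR exceeds opportunity cost, inflows dominate, and Θ climbs asymptotically toward equilibrium. In miniature, this is the pattern the simulations keep finding: the market pins down APR, and the staking ratio is what adjusts.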
--- ## Part V: The Bridge Between Models Here's the key insight: **Eric's model and BCRG's model are complementary, not competing.** | Eric's Framework | BCRG's Framework | Connection | |------------------|------------------|------------| | Takes Θ_t (staking ratio) as input | Computes how Θ_t evolves over time | BCRG provides the behavioral dynamics | | Finds equilibrium price given Θ | Simulates Θ trajectory under different scenarios | Eric provides the target (where should price settle) | | Assumes rational optimization | Captures bounded rationality via tanh | Complementary assumptions about agents | | Continuous-time closed-form math | Discrete-time numerical simulation | Different tools for different questions | When Eric asks "how do we calculate s (staking)?", the answer is: > Your price equation needs Θ_t as input. Our behavioral model determines how Θ_t evolves based on APR differentials vs. opportunity cost. We compute: **dΘ/dt = staking_inflow - unstaking_outflow**, where each flow is a bounded sigmoid function of the APR spread. Integrating these flows over time gives you the staking ratio trajectory that feeds your price equation. This is the bridge. Eric provides the equilibrium framework (market clearing). You provide the behavioral dynamics (how agents actually decide). Together: endogenous price from first principles. --- ## Part VI: The Critical Discovery — The Equilibrium Trap This is the finding that changes everything. BCRG's simulation work uncovered what they call **The Equilibrium Trap**: no matter what you do to tokenomics, staking APR converges to approximately 5-6%. Market forces arbitrage away any attempt to "pay stakers more." 
Here's the data: | Scenario | Issuance Strategy | Final Staking Ratio | Final APR | What Happened | |----------|-------------------|---------------------|-----------|---------------| | Base Case | Standard | 41.45% | 5.06% | Baseline | | High Incentives | +15% Issuance | 41.85% | **5.01%** | More security, same APR | | Reduced APR | -20% Issuance | 41.39% | **5.07%** | Less security, same APR | The "High Incentives" scenario paid more tokens but resulted in **lower** APR than "Reduced APR." Why? Because capital flooded in, diluting the yield until the edge disappeared. **The strategic implication:** You are not "setting rewards." You are **buying security.** - High issuance = Buying more security (higher staking ratio) at higher inflation cost - Low issuance = Buying less security at lower inflation cost - APR is an output, not an input **The real threat:** Since APR is pegged to ~5% by market forces, the only risk is if **external opportunity cost** (US Treasuries, ETH staking) rises above 5%. If interest rates hit 6%, stakers will leave regardless of your issuance schedule. **Avalanche is a price-taker in the market for capital.** **The recommendation:** Stop optimizing for yield (it gets arbed away). Focus on **demand generation through L1 utility**. If demand exceeds supply, price rises. If price rises, 5% APR is great. If price falls, 20% APR is trash. --- ## Part VII: Answering Eric's Questions Here's your cheat sheet for Friday. ### "Why tanh specifically for behavioral functions?" > "We use tanh because it bounds output between -1 and +1, functioning as a sigmoid variant. This prevents unbounded behavioral responses—agents can't infinitely increase staking in response to APR differentials. Real agents have bounded attention and capital. > The bounded sigmoid creates realistic dynamics: flat at extremes, exponential response in the middle, smooth and differentiable throughout. We borrowed this pattern from Danilo's Subspace model. 
> Could we use logistic, arctangent, or piecewise linear? Yes—what matters is the boundedness property. Future work would calibrate the functional form against on-chain data." ### "How are the behavior parameters calibrated?" > "Currently, parameters are provisional educated estimates: > - STAKING_SENSITIVITY: 0.1 (moderate response) > - OPPORTUNITY_COST: 5% (DeFi benchmark) > - RESTAKE_RATES: 70% validators, 50% delegators (industry observation) > **This is a known gap.** We haven't done empirical calibration against historical on-chain data. That's exactly what the grant would fund: regression analysis of staking flows vs. APR changes, validation against observed patterns." ### "How do you model validator/delegator behavior?" > "Staking flows respond to APR differential vs. opportunity cost via bounded sigmoid: > `staking_inflow = SENSITIVITY × circulating × max(0, tanh(APR - OPPORTUNITY_COST))` > `unstaking_outflow = SENSITIVITY × staked × max(0, tanh(OPPORTUNITY_COST - APR))` > Net staking change: `dΘ/dt = inflow - outflow` > When APR > opportunity cost → net inflow. When APR < opportunity cost → net outflow. Response magnitude bounded by sensitivity." ### "How do we calculate 's' for your price equation?" > "Your price equation needs Θ_t. Our behavioral model provides how Θ_t evolves: > 1. Your model: P_t = f(Θ_t, S_t, μ_t, ...) > 2. Our model: dΘ/dt = g(APR, opportunity_cost) > 3. Together: Complete dynamics > We're complementary. You provide the equilibrium framework (market clearing). We provide the behavioral dynamics (how agents decide). Together: endogenous price from first principles." ### "What conditions sustain participation incentives?" > "Participation is sustained when: `staking_apr > OPPORTUNITY_COST + risk_premium` > The risk premium compensates for illiquidity (locked tokens), price risk (AVAX might decline), and validator risk (downtime penalties). As more people stake, APR decreases due to dilution, creating a natural equilibrium." 
### "Are incentives robust with endogenous pricing?"

> "This is the key question. There are competing feedbacks:
> **Positive loop:** High staking → reduced liquid supply → price up → more staking
> **Negative loop:** High staking → lower APR (dilution) → unstaking → price pressure
> Which dominates? Our simulation suggests the system is stable within certain parameter bounds but could become unstable under demand shocks. The grant work would map these stability boundaries precisely."

---

## Part VIII: Your Positioning

Hash has set up the dynamic: **Eric is good at math, BCRG is good at modeling.** Lean into this. Don't try to out-math Eric. Instead, position your value as:

1. **Translation**: You turn theoretical frameworks into runnable simulations
2. **Stress-testing**: You explore what happens under scenarios, not just at equilibrium
3. **Sensitivity analysis**: You identify which parameters matter and which don't
4. **Visualization**: You produce dashboards that make results interpretable

When acknowledging gaps (which you should do proactively), frame them as **what the grant would address**:

> "We know the parameter calibration is provisional. That's exactly what the cadCAD simulation grant would address—empirical validation against on-chain data."

The $150K grant is a natural next step: data ingestion, backtesting, Monte Carlo infrastructure, stress testing, and strategic parameter recommendations. Eric gets the rigor he needs; you get the funding to deliver it.

---

## Sources and Further Reading

- [Avalanche Staking FAQ](https://support.avax.network/en/articles/6235660-staking-faq)
- [How to Stake | Avalanche Builder Hub](https://build.avax.network/docs/primary-network/validate/how-to-stake)
- [Avalanche (AVAX) Overview](https://www.avax.network/about/avalanche-avax)
- [AVAX Tokenomics | Tokenomist](https://www.tokenomist.ai/avalanche-2)
- [Validator FAQ](https://support.avax.network/en/articles/6187511-validator-faq)

---

*Good luck on Friday. You've got this.*