# MEV Inerter rewards pool

> [!WARNING]
> Draft Proposal

> [!NOTE]
> See <https://notes.ethereum.org/@fradamt/ryJ7fTyeF> for an in-depth review of MEV smoothing approaches.

### Existing Smoothing Mechanisms

**Off-Chain Pools**

Voluntary smoothing pools implement reward sharing through smart contracts. Dappnode Smooth requires minimum participation and charges 0.5% fees. Lido automatically smooths all MEV for stETH validators without opt-out options. Rocket Pool creates a two-tier system where node operators keep 15% while 85% flows to token holders.

**Committee-Driven Approaches**

Committee-driven smoothing leverages Ethereum's attestation committees to enforce reward sharing. Committee members refuse to attest to blocks unless proposers accept the highest available bid, then distribute the proceeds equally among committee members. This creates game-theoretic pressure for honest behavior but requires coordination among committee members.

### The Variance Problem

Solo validators face extreme reward variance. Analysis of historical data reveals the following probability distribution for MEV rewards:

- 99.7% probability of no block proposal (no MEV)
- 0.2% probability of normal blocks (0.01-0.5 ETH)
- 0.09% probability of good blocks (0.5-5 ETH)
- 0.01% probability of lottery blocks (>5 ETH)

The largest historical MEV block contained 584 ETH, representing over 100 years of normal staking returns for a solo validator. This distribution creates a fundamental choice: accept guaranteed but modest smoothed rewards, or maintain lottery tickets with minimal expected value but life-changing potential.

## The MEV Inerter

### Core Principles

The MEV Inerter operates on three principles:

- **State Dependence**: unlike memoryless smoothing, the system maintains complete validator performance history through the Validator Performance Index (VPI).
- **Momentum Rewards**: discrete rollover rewards.
- **Merit Amplification**: amplified rewards for continued use of the protocol.

### Mathematical Framework

**Validator Performance Index**

Each validator maintains a VPI that ranges from 0.5 (maximum penalty) to 3.0 (maximum momentum), with 1.0 representing the neutral state:

$$
VPI_{i,e} \in [0.5, 3.0]
$$

where $i$ indexes validators and $e$ indexes epochs.

**Force Generation**

When validator $i$ proposes a block with XGA bid $B_b$, the system calculates an MEV force:

$$
F_{MEV}(B_b) = B_b - \bar{B}_w
$$

where $\bar{B}_w$ represents the $w$-epoch moving average of all XGA bids.[^1]

**VPI Updates**

The VPI changes according to:

$$
\Delta VPI_{i,e} = k \times \frac{F_{MEV}(B_b)}{m_i}
$$

where $k$ controls system sensitivity and $m_i$ represents the validator's "mass" (typically effective balance).

> [!NOTE]
> This is naive in the sense that it is for the purpose of detailing the mechanism, not the implementation details (i.e. we won't be using effective balance).

**Temporal Decay**

VPI decays exponentially toward baseline:

$$
VPI_{i,e+1} = 1 + (VPI_{i,e} - 1) \times e^{-\lambda}
$$

The decay parameter $\lambda$ determines the momentum half-life. Setting $\lambda = 0.003$ creates a roughly 24-hour half-life, while $\lambda = 0.01$ results in roughly 7-hour decay.

**Reward Amplification**

All consensus rewards scale with VPI:

$$
R_{total} = R_{base} \times VPI_i
$$

This includes attestation rewards, sync committee payments, and block proposal base rewards. Inactivity penalties scale inversely with VPI, providing additional protection for high performers.
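These update rules can be checked numerically. The following is a minimal sketch, assuming illustrative parameter values ($k$, $\lambda$, and the 32 ETH mass here are placeholders, not protocol constants) and the usual 6.4-minute Ethereum epoch; it traces a single validator through one above-average proposal followed by a day of decay:

```python
from math import exp, log

# Illustrative parameters only; none of these are protocol constants
K = 10.0          # sensitivity k
LAMBDA = 0.003    # decay rate λ, targeting a ~24-hour half-life
VPI_MIN, VPI_MAX = 0.5, 3.0

def vpi_after_proposal(vpi, bid, bid_avg, mass, k=K):
    """Apply the MEV force F_MEV = B_b - B̄_w, scaled by validator mass."""
    delta = k * (bid - bid_avg) / mass
    return max(VPI_MIN, min(VPI_MAX, vpi + delta))

def vpi_after_decay(vpi, lam=LAMBDA):
    """One epoch of exponential decay toward the neutral baseline of 1.0."""
    return 1 + (vpi - 1) * exp(-lam)

# Half-life sanity check: ln(2)/λ epochs × 6.4 minutes per epoch
print(f"half-life ≈ {log(2) / LAMBDA * 6.4 / 60:.1f} h")  # ≈ 24.6 h

# One above-average proposal (2 ETH bid vs 0.5 ETH average), then a day of decay
vpi = vpi_after_proposal(vpi=1.0, bid=2.0, bid_avg=0.5, mass=32.0)
print(f"VPI after proposal: {vpi:.3f}")   # ≈ 1.469
for _ in range(225):                      # ~24 hours of 6.4-minute epochs
    vpi = vpi_after_decay(vpi)
print(f"VPI a day later: {vpi:.3f}")      # ≈ 1.239, about halfway back to 1.0
```

Amplified rewards then follow directly as $R_{base} \times VPI_i$ at each epoch.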
[^1]: Alternative force functions include logarithmic scaling to reduce outlier impact: `F_MEV(B_b) = log(1 + B_b) - log(1 + B̄_w)`

### Python Implementation Algorithm

> [!CAUTION]
> The code below was generated with ChatGPT (free plan); I have not even read it.

```python
from math import exp

# Illustrative parameters; the real values are an open design question
SENSITIVITY_K = 10.0     # k: system sensitivity
LAMBDA = 0.003           # λ: decay rate (~24-hour half-life)
AVERAGING_WINDOW = 100   # w: epochs in the bid moving average
VPI_MIN, VPI_MAX = 0.5, 3.0


def update_validator_state(validator, epoch, builder_bid):
    """Update a validator's VPI and calculate its amplified rewards."""
    if validator.is_proposer(epoch):
        # Force generation: F_MEV(B_b) = B_b - B̄_w
        mev_average = calculate_moving_average(
            network.mev_history,
            window=AVERAGING_WINDOW
        )
        force = builder_bid - mev_average

        # Mass scaling: ΔVPI = k × F_MEV / m_i
        mass = validator.effective_balance
        delta_vpi = SENSITIVITY_K * (force / mass)

        # Update VPI with bounds checking
        validator.vpi = max(VPI_MIN, min(VPI_MAX, validator.vpi + delta_vpi))
    else:
        # Temporal decay toward the neutral baseline for non-proposers
        validator.vpi = 1 + (validator.vpi - 1) * exp(-LAMBDA)

    # Reward amplification: R_total = R_base × VPI_i
    base_rewards = calculate_base_rewards(validator, epoch)
    validator.rewards = base_rewards * validator.vpi
    return validator
```
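To exercise `update_validator_state` outside a real client, the collaborators it assumes (`network`, `calculate_moving_average`, `calculate_base_rewards`) can be stubbed out. Everything below is a hypothetical test harness with arbitrary values, not part of the proposal:

```python
from types import SimpleNamespace

# Hypothetical stand-ins for the objects update_validator_state expects
network = SimpleNamespace(mev_history=[0.1, 0.3, 0.2, 0.5])  # XGA bids in ETH

def calculate_moving_average(history, window):
    recent = history[-window:]
    return sum(recent) / len(recent)

def calculate_base_rewards(validator, epoch):
    return 0.000014  # arbitrary per-epoch base reward in ETH

validator = SimpleNamespace(
    vpi=1.0,
    effective_balance=32.0,
    rewards=0.0,
    is_proposer=lambda epoch: epoch == 42,
)

update_validator_state(validator, epoch=42, builder_bid=2.0)   # proposal epoch
print(f"{validator.vpi:.3f}")  # ≈ 1.539: the MEV force pushed VPI above baseline
update_validator_state(validator, epoch=43, builder_bid=None)  # non-proposal epoch
print(f"{validator.vpi:.3f}")  # ≈ 1.537: momentum decaying back toward 1.0
```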