# $DDX$ - v1 Technical Specifications This document is the authoritative reference for every derivative analyzed in the DDX funding-rate options research. It defines the mathematical structure, parameters, calibration methodology, and design rationale for each instrument. --- ## Table of Contents 1. [Conventions and Setup](#1-conventions-and-setup) 2. [Core Products](#2-core-products) - 2.1 [Vanilla Funding Floor (Benchmark)](#21-vanilla-funding-floor-benchmark) - 2.2 [Distress-Activated Floor (DAF)](#22-distress-activated-floor-daf) - 2.3 [Aggregate Stop-Loss (ASL)](#23-aggregate-stop-loss-asl) 3. [Benchmark: Funding Rate Swap](#3-benchmark-funding-rate-swap) 4. [Appendix Product: Soft-Duration Cover (SDC)](#4-appendix-product-soft-duration-cover-sdc) 5. [Premium Decomposition](#5-premium-decomposition) 6. [Risk Metrics](#6-risk-metrics) - 6.4 [Hedge Ratio and Efficiency Frontier](#64-hedge-ratio-and-efficiency-frontier) - 6.5 [Capital Efficiency Metrics](#65-capital-efficiency-metrics) 7. [Frozen Baseline Parameters](#7-frozen-baseline-parameters) 8. [Calibration Methodology](#8-calibration-methodology) 9. [Design Rationale Summary](#9-design-rationale-summary) --- ## 1. Conventions and Setup ### 1.1 Index and Perspective | Property | Value | |----------|-------| | **Underlying** | Perpetual futures funding rate (BTC/USD) | | **Primary venue** | Bybit inverse BTCUSD perpetual | | **Additional venues** | BitMEX XBTUSD, Deribit BTC-PERPETUAL, Binance COIN-M BTCUSD_PERP | | **Buyer perspective** | Short-perp holder (e.g., Ethena-like delta-neutral stablecoin) | | **Sign convention** | $f_i > 0$ means the buyer (short-perp) **receives** funding; $f_i < 0$ means the buyer **pays** | | **Units** | $f_i$ is the per-interval funding fraction (e.g., $+0.0001 = 1$ bp per 8h) | | **Interval** | 8 hours (standard across all four venues) | **Worked example:** If $f_i = -0.0002$, the buyer pays 2 bp of notional in that 8-hour interval. 
The per-interval loss is $l_i = \max(0, -(-0.0002)) = 0.0002$. In APR terms: $0.0002 \times 1095 \times 100 = 21.9\%$.

### 1.2 Key Derived Quantities

**Per-interval loss:**

$$l_i = \max(0,\; -f_i)$$

This is the cashflow outflow from a single interval. It is zero when funding is positive (buyer receives) and equals $|f_i|$ when funding is negative (buyer pays).

**Annualized Percentage Rate (APR) of a per-interval rate:**

$$\text{APR}(\%) = f_i \times \frac{365 \times 24}{8} \times 100 = f_i \times 1095 \times 100$$

**Aggregate loss over a window of $n$ intervals (Lambda):**

$$\Lambda = \sum_{i=1}^{n} \max(0,\; -f_i)$$

Lambda is expressed as a **fraction of notional per window**, not as an annualized rate. To display as a percentage: $\Lambda \times 100$.

**Scale anchor:** On Bybit, a typical 30-day $\Lambda$ is on the order of 0.1–0.3% of notional (benign windows). Tail events (worst 5–10% of windows) reach 1–3% of notional. See Section 7.2 for exact quantiles.

### 1.3 Horizons

| Label | Intervals ($n$) | Calendar | Year fraction ($T$) |
|-------|-----------------|----------|---------------------|
| 7d | 21 | 1 week | $7/365 \approx 0.0192$ |
| 30d | 90 | 1 month | $30/365 \approx 0.0822$ |
| 90d | 270 | 1 quarter | $90/365 \approx 0.2466$ |

The year fraction $T$ is used in the capital-charge component of the premium decomposition: $\text{Capital charge} = k_c \cdot \text{CVaR}_{\text{right}} \cdot T$, where $k_c$ is the annualized cost of capital defined in §5.3 (distinct from the swap rate $k$).

### 1.4 Unit Convention (Critical)

| Quantity type | Examples | Display format | Conversion from raw |
|:---|:---|:---|:---|
| Per-interval rate | $f_i$, $l_i$, $d$, $b$, $k$ | APR (%) | $\times\, 1095 \times 100$ |
| Per-window cumulative sum | $\Lambda$, episode total loss, payoff | % of notional per window | $\times\, 100$ |

**Rule:** Never apply the annualization factor ($\times 1095$) to a cumulative sum. A $\Lambda$ of 0.005 over a 30-day window means 0.5% of notional was lost — not 547.5% APR.
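These conversions can be sketched in a few lines of Python (illustrative only; the helper names `to_apr_pct`, `window_pct`, and `reserve_draw` are ours, not part of the DDX codebase):

```python
import numpy as np

INTERVALS_PER_YEAR = 365 * 24 // 8  # 1095 eight-hour settlements per year

def to_apr_pct(rate):
    """Annualize a PER-INTERVAL rate (f_i, l_i, d, b, k) to APR in percent."""
    return rate * INTERVALS_PER_YEAR * 100

def window_pct(cum):
    """Display a per-window cumulative sum (Lambda, payoff) as % of notional."""
    return cum * 100

def reserve_draw(f):
    """Lambda: sum of per-interval losses l_i = max(0, -f_i) over one window."""
    return float(np.maximum(0.0, -np.asarray(f, dtype=float)).sum())
```

Note that only `to_apr_pct` carries the 1095 factor; applying it to a window sum is exactly the mistake the rule above warns against.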
### 1.5 Microstructure Notes Three of the four venues (BitMEX, Bybit, Binance) share two important structural features: 1. **Discrete base rate at $f_i = 0.0001$** (10.95% APR). When the market is balanced, funding settles at exactly this value. This creates a discrete probability mass (point mass) in the distribution — the median of funding on these three venues is pinned at 0.0001. 2. **Hard caps at $\pm 0.00375$** (BitMEX/Bybit) or $\pm 0.003$ (Binance). Per-interval funding cannot exceed these bounds. Deribit is structurally different: no base rate, no hard cap, and a left-skewed distribution. Parameters calibrated on base-rate venues do not transfer directly to Deribit. --- ## 2. Core Products These three option-style derivatives form the mainline analysis set for all frontier, premium, and event-study results. Each targets a different dimension of funding-rate risk. ### 2.1 Vanilla Funding Floor (Benchmark) **Purpose:** Full insurance against negative funding while preserving all upside when funding is positive. Used primarily as a **benchmark** — the upper bound on insurance cost that persistence-gated products aim to beat. **Payoff:** $$\Pi_{\text{floor}} = \min\!\Big(L,\;\sum_{i=1}^{n} \max(0,\; -f_i - d)\Big)$$ **Parameters:** | Symbol | Name | Units | Description | |--------|------|-------|-------------| | $d$ | Deductible | per-interval CF | Per-interval loss threshold below which no payout occurs. Only losses exceeding $d$ contribute to the payoff. | | $L$ | Cap | fraction of notional | Maximum aggregate payout over the window. $L = \infty$ (i.e., `None`) means uncapped. | **How it works:** For each interval, compute the excess loss beyond the deductible: $\max(0, -f_i - d)$. Sum these over the entire window. Cap the total at $L$. **Special case:** When $d = 0$ and $L = \infty$, the floor pays exactly $\Lambda$ (the total reserve draw). This is the "full insurance" upper bound. 
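As a concrete sketch of the payoff above (illustrative Python; `floor_payoff` is a name we introduce here, not the `src/ddx/payoffs/` API):

```python
import numpy as np

def floor_payoff(f, d=0.0, cap=None):
    """Vanilla Funding Floor: sum per-interval excess losses max(0, -f_i - d),
    then cap the aggregate at L (cap=None means uncapped)."""
    f = np.asarray(f, dtype=float)
    total = float(np.maximum(0.0, -f - d).sum())
    return total if cap is None else min(total, cap)
```

With `d=0.0` and `cap=None` this returns exactly $\Lambda$, the full-insurance benchmark.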
**Configurations:** | Variant | $d$ | $L$ | Role | |---------|-----|-----|------| | Benchmark | $0$ | $\infty$ | Full insurance upper bound | | Realistic | $0.0001$ (10.95% APR) | $\infty$ | Filters small losses (below 1 bp per 8h) | | Realistic alt | $0.0003$ (32.85% APR) | $\infty$ | Excludes moderate losses | **Example: how the deductible works on a 3-interval path** Consider a window with $f_1 = +0.0001$, $f_2 = -0.00005$, $f_3 = -0.0004$: | Interval | $f_i$ | $l_i = \max(0, -f_i)$ | Payoff ($d=0$) | Payoff ($d=0.0001$) | Payoff ($d=0.0003$) | |----------|-------|----------------------|----------------|---------------------|---------------------| | 1 | $+0.0001$ | $0$ | $0$ | $0$ | $0$ | | 2 | $-0.00005$ | $0.00005$ | $0.00005$ | $0$ | $0$ | | 3 | $-0.0004$ | $0.0004$ | $0.0004$ | $0.0003$ | $0.0001$ | | **Total** | | | **$0.00045$** | **$0.0003$** | **$0.0001$** | At $d = 0$, every loss contributes. At $d = 0.0001$, interval 2's small loss is fully absorbed by the deductible. At $d = 0.0003$, even most of interval 3's loss is absorbed. **Why these deductibles?** - $d = 0$ gives the maximum possible payoff — every negative interval contributes. This is unrealistically expensive but defines the upper bound. - $d = 0.0001$ follows the **severity-strike approach**: choose the deductible at a "natural scale" in the exchange microstructure. The value 0.0001 per 8h (1 basis point) is the fundamental rate unit on base-rate venues — it is the magnitude of the discrete base-rate spike that appears in the positive funding distribution. Using this as a deductible means "insure only losses whose magnitude exceeds 1 bp per interval." Calibration (NB03) validated this choice: the conditional median loss on Bybit is 0.000077 per 8h, so $d = 0.0001$ filters out ~58% of conditional loss intervals (the smaller, routine ones) while still covering the heavier tail. - $d = 0.0003$ is a higher severity strike, also at a natural scale already used as an analysis threshold in NB02. 
It filters ~84% of conditional loss intervals, paying only on genuinely distressed funding. **Empirical result (NB06 §4):** Floor d=0.0001 achieves the highest absolute tail-risk reduction among all products (residual CVaR$_{1\%}$ = 0.53% vs 3.74% unhedged) but at the highest premium cost (1.31% per 30d), giving it the lowest capital efficiency (Eff$_A$ = 2.45) of the option products. --- ### 2.2 Distress-Activated Floor (DAF) **Purpose:** Cheaper than the vanilla floor by only insuring **persistent** bad regimes. Expresses the thesis that brief negative funding is tolerable (reserves can handle it), but sustained distress requires hedging. This is the "DDX wedge" — the product that directly expresses the persistence thesis that differentiates DDX from a simple floor. **Intermediate computations:** Define the **bad state** indicator at each interval: $$B_i = \mathbf{1}[f_i < -b]$$ Compute the **consecutive run length** (how many consecutive bad intervals ending at $i$): $$R_i = (R_{i-1} + 1) \cdot B_i$$ with $R_0 = B_0$. The run length resets to zero whenever $B_i = 0$. Define the **activation flag:** $$A_i = \mathbf{1}[R_i \geq m]$$ **Payoff:** $$\Pi_{\text{DAF}} = \min\!\Big(L,\;\sum_{i=1}^{n} A_i \cdot \max(0,\; -f_i - d)\Big)$$ **Parameters:** | Symbol | Name | Units | Description | |--------|------|-------|-------------| | $b$ | Threshold | per-interval CF ($\geq 0$) | Severity level defining the "bad" state. Interval $i$ is bad when $f_i < -b$. | | $m$ | Streak length | intervals (integer $\geq 1$) | Number of consecutive bad intervals required before activation. | | $d$ | Deductible | per-interval CF ($\geq 0$) | Per-interval deductible applied to each activated interval. **Baseline: $d = b$** (strike continuity). | | $L$ | Cap | fraction of notional | Maximum aggregate payout. | **How it works, step by step:** 1. At each interval $i$, check if $f_i < -b$. If yes, increment the run counter; if no, reset it to zero. 2. 
Once the run counter reaches $m$, the floor "activates." From that point onward (while the run continues), each interval's payoff is $\max(0,\; -f_i - d)$.
3. If the run breaks (a non-bad interval occurs), the counter resets and the product deactivates until the next streak of $m$ bad intervals occurs.
4. Sum all activated payoffs over the window and cap at $L$.

**Activation timeline example** ($b = 0.0001$, $m = 3$, $d = b$):

```
Interval:    1       2       3       4       5       6       7       8       9
f_i:      +.0001  -.0002  -.0003  -.0005  -.0001  +.0002  -.0004  -.0003  -.0002
B_i:         0       1       1       1       0       0       1       1       1
R_i:         0       1       2       3       0       0       1       2       3
A_i:         0       0       0       1       0       0       0       0       1
Payoff:      0       0       0     .0004     0       0       0       0     .0001
```

- Intervals 2–4 form a streak. Activation begins at interval 4 ($R_4 = 3 \geq m$), which pays $\max(0,\; 0.0005 - 0.0001) = 0.0004$.
- Interval 5 sits exactly at $-b$. The bad-state condition is strict ($f_i < -b$), so $B_5 = 0$ and the run resets. Strike continuity ($d = b$) makes this boundary harmless: had the interval been activated, its payoff $\max(0,\; -f_5 - b) = 0.0001 - 0.0001 = 0$ anyway.
- Interval 6 is positive. Intervals 7–9 start a new streak; activation resumes at interval 9, which pays $0.0002 - 0.0001 = 0.0001$.

**The $d = b$ convention (strike continuity):** Setting $d = b$ means the threshold simultaneously defines what is "bad" AND the strike of the per-interval payoff. Once activated, the payoff per interval is:

$$\max(0,\; -f_i - b) = |f_i| - b \quad \text{(when } f_i < -b \text{)}$$

This eliminates a discontinuity: without $d = b$, an interval could be "not bad enough to contribute to the streak" (because $f_i \geq -b$) yet still produce a payoff (because $-f_i - d > 0$ with $d < b$). Tying $d = b$ makes the product continuous in severity at the activation boundary.

**Configurations:**

| Variant | $b$ | $m$ | $d$ | $L$ | Role |
|---------|-----|-----|-----|-----|------|
| Baseline | $0.0001$ | $3$ | $= b = 0.0001$ | $\infty$ | 24h sustained distress trigger |
| Sensitivity | $0.0001$ | $2$ | $= b = 0.0001$ | $\infty$ | 16h trigger, more sensitive |

**Why $b = 0.0001$?** $b = 0.0001$ is a clean, interpretable "strike scale" at the exchange's fundamental rate unit ($\approx 10.95\%$ APR). On base-rate venues the typical positive rate is $+0.0001$ (the base rate).
Defining "bad" as $f_i < -0.0001$ means funding has swung at least 2 basis points from the normal positive level — not just marginally negative, but clearly in distress territory. This makes $b$ simultaneously interpretable and microstructure-aware. **Why $m = 3$ as primary?** $m = 3$ was selected over $m = 2$ for three reasons: 1. **24h human-timescale boundary.** Three 8-hour intervals span a full day. This is a natural boundary for "sustained distress" — it means the market has been in a persistently negative-funding regime for an entire calendar day, not just a single settlement cycle. 2. **Noise reduction.** At $m = 2$, a single borderline observation or a missing data print can flip the activation. At $m = 3$, the persistence gate is more robust to microstructure noise and data quality issues — a genuine design advantage for a product that is supposed to distinguish "persistent regimes" from transient blips. 3. **Economically credible activation rate.** Empirical analysis (NB02) showed the expected bad-run length on Bybit is $\approx 2$ intervals at $b = 0$, so $m = 3$ filters out the majority of runs (which are shorter) and activates only on genuinely persistent distress. Calibration (NB03) gives a full-sample activation rate of 24.5% for 30d windows, though rolling-window analysis (NB06 §2) shows this ranges from 4.2% to 49.3% across eras due to nonstationarity. For comparison: - $m = 2$ activates in 45.1% of windows — nearly half, approaching vanilla-floor territory and undermining the "distress-only" narrative. - $m = 4$ and $m = 5$ activate in 16.4% and 11.9% respectively — viable sensitivity checks but risk missing economically significant events. --- ### 2.3 Aggregate Stop-Loss (ASL) **Purpose:** Directly mirrors reserve-fund logic. "My reserve can absorb the first $D$ of losses over this period; insure me against anything beyond that." This is the cleanest mapping to the Ethena reserve-draw story. 
Empirical analysis across all pricing functionals confirms ASL has the **highest capital efficiency** (Eff$_A$ = 2.66–2.70 vs 2.31–2.45 for Floor and DAF) — it delivers the most tail-risk reduction per premium dollar. This ranking is stable under CVaR-loaded, Wang distortion, Esscher, and target-Sharpe pricing (NB06 §9a). **Payoff:** $$\Pi_{\text{ASL}} = \min\!\Big(L,\;\max(0,\;\Lambda - D)\Big)$$ where $\Lambda = \sum_{i=1}^{n} \max(0,\; -f_i)$ is the total reserve draw over the window. **Parameters:** | Symbol | Name | Units | Description | |--------|------|-------|-------------| | $D$ | Deductible | fraction of notional per window | Aggregate loss threshold. Losses up to $D$ are retained; the ASL pays only the excess. **Calibrated per horizon.** | | $L$ | Cap | fraction of notional | Maximum payout. | **How it works:** Sum all per-interval losses over the window to get $\Lambda$. Subtract the deductible $D$. If $\Lambda \leq D$, the payoff is zero (the reserve absorbed all losses). If $\Lambda > D$, the payoff is the excess $\Lambda - D$, capped at $L$. **Key distinction from the Floor:** The Floor pays based on per-interval losses (each interval independently), while the ASL pays based on the aggregate sum. A 30-day window with many small negative intervals might have $\Lambda > D$ (triggering ASL) even though no single interval's loss is large enough to exceed the Floor's deductible. **Why $D$ is per-horizon (not a single number):** $\Lambda$ is a sum over $n$ intervals. A 90-day window accumulates approximately $3\times$ the loss of a 30-day window (for a stationary process). Setting $D$ to a single value regardless of horizon would mean the ASL trivially triggers on long horizons and rarely triggers on short ones. Instead, $D$ is set as a **quantile of the rolling $\Lambda$ distribution for that specific horizon**. 
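The ASL payoff and the per-horizon quantile calibration of $D$ can be sketched as follows (illustrative Python, not the NB03 code; `np.quantile` interpolation details may differ from the actual calibration):

```python
import numpy as np

def reserve_draw(f):
    """Lambda: aggregate loss over one window, as a fraction of notional."""
    return float(np.maximum(0.0, -np.asarray(f, dtype=float)).sum())

def asl_payoff(f, D, cap=None):
    """Aggregate Stop-Loss: pays the excess of Lambda over the deductible D."""
    excess = max(0.0, reserve_draw(f) - D)
    return excess if cap is None else min(excess, cap)

def calibrate_D(series, n, q=0.90):
    """Set D as a quantile of the rolling n-interval Lambda distribution.
    One value per horizon, because Lambda scales with window length."""
    losses = np.maximum(0.0, -np.asarray(series, dtype=float))
    rolling_lambda = np.convolve(losses, np.ones(n), mode="valid")  # Lambda of every window
    return float(np.quantile(rolling_lambda, q))
```

The per-horizon point is visible in `calibrate_D`: the same series with `n=270` yields a much larger $D$ than with `n=21`, which is why §7.2 freezes one deductible per horizon.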
**Configurations:** | Variant | $D$ calibration | $L$ | Role | |---------|----------------|-----|------| | Baseline | $q_{90}(\Lambda_{\text{horizon}})$ | $\infty$ | Reinsurance tail layer, activates ~10% of windows | | Sensitivity | $q_{95}(\Lambda_{\text{horizon}})$ | $\infty$ | Catastrophe layer, activates ~5% of windows | **Frozen numeric $D$ values (Bybit):** | Horizon | $D$ baseline ($q_{90}$) | $D$ sensitivity ($q_{95}$) | |---------|-------------------------|---------------------------| | 7d | 0.001701 (0.170% notional) | 0.003153 (0.315% notional) | | 30d | 0.008114 (0.811% notional) | 0.012967 (1.297% notional) | | 90d | 0.023236 (2.324% notional) | 0.029158 (2.916% notional) | **Why $q_{90}$, not $q_{75}$?** The baseline was moved from $q_{75}$ (which activated in ~25% of windows) to $q_{90}$ (activating ~10%). The rationale: if the ASL is positioned as "reserve insurance," the reserve should cover routine losses; insurance covers the **tail**. A 25% activation rate is too frequent for an insurance narrative — it would be more like a cost-sharing arrangement. At $q_{90}$, the product only pays in genuinely adverse windows. --- ## 3. Benchmark: Funding Rate Swap **Purpose:** Linear hedge — lock funding at a fixed rate $k$ for the contract tenor. Eliminates all variability (both upside and downside). This is the benchmark against which all option products are compared: options should offer better tail protection per unit cost, while swaps offer variance reduction. **Net cashflow (buyer):** $$\text{CF}_{\text{swap}} = n \cdot k$$ where $n$ is the number of intervals in the horizon and $k$ is the fixed per-interval rate. **Fixed rate estimation (for backtesting):** The fixed rate $k$ cannot use future data. 
It is estimated from a **trailing lookback window** ending just before the evaluation window starts: $$k = \hat{f}(\text{lookback})$$ | Method | Formula | Role | |--------|---------|------| | **EWMA mean** (primary) | Exponentially weighted moving average with halflife = lookback/2 | More responsive to regime changes; market-adaptive | | **Trailing mean** (secondary) | $\frac{1}{L}\sum_{j=1}^{L} f_{t-j}$ | Simple benchmark for robustness | | Trailing median (appendix only) | Median of the lookback window | Demoted because it sticks at the base rate on 3 of 4 venues | **Parameters:** | Symbol | Name | Units | Description | |--------|------|-------|-------------| | $k$ | Fixed swap rate | per-interval CF | Estimated from trailing data; not a free parameter | | $L_{\text{lookback}}$ | Lookback window | intervals | Length of the estimation window. Default: 90 intervals (30 days). | | $h$ | EWMA halflife | intervals | Decay parameter for EWMA. Default: 45 intervals (15 days). | **Why EWMA is primary:** The trailing mean gives equal weight to all observations in the lookback window, including those from 30 days ago. The EWMA weights recent observations more heavily, making it more responsive to regime shifts. On base-rate venues, where the median sticks at 0.0001 regardless of conditions, the EWMA mean provides a more realistic "what would the market clear at" proxy. ### Margin / Collateral Requirement Swaps have zero explicit premium but require margin collateral. The margin proxy is defined as: $$M_\alpha = \text{CVaR}_\alpha\!\big(\max(0,\; -X^{\text{swap}})\big)$$ where $X^{\text{swap}} = n \cdot k$ is the swap's net cashflow over the window. On Bybit, per-window trailing EWMA swap rates range from −47% to +160% APR, reflecting extreme regime variation. At $h=1$, the swap margin proxy is ~3.4% of notional — comparable to the unhedged reserve requirement (~3.7%). Swaps are not "free"; they shift capital from reserves to margin. 
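The two fixed-rate estimators above can be sketched with pandas (illustrative; pandas' `ewm(halflife=...)` decays per observation, which matches the interval-denominated halflife of §7.3, and the function names are ours):

```python
import pandas as pd

def swap_rate_ewma(funding: pd.Series, lookback: int = 90, halflife: int = 45) -> float:
    """Primary estimator: trailing EWMA of the fixed per-interval swap rate k.
    Uses only the `lookback` most recent settled intervals (no future data)."""
    window = funding.iloc[-lookback:]
    return float(window.ewm(halflife=halflife).mean().iloc[-1])

def swap_rate_trailing_mean(funding: pd.Series, lookback: int = 90) -> float:
    """Secondary benchmark: equal-weight mean over the same lookback."""
    return float(funding.iloc[-lookback:].mean())
```

When a stressed regime begins, the EWMA estimate moves below the trailing mean because recent negative prints carry more weight, which is exactly the responsiveness argument made above.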
**Correlation with stress:** The trailing swap rate has a 0.63 correlation with the underlying funding outcomes (NB06 §5d). This means the swap locks in bad rates during stress periods, producing worse net-CF tail risk than unhedged at full hedge ($h=1$). Partial hedging ($h \approx 0.3$) diversifies more effectively. --- ## 4. Appendix Product: Soft-Duration Cover (SDC) **Status: Appendix only.** Not included in mainline frontier/premium/event analyses. Included only in dedicated robustness or mechanism-design notebooks. **Reason for demotion:** Empirical analysis confirms SDC has the **lowest capital efficiency** of all option products. The soft ramp dilutes hedge efficiency compared to DAF's hard activation. SDC is primarily useful as a mechanism-design tool (smoothing the cliff effect to reduce manipulation incentives around the activation boundary), not as an economics-frontier winner. **Weight function:** $$w(R_i) = \begin{cases} 0 & \text{if } R_i < m \\ \frac{R_i - m}{s} & \text{if } m \leq R_i < m + s \\ 1 & \text{if } R_i \geq m + s \end{cases}$$ where $R_i$ is the consecutive run length (same as in DAF). **Payoff:** $$\Pi_{\text{SDC}} = \min\!\Big(L,\;\sum_{i=1}^{n} w(R_i) \cdot \max(0,\; -f_i - d)\Big)$$ **Parameters:** Same as DAF plus: | Symbol | Name | Units | Description | |--------|------|-------|-------------| | $s$ | Ramp width | intervals (integer $\geq 1$) | Number of intervals over which activation ramps from 0 to 1. Intervals $[m, m+s)$ receive partial weight. | --- ## 5. Premium Decomposition For any product with payoff $\Pi$ computed across many rolling windows (or Monte Carlo paths), the premium has three additive components. ### 5.1 Pure Premium (actuarial fair price) $$PP = \mathbb{E}[\Pi]$$ The expected payoff. A zero-profit benchmark — the minimum price at which the seller breaks even on average. 
### 5.2 CVaR Risk Load (seller's tail-risk compensation) $$RL = \lambda \cdot \big(\text{CVaR}_{\alpha}^{\text{right}}(\Pi) - \mathbb{E}[\Pi]\big)$$ where $\text{CVaR}_{\alpha}^{\text{right}}(\Pi)$ is the mean of the top-$\alpha$ fraction of payoffs (the seller's worst-case claims exposure). | Parameter | Symbol | Default | Rationale | |-----------|--------|---------|-----------| | Risk-load multiplier | $\lambda$ | 0.35 | Industry-standard range for insurance loading | | Tail level | $\alpha$ | 0.01 | 99th percentile tail | **Why right-tail CVaR?** Insurance payoffs are non-negative. The seller's risk is large claims (right tail), not small claims (left tail). Using left-tail CVaR would produce a negative risk load, which is economically nonsensical. ### 5.3 Capital Charge (opportunity cost of locked collateral) $$CC = k_c \cdot C \cdot T$$ where: - $C = \text{CVaR}_{\alpha}^{\text{right}}(\Pi)$ — required collateral (worst-case claim exposure) - $k_c = 0.12$ — annualized cost of capital (DeFi opportunity cost) - $T$ = horizon in years ($= n \times 8 / (365 \times 24)$) ### 5.4 Total Loaded Premium $$P = PP + RL + CC$$ **Empirical note:** The risk load dominates the premium decomposition: it is 7–22× the pure premium across products (NB06 §3), because the payoff distribution is extremely heavy-tailed. This makes the CVaR estimator the pricing bottleneck. ### 5.5 Alternative: Target-Sharpe Premium $$P_{S^*} = \mathbb{E}[\Pi] + S^* \cdot \text{Std}[\Pi]$$ This sets the premium so that the seller's Sharpe ratio equals the target $S^* = 0.75$. ### 5.6 Wang Distortion Premium $$P_{\text{Wang}} = \int_0^\infty \Phi\!\big(\Phi^{-1}(S_\Pi(x)) + \theta\big)\,dx$$ where $S_\Pi(x) = P(\Pi > x)$ is the survival function and $\Phi$ is the standard normal CDF. Default $\theta = 0.5$. Higher $\theta$ gives more weight to the right tail. ### 5.7 Esscher Premium $$P_{\text{Esscher}} = \frac{E[\Pi \cdot e^{\theta \Pi}]}{E[e^{\theta \Pi}]}$$ Default $\theta = 1.0$. 
The exponential tilting amplifies the contribution of large payoffs. --- ## 6. Risk Metrics Every strategy (unhedged, swap, and each option variant) is evaluated using both a **net-CF lens** (total P&L) and a **loss-only lens** (reserve draw). ### 6.1 Net-CF Lens For a window of $n$ intervals with funding $\{f_1, \ldots, f_n\}$: | Strategy | Net CF formula | |----------|---------------| | Unhedged | $\sum_i f_i$ | | Swap | $n \cdot k$ | | Option | $\sum_i f_i + \Pi - P$ | Metrics computed on the distribution of net CF across all rolling windows: | Metric | Definition | |--------|-----------| | Mean | $\mathbb{E}[\text{Net CF}]$ | | VaR$_{1\%}$ | 1st percentile of Net CF (worst 1% outcome) | | CVaR$_{1\%}$ | $\mathbb{E}[\text{Net CF} \mid \text{Net CF} \leq \text{VaR}_{1\%}]$ | | $P(\text{loss})$ | $P(\text{Net CF} < 0)$ | ### 6.2 Loss-Only Lens (Residual Reserve Draw) Premium is a **deterministic expense** and is excluded from the stochastic risk measure. The residual loss measures only the random shortfall after the hedge payoff: | Strategy | Residual reserve draw formula | |----------|-------------------------------| | Unhedged | $\Lambda = \sum_i \max(0, -f_i)$ | | Option | $\tilde{L}^{\text{opt}}(h) = \max(0,\; \Lambda - h \cdot \Pi)$ | | Swap | $\tilde{L}^{\text{swap}}(h) = \sum_i \max(0,\; -(1-h)f_i - hk)$ | The reserve requirement is $R_\alpha(h) = \text{CVaR}_\alpha(\tilde{L}(h))$. Same metrics (Mean, VaR, CVaR) computed on the residual loss distribution. **Note:** Options act as end-of-window payoffs that offset $\Lambda$; swaps act per-interval by replacing $f_i$ with $k$. These are structurally different loss reductions. ### 6.3 Capital Efficiency (Sharpness) $$\text{Eff}_A(h) = \frac{R_\alpha(0) - R_\alpha(h)}{h \cdot \pi}$$ Measures how much tail reserve requirement is reduced per unit of premium spent. Higher is better. Empirical analysis confirms ASL has the highest Eff$_A$ (2.66–2.70), stable across all pricing functionals (NB06 §9a). 
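The loaded premium of §5 and the capital-efficiency ratio of §6.3 can be sketched together (illustrative; the right-tail CVaR here is a simple top-$\alpha$ sample mean, and the defaults follow §5.2–§5.3):

```python
import numpy as np

def cvar_right(x, alpha=0.01):
    """Mean of the top-alpha fraction of the sample (seller's worst claims)."""
    x = np.sort(np.asarray(x, dtype=float))
    k = max(1, int(np.ceil(alpha * len(x))))
    return float(x[-k:].mean())

def loaded_premium(payoffs, lam=0.35, alpha=0.01, k_c=0.12, T=30 / 365):
    """P = PP + RL + CC (Section 5.4): pure premium, CVaR risk load,
    and capital charge on the CVaR collateral over year fraction T."""
    pp = float(np.mean(payoffs))
    c = cvar_right(payoffs, alpha)
    return pp + lam * (c - pp) + k_c * c * T

def efficiency(R0, Rh, h, premium):
    """Eff_A(h): tail reserve reduction per unit of premium spent (Section 6.3)."""
    return (R0 - Rh) / (h * premium)
```

On a heavy-tailed payoff sample, `cvar_right` sits far above the mean, so the risk load dominates `loaded_premium`, consistent with the 7–22x empirical note in §5.4.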
### 6.4 Hedge Ratio and Efficiency Frontier

The hedge ratio $h \in [0, 1]$ scales the fraction of exposure hedged:

- Scaled payoff = $h \cdot \Pi$
- Scaled premium = $h \cdot \pi$

Sweeping $h$ from 0 (unhedged) to 1 (fully hedged) traces a **frontier** in cost-vs-risk space. A frontier point $\big(\text{Cost}(h),\, R_\alpha(h)\big)$ is **Pareto-efficient** if no other hedge ratio $h'$ achieves both lower cost and lower risk, with strict improvement in at least one.

### 6.5 Capital Efficiency Metrics

Throughout this subsection, $k_c$ is the annualized cost of capital (§5.3) and $T$ is the horizon's year fraction (§1.3).

**Total economic cost** (flow cost per period):

$$\text{Cost}_B(h) = h \cdot \pi + k_c \cdot T \cdot \big(R_\alpha(h) + M_\alpha(h)\big)$$

where $M_\alpha(h)$ is the swap margin proxy (zero for options).

**Net benefit** (positive = insurance beats reserves):

$$\text{NetBenefit}(h) = k_c \cdot T \cdot \big(R_\alpha(0) - R_\alpha(h) - M_\alpha(h)\big) - h \cdot \pi$$

**Break-even cost of capital:**

$$k_c^* = \frac{h \cdot \pi}{T \cdot \big(R_\alpha(0) - R_\alpha(h) - M_\alpha(h)\big)}$$

Insurance beats pure reserving whenever the buyer's actual cost of capital exceeds $k_c^*$.

---

## 7. Frozen Baseline Parameters

These parameters are frozen for all downstream analyses (NB04–NB06+). Calibrated on the Bybit BTCUSD inverse perpetual series (7,971 intervals, Nov 2018 – Feb 2026).

### 7.1 Per-Interval Parameters

| Product | Parameter | Value (per 8h) | Value (APR %) | Rationale | Portable? 
| |---------|-----------|----------------|---------------|-----------|-----------| | Floor benchmark | $d$ | 0 | 0% | Full insurance upper bound | All venues | | Floor realistic | $d$ | 0.0001 | 10.95% | Severity strike at 1 bp; filters ~58% of conditional losses | Base-rate only | | Floor realistic alt | $d$ | 0.0003 | 32.85% | Higher severity strike; filters ~84% of conditional losses | Base-rate only | | DAF baseline | $b = d$ | 0.0001 | 10.95% | 1 bp severity threshold; strike continuity | Base-rate only | | DAF baseline | $m$ | 3 | — (24h) | Sustained distress boundary | All venues | | DAF sensitivity | $b = d$ | 0.0001 | 10.95% | Same threshold | Base-rate only | | DAF sensitivity | $m$ | 2 | — (16h) | More sensitive trigger | All venues | **Portability key:** "Base-rate only" = calibrated for Bybit/BitMEX/Binance (which share the 0.0001 base-rate spike). Deribit requires separate calibration for $b$ and $d$ (see §8.2). Parameters like $m$ (streak length) and $D$ (quantile-based) are structurally portable because they adapt to the venue's own distribution. ### 7.2 Per-Horizon Parameters (ASL Deductible $D$) | Horizon | $D$ baseline ($q_{90}$) | % notional | $D$ sensitivity ($q_{95}$) | % notional | |---------|------------------------|------------|---------------------------|------------| | 7d | 0.001701 | 0.170% | 0.003153 | 0.315% | | 30d | 0.008114 | 0.811% | 0.012967 | 1.297% | | 90d | 0.023236 | 2.324% | 0.029158 | 2.916% | ### 7.3 Swap Parameters | Parameter | Primary | Secondary | |-----------|---------|-----------| | Method | EWMA mean | Trailing mean | | Lookback | 90 intervals (30d) | 90 intervals (30d) | | Halflife | 45 intervals (15d) | — | --- ## 8. Calibration Methodology All parameters in Section 7 were derived from the Bybit funding-rate series using the following methodology. ### 8.1 Floor Deductible $d$ **Method:** Compute quantiles of the conditional loss distribution $l_i \mid l_i > 0$. 
**Why conditional?** On Bybit, 81.6% of intervals have non-negative funding. The unconditional p50 of $l_i$ is zero — true but uninformative. By conditioning on intervals that actually produce a loss, we get a meaningful distribution of "when losses happen, how big are they?" **Result (Bybit):** Conditional median loss = 0.000077 per 8h (8.42% APR). The value $d = 0.0001$ was selected following a severity-strike approach: 0.0001 per 8h (1 bp) is the exchange's fundamental rate unit — a natural, interpretable scale. Since this sits above the conditional median, it filters out ~58% of all loss intervals (the smaller, routine ones), confirming that the choice is neither too aggressive (filtering almost everything) nor too lenient (filtering nothing). ### 8.2 DAF Threshold $b$ and Streak $m$ **Threshold $b$:** Set to 0.0001 (1 bp per 8h). The condition $f_i < -0.0001$ means funding is meaningfully negative — not just slightly below zero but a full basis point into negative territory. On base-rate venues where the typical positive rate is +0.0001, this represents a swing of at least 2 bp from normal conditions. **Streak $m$:** Chosen by examining the empirical distribution of streak lengths and DAF activation frequencies: | $m$ | Expected meaning | 30d activation rate | Mean active intervals per 30d window | |-----|-----------------|---------------------|--------------------------------------| | 2 | 16h of sustained distress | 45.1% | 2.85 | | 3 | 24h of sustained distress | 24.5% | 1.65 | | 4 | 32h | 16.4% | 1.09 | | 5 | 40h | 11.9% | 0.76 | $m = 3$ was chosen as baseline because: (a) 24h is a natural human-timescale boundary, (b) it reduces sensitivity to microstructure noise — at $m = 2$, a single borderline value or missing print can flip the activation, while $m = 3$ requires a more convincing streak, and (c) it has a 24.5% activation rate that is meaningful but not excessive (compared to $m = 2$ at 45.1%, which approaches vanilla-floor frequency). 
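The activation-rate column above can be reproduced in outline (illustrative sketch; run lengths here are computed on the full series, so a streak spanning a window boundary counts toward that window, which may differ slightly from a strict window-by-window recomputation in NB03):

```python
import numpy as np

def activation_flags(f, b=0.0001, m=3):
    """A_i = 1 once at least m consecutive intervals satisfy f_i < -b (strict),
    implementing R_i = (R_{i-1} + 1) * B_i with A_i = 1[R_i >= m]."""
    f = np.asarray(f, dtype=float)
    A = np.zeros(len(f), dtype=bool)
    run = 0
    for i, fi in enumerate(f):
        run = run + 1 if fi < -b else 0
        A[i] = run >= m
    return A

def activation_rate(f, n=90, b=0.0001, m=3):
    """Fraction of rolling n-interval windows with at least one activated interval."""
    A = activation_flags(f, b=b, m=m)
    windows = np.lib.stride_tricks.sliding_window_view(A, n)
    return float(windows.any(axis=1).mean())
```

Sweeping `m` over 2–5 with `n=90` on the Bybit series is how a table of this shape is produced; the frozen baseline simply fixes `m=3`.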
**Deductible $d = b$:** This "strike continuity" convention eliminates a severity discontinuity at the threshold boundary and makes $b$ simultaneously the state classifier and the payoff strike. ### 8.3 ASL Deductible $D$ **Method:** For each horizon, compute rolling-window $\Lambda$ values across all valid windows. Set $D = q_{p}(\Lambda)$. **Why $q_{90}$?** At $q_{90}$, the ASL activates in approximately 10% of rolling windows. This positions it as a **reinsurance tail layer** — the buyer's reserve absorbs routine losses (the first 90% of windows), and the ASL covers only the adverse tail. The baseline was moved from $q_{75}$ (25% activation, too frequent) to $q_{90}$. ### 8.4 Swap Rate Method **Method:** EWMA mean with halflife = lookback/2. **Why EWMA over trailing mean?** The trailing mean gives equal weight to all observations in the lookback window. The EWMA gives more weight to recent observations, making it more responsive to regime changes. On base-rate venues where the median is pinned at 0.0001, the mean and EWMA can diverge when the market enters a stressed regime — the EWMA adapts faster. **Why not median?** On base-rate venues, more than 50% of intervals are at exactly 0.0001, so the trailing median is often pinned at 0.0001 regardless of market conditions. This makes it unresponsive as a swap-rate proxy. --- ## 9. 
Design Rationale Summary | Decision | Rationale | Source | |----------|-----------|--------| | $d = 0$ as Floor benchmark only | Full insurance is too expensive for a realistic product; use as upper bound | Design decision | | $d = 0.0001$ as Floor realistic | Severity strike at exchange's fundamental rate unit (1 bp); validated by calibration (filters ~58% of conditional losses, median = 0.000077) | NB03 calibration | | $d = b$ for DAF/SDC | Strike continuity; eliminates severity discontinuity at threshold | Design decision | | $m = 3$ primary, $m = 2$ sensitivity | 24h human-timescale boundary; reduces microstructure noise vs $m = 2$; 24.5% activation is meaningful but not excessive | NB02–NB03 | | $D = q_{90}(\Lambda)$ baseline | Reinsurance tail layer (~10% activation), not routine cost-sharing | NB03 calibration | | $D$ per horizon | $\Lambda$ scales with window length; single $D$ is not comparable across horizons | Design decision | | EWMA as primary swap estimator | More responsive to regime changes than trailing mean; median is pinned at base rate | Design decision | | SDC moved to appendix | Lowest capital efficiency; soft ramp dilutes hedge efficiency | NB04 analysis | | SABR/LMM not adopted | No vol surface to calibrate; payoffs are path-dependent; discrete spike + caps don't fit | Design decision | | Loss-faithful payoffs (not constant-benefit) | Pays actual shortfall, not a flat rate; more intuitive as insurance | Design decision | | Right-tail CVaR for risk load | Seller's risk is large claims (right tail), not small claims (left tail) | Design decision | | ASL has highest Eff$_A$ across all pricing functionals | Stable ranking under CVaR, Wang, Esscher, and Sharpe pricing | NB06 §9a | | Hedge-ratio frontiers over $h \in [0,1]$ | Prevents cherry-picking a single notional that favors one product | NB06 §5 | | Walk-forward protocol | Prevents lookahead bias in hedge evaluation | NB06 §7 | --- *This document is the authoritative specification for all DDX 
derivatives. For implementation details, see the source code in `src/ddx/payoffs/`. For calibration details, see `src/ddx/calibration.py` and `notebooks/03_calibration.ipynb`.*