# Density-Mortality for Grant Systems
### Sources:
- https://gov.gitcoin.co/t/ecological-analogies-to-an-evolutionary-framework-in-gg24/23297/1
- https://github.com/metagov/daostar/blob/main/DAOIPs/daoip-5.md
- https://github.com/metagov/daostar/blob/main/DAOIPs/x-daoip-5.md
#### AI Assessment of Feasibility with DAOIP-5 Metadata
Yes: it is very feasible to compute "density-dependent mortality" for grant systems with DAOIP-5 data. Below is a concise, implementation-ready mapping (definitions → formulas → DAOIP-5 fields → queries), plus a minimal scoring function that slots in alongside TVF.
# 0) Translate ecology → grants (stage model)
Think of a grantee’s journey as developmental stages. “Mortality” = drop-off between stages.
* S0 Seed: Application submitted
* S1 Seedling: Passed triage/shortlist/review
* S2 Juvenile: Awarded/approved
* S3 Adult: First disbursement made
* S4 Reproductive: Delivered required milestone(s)/report(s) on time
For any domain i in round r:
* N\_s(i,r) = population at stage s
* D\_s(i,r) = deaths at stage s (fail/reject/withdraw/revoke/no-show etc.)
* Stage mortality: m\_s(i,r) = D\_s(i,r) / N\_{s-1}(i,r)
* Cumulative survival to S\_k: S\_k(i,r) = ∏\_{s=1..k} (1 - m\_s(i,r))
* Cumulative mortality to S\_k: M\_k(i,r) = 1 - S\_k(i,r)
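The stage arithmetic above is mechanical; a minimal sketch in plain Python (function names are illustrative, not DAOIP-5 spec names):

```python
def stage_mortality(counts):
    """counts = [N0, N1, ..., Nk] for one domain/round -> [m_1, ..., m_k],
    where m_s = (N_{s-1} - N_s) / N_{s-1} (0 if the prior stage is empty)."""
    return [
        (prev - curr) / prev if prev > 0 else 0.0
        for prev, curr in zip(counts, counts[1:])
    ]

def cumulative_survival(mortalities):
    """S_k = product over s of (1 - m_s); cumulative mortality is 1 - S_k."""
    s = 1.0
    for m in mortalities:
        s *= 1.0 - m
    return s
```

With complete per-stage counts the product telescopes, so S\_k = N\_k / N\_0; the stage decomposition is what lets you see *where* the drop-off happens.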
# 1) Density metrics
Two practical “densities” (use one or both):
* Application density: ρ\_app(i,r) = N\_0(i,r) / K\_app(i)
* Funding density: ρ\_fund(i,r) = A\_i(r) / K\_fund(i)
Where:
* A\_i(r) = total allocated (or matched) to domain i in round r
* K\_app(i), K\_fund(i) = carrying capacities (see §4)
# 2) Frequency / rarity metric
Let f\_i be a domain’s frequency share across a history window H (e.g., last 4–8 rounds):
* by applications: f\_i = (∑\_{r∈H} N\_0(i,r)) / (∑\_{r∈H} ∑\_j N\_0(j,r))
* (or) by funded projects, or by total donations; pick one, be consistent.
Define rarity boost with a governance parameter β ≥ 0:
* F\_i = ( \bar f / f\_i )^β, where \bar f is the average f across domains
(If f\_i < \bar f, F\_i > 1 → rare domains get a lift.)
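A sketch of §2 in Python (the function name and the dict input shape are illustrative):

```python
def rarity_boost(apps_by_domain, beta=0.5):
    """f_i = each domain's share of applications over the window H;
    F_i = (f_bar / f_i) ** beta, where f_bar is the simple mean share."""
    total = sum(apps_by_domain.values())
    freqs = {d: n / total for d, n in apps_by_domain.items()}
    f_bar = sum(freqs.values()) / len(freqs)
    return {d: (f_bar / f) ** beta for d, f in freqs.items()}
```

With β = 0 the boost is switched off (F\_i = 1 everywhere); raising β amplifies the lift for under-represented domains.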
# 3) Density-dependent mortality models
Estimate how mortality varies with density. Two simple, interpretable options:
**(A) Quadratic (captures Allee + crowding):**
m\_s(i,r) = α\_s + b\_s·ρ(i,r) + c\_s·ρ(i,r)^2 with c\_s > 0
– Allee effect if b\_s < 0 at very low ρ; crowding increases mortality as ρ rises (c\_s term).
**(B) Logistic with density term(s):**
logit(m\_s(i,r)) = α\_s + b\_s·ρ(i,r) + γ·controls + u\_round
Controls can include average grant size, first-time grantee %, reviewer load, etc. Fit per stage s, pool type, or domain family, with hierarchical shrinkage for sparse domains.
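Option (A) is an ordinary least-squares fit; a NumPy sketch (inputs are observed (ρ, m\_s) pairs for one stage; names are illustrative):

```python
import numpy as np

def fit_quadratic_mortality(rho, m):
    """Least-squares fit of m = alpha + b*rho + c*rho**2; returns (alpha, b, c)."""
    rho = np.asarray(rho, dtype=float)
    # design matrix [1, rho, rho^2]
    X = np.column_stack([np.ones_like(rho), rho, rho ** 2])
    coef, *_ = np.linalg.lstsq(X, np.asarray(m, dtype=float), rcond=None)
    return tuple(coef)
```

An Allee signature shows up as b < 0 with c > 0 (mortality falls, then rises, as density grows). The logistic variant (B) needs a GLM fit (e.g., via statsmodels) and is omitted here.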
# 4) Estimating carrying capacities K
Pick the capacity you want to regulate (counts vs dollars):
* K\_app(i): fit the dome-shaped success curve for domain i:
success\_rate(i,r) = N\_2(i,r)/N\_0(i,r) ≈ a\_i·ρ\_app(i,r) − b\_i·ρ\_app(i,r)^2
The vertex gives the “optimal” density ρ\* = a\_i/(2b\_i); set K\_app(i) = median\_r N\_0(i,r)/ρ\*.
* K\_fund(i): regress realized completion/on-time delivery vs A\_i(r), look for diminishing returns breakpoint (piecewise linear or spline). Set K\_fund(i) at the elbow.
Keep K’s adaptive: recompute each meta-round with a rolling window and shrinkage toward global priors.
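One wrinkle: ρ\_app = N\_0/K\_app needs the K you are estimating. A common shortcut is to fit the dome directly in application counts (density up to an unknown scale) and read the capacity off the vertex. A NumPy sketch under that assumption:

```python
import numpy as np

def estimate_k_app(n0_history, success_history):
    """Fit success_rate ~ a*N0 + c*N0**2 (no intercept, c expected < 0) over
    past rounds and return the vertex count a/(-2c) as the capacity estimate."""
    n0 = np.asarray(n0_history, dtype=float)
    y = np.asarray(success_history, dtype=float)
    X = np.column_stack([n0, n0 ** 2])
    (a, c), *_ = np.linalg.lstsq(X, y, rcond=None)  # c is fitted as -b
    return a / (-2 * c)
```

This treats the success-maximizing application count as K\_app; the shrinkage toward global priors mentioned above would wrap around this per-domain estimate.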
# 5) A compact regulation factor D (crowding damper)
Once K is set, convert density into a \[0,1] damper:
* D\_i(r) = 1 / (1 + (ρ\_fund(i,r))^η) with η ≥ 1 (steepness).
(D ≈ 1 when under-saturated, D = 0.5 exactly at capacity (ρ = 1) for any η, and falls toward 0 over capacity; larger η sharpens the drop.)
Or use D\_i(r) = max(0, 1 − A\_i(r)/K\_fund(i)) if you prefer linear.
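A sketch of both dampers:

```python
def damper_sigmoid(rho, eta=2):
    """D = 1 / (1 + rho**eta): ~1 under-saturated, 0.5 at capacity, ->0 beyond."""
    return 1.0 / (1.0 + rho ** eta)

def damper_linear(allocated, k_fund):
    """D = max(0, 1 - A_i / K_fund): linear fall-off, clipped at zero."""
    return max(0.0, 1.0 - allocated / k_fund)
```

The sigmoid form never quite reaches zero, so over-capacity domains keep a residual score; the linear form zeroes them out, which is the harder cap.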
# 6) A simple adjusted domain score (to layer on TVF)
score\_i(r) = TVF\_i(r) × F\_i × D\_i(r)
* TVF\_i(r): your baseline Total Value Flowed for domain i
* F\_i: frequency-rarity lift
* D\_i(r): density/capacity damper
This score is diagnostic (for signaling/what-if), or can softly guide allocations as a secondary weight.
# 7) DAOIP-5 field mapping (minimal)
DAOIP-5 gives you consistent objects to compute all of the above:
* **GrantPool / Round**: id, name/slug, start/end, domain/tags/focusAreas (where your domain i comes from)
* **Application**: id, poolId, projectId, createdAt, requestedAmount, status (submitted/triaged/shortlisted/rejected/withdrawn), review/score metadata
* **Award/Decision**: applicationId, decisionAt, awardAmount (approved/declined/revoked)
* **Disbursement**: awardId, disbursedAmount, disbursedAt, status (pending/sent/failed/returned)
* **Milestone/Report**: projectId/awardId, dueAt, submittedAt, status (on-time/late/missed), acceptance flag
* **Project**: id, org, repo/links, (optionally) first-time grantee flag, domain tags
(Exact field names vary by source adapter; DAOIP-5 normalizes these concepts, so you can map from raw source → these canonical fields in your lake.)
# 8) How to compute (step-by-step)
**A) Build per-round, per-domain cohorts**
1. Assign domain i to each Application via Application.tags OR Project.tags OR GrantPool.focusAreas (priority: app-level > project-level > pool-level).
2. For each round r & domain i, count stage populations:
* N\_0 = applications submitted
* N\_1 = applications with triage\_passed == true (or shortlisted == true)
* N\_2 = applications with award.status == "approved"
* N\_3 = awards with ≥1 disbursement.sent
* N\_4 = awards with all required milestones delivered/accepted on time
3. Deaths are the drop-offs between consecutive stages: D\_s = N\_{s-1} − N\_s (e.g., D\_1 = N\_0 − N\_1).
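Step A as a sketch over plain dicts (the record shape here is illustrative, not the DAOIP-5 wire format):

```python
from collections import defaultdict

def cohort_counts(applications):
    """applications: dicts with keys round, domain, triage_passed, approved,
    disbursed, delivered_on_time -> {(round, domain): [N0, N1, N2, N3, N4]}."""
    counts = defaultdict(lambda: [0, 0, 0, 0, 0])
    for app in applications:
        key = (app["round"], app["domain"])
        reached = [True, app["triage_passed"], app["approved"],
                   app["disbursed"], app["delivered_on_time"]]
        for s, ok in enumerate(reached):
            if not ok:
                break  # stages are ordered; the first miss ends the journey
            counts[key][s] += 1
    return dict(counts)
```

The early `break` enforces the stage ordering, so N\_0 ≥ N\_1 ≥ … ≥ N\_4 holds by construction and mortalities stay in [0, 1].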
**B) Compute mortality + survival**
* m\_s(i,r) = D\_s / N\_{s-1} ; S\_k = ∏(1 − m\_s)
**C) Compute densities & capacities**
* ρ\_app = N\_0 / K\_app(i) or ρ\_fund = A\_i / K\_fund(i)
* Fit K’s as in §4 on historical windows.
**D) Frequency**
* f\_i from historical share; F\_i = ( \bar f / f\_i )^β
**E) Model density-mortality**
* Fit m\_s \~ ρ + ρ^2 (or logistic) per stage/domain family; store b\_s, c\_s for diagnostics (Allee threshold, crowding slope).
**F) Scoring**
* score\_i = TVF\_i × F\_i × D\_i (or run it as a side-by-side “what-if” next to baseline allocations).
# 9) Example (toy numbers)
Suppose in round r for domain “Interoperability”:
* N\_0=120, N\_1=84, N\_2=48, N\_3=36, N\_4=24 → m\_1=0.30, m\_2=0.43, m\_3=0.25, m\_4=0.33
* A\_i=\$180k, K\_fund=\$240k → ρ\_fund=0.75; with η=2, D=1/(1+0.75^2)=0.64
* Across last 6 rounds, f\_i=5%, mean \bar f=10%, β=0.5 → F=(0.10/0.05)^0.5≈1.41
* TVF\_i(r)=\$1.2M → score ≈ 1.2M × 1.41 × 0.64 ≈ **\$1.08M (signal)**
Interpretation: good rarity boost (underrepresented) but getting closer to funding capacity, so density damper tempers the lift.
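The same toy numbers, end to end (TVF is taken as given; η = 2 assumed for the damper):

```python
tvf = 1_200_000                      # baseline Total Value Flowed for the domain
f_i, f_bar, beta = 0.05, 0.10, 0.5   # frequency share, mean share, rarity knob
rho_fund, eta = 0.75, 2              # funding density and damper steepness

F = (f_bar / f_i) ** beta            # rarity lift, sqrt(2) ~ 1.414
D = 1 / (1 + rho_fund ** eta)        # crowding damper, = 0.64
score = tvf * F * D                  # adjusted signal score
```

Using the unrounded F, the score lands near \$1.09M; the \$1.08M figure above comes from rounding F to 1.41 before multiplying.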
# 10) Practical queries (pseudo-SQL over your DAOIP-5 lake)
**Cohort & mortality per round/domain**
```sql
WITH apps AS (
SELECT a.id, a.pool_id, COALESCE(a.domain, p.domain, gp.domain) AS domain,
a.created_at, a.status,
CASE WHEN a.status IN ('triaged','shortlisted') THEN 1 ELSE 0 END AS s1,
CASE WHEN aw.status = 'approved' THEN 1 ELSE 0 END AS s2,
CASE WHEN EXISTS (
SELECT 1 FROM disbursement d WHERE d.award_id = aw.id AND d.status='sent'
) THEN 1 ELSE 0 END AS s3,
CASE WHEN ms.all_required_delivered_on_time = TRUE THEN 1 ELSE 0 END AS s4
FROM application a
LEFT JOIN award aw ON aw.application_id = a.id
LEFT JOIN project p ON p.id = a.project_id
LEFT JOIN grantpool gp ON gp.id = a.pool_id
LEFT JOIN milestone_rollup ms ON ms.award_id = aw.id
),
agg AS (
  SELECT pool_id, domain,
         COUNT(*) AS N0,
         -- a row can carry a later-stage flag without the earlier one
         -- (e.g., status jumps straight to approved), so OR the flags forward
         SUM(GREATEST(s1, s2, s3, s4)) AS N1,
         SUM(GREATEST(s2, s3, s4)) AS N2,
         SUM(GREATEST(s3, s4)) AS N3,
         SUM(s4) AS N4
  FROM apps
  GROUP BY 1, 2
)
SELECT pool_id, domain,
N0, N1, N2, N3, N4,
(N0 - N1)::float / NULLIF(N0,0) AS m1,
(N1 - N2)::float / NULLIF(N1,0) AS m2,
(N2 - N3)::float / NULLIF(N2,0) AS m3,
(N3 - N4)::float / NULLIF(N3,0) AS m4
FROM agg;
```
**Funding density**
```sql
SELECT ap.pool_id,
       ap.domain,  -- apply the same app > project > pool fallback as the cohort query
       SUM(a.award_amount) AS A_i
FROM award a
JOIN application ap ON ap.id = a.application_id
WHERE a.status = 'approved'  -- exclude declined/revoked awards
GROUP BY 1, 2;
```
**Frequency share over window H**
```sql
WITH hist AS (
SELECT domain, COUNT(*) AS n_apps
FROM application
WHERE created_at BETWEEN :start AND :end
GROUP BY 1
),
tot AS (SELECT SUM(n_apps) AS total FROM hist)
SELECT h.domain,
n_apps::float / t.total AS f_i
FROM hist h CROSS JOIN tot t;
```
# 11) Validation & governance knobs
* β (rarity sensitivity), η (crowding steepness), window H, and K methodology are governance-set and should be published with rationale.
* Run diagnostics dashboards: m\_s vs ρ scatter, fitted curves, per-domain K’s, and “what-if” allocations.
# 12) Caveats to watch
* **Status normalization:** map many source statuses → \[submitted, shortlisted, approved, disbursed, delivered].
* **Timing:** define a fixed observation window post-round (e.g., T=90 days) for S3/S4; otherwise late reports look like “deaths.”
* **Domain labeling:** prefer application-level tags; fall back to project/pool if missing; keep a small ruleset for ambiguous cases.
* **Small-n:** use hierarchical pooling so sparse domains borrow strength from their family.
---
If you want, I can drop these pieces into your existing DAOIP-5 SQLMesh/Dagster pipeline and produce:
1. a “domain density & mortality” mart,
2. a K-estimation job, and
3. a small “TVF × F × D” comparison dashboard next to Condorcet outputs.