# Deep Funding is a Special Case of Generalized Impact Evaluators
---
## Tl;dr
[Deep Funding](https://deepfunding.org)’s credit allocation mechanism instantiates an Impact Evaluator (IE) with:
- Scope = Dependency graph
- Measurement = AI + human inputs
- Evaluation = linear model optimization
- Reward = hierarchical normalization
## **Impact Evaluators (IEs) - Core Definition**
From [*Generalized Impact Evaluators*](https://research.protocol.ai/publications/generalized-impact-evaluators/):
An Impact Evaluator (IE) is formally defined by the tuple:
```
IE = {r, e, m, S}
```
where:
- **S** (Scope): Subset of entities/actions/outcomes being evaluated
- **m** (Measurement): Function mapping the scope to measured indicators
- **e** (Evaluation): Function converting measurements to value scores
- **r** (Reward): Function allocating rewards based on scores
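Operationally, a single IE round is just the composition of these functions applied to the scope. A minimal Python sketch (function and type names are illustrative, not from the paper):
```python
from typing import Callable, TypeVar

Scope = TypeVar("Scope")
Measurements = TypeVar("Measurements")
Scores = TypeVar("Scores")
Rewards = TypeVar("Rewards")

def run_impact_evaluator(
    S: Scope,
    m: Callable[[Scope], Measurements],
    e: Callable[[Measurements], Scores],
    r: Callable[[Scores], Rewards],
) -> Rewards:
    """One IE epoch: rewards = r(e(m(S)))."""
    return r(e(m(S)))
```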
---
## **We can map Deep Funding as a special case of an IE**
### **1. Scope (S)**
- **Entities**: Edges in the dependency graph (Ideas & Science, Art, Open Source Software)
- **Actions**: Dependencies ("Ethereum is influenced by Bitcoin")
- **Outcomes**: Normalized credit weights for edges
- **Interval**: Single evaluation epoch (static dependency graph)
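For concreteness, the scope could be represented as a list of edges. A hypothetical sketch (only the Ethereum → Bitcoin edge comes from the text; the other node names are invented for illustration):
```python
# Hypothetical scope: entities are edges of the dependency graph,
# keyed as (dependent, dependency) pairs.
scope_edges = [
    ("ethereum", "bitcoin"),
    ("bitcoin", "austrian-economics"),
    ("bitcoin", "hashcash"),
]
```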
---
### **2. Measurement (m)**
**Input**: Dependency graph structure (predefined relationships)
**Process**:
- **AI Models** (LLMs, Random Forests, ...): Generate initial credit distributions as logits
- **Human Jurors**: Provide pairwise comparisons (e.g., _"Austrian economics influenced Bitcoin more than Keynesian theory did"_)
**Output**:
```
m(S) = {
  AI_logits:     [log(30), log(20), ...],        # one logit per edge, per AI model
  Human_samples: [(i=5, j=8, d=ln(2.5)), ...]    # juror log-ratio judgments
}
```
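As a sketch of how one juror response becomes a `Human_samples` entry, assuming jurors report a multiplicative credit ratio (as the `ln(2.5)` above suggests):
```python
import math

def encode_pairwise(i: int, j: int, ratio: float) -> tuple[int, int, float]:
    """Encode 'edge i deserves `ratio` times the credit of edge j'
    as a log-ratio sample (i, j, d)."""
    return (i, j, math.log(ratio))

sample = encode_pairwise(5, 8, 2.5)  # -> (5, 8, 0.916...)
```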
---
### **3. Evaluation (e)**
**Input**: AI logits + human judgment samples
**Process**:
- Solve a constrained optimization whose objective is:
$$
\min_{\alpha} \sum_{s=1}^{N} \left[ \left( \sum_{m} \alpha_m \left( L_m[i_s] - L_m[j_s] \right) \right) - d_s \right]^{2}
$$
where:
- $N$ = number of human judgment samples
- $\alpha_m$ = weight assigned to AI model $m$
- $L_m$ = logits produced by AI model $m$
- $d_s$ = human judgment log-ratio differential for sample $s$
**Output**: Combined value scores
```
e(m(S)) = final_weights = [0.25, 0.15, ...]
```
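A minimal sketch of this step, assuming an unconstrained least-squares fit (the actual mechanism may impose constraints on $\alpha$ that the objective above omits). `L` here is a hypothetical (models × edges) logit matrix:
```python
import numpy as np

def fit_alpha(L: np.ndarray, samples: list[tuple[int, int, float]]) -> np.ndarray:
    """Fit model weights alpha so that blended logit differences
    match the human log-ratio judgments in the least-squares sense."""
    # One row per sample: X[s, m] = L_m[i_s] - L_m[j_s]
    X = np.array([L[:, i] - L[:, j] for (i, j, _) in samples])
    d = np.array([d_s for (_, _, d_s) in samples])
    alpha, *_ = np.linalg.lstsq(X, d, rcond=None)
    return alpha

def combined_logits(L: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Blend the per-model logits into one score per edge."""
    return alpha @ L
```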
---
### **4. Reward (r)**
**Input**: Final evaluation scores
**Process**:
1. Exponentiate the combined logits → raw credit weights
2. Normalize hierarchically so each parent's child-edge weights sum to 1
**Output**:
```
r(e(m(S))) = normalized_weights = dependency credit allocation
```
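A sketch of the reward step under an assumed edge representation (each edge carries its parent node and a blended logit; the dict structure is illustrative):
```python
import math
from collections import defaultdict

def hierarchical_normalize(edges: dict[int, tuple[str, float]]) -> dict[int, float]:
    """Map edge id -> normalized credit weight.
    `edges` maps edge id -> (parent node, blended logit)."""
    # Exponentiate logits into raw, positive credit weights
    raw = {eid: (parent, math.exp(logit)) for eid, (parent, logit) in edges.items()}
    # Sum raw weights per parent so siblings can be normalized together
    totals = defaultdict(float)
    for parent, w in raw.values():
        totals[parent] += w
    # Each parent's child-edge weights now sum to 1
    return {eid: w / totals[parent] for eid, (parent, w) in raw.items()}
```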
---
## **Why This is a Special Case**
1. **Hybrid Evaluation**: Blends programmatic AI scoring with subjective human pairwise comparisons (both gathered in `m`, reconciled in `e`)
2. **Retroactive Focus**: Rewards past contributions rather than prospective work
3. **Fixed Scope**: Single evaluation of predefined dependency graph (vs recurring IE rounds)