# Alignment Games
---
> As they evolve, good protocols tend to rise to the standard articulated by Milton Friedman: **they “make it profitable for the wrong people to do the right thing.”**
> -- Venkatesh Rao
---
## What is Protocol Governance?
- **Blockchain Protocol**
- a set of rules that specifies how entities in a network can interact, i.e., a predictable action space → lower transaction costs (triangulation, trust, transfer)
---
- **Protocol Governance**: a set of rules that specifies how to update a protocol, including:
1. **Protocol Updates**
- *deploy Uniswap V4*
2. **Protocol Parameter Updates**
- *set Uniswap Protocol Fee Switch on the ETHUSDC pool*
3. **Policy Updates**
- *Update Uni v3/v2 Deployment Process*
4. **Treasury Allocation**
- *allocate 25,000 UNI to the Accountability Working Group*
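
The four categories above can be sketched as a minimal taxonomy of governance actions (a hypothetical model; names are illustrative, not from any real protocol):

```python
from dataclasses import dataclass
from enum import Enum, auto

# Illustrative taxonomy mirroring the four governance action categories.
class ActionKind(Enum):
    PROTOCOL_UPDATE = auto()       # e.g. deploy Uniswap V4
    PARAMETER_UPDATE = auto()      # e.g. set a fee switch on a pool
    POLICY_UPDATE = auto()         # e.g. update a deployment process
    TREASURY_ALLOCATION = auto()   # e.g. fund a working group

@dataclass
class GovernanceAction:
    kind: ActionKind
    description: str

action = GovernanceAction(
    ActionKind.TREASURY_ALLOCATION,
    "allocate 25,000 UNI to the Accountability Working Group",
)
```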
---
## Why is Governance important?
<img src="https://hackmd.io/_uploads/SJQJlul8A.png" alt="drawing" width="500"/>
> **the need to adapt a set of network protocols is emergent**
> -- Eric Alston
---

No Governance = Maladaptive Protocols → Protocol Collapse
---
### Tension
Users value a *predictable action space*, but also require that the action space *adapts over time*
---
### Proposition
A protocol’s value is a function of the predictability of its governance mechanism.
---
## Why is Decentralized Protocol Governance important?
> **decentralized technology alone does not guarantee decentralized outcomes**
> -- Nathan Schneider
---

Centralized Protocol Governance = Strong incentives to change the rules arbitrarily
---
## Examples of Governance Mechanisms

---
## Protocol Governance Problems

Self-dealing
---

Free Riding / Apathy
---
## A Framework: Alignment Games
1. Given an **objective**, e.g., *Maximize Total Value Locked*
2. Evaluate a **contribution**, e.g., a *proposal to set the Uniswap Protocol Fee Switch on the ETHUSDC pool*
3. Determine its **alignment**, e.g., a score in *[-1, 1]*
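
The three steps above can be sketched as follows (a toy model, assuming the objective is a scalar like TVL and alignment is the clamped relative change it induces):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AlignmentGame:
    # The objective maps a protocol state to a scalar, e.g. Total Value Locked.
    objective: Callable[[dict], float]

    def alignment(self, before: dict, after: dict) -> float:
        """Score a contribution by the relative change in the objective,
        clamped to [-1, 1]."""
        base = self.objective(before)
        delta = self.objective(after) - base
        score = delta / abs(base) if base else 0.0
        return max(-1.0, min(1.0, score))

game = AlignmentGame(objective=lambda s: s["tvl"])
print(game.alignment({"tvl": 100.0}, {"tvl": 110.0}))  # 0.1
```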
---
**Question**
*Can we use Alignment Games to produce a mechanism that reliably allocates resources for contributions that achieve an arbitrary objective?*
---
# Alignment Games for distributing rewards
Problem: distribute compensation to contributors according to an objective, e.g., token price, TVL, order flow
---
## Futarchy
*Vote on values, bet on beliefs*
- 📐 *Measurable* objective (e.g. order flow)
- 🔮 How much is the objective improved?

- ⚡ Execute
- 💰 Reward contributor
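
A toy futarchy decision rule (illustrative only, not a real market design): two conditional markets forecast the measurable objective if the proposal passes vs. if it is rejected; execute when the "pass" forecast wins, then reward the contributor.

```python
def futarchy_decide(forecast_if_pass: float,
                    forecast_if_reject: float,
                    reward_pool: float) -> tuple[bool, float]:
    """Execute iff the conditional forecast under passing beats the
    forecast under rejecting; pay the reward only on execution."""
    execute = forecast_if_pass > forecast_if_reject
    reward = reward_pool if execute else 0.0
    return execute, reward

# Markets predict order flow of 1200 if executed vs. 1000 if not.
print(futarchy_decide(1200.0, 1000.0, 50.0))  # (True, 50.0)
```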
---
## Retroactive funding
*Build first, earn later*
- 💼 Contributions
- 🏦 Evaluation (e.g. trusted experts)
- 💰 Reward
- 🪙 Project tokens
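
A minimal retroactive-funding sketch (names are illustrative): trusted experts score finished contributions, and a fixed pool of project tokens is split pro rata by score.

```python
def retro_rewards(scores: dict[str, float], pool: float) -> dict[str, float]:
    """Split a token pool among contributors in proportion to their
    expert-assigned scores."""
    total = sum(scores.values())
    if total == 0:
        return {c: 0.0 for c in scores}
    return {c: pool * s / total for c, s in scores.items()}

print(retro_rewards({"alice": 3.0, "bob": 1.0}, 1000.0))
# {'alice': 750.0, 'bob': 250.0}
```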
---
## Mixing auctions and futarchy
*Preferences and predictions*
- ✋ Proposals in competition for resources
- 🪄 Max welfare and external social value
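
One illustrative way to mix the two: each proposal carries a bid (its private value) plus a market forecast of its external social value, and the resource goes to the proposal maximizing their sum (a welfare heuristic, not a formal mechanism).

```python
def select_proposal(proposals: list[dict]) -> dict:
    """Pick the proposal maximizing bid + forecast external social value."""
    return max(proposals, key=lambda p: p["bid"] + p["forecast_social_value"])

winner = select_proposal([
    {"name": "A", "bid": 10.0, "forecast_social_value": 2.0},
    {"name": "B", "bid": 7.0, "forecast_social_value": 8.0},
])
print(winner["name"])  # B
```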

---
## Inter-subjective evaluations
*Reward alignment*
- 👁️ *Unequivocal* objective (e.g., "liquidity layer of the internet")
- 💯 Contributions are rated
- 🔪 Raters are slashed if they misbehave
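
A sketch of inter-subjective rating with slashing (parameters are illustrative): each staked rater scores a contribution, and raters whose score deviates from the median by more than a tolerance lose part of their stake.

```python
import statistics

def slash_outliers(ratings: dict[str, float],
                   stakes: dict[str, float],
                   tolerance: float = 1.0,
                   slash_fraction: float = 0.5) -> dict[str, float]:
    """Return post-slashing stakes: raters far from the median rating
    lose slash_fraction of their stake."""
    median = statistics.median(ratings.values())
    return {
        rater: stakes[rater] * (1 - slash_fraction)
               if abs(score - median) > tolerance else stakes[rater]
        for rater, score in ratings.items()
    }

print(slash_outliers({"a": 8.0, "b": 8.5, "c": 2.0},
                     {"a": 100.0, "b": 100.0, "c": 100.0}))
# {'a': 100.0, 'b': 100.0, 'c': 50.0}
```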
---
# Alignment Mining
---
## Evaluation

---
## Security
- 🔒 PoS: consensus & slash
- 🧑‍⚖️ Adjudication: appeal & slash
Requires:
- 👁️ Observability
- 🔴 Unequivocal violations
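
An illustrative unequivocal, observable violation is equivocation: the same validator signing two different blocks at the same height, which is provable from the votes alone and therefore safe to slash.

```python
def find_equivocators(votes: list[tuple[str, int, str]]) -> set[str]:
    """votes: (validator, height, block_hash). Return validators who
    signed two different blocks at the same height."""
    seen: dict[tuple[str, int], str] = {}
    offenders: set[str] = set()
    for validator, height, block_hash in votes:
        key = (validator, height)
        if key in seen and seen[key] != block_hash:
            offenders.add(validator)
        seen[key] = block_hash
    return offenders

print(find_equivocators([("v1", 5, "0xaa"),
                         ("v1", 5, "0xbb"),
                         ("v2", 5, "0xaa")]))  # {'v1'}
```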
---
### Strong ratings
- Forking
- Reputation
---
## Scaling device
- Core raters review a sample
- Peer prediction mechanism on top
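
The first step above can be sketched as follows (an illustrative sampler; the peer-prediction layer on top is out of scope here): core raters review only a random sample, and the sample later calibrates how much peer scores are trusted.

```python
import random

def sample_for_core_review(contributions: list[str],
                           sample_rate: float,
                           seed: int = 0) -> list[str]:
    """Draw a reproducible random sample of contributions for the
    core raters to review directly."""
    rng = random.Random(seed)
    k = max(1, round(len(contributions) * sample_rate))
    return rng.sample(contributions, k)

core_batch = sample_for_core_review([f"c{i}" for i in range(10)], 0.3)
print(len(core_batch))  # 3
```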
