# Quantitative Analysis of Delegation Architectures for Lean Rainbow Staking
---
## **Abstract**
This study quantitatively evaluates three delegation-state architectures for the Lean Rainbow Staking (minimal) specification.
All models share a single staking role (Attester) and the canonical flat list `stake[S]`.
The analysis compares state size, computational overhead, reorg determinism, and zk-prover friendliness across three design candidates that define how delegation relationships could be recorded and processed in Lean consensus:
1. **Horizontal** (delegate-centric inbound lists)
2. **Vertical + CSR** (centralized sparse representation)
3. **Committed Mapping (+ Deltas)** (zk-minimal mapping)
Each architecture is examined in both many-to-many and many-to-one (all-or-nothing) delegation topologies. The scope of the study is to quantify the maximum theoretical state footprint, the delegation-state size at fast finality scale (with $N = 2^{14}$ attesters), and the computational overhead for slot-level updates and reorg handling.
---
## **1 Introduction**
Delegation architectures determine how relationships between stakers and their delegates are recorded and verified, informing proportional rewards, penalties, and slashing accountability. While implementation details may vary, delegation mechanisms fall into three structural families:
* A **horizontal inbound view**, where each delegate lists its delegators directly,
* A **vertical ledger**, where delegations are stored as centralized entries, preferably with deterministic offsets,
* A **committed mapping**, where delegations exist as compact per-delegator mappings, completed by cumulative indices.
---
## **2 Modelling Assumptions and Parameters**
### Minimal Functionality
All models must:
- Track delegation / undelegation / redelegation slot-by-slot. Keep operator fee accounting.
- Apply rewards, penalties, and slashing proportionally to stake participation.
- Remain reorg-safe and deterministic under a bounded (≈3-slot) finality window.
- Support incremental slot updates with complexity O(Δ), where Δ = #delegation operations per slot.
- Expose verifiable commitments usable for zk proofs and historical audits.
### Encoding & Accounting
- Each `Uint64` = 8 bytes; `Bitlist[N]` = $\frac{N}{8}$ bytes.
- Fields count fully toward consensus-state size, even when zero.
- Only persistent consensus state is measured (no transient execution data).
### Graph models
Given the rule that a staker can be either a delegator or a delegate (never both), from which it also follows that self-delegation is impossible, the delegation relationships under the analysed architectures form a [directed, acyclic graph](https://en.wikipedia.org/wiki/Directed_acyclic_graph).
There are $N$ total stakers, split as $D$ delegators and $T$ delegates, where $D + T = N$.
- Delegators are graph vertices with out-degree $≥ 1$ and in-degree $= 0$
- Delegates are graph vertices with in-degree $≥ 1$ and out-degree $= 0$
We define two types of delegation relationships for each architecture:
- **Many-to-many delegation**, where the delegation graph is a [bipartite tournament](https://en.wikipedia.org/wiki/Tournament_(graph_theory)): each of the $k$ delegators can connect to any of the $N - k$ delegates, so the number of possible edges is
$E(k) = k(N - k) = -k^2 + Nk$
This is a quadratic in standard form $ak^2 + bk + c$, with $a = -1$, $b = N$, and $c = 0$. The vertex of the parabola $y = ax^2 + bx + c$ occurs at
$k = \frac{-b}{2a} = \frac{-N}{2(-1)} = \frac{N}{2}$
Thus, the number of edges (possible delegations under the given constraints) is maximised at
$D = T = \frac{N}{2} \Rightarrow E_{max} = \frac{N^2}{4}$
- **Many-to-one delegation**
When every delegator vertex has out-degree exactly 1, the number of edges equals the number of delegators, $E(k) = k$. This is maximised when all but one staker delegate to a single delegate ($D = N - 1$, $T = 1$); thus, the number of edges (possible delegations under the given constraints) is:
$E_{max} = N - 1$
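These closed-form maxima can be checked numerically; a minimal sketch (function names are illustrative, not part of the specification):

```python
def max_edges_many_to_many(n: int) -> int:
    """E(k) = k(n - k) is maximised at k = n/2, giving n^2 / 4 edges."""
    return max(k * (n - k) for k in range(1, n))

def max_edges_many_to_one(n: int) -> int:
    """With out-degree 1 per delegator, E = D, maximised at D = n - 1."""
    return n - 1

N = 2**14  # 16,384 attesters, as in the Design Parameters table
print(max_edges_many_to_many(N))  # 67108864 == N**2 // 4
print(max_edges_many_to_one(N))   # 16383 == N - 1
```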
### Design Parameters
| Parameter | Meaning | Value |
|----------------------|--------------------------------------------|------------|
| **N** | Total attesters (stakers) | 16,384 |
| **3SF window (K)** | Non-finalized slots that may reorg | 2–3 |
| **Role split** | Delegators **D** + Delegates **T** = **N** | — |
| **Many-to-many max** | $N^2/4$ delegations (D ≈ T ≈ N/2) | 67,108,864 |
| **Many-to-one max** | **N − 1** delegations (D = N−1, T = 1) | 16,383 |
### Shared Constraints
- A staker can act **either** as a delegator **or** a delegate (never both);
- Self-delegation is **disallowed**.
- The flat stake list (`stake[S]`, $8·S$ bytes) holds self-owned balances and is **excluded** from delegation-state size comparison totals.
---
## **3 Horizontal Architecture (Delegate-Centric)**
### 3.1 Overview
The horizontal architecture embeds delegation data inside each delegate’s state object. Each delegate `t` keeps:
* `delegators[D]`: indices of stakers delegating to `t`,
* `quotas[D]`: proportional participation weights per inbound delegator.
Every entry in `delegators[]` identifies a staker that delegated stake to `t`, and the corresponding position in `quotas[]` records the proportional share of the delegate’s cumulated stake contributed by that delegator.
This delegate-centric structure offers excellent locality for reward and slashing computation because all relevant delegators of a delegate are co-located within the same object. However, its state footprint grows as $O(S·D)$, where `S` is the number of stakers and `D` is the per-delegate capacity.
During slot processing, updates are applied in-place with $O(D)$ complexity per affected delegate.
Because each delegate’s list is independent, concurrent redelegations can occur without global locks, but reorgs are expensive: restoring state after a rollback requires rewriting or re-linking multiple inbound lists.
Despite its conceptual simplicity, the Horizontal model is less practical at the consensus layer due to its quadratic scaling and non-deterministic reorg merges, though it can be useful as a conceptual or practical baseline for off-chain or higher-layer implementations.
This mirrors the
inbound [eODS pattern, presented in a prior work](https://ethresear.ch/t/eods-enshrined-operator-delegator-separation-a-delegation-model-proposal/22826): intuitive, locally readable, but memory-heavy because capacity is reserved per delegate.
### 3.2 Many-to-many
* **Max delegations:** $\frac{N^{2}}{4} = 67,108,864$
* **Scaling:** $O(S·D) = 16·N·D$ bytes (two 8-byte arrays)
* **3SF footprint ($N=16,384$):** $16 \times 16,384 \times 16,383 = 4,294,705,152$ bytes ≈ **4.00 GiB**
* **Computation:** $O(D)$ updates per affected delegate; global scans $O(S·D)$
* **Notes:** High density; merges under reorgs are heavy and non-deterministic without extra mechanics.
### 3.3 Many-to-one
* **Delegations:** $16,383$
Even though only $N − 1$ delegations are live, capacity is still $D = N − 1$, so the allocation is the same:
* **3SF footprint:** $≈ 4.0 GiB$
* **Computation:** $O(1)$ per delegation event
* **Notes:** Horizontal has identical delegation footprint at $N=16,384$ regardless of multiplicity, because costs are capacity-driven.
### 3.4 Summary
| Mode | Max Delegations | Approx. State | Per-Slot Update | Notes |
|--------------|-----------------|---------------|-----------------|----------------------------------------|
| Many-to-many | 67,108,864 | ~**4.0 GiB** | $O(D)$ | Quadratic scaling; heavy memory use |
| Many-to-1 | 16,383 | ~**4.0 GiB** | $O(1)$ | Sparse but over-allocated structurally |
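The ~4.0 GiB figure follows directly from the capacity-driven allocation; a quick numerical check (constants as defined in §2):

```python
N = 2**14             # attesters
D = N - 1             # reserved per-delegate capacity
BYTES_PER_SLOT = 16   # delegators[] + quotas[], one 8 B entry each

# Capacity is reserved per delegate, so the footprint is the same
# regardless of how many delegations are actually live.
footprint = BYTES_PER_SLOT * N * D
print(footprint)           # 4294705152 bytes
print(footprint / 2**30)   # ≈ 4.00 GiB
```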
### 3.5 Reward and Slashing Flow
Reward distribution in this model is executed directly through the delegate’s local `quotas[]` list.
When the protocol finalizes a reward or penalty for a delegate `t`, the total change `Δ_t` is
divided proportionally among all inbound delegators `d ∈ delegators_t` as:
```
reward[d] = quota[d] × Δ_t
```
Each delegator’s withdrawable balance is increased (or decreased) accordingly. Because rewards and penalties are computed locally inside the delegate container, no global indexing mechanism is required.
Slashing, on the other hand, is **applied in-place**: when `t` is slashed, both `t`’s own stake and the stakes of its inbound delegators are reduced proportionally to their participation quotas.
The mapping and quotas remain unchanged; only the `stake[]` values are diminished.
This yields correct proportional loss but implies high recomputation cost during reorgs, since slashed entries must be revisited within each affected delegate object.
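The proportional split above can be sketched as follows; `Delegate`, `apply_reward`, and the toy values are illustrative, not part of the specification:

```python
class Delegate:
    """Hypothetical delegate container for the horizontal model."""
    def __init__(self, delegators, quotas):
        assert abs(sum(quotas) - 1.0) < 1e-9  # quotas are proportional shares
        self.delegators = delegators          # inbound staker indices
        self.quotas = quotas

def apply_reward(delegate, delta_t, reward):
    """Split a finalized reward (or penalty) delta_t across inbound
    delegators, locally, using the delegate's own quotas[] list."""
    for d, q in zip(delegate.delegators, delegate.quotas):
        reward[d] += q * delta_t

reward = {7: 0.0, 11: 0.0}
t = Delegate(delegators=[7, 11], quotas=[0.25, 0.75])
apply_reward(t, 100.0, reward)
print(reward)  # {7: 25.0, 11: 75.0}
```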
---
## **4 Vertical + CSR Architecture (Central Ledger)**
### 4.1 Overview
The **Vertical CSR (Compressed Sparse Row)** architecture structures delegation data into
centralized, contiguous arrays shared by all delegates. Instead of embedding inbound lists within each delegate object, delegations are encoded as **edges** (`E`) in a flat ledger, grouped by target staker via deterministic slice offsets. This model enables a compact memory layout and deterministic updates.
The core arrays can be of the form:
```
delegation_delegator[] – delegator indices
delegation_quota[] – proportional weights
delegation_update_counter[] – per-delegator monotonic counter
delegate_offsets[] – [start, count] pair per delegate
```
Together they form a compressed matrix where each segment of `delegation_*` arrays corresponds to all delegators that delegated to a given delegate.
The `delegate_offsets[]` array holds, for each delegate, the starting index and the number of inbound records in the centralized ledger.
This architecture allows fast linear scans over each delegate’s slice, ideal for reward distribution and slashing proportionality.
All delegations are stored in a single ledger, thus memory usage scales linearly with the number of active delegations (`E`), not quadratically with the number
of stakers(`S`).
Per-delegate inbound relationships become **contiguous slices**, yielding cache-friendly scans and deterministic deltas.
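A minimal sketch of this slice-based layout, with toy values and a dict standing in for the `delegate_offsets[]` array for readability:

```python
# Minimal CSR-style delegation ledger (names follow the arrays above).
delegation_delegator = [3, 9, 14, 2]          # delegator indices, grouped by delegate
delegation_quota     = [0.5, 0.5, 1.0, 1.0]   # proportional weights, same order
delegate_offsets     = {0: (0, 2), 1: (2, 1), 5: (3, 1)}  # delegate -> (start, count)

def inbound(delegate):
    """All (delegator, quota) pairs for one delegate: a contiguous slice."""
    start, count = delegate_offsets[delegate]
    return list(zip(delegation_delegator[start:start + count],
                    delegation_quota[start:start + count]))

print(inbound(0))  # [(3, 0.5), (9, 0.5)]
print(inbound(5))  # [(2, 1.0)]
```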
### 4.2 Many-to-many
- **Max delegations:** $E = \frac{N^{2}}{4} = 67,108,864$
- **Scaling:** 32 bytes per edge (three 8 B fields plus padding) + per-delegate offsets $≈ 32·E + 16·N$ bytes
- **3SF footprint:** $≈ 2.0 GiB$
- **Computation:** $O(Δ)$ append per slot; $O(Δ)$ merge at finality
using [LWW](#72-Last-Write-Wins-LWW-via-Update-Counters)
- **Notes:** Dense but deterministic; strong auditability via per-delegate slices
### 4.3 Many-to-one
**Simplified columns:** one active edge per delegator; per-staker granularity; no
`delegation_quota[]`.
- **Delegations:** $16,383$
- **Per-staker columns**: three 8 B fields per staker, including small metadata: $3 \times (8\,B \cdot S) = 24 \cdot S$ B
- **Offsets**: $16 \cdot S$ B; total $(24 + 16) \cdot S = 40 \cdot S$ B $= 40 \times 16,384 = 655,360$ B $≈ 0.625$ MiB
- With some common aggregates ($+16 \cdot S = 262,144$ B): $655,360 + 262,144 = 917,504$ B $≈ 0.875$ MiB
- **3SF footprint:** $≈ 0.63–0.88 MiB$
- **Computation:** $O(Δ)$ per slot; simple append/removal into slices
- **Notes:** Minimal canonical form.
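The byte accounting above, reproduced as a sketch (constants from the bullets; names illustrative):

```python
S = 2**14                       # stakers
columns    = 24 * S             # three 8 B per-staker fields, incl. metadata
offsets    = 16 * S             # [start, count] pair per delegate
base       = columns + offsets  # 40 * S bytes
aggregates = 16 * S             # optional common aggregates

print(base)               # 655360 B = 0.625 MiB exactly
print(base + aggregates)  # 917504 B = 0.875 MiB exactly
```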
### 4.4 Summary
| Mode | Max Delegations | Approx. State | Per-Slot Update | Notes |
|--------------|-----------------|--------------------|-----------------|-----------------------------------------|
| Many-to-many | 67,108,864 | ~**2.0 GiB** | $O(Δ)$ | Deterministic merges; contiguous layout |
| Many-to-1 | 16,383 | ~**0.63–0.88 MiB** | $O(Δ)$ | Compact; reorg-stable baseline |
Each slot appends Δ-sets of new delegations; at finality, those deltas merge deterministically using Last-Write-Wins (LWW) counters, offering identical post-merge states.
Reorg handling is lightweight — non-finalized Δ-sets are discarded, and only K = 2–3 buffers are retained under the fast finality window.
### 4.5 Reward and Slashing Flow
In the CSR ledger, rewards and penalties are processed by iterating through each delegate’s contiguous slice of delegations.
For a delegate `t` receiving net reward `Δ_t`, the protocol computes:
```
for each (d, quota[d,t]) in slice(t):
reward[d] += quota[d,t] × Δ_t
```
Each delegator’s reward index or withdrawable balance is updated proportionally.
Because all delegations for a delegate are stored sequentially, this operation is cache-friendly and linear in the number of inbound delegators.
Slashing is executed in a similar (proportional) manner.
When delegate `t` is slashed by amount `Slash_t`, the total loss is distributed across `t` and all its inbound delegators using their normalized quotas.
The slashing event directly reduces the affected `stake[]` entries in-place, while the CSR structure itself — arrays and offsets — remains unchanged. This makes slashing deterministic and atomic at merge time, with minimal recomputation during reorgs.
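A runnable sketch of both flows, assuming toy balances and a pre-extracted slice; all names are illustrative:

```python
def distribute_reward(slice_entries, delta_t, reward):
    """Linear scan over a delegate's contiguous (delegator, quota) slice."""
    for d, quota in slice_entries:
        reward[d] += quota * delta_t

def apply_slash(slice_entries, t, slash_t, stake):
    """Reduce stake[] in-place, proportional to each party's share of the
    delegate's cumulated stake; the CSR arrays themselves stay unchanged."""
    cumulated = stake[t] + sum(stake[d] for d, _ in slice_entries)
    stake[t] -= slash_t * stake[t] / cumulated
    for d, _ in slice_entries:
        stake[d] -= slash_t * stake[d] / cumulated

stake = {0: 100.0, 3: 50.0, 9: 50.0}  # delegate 0, delegators 3 and 9
apply_slash([(3, 0.5), (9, 0.5)], 0, 20.0, stake)
print(stake)  # {0: 90.0, 3: 45.0, 9: 45.0}
```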
Overall, the CSR model achieves a practical compromise between **compactness**, **determinism**, and **auditability**, making it one of the best candidates for consensus delegation models in lean Consensus.
---
## **5 Committed Mapping Architecture (ZK-friendly)**
### 5.1 Overview
The committed mapping compresses delegation state into a single per-delegator mapping from delegators to delegates, augmented with cumulative indices:
Core delegation-state arrays:
```
target_staker[S] – active delegate per delegator
target_reward_index[S] – cumulative reward index per delegate
delegator_reward_index[S]    – last claimed reward index per delegator
(optional) target_effective_stake[S]
(optional) target_penalty_index[S]
```
Each delegator can reference **only one active delegate** at any given time.
The mapping is rewritten atomically upon delegation ops, producing a new root commitment every slot.
This design eliminates merge logic and ensures deterministic reorg behavior — non-finalized roots are simply replaced during reorganization.
### 5.2 Many-to-many (infeasible for this architecture)
A single `target_staker[d]` can hold only one active target. Additional mappings are not possible in this architecture; thus, true many-to-many delegation cannot be implemented with a flat mapping.
### 5.3 Many-to-one
* **Delegations:** **$16,383$**
* **3SF footprint:** $3 \times (8\,B \cdot S) = 0.375$ MiB; with optional fields $≈ 0.75$ MiB, plus ~144 KB rolling $Δ$
* **Computation:** $O(Δ log S)$ per slot with root replacement (no merging of roots)
* **Notes:** Smallest persistent footprint; fully deterministic
### 5.4 Summary
| Mode | Max Delegations | Approx. State | Per-Slot Update | Notes |
|-----------|-----------------|--------------------|-----------------|-----------------------------------------------|
| Many-to-1 | 16,383 | ~**0.38–0.75 MiB** | O(Δ log S) | Native zk-consensus form, constant-size state |
### 5.5 Rewards, Penalties and Slashing Flow
Because reward and penalty indices are cumulative, the Committed Mapping architecture computes rewards algebraically at withdrawal, rather than distributing them in real time.
For a delegator `d` mapped to target `t`, the withdrawable reward is the delegator's stake times the difference between the delegate's cumulative index and the delegator's last-claimed index:
```
reward[d] = stake[d] × (target_reward_index[t] - delegator_reward_index[d])
```
This allows the protocol to track rewards compactly as scalar delta-values rather than lists or quotas, removing the need for direct iteration over lists.
When a delegate `t` earns new rewards or penalties, only its `target_reward_index[t]` is
incremented; individual delegators retrieve their proportional gains later during withdrawal/ undelegation/ exit.
**Slashing** follows the unified in-place rule: both the delegate’s and delegators’ `stake[]` entries are reduced proportionally to their participation weights, with the indices remaining unchanged.
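The index-based accounting can be sketched on a toy state; the arrays mirror the layout in §5.1, and all values are illustrative:

```python
S = 4                                        # toy staker count
stake                  = [0.0, 10.0, 20.0, 100.0]
target_staker          = [None, 3, 3, None]  # delegators 1, 2 -> delegate 3
target_reward_index    = [0.0] * S           # cumulative, per delegate
delegator_reward_index = [0.0] * S           # last claimed, per delegator

def accrue(t, reward_per_unit_stake):
    """A new reward only bumps the delegate's cumulative index."""
    target_reward_index[t] += reward_per_unit_stake

def withdrawable(d):
    """Algebraic reward at withdrawal: stake times the index delta."""
    t = target_staker[d]
    return stake[d] * (target_reward_index[t] - delegator_reward_index[d])

accrue(3, 0.05)          # delegate 3 earns 5% per unit of delegated stake
print(withdrawable(1))   # 0.5
print(withdrawable(2))   # 1.0
```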
This delegation model could be described as exact, state-minimal, and reorg-proof: **a well-suited candidate for a zk-proved consensus**.
---
## **6 Stake Semantics (unified across models)**
### Deriving PoS weight
Delegations reference `stake[]` in all analysed models but do not alter it, as it holds only self-owned balances.
Cumulative stake is derived, dependent on model, from the same formula:
- `cumulated_stake[t] = stake[t] + Σ(stake[d] for all d delegating to t)`, if many-to-1 delegation applies
- `cumulated_stake[t] = stake[t] + Σ(stake[d] × quota[d,t])`, if many-to-many delegation applies
Some of the models store persistent cumulated stake (PoS voting weight) data, others derive it algebraically from the core structs they do keep.
### Slashing the stake
* Applied in-place at the slashing moment.
* Delegation mappings and reward indices do not change because of slashing.
* Slash is split **proportionally** between a delegate and its inbound delegators using stake - or weight-based shares already available in each model.
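Both formulas reduce to one helper when many-to-one delegations are treated as quota 1.0; a sketch with illustrative values:

```python
def cumulated_stake(t, stake, inbound):
    """PoS voting weight of delegate t. `inbound` lists (delegator, quota)
    pairs; many-to-one is the special case where every quota is 1.0."""
    return stake[t] + sum(stake[d] * quota for d, quota in inbound)

stake = {0: 100.0, 1: 40.0, 2: 60.0}
# Delegator 1 delegates fully, delegator 2 delegates half its weight.
print(cumulated_stake(0, stake, [(1, 1.0), (2, 0.5)]))  # 170.0
```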
---
## **7 Core Structural Tools: CSR and LWW**
### **7.1 Compressed Sparse Row (CSR) Architecture**
The vertical delegation models in this study rely on the [Compressed Sparse Row (CSR)](https://en.wikipedia.org/wiki/Sparse_matrix) pattern to store large sets of delegation relationships efficiently and deterministically. CSR is well suited to the delegation context because the delegation matrix ($N_{delegators} \times T_{delegates}$) is sparse: only a small fraction of its entries are populated, and the rest stay unallocated.
CSR decomposes delegation data into **parallel dense arrays** (columns) accompanied by an **offset index**, avoiding nested or per-object lists.
Together, the arrays define contiguous *slices* of delegations for each delegate.
For example, all delegations to target `t` live in:
```
range(delegate_offsets[t].start, delegate_offsets[t].start + delegate_offsets[t].count)
```
Each record across the arrays at the same index collectively forms the logical tuple
`(delegator_index, target_index, quota, update_counter)`.
#### Advantages of CSR
- **Compactness:** state scales with actual delegations ($E$), not reserved capacity.
- **Contiguity:** per-target reward/penalty distribution is a linear scan.
- **Determinism:** merging finalized deltas or reorg rollbacks are simple range operations.
- **ZK-friendliness:** each column can be separately Merkle-committed for inclusion proofs.
This structure could replace the heavy nested arrays of the horizontal model with a global, linear, memory- and proof-efficient ledger of delegations.
---
### **7.2 Last-Write-Wins (LWW) via Update Counters**
Delegation state transitions — **delegation**, **undelegation**, and **redelegation** — use a
monotonic update counter to enforce Last-Write-Wins semantics.
Each delegator maintains an incrementing counter (`update_counter[d]`) that increases with every delegation-related operation.
When multiple records exist for the same delegator within temporary deltas, the entry with the highest update counter represents the canonical active mapping.
#### Purpose
- Prevents **replay or reordering** of outdated delegation events.
- Provides a **total causal order** on delegation state.
- Ensures merges between deltas and the base CSR are deterministic.
- Allows reorg-safe rollbacks: lower counters can be invalidated safely.
#### Lifecycle application
1. **Delegation:** new target staker assigned → increment counter.
2. **Undelegation:** clear mapping → increment counter.
3. **Redelegation:** update to new target staker → increment counter.
This is a possible approach to conflict resolution at slot finality, i.e. for concurrent or replayed updates.
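A minimal sketch of LWW resolution over a batch of delta records; the tuple layout and the use of `None` for an undelegated state are assumptions of this sketch:

```python
def lww_resolve(records):
    """Per delegator, keep the record with the highest update_counter.
    `records` are (delegator, target, update_counter) tuples drawn from
    the temporary delta buffers; target None marks an undelegation."""
    canonical = {}
    for d, target, counter in records:
        if d not in canonical or counter > canonical[d][1]:
            canonical[d] = (target, counter)
    return canonical

deltas = [
    (7, 3, 1),     # delegate to staker 3
    (7, None, 2),  # undelegate
    (7, 5, 3),     # redelegate to staker 5: highest counter wins
]
print(lww_resolve(deltas))  # {7: (5, 3)}
```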
---
## **8 Reorg Determinism and the 3SF Window**
- Fast finality window ($K$ = 2–3)
- Only the last 2–3 slots may reorg; everything older is finalized.
- Deterministic ordering with tools like LWW
- Every delegation event (delegation / undelegation / redelegation) increments a per-delegator `update_counter`. Within a slot, events are ordered by `(slot, op_index)`; across slots, higher `update_counter` (or later slot) wins.
- CSR models
- Under these models, we keep per-slot delta buffers; on reorg, dropped deltas are to be deleted and aggregates reverted. On finality, deltas are to be merged into the base CSR deterministically (LWW-filtered, sorted by target, then delegator, then slot/op index). Only the last $K$ buffers are retained.
- Committed Mapping.
- Each slot commits fresh roots for `target_staker` and cumulative indices. Reorgs swap non-finalized roots; no merge step is required for this model.
**Result**: reorg handling is done locally, in a deterministic way, across all architectures.
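The windowed delta-buffer mechanics can be sketched with a bounded deque; `apply_slot`, `reorg`, and the merge callback are illustrative names, not spec functions:

```python
from collections import deque

K = 3                      # fast-finality window (non-finalized slots)
buffers = deque(maxlen=K)  # one delegation delta-set per open slot

def apply_slot(delta_set, base, merge):
    """Admit a new slot; the slot falling out of the window is finalized
    by deterministically merging its delta-set into the base ledger."""
    if len(buffers) == buffers.maxlen:
        merge(buffers[0], base)  # e.g. an LWW-filtered CSR merge
    buffers.append(delta_set)    # oldest buffer is evicted automatically

def reorg(depth):
    """Discard the newest `depth` non-finalized delta-sets; the finalized
    base ledger is never touched."""
    for _ in range(min(depth, len(buffers))):
        buffers.pop()
```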
## **9 Comparative Results**
### 9.1 State Size Comparison
| Architecture | Many-to-many | Many-to-1 | Scaling Behavior |
|-----------------------|--------------|--------------------|-------------------------------------------------|
| **Horizontal** | ~**4.0 GiB** | ~**4.0 GiB** | Quadratic for D×T; linear for single-target |
| **CSR** | ~**2.0 GiB** | ~**0.63–0.88 MiB** | Linear in active delegations; reorg-stable |
| **Committed Mapping** | - | ~**0.38–0.75 MiB** | Constant-size commitment; multiplicity-agnostic |
### 9.2 Computational Overhead Summary
| Architecture | Slot Apply | Reorg Handling | Merge / Proof Behavior |
|-----------------------|------------|-----------------------|------------------------------------------|
| **Horizontal** | O(D) | Full rewrite/journals | Non-deterministic without extra rules |
| **CSR** | O(Δ) | Deterministic merge | Stable via LWW counters |
| **Committed Mapping** | O(Δ log S) | Root replacement | zk-proof enforces transition correctness |
---
## **10 Conclusions**
Although no model is perfect, the required functionality is preserved across the analysed designs, and none contravenes the fundamentals of the projected system. Under fast finality, assuming $N ≈ 16K$ attesters, we can assess that:
- **Committed Mapping** provides the **smallest and most predictable** delegation-state footprint (sub-MiB), invariant to multiplicity at the consensus layer and ideal for recursive zk proofs.
- **CSR** offers the **best native compromise**: deterministic merges, contiguous slices, and linear scaling with active delegations; well-suited for transparent, auditable consensus.
- **Horizontal** is intuitive but likely impractical for lean Consensus use due to quadratic state growth and heavy reorg handling. It can still be useful as a conceptual baseline for research.
**Overall:** Many-to-one delegation delivers compact, verifiable state across all architectures, while many-to-many rapidly stresses memory in Horizontal and CSR layouts.
The Committed Mapping structure marks the **asymptotic minimum** for consensus-level delegation state in lean, zk-friendly designs.