# All Roads Lead To CoW Chain
This research proposal introduces CoW Chain, a purpose-built decentralised layer for cross-chain intent settlement. Rather than rehashing known centralisation challenges, this document focuses exclusively on concrete solutions and technical architectures that will enable CoW Protocol to achieve true decentralisation while expanding its cross-chain capabilities. The proposal is structured into distinct components, each addressing a specific aspect of the system architecture.
It aims to provide a realistic roadmap that can be implemented incrementally, addressing critical points of centralisation while maintaining the performance and efficiency that users expect from CoW Protocol.
# Consensus and Validator Mechanism
CoW Chain uses a consensus mechanism for decentralised intent settlement across blockchains. Based on proof-of-stake (PoS) systems, it uses validators to maintain a distributed ledger of trading intents, auctions, and settlements. This replaces centralised components with a verifiable consensus system that prevents censorship.
The consensus layer moves from trusted off-chain computation to a distributed network with clear accountability. By keeping validator consensus separate from solver execution, the system maintains efficiency while removing central points of failure.
This mechanism has different components working together to achieve decentralisation across the stack.
## Epoch based state aggregation
CoW Chain operates through a time-bound epoch system that efficiently manages state transitions while minimising transaction costs on settlement chains. Each epoch represents a fixed period, preferably defined as a predetermined number of blocks rather than wall-clock time (e.g. one hour), since blockchains are not time-continuous systems. During each epoch, CoW Chain:
1. **Aggregates Epoch State**: The protocol collects and processes all relevant data, including order settlements, validator performance metrics, cumulative solver/validator rewards, insurance outcomes, settler fee distributions, and detected rule violations, and compiles it for efficient batch processing at epoch boundaries to minimise transaction frequency and gas costs.
2. **Batches State Updates**: Rather than continuously submitting updates to the underlying L1 (e.g. Ethereum), CoW Chain aggregates state changes into periodic checkpoint transactions. This approach reduces gas costs compared to real-time updates.
3. **Manages Validator Set Changes**: Validator entries and exits are restricted to epoch boundaries, providing stability to the network while allowing for periodic reconfiguration of the validator set.
4. **Enforces Penalties and Slashing**: Validator misbehaviour logged during an epoch is processed and enforced at epoch transitions, using an efficient batched approach to punishment. Validator rewards can be distributed in the same way.
At the conclusion of each epoch, designated validators submit a checkpoint transaction to the underlying L1. There are three distinct approaches to implementing this critical function, each with different tradeoffs in terms of security, efficiency, and implementation complexity:
- **Naive Merkle Tree**
In the naive Merkle tree approach, CoW chain organises all epoch data into a standard Merkle tree structure. This comprehensive representation includes all transaction data and settlement events, validator set changes such as additions and removals, economic adjustments including rewards and penalties, and all settlement results encompassing executed trades and prices. At epoch conclusion, a designated validator submits the Merkle root to underlying L1, accompanied by a BLS threshold signature from the validator set to ensure authenticity. While the Merkle root itself is compact at approximately 32 bytes, this approach necessitates maintaining the full tree structure off-chain to generate inclusion proofs when needed. To verify the inclusion of any specific event or transaction, a standard Merkle proof must be generated by traversing the path from root to the target leaf node.
This approach can support fraud proofs and optimistic execution of state root hashes.
This approach has low implementation complexity but incurs higher gas costs for verification due to larger proofs.
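To make the checkpoint idea concrete, here is a minimal Python sketch of the naive approach: epoch events become leaves of a standard Merkle tree, the ~32-byte root is what would be posted to L1, and any event can later be proven against it with an inclusion proof. The event encoding, hash layout, and example data are illustrative assumptions, not the final CoW Chain format.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                       # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    """Return (sibling_hash, sibling_is_right) pairs from leaf towards the root."""
    level, proof = [h(leaf) for leaf in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling > index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def merkle_verify(root: bytes, leaf: bytes, proof: list[tuple[bytes, bool]]) -> bool:
    node = h(leaf)
    for sibling, sibling_is_right in proof:
        node = h(node + sibling) if sibling_is_right else h(sibling + node)
    return node == root

# Epoch data (settlements, validator changes, rewards, ...) serialised as leaves.
epoch_events = [b"settlement:order-42", b"validator_join:0xabc", b"reward:solver-7"]
root = merkle_root(epoch_events)                 # ~32 bytes posted to the underlying L1
proof = merkle_proof(epoch_events, 0)            # proves inclusion of the first event
assert merkle_verify(root, epoch_events[0], proof)
```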
- **Merkle Mountain Ranges**
The Merkle Mountain Range (MMR) approach uses the inherently append-only nature of CoW Chain's state transitions to achieve superior performance. Unlike standard Merkle trees, MMRs are optimised for sequential append operations, which aligns with the workflow of continuously adding new orders, settlements, and state changes. In this structure, all new nodes are written sequentially to storage, eliminating the need for random access during updates and enabling highly efficient disk I/O patterns. At epoch conclusion, the validator set collectively signs the "bagged peaks" – the combined hash of all mountain peaks and the total node count – which serves as the authoritative MMR root. With MMRs, as few as 10-11 hashes need to be stored in the smart contract while providing access to the entire historical blockchain data. More information about how they work, along with benchmarks, can be found here [[1]](https://commonware.xyz/blogs/mmr.html) [[2]](https://commonware.xyz/benchmarks.html).
It provides lower gas costs for frequent updates thanks to efficient append-only operations and minimal storage requirements, but that same structure makes it more complex to implement.
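For comparison, the sketch below shows the MMR-style commitment in a toy form: leaves are only ever appended, the peaks of the maximal perfect subtrees are computed, and a single "bagged peaks" hash is what the validator set would sign. It binds in the leaf count here (production MMRs such as the commonware implementation linked above typically commit to the total node count and use a different node layout), so treat it purely as an illustration of the append-only commitment.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def subtree_root(leaves: list[bytes]) -> bytes:
    """Root of a perfect binary subtree over a power-of-two slice of leaves."""
    if len(leaves) == 1:
        return h(leaves[0])
    mid = len(leaves) // 2
    return h(subtree_root(leaves[:mid]) + subtree_root(leaves[mid:]))

def peaks(leaves: list[bytes]) -> list[bytes]:
    """Split the leaf list into maximal power-of-two chunks, one peak per chunk."""
    out, start, remaining = [], 0, len(leaves)
    while remaining:
        size = 1 << (remaining.bit_length() - 1)   # largest power of two <= remaining
        out.append(subtree_root(leaves[start:start + size]))
        start, remaining = start + size, remaining - size
    return out

def bagged_peaks(leaves: list[bytes]) -> bytes:
    """Fold the peaks right-to-left and bind in the leaf count (simplified)."""
    ps = peaks(leaves)
    acc = ps[-1]
    for peak in reversed(ps[:-1]):
        acc = h(peak + acc)
    return h(len(leaves).to_bytes(8, "big") + acc)

log = [f"event-{i}".encode() for i in range(11)]   # 11 leaves -> peaks of size 8, 2, 1
root = bagged_peaks(log)                           # compact commitment signed by validators
log.append(b"event-11")                            # append-only growth, no random access
assert bagged_peaks(log) != root                   # commitment changes with every append
```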
- **Modular Data Availability**
The modular approach separates concerns between state commitment and data availability. Only essential verification data is submitted to underlying L1, including a compact Merkle root representing the epoch's state, threshold signatures from the validator set, and critical state transition parameters. Meanwhile, all raw transaction data, execution details, and event logs are posted to a dedicated data availability (DA) layer optimised for this purpose. A ZK proving mechanism pipeline can be used if needed, allowing verification that data committed to the DA layer as a block is correct (can be built using SP1). This approach is suitable for a validity proof system which is somewhat easier to verify on-chain.
It minimises gas costs by storing only essential verification data on L1 and using specialised DA layers for bulk data, while offering maximum flexibility for future scaling.
Although the latter two approaches enhance scalability, they're optimised for high-frequency append operations rather than epoch aggregation. Their appropriate implementation would depend on CoW chain's specific epoch duration and block time parameters, with benefits potentially diminishing for longer epoch intervals (especially for Merkle mountain ranges approach).
These approaches allow CoW Chain to operate with a minimal on-chain footprint while maintaining verifiable and transparent state records.
## Consensus and VM Design
### Multiple Proposer BFT
Traditional Proof-of-Stake (PoS) consensus protocols like Tendermint represented a significant advancement by offering deterministic finality and eliminating the possibility of forks. However, they face fundamental throughput limitations due to their reliance on a single-proposer architecture.
In the single-proposer model, consensus progresses through a synchronised sequence where only one validator at a time can propose blocks. This creates several critical bottlenecks that constrain performance regardless of hardware capacity or network conditions:
1. **Proposer Dependency**: The entire network's progress depends on a single validator's availability and honesty during each round. If the designated proposer is malicious or offline, the protocol must wait for a timeout before selecting a new proposer, causing significant delays.
2. **Sequential Processing**: Consensus steps must occur in strict sequence—proposal, validation, voting, and commitment—with each step requiring network-wide synchronisation before the next can begin. This creates a fundamental limit on how quickly blocks can be produced.
3. **Coupling of Data and Consensus**: The dissemination of transaction data is tightly coupled with the consensus process. When a proposer fails, both data propagation and consensus agreement are halted simultaneously, magnifying the impact of any disruption.
4. **Network Synchronisation Overhead**: The entire validator set must synchronise around the actions of a single proposer for every round, creating coordination bottlenecks that prevent the protocol from utilising the full capacity of the network.

In my view, a multi-proposer architecture represents the optimal design pattern for CoW Chain's needs as a permissionless intent settlement layer.
A multi-proposer architecture offers several critical advantages:
- Multiple validators can simultaneously propose transaction batches, significantly increasing the protocol's capacity to process cross-chain settlements in parallel
- Data dissemination occurs independently from consensus ordering, allowing order information to propagate across the network even during consensus disruptions
- The network maintains progress even when some proposers fail, ensuring robust operation even during partial network partitions
- Transaction throughput scales with the number of active validators, allowing the protocol to grow its capacity as adoption increases
Several significant leaderless BFT protocol designs have been introduced in the past few years. Some of the ones in production are [Mir](https://github.com/consensus-shipyard/mir?tab=readme-ov-file), DBFT, [HoneyBadger](https://medium.com/elbstack/understanding-the-honey-badger-consensus-algorithm-240be28d7dc) and [Autobahn](https://arxiv.org/pdf/2401.10369). All of these provide specific features with some complexity and limitations: HoneyBadger's asynchronous resilience but high latency, Mir's parallel transaction processing but complex coordination requirements, DBFT's weak-coordinator approach with limited horizontal scaling, and Autobahn's seamless recovery but increased implementation complexity.
Thus, I think, we can take inspiration from Autobahn's approach to create a consensus mechanism specifically tailored for CoW Chain. By implementing a two-layer architecture that separates data dissemination from consensus ordering, we can achieve both high throughput and resilience to network disruptions. The key part would be parallel data lanes where all validators can simultaneously propose transaction batches containing settlement orders. This approach will allow transaction throughput to scale with the number of validators while maintaining Byzantine fault tolerance. With $n = 3f + 1$ validators (where f is the maximum number of Byzantine failures), the system can theoretically achieve up to n times the throughput of a single-proposer system. Each lane would employ a chained structure with Proofs of Availability (PoAs) requiring $f + 1$ validator signatures, ensuring that at least one honest validator possesses each data proposal. This enables constant-time verification of transaction history without requiring validators to store the entire history of all lanes.
For consensus, we could implement a two-phase protocol with $5$ message delays in the normal case: a Prepare phase requiring $2f + 1$ matching votes to form a `PrepareQC`, followed by a Confirm phase requiring another $2f + 1$ votes to form a `CommitQC`. During gracious intervals with no Byzantine behavior, we can optimise to a fast path requiring only $3$ message delays by forming a `CommitQC` directly when all $n$ votes are received.
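The quorum arithmetic above can be made concrete with a small sketch. The `PrepareQC`/`CommitQC` names follow the text, but the vote plumbing is simplified (real votes would carry BLS signatures over a proposal digest); it only shows when the slow-path ($2f + 1$) and fast-path ($n$) thresholds are reached for $n = 3f + 1$ validators.

```python
from dataclasses import dataclass, field

@dataclass
class QuorumTracker:
    f: int                                                      # max Byzantine validators
    votes: dict[str, set[int]] = field(default_factory=dict)    # phase -> voter ids

    @property
    def n(self) -> int:
        return 3 * self.f + 1                                   # total validator count

    def add_vote(self, phase: str, validator_id: int) -> None:
        self.votes.setdefault(phase, set()).add(validator_id)

    def has_qc(self, phase: str) -> bool:
        """Slow path: 2f + 1 matching votes form a quorum certificate."""
        return len(self.votes.get(phase, set())) >= 2 * self.f + 1

    def has_fast_path(self, phase: str) -> bool:
        """Fast path: all n votes allow forming a CommitQC directly."""
        return len(self.votes.get(phase, set())) == self.n

# Example with f = 1 (n = 4): three prepare votes form a PrepareQC,
# a fourth vote enables the 3-message-delay fast path.
qt = QuorumTracker(f=1)
for validator in (0, 1, 2):
    qt.add_vote("prepare", validator)
assert qt.has_qc("prepare")            # PrepareQC formed
qt.add_vote("prepare", 3)
assert qt.has_fast_path("prepare")     # CommitQC can be formed directly
```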
For CoW Chain's cross-chain settlement use case, this architecture offers particular advantages. The epoch-based consensus cuts align naturally with our auction cycles, providing clear boundaries for settlement processing. Following any network disruption, the system can commit the entire backlog of settlement transactions in a single consensus round, minimising the impact on user experience. Additionally, this approach would enhance censorship resistance by providing multiple paths for transaction inclusion, preventing any single entity from blocking specific settlement orders. This is critical for ensuring fair auctions across multiple blockchains.
This architecture can be designed to incorporate encrypted mempool and ZK light client designs.
Although this multi-proposer architecture offers notable benefits, it comes with limitations. The protocol incurs quadratic $O(n^2)$ communication complexity, making it less scalable for large validator sets (which I don't think will be the case for CoW Chain). Its two-layer design increases implementation complexity due to intricate state management, cross-lane dependencies, and more demanding view change mechanisms. Validators also face higher resource requirements, maintaining state for all lanes. Also, while our multi-proposer architecture has higher communication complexity, we maintain app-specific efficiency by strictly limiting the state model and transaction types; the complexity is contained within consensus messaging rather than execution. Lastly, as a newer approach, it lacks the production testing and security scrutiny of established protocols like Tendermint. The most challenging part of a consensus design like this is building its VM and, if required, making it EVM-compatible.
### ZK Light Client
CoW Chain would implement light clients as smart contracts on various settlement chains (Ethereum, Arbitrum, Gnosis, etc.) responsible for verifying CoW chain's state transitions. These light clients would store minimal historical data - primarily hashes representing CoW chain's epoch states - while enabling verification of transactions and events.
Although light clients fulfil the purpose of verifying transactions and block headers, they need to sync, which is computationally heavy and costs time, CPU cycles, and battery life; ZK light clients can instead validate a ZK proof that a block header is valid.
Instead of validating each header and consensus signature, ZK light clients validate a proof that someone else knows a header chain and the signatures that are required to make a block header that hashes to block hash H. The ZK approach is particularly relevant for our needs. By executing CoW chain's validation logic off-chain within ZK circuits, we can:
1. Drastically reduce gas consumption on settlement chains
2. Handle complex validation rules without burdening the settlement chain
3. Generate concise proofs that can be efficiently verified on-chain
For our epoch-based system, the light client would verify transitions between epoch states rather than individual blocks, with each proof demonstrating that a new epoch state follows validly from the previous one according to CoW Chain's consensus rules.
By organising updates as batches (epochs) and storing only essential hash data (first state, last state, and Merkle root), we can maintain efficient verification capabilities while minimising storage requirements.
There have been significant improvements around ZK light clients, which will also reduce the engineering effort required. Succinct's SP1 has shown promising results for ZK light client proving.
There are also pre-built [circuits](https://github.com/succinctlabs/tendermintx) and PoCs for a [ZK Tendermint light client on Ethereum powered by SP1](https://github.com/succinctlabs/sp1-tendermint-example), along with general-purpose [zkVMs](https://github.com/succinctlabs/tendermintx).

> Note: All performance figures are provisional and should be updated once RISC0 VM 2.0’s benchmarks are fully evaluated.
>
For all settlements completed by a solver/executor, the solver/executor provides:
- A proof that the settlement occurred within a verified epoch on the settlement chain.
- A Merkle proof connecting the specific settlement to the epoch's Merkle root.
which can then be verified by validators so that solver funds can be unfrozen.
CoW chain can also implement a flexible proof routing system that directs different types of settlement proofs to the appropriate verification mechanism based on the source and destination chains involved. This design would support multiple verification methods simultaneously - from gas-efficient light client verification for major chains to oracle-based verification for less frequently used chains - with routes configured immutably during deployment. By aggregating multiple verification strategies, we can optimise for both security and gas efficiency.
If an update doesn't match the current state of the CoW chain, state transitions will be verified permissionlessly using on-chain ZK light clients deployed on settlement chains.
### App Specific Design
In an ideal world, CoW chain is designed as a purpose-built chain optimised exclusively for intent settlement and cross-chain trading operations.
CoW Chain intentionally forgoes EVM compatibility to optimise for its specific use case. The chain's state machine tracks only the essential components for the protocol's operation like:
- Active orders and their execution states
- Solver positions and collateral balances
- Validator stakes and performance metrics
- Protocol parameters and configuration
This focused approach eliminates unnecessary complexity, enabling higher throughput and more efficient state transitions than would be possible with a general-purpose virtual machine.
There are different types of VMs that could fit our use case; the choice depends on the final execution model and on weighing the benefits and drawbacks of each design:
- ISAs: MIPS, RISC-V, x86, ARM
- Virtual machines: WASM
- Blockchain VMs: EVM, BitcoinScript, MoveVM, SVM
Changes to the protocol can be made via hard forks, which will be completely in the hands of CoW DAO.
## Validator Design
Validators stake COW tokens on the underlying L1 (Ethereum), which serves as both economic security and the basis for validator selection.
The staking mechanism creates strong economic alignment between validators and the protocol's integrity. Stake is subject to slashing penalties for misbehavior, including rule violations, settlement manipulation, or censorship. This design ensures validators are financially motivated to maintain protocol security and correct operation across all integrated chains.
A limited number of validators with the highest COW stake-weight constitute the active set, eligible to participate in consensus and earn protocol rewards. This structure creates competitive pressure for validators to maintain substantial stake.
Active validators must operate COW Chain nodes that validate transaction bundles, settlement proofs, and state transitions. They sign attestations confirming the correctness of these operations and submit them to the protocol according to epoch boundaries. Validators who fail to submit valid attestations within the required timeframe do not receive their allocation, with these rewards rolling over to subsequent epochs.
The validator set undergoes updates only at epoch boundaries, creating stable periods of operation while allowing for gradual evolution of the validator ecosystem.
To ensure atomic cross-chain settlements, validators need to control the release of signatures for sequential transaction steps, only authorising the next operation after the previous one is confirmed. This can be achieved using threshold encryption: validators generate shared threshold encryption keys via a DKG protocol, and after each sub-transaction is confirmed on its blockchain, they provide their decryption shares only if they have verified that the previous step was successful.
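A minimal sketch of this sequential-release idea, assuming a placeholder `verify_on_chain` callback standing in for the light-client or finality check and treating the threshold-decryption share as an opaque blob: a validator contributes its share for step k only once step k-1 has been confirmed.

```python
from typing import Callable, Optional

class SequentialReleaser:
    """One validator's gate for releasing decryption shares step by step."""

    def __init__(self, steps: list[str], verify_on_chain: Callable[[str], bool]):
        self.steps = steps                  # ordered sub-transactions of a settlement
        self.verify = verify_on_chain       # e.g. ZK light client / finality check
        self.released = 0                   # number of steps already authorised

    def decryption_share(self, step_index: int, my_share: bytes) -> Optional[bytes]:
        """Release this validator's share for `step_index` only if every
        previous step has been confirmed on its chain."""
        if step_index != self.released:
            return None                                  # out-of-order request
        if step_index > 0 and not self.verify(self.steps[step_index - 1]):
            return None                                  # previous step not final yet
        self.released += 1
        return my_share                                  # contributes towards the threshold

# Usage: a two-leg settlement (lock on the source chain, fill on the destination chain).
releaser = SequentialReleaser(
    steps=["lock_src_tx", "fill_dst_tx"],
    verify_on_chain=lambda tx: True,        # assume both legs confirm in this toy run
)
assert releaser.decryption_share(0, b"share-0") is not None
assert releaser.decryption_share(1, b"share-1") is not None
```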
To prevent validator censorship of transactions, CoW Chain implements a multi-layered protection mechanism:
1. All validators must broadcast verified incoming orders to the entire network, creating shared knowledge of pending orders.
2. Validators reach consensus on which valid orders should be included in the upcoming transaction bundle.
3. Validators who omit valid orders face significant economic penalties, creating strong disincentives for censorship.
4. Secondary inclusion paths activate if censorship is detected, ensuring transaction execution potentially at the expense of the censoring validator.
### Role of validators
Validators perform several critical functions:
- Verifying auction results according to combinatorial auction rules and enforcing fair execution according to protocol rules
- Validating solver transactions and ensuring proper sequencing of fallible/irreversible operations (open to discussion)
- Monitoring solver collateral sufficiency for proposed settlements
- Verifying cross-chain settlements with the help of ZK light clients
- Aggregating state updates into epochs and generating cryptographic commitments
- Monitoring resource locks across chains using light client verification
- Enforcing protocol-specific consensus rules tailored to intent settlement
- Implementing slashing for rule violations
- Managing collateral locking and unlocking based on settlement outcomes (secondary layer)
- Verifying proofs submitted by solvers for settlement execution
- Using threshold cryptography to release signatures for sub-transactions only after confirming previous operations are sufficiently finalised for settlement.
# Cross-Chain Settlement Mechanism
## Resource Locks
Resource locks are essential for ensuring cross-chain settlement atomicity in CoW Chain. These locks guarantee that users cannot withdraw or reallocate funds midway through a settlement process, protecting solvers from being griefed when prices move or when executing multi-step transactions. In our architecture, when a user submits an intent to CoW Chain, validators classify it and secure the relevant assets through a time-bound resource lock. This lock represents a cryptographic commitment that prevents double-spending while the order is being processed. Unlike traditional on-chain locks requiring gas and confirmation waiting, CoW Chain validators enforce these locks directly through consensus, eliminating latency.
The lock creates a crucial separation between fulfillment time and settlement time - solvers can confidently execute destination chain operations without waiting for finality on the source chain. Validators verify lock integrity throughout the settlement process, and threshold cryptography ensures that transaction sequence steps are only revealed after proper lock validation. For cross-chain settlements, lock duration is calibrated to allow sufficient time for solvers to execute the destination chain transaction and submit proof of fulfillment. Once settlement is verified, validators release the locks, either transferring assets to solvers (for successful execution) or returning them to users (in case of failure).
This mechanism removes double-spend and finality risks from solvers, transferring them to the protocol's validator set, which has stronger crypto-economic security guarantees through staking and slashing conditions.
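As a rough illustration of this lifecycle, here is a small sketch of a lock object with the states described above; the state names, expiry handling, and amounts are assumptions, and in the real system the lock is enforced by validator consensus rather than a single object.

```python
from enum import Enum
import time

class LockState(Enum):
    LOCKED = "locked"
    RELEASED_TO_SOLVER = "released_to_solver"
    RETURNED_TO_USER = "returned_to_user"

class ResourceLock:
    def __init__(self, order_id: str, amount: int, duration_s: int):
        self.order_id = order_id
        self.amount = amount
        self.expiry = time.time() + duration_s    # calibrated to execution + finality time
        self.state = LockState.LOCKED

    def settle(self, settlement_proof_valid: bool) -> LockState:
        """Validators resolve the lock once the settlement proof is checked."""
        if self.state is not LockState.LOCKED:
            raise RuntimeError("lock already resolved")
        if settlement_proof_valid:
            self.state = LockState.RELEASED_TO_SOLVER    # solver receives the user's assets
        else:
            self.state = LockState.RETURNED_TO_USER      # failed settlement, refund user
        return self.state

    def expire(self) -> None:
        """If no proof arrives before expiry, funds go back to the user."""
        if self.state is LockState.LOCKED and time.time() >= self.expiry:
            self.state = LockState.RETURNED_TO_USER

lock = ResourceLock("order-42", amount=1_000, duration_s=3_600)
assert lock.settle(settlement_proof_valid=True) is LockState.RELEASED_TO_SOLVER
```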
Although there aren't many protocols actively working on resource locks, [Uniswap's Tribunal](https://github.com/Uniswap/the-compact), [OneBalance Resource Locks](https://docs.onebalance.io/onebalance-vision/why-resource-locks), and [Pimlico's lock mode](https://docs.pimlico.io/infra/flash-fund/modes/resource-lock) demonstrate practical implementations that could be adapted for our architecture.
When combined with EIP-7702, batching capabilities, and privilege de-escalation features, these mechanisms provide well-suited building blocks for resource lock design. We can customise these approaches to further align with our model, although we would still have to rely on a GMP protocol somewhere.
## Solver Settlement Contract
For cross-chain settlement to function efficiently, each solver deploys specialised settlement contracts on every supported chain. These contracts will be proxy factory instances with centralised nullifiers and transfer approvals, developed by CoW DAO. They serve as execution endpoints for trades, interfacing between the consensus layer and the native functionalities of each blockchain.
Settlement contracts perform several critical functions:
- Receive and hold user funds during the trading process
- Execute swaps against on-chain liquidity sources when needed
- Implement standardised event emission for settlement verification
- Manage atomic execution following CoW chain-specified transaction ordering (fallible/reversible)
Settlement contracts emit standardised events upon successful execution, creating consistent verification touch points that the validators can observe and confirm. These events will follow a uniform format containing intent identifiers, solver identifier, execution parameters and commitments to execution states. This event format emitted by settlement contracts enables proof generation for cross-chain verification through ZK light clients. Permit2 will be used to pull funds into the settlement contracts.
Here, since each solver has a settlement contract, it is their responsibility to collect protocol fees from users. This delegates fee management to solvers while maintaining protocol revenue through the validator-monitored settlement process. Each settlement contract is linked to a specific solver's identifier within CoW Chain, with authorisation controls that prevent unauthorised execution. Settlement contracts also need to interact with the resource lock mechanisms specific to each chain. Additionally, for non-EVM-compatible chains, each settlement contract includes chain-specific adapters that translate CoW Chain's universal settlement instructions into native transactions appropriate for that blockchain's ecosystem. These adapters handle differences in transaction formats, gas mechanisms, event structures, and smart contract interactions across diverse chains.
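A hypothetical shape for the standardised settlement event, mirroring the fields listed above (intent identifiers, solver identifier, execution parameters, and a commitment to execution state); the exact fields and canonical encoding would be fixed by CoW DAO, and the JSON serialisation here is only a stand-in for whatever encoding the verification circuits end up consuming.

```python
from dataclasses import dataclass
import hashlib
import json

@dataclass(frozen=True)
class SettlementEvent:
    intent_ids: tuple[str, ...]      # orders settled in this execution
    solver_id: str                   # CoW Chain identifier of the executing solver
    chain_id: int                    # settlement chain the event was emitted on
    execution_params: dict           # prices, amounts, routing details
    state_commitment: str            # hash committing to the post-execution state

    def leaf(self) -> bytes:
        """Canonical encoding used as a Merkle leaf for cross-chain verification."""
        payload = json.dumps(
            {
                "intent_ids": list(self.intent_ids),
                "solver_id": self.solver_id,
                "chain_id": self.chain_id,
                "execution_params": self.execution_params,
                "state_commitment": self.state_commitment,
            },
            sort_keys=True,
        ).encode()
        return hashlib.sha256(payload).digest()

event = SettlementEvent(
    intent_ids=("order-42",),
    solver_id="solver-7",
    chain_id=100,
    execution_params={"sell_amount": "1000", "buy_amount": "998"},
    state_commitment="0xabc123",
)
print(event.leaf().hex())            # value a solver would prove inclusion of
```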
## Cross-Chain State Verification
We need to employ a verification framework for cross-chain state and settlement verification. As already discussed above, ZK light clients can be used to verify CoW Chain state transitions with minimal gas costs. For cross-chain settlement verification, when solvers complete settlements, they generate inclusion proofs demonstrating that their settlement events exist within a specific block. These proofs are submitted alongside state verification data to establish the settlement's validity. The flow can look like this:
- A solver executes a settlement on a destination chain via their settlement contract
- The settlement contract emits standardised events containing all execution details
- To prove settlement completion and unlock collateral, the solver needs to:
    - Generate a proof that the settlement event exists on the destination chain
    - Submit this proof to the collateral chain's contract that manages solver bonds
- The verification happens directly on the collateral chain through its light client:
    - The light client verifies that the block containing the settlement event is valid
    - The contract verifies the Merkle proof showing the event exists in that block
- If verification succeeds, the contract automatically releases the appropriate amount of collateral.
For any disputes in this system, anyone can submit valid proofs to the bonded validator set and final determination can be made using threshold signatures.
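A sketch of how the collateral-chain contract could perform this check, assuming the ZK light client exposes a set of verified destination-chain roots; contract and function names are hypothetical and the Merkle verification mirrors the earlier naive-tree sketch.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def _merkle_verify(root: bytes, leaf: bytes, proof: list[tuple[bytes, bool]]) -> bool:
    node = _h(leaf)
    for sibling, sibling_is_right in proof:
        node = _h(node + sibling) if sibling_is_right else _h(sibling + node)
    return node == root

class CollateralManager:
    """Runs on the collateral chain; trusts only roots verified by the light client."""

    def __init__(self, light_client_roots: dict[int, set[bytes]]):
        self.roots = light_client_roots       # chain_id -> verified block/event roots
        self.locked: dict[str, int] = {}      # solver_id -> locked collateral amount

    def lock(self, solver_id: str, amount: int) -> None:
        self.locked[solver_id] = self.locked.get(solver_id, 0) + amount

    def release(self, solver_id: str, amount: int, chain_id: int,
                block_root: bytes, event_leaf: bytes,
                proof: list[tuple[bytes, bool]]) -> bool:
        # 1. the block containing the settlement event must be verified by the light client
        if block_root not in self.roots.get(chain_id, set()):
            return False
        # 2. the settlement event must be included in that block (Merkle inclusion proof)
        if not _merkle_verify(block_root, event_leaf, proof):
            return False
        # 3. verification succeeded: unlock the corresponding collateral for the solver
        self.locked[solver_id] -= amount
        return True
```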
Cross-chain interactions introduce significant complexity when managing external states, as they require reconciling data and state across disparate blockchain ecosystems. For example, swapping DOT (Polkadot) for ETH (Ethereum) necessitates not only ensuring the DOT state on Polkadot’s blockchain is updated correctly but also synchronising this change with Ethereum’s ledger for ETH. In the case of transactions between ecosystems, communication between them becomes necessary. From the perspective of Ethereum, ownership of DOT is ‘external state’ information. This external state information originates from another chain, outside of Ethereum, and can thus not be proven by Ethereum. For chains like these without native support for light clients, CoW Chain validators produce threshold signatures attesting to observed settlement events, providing a lightweight yet secure verification alternative.
## Permissionless Liquidity
The transition from single-chain swaps to cross-chain settlements introduces a fundamental challenge: providing permissionless liquidity for solvers in an atomic and trustless world. Despite extensive research, this problem remains, in my view, unsolved in the current ecosystem.
Cross-chain settlements face multiple obstacles that single-chain operations don't encounter. When executing multi-hop or even cross-chain swaps involving bridges, participants must trust the security of those bridges, a big risk given the numerous bridge exploits seen in recent years. Even with theoretically secure bridges, finality risks persist across chains with different consensus mechanisms and finality guarantees, creating a worse UX.
While this [article's](https://forum.cow.fi/t/cross-chain-cows/2928) approach of matching complementary cross-chain intents offers a compelling solution, these matching opportunities occur relatively infrequently, especially for less common trading pairs and short standing orders. This limitation means that solvers still require other liquidity sources for most cross-chain settlements. PS: A lot of research needs to be put into this.
# Orderbook decentralisation
The orderbook, in the context of the new architecture, will be a permissionless and decentralised mempool preserving unfilled user intents. A pseudorandomly selected subset of bonded validators would construct batches from the encrypted mempool at regular intervals (similar to today's autopilot), submit them for validator approval, and then trigger solver competition once committed; this mempool will gossip directly with the validator set.
There could be a model where there is no canonical orderbook and users directly submit their intents to validators, but this would lead to information asymmetry, order censorship, and preferential relationships with validators which would undermine fair competition as orders may remain completely unknown to most validators until settlement. Drawbacks of using EIP-4844, using blob space for order posting are already mentioned [here](https://github.com/cowprotocol/research/issues/20). A better approach is to implement an encrypted mempool, where user orders are encrypted and only decrypted after the batch is committed.
## Pre-Trade Privacy : Encrypted Mempool
The approach we follow for pre-trade privacy draws significant inspiration from [Shutter Network's threshold encryption mempool model](https://docs.shutter.network/docs/shutter/research/the_road_towards_an_encrypted_mempool_on_ethereum), where CoW Chain's bonded validators participate in a DKG protocol to collectively generate an encryption key without any single party having access to the complete key. Rather than encrypting each transaction individually, orders are encrypted under an epoch key, with decryption only possible after the batch is committed; order details remain encrypted until validators have committed to the batch composition, preventing manipulation of order inclusion based on content. An epoch-based key generation approach suits the model for practical reasons:
- This approach eliminates the DKG ceremony across multiple batches, avoiding the latency and compute of per-batch key generation.
- A single epoch key means orders never need to be re-encrypted between auctions, removing transition-phase vulnerabilities.
- Orders that carry over into later batches remain protected under the same key, preventing accidental exposure.
Pre-trade privacy would also prevent MEV extraction by removing visibility into pending intents. Front-runners, sandwich bots, and validators have no data to act on until batch closure, which ensures no one can reorder, insert, or exploit orders.
## Censorship and Collusion Resistance
Unlike traditional systems where block proposers can arbitrarily censor transactions, we implement protections against both censorship and collusion. Some measures that can be incorporated for this can be:
- When validators receive encrypted orders, they must broadcast them to the entire validator network, creating a shared knowledge base of pending intents without revealing their contents. This mechanism ensures that orders cannot silently disappear. Also, by employing multiple concurrent BFT lanes, the system creates multiple paths for order inclusion.
- Validators collectively verify that batches proposed for commitment include all valid orders; batches missing known valid orders can be rejected, and validators found to have omitted valid orders face slashing.
- Collusion is unlikely among well-bonded validators, but if validators do collude over threshold-encrypted batches, there is practically no objective way of identifying it.
## Silent Vs. Batched Threshold Encryption
For encrypted mempool’s threshold encryption, we must choose between Silent Threshold Encryption (STE) and Batched Threshold Encryption (BTE), each with different approaches detailed in the research papers [[1]](https://eprint.iacr.org/2024/263.pdf) [[2]](https://eprint.iacr.org/2024/1516.pdf) and minimal implementations [[1]](https://github.com/guruvamsi-policharla/silent-threshold-encryption) [[2]](https://github.com/guruvamsi-policharla/batched-threshold-pp). After thorough assessment of both, I feel that BTE offers better alignment with CoW chain's architectural requirements.
- BTE was specifically designed to address the mempool privacy problem. It directly solves the challenge of batch decryption with minimal communication overhead, which aligns well with the epoch-based auction mechanism. In our epoch-based auctions, each round settles $B$ pending intents. BTE's decryption then takes $O(N+B)$ time, collecting $N$ shares and interpolating all $B$ ciphertexts, so as market activity (and $B$) grows or shrinks, total work grows linearly and predictably per auction cycle.
- Regardless of how many intents $B$ are in play, each of the N validators emits one 48-byte share. Total communication is $O(N)$, insulating network gossip and on-chain posting costs from transaction volume spikes and keeping per-epoch gas usage minimal.
- The efficiency measurements (8.5 ms encryption time, 3.2 s batch decryption for 512 transactions, and 3.0 s reconstruction time) are slightly slower than STE, but BTE's polynomial-commitment trick only lets you open exactly the $B$ orders you choose in each auction, leaving every other intent ciphertext irrecoverable even though they lived in the same mempool. That map-and-open design gives strong "pending transaction" privacy by construction, which STE does not provide. Lastly, STE leans heavily on witness-encryption machinery, which hasn't seen the same level of production use as KZG, Shamir, etc.
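To illustrate the communication pattern only, here is a toy Shamir-secret-sharing sketch (explicitly not the BTE construction from the papers above, and with a hash-based keystream instead of real ciphertext openings): each of the $N$ validators holds one share of an epoch key produced by a DKG, publishes a single share regardless of the batch size $B$, and any threshold subset reconstructs the key that opens the whole batch.

```python
import hashlib
import secrets

P = 2**127 - 1                                    # prime field for the toy example

def share_secret(secret: int, n: int, t: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

def keystream(key: int, index: int) -> int:
    """Toy per-order keystream derived from the epoch key."""
    return int.from_bytes(hashlib.sha256(f"{key}:{index}".encode()).digest(), "big")

N, THRESHOLD, B = 7, 5, 3                         # validators, threshold, batch size
epoch_key = secrets.randbelow(P)                  # in reality produced by the DKG
shares = share_secret(epoch_key, N, THRESHOLD)    # one share per validator: O(N) comms

orders = [111, 222, 333]                          # toy plaintext intents
ciphertexts = [m ^ keystream(epoch_key, i) for i, m in enumerate(orders)]

recovered_key = reconstruct(shares[:THRESHOLD])   # any t validators suffice
assert [c ^ keystream(recovered_key, i) for i, c in enumerate(ciphertexts)] == orders
```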
# Solver Decentralisation
## Collateral vs. Inventory Model
Under an inventory-based approach, each solver must front its own on-chain balances to satisfy user orders. When a batch executes, the solver's personal wallet or "inventory" moves funds directly into the settlement contract or directly to users' wallets, and any execution failure simply reverts on their own balance. No external bond is ever posted, so there's no on-chain collateral to slash if the solver compromises trust and performs malicious interactions. While this model minimises setup, it offers no economic guarantee and requires solvers to have liquidity across chains, which is quite difficult in a world of infinite chains.
All the drawbacks and comparisons between these models are already covered in this [article](https://forum.cow.fi/t/cross-chain-cows/2928), so we jump straight into a collateral-based design. The obvious questions are where the solver collateral should reside and for how long. There are multiple ways solver collateral can be arranged: a bond set up on each chain, a unified bond on a major chain (Ethereum/Gnosis), or a unified bond on CoW Chain. Each approach has benefits and drawbacks:
- **Per-chain bonds**: separate stakes are locked on each settlement chain, which allows each chain to slash misconduct locally and release funds immediately after its finality period. However, it fragments capital and requires solvers to maintain multiple deposits and approvals across different chains.
- **Unified bond on a major chain**: provides the strongest economic security with only one deposit to manage, but slashing requires cross-chain messaging through bridges or light clients, introducing latency, bridge risk, and higher gas costs, while keeping funds locked for the slowest chain's finality period.
- **Unified bond on CoW Chain**: exists within the same environment that handles auctions and settlements, enabling instant protocol-level slashing with minimal fees and a single source of truth. However, security is limited to CoW Chain's validator set, and recovery mechanisms are needed in case of a chain halt.

The most compelling approach among these seems to be the unified bond on CoW Chain. Similarly to [swaps.io](http://swaps.io), a unified collateral pool can be established on CoW Chain, where collateral is allocated to specific chains and remains on a single chain while being logically tracked across multiple chains. This system would use counter-based tracking with separate "locked" and "unlocked" collateral states per chain. When a solver executes an order on any settlement chain, the corresponding amount of collateral is marked as locked for that specific chain without requiring actual asset transfers. After the solver provides proof of successful settlement, the locked amount returns to the available pool. Additionally, for maximum efficiency, this approach can rely primarily on direct light client verification for collateral locking and unlocking based on settlements, with validators acting as a secondary layer that can intervene in exceptional circumstances.
The collateral lockup duration should be at least the sum of the solver's window to submit and confirm all on-chain transactions ($T_{\text{exec}}$) and the longest finality plus challenge period on any settlement chain involved ($T_{\text{finality}}$), i.e. $T_{\text{lock}} \ge T_{\text{exec}} + T_{\text{finality}}$, to prevent solvers from overcommitting.
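A small sketch of this counter-based tracking and the lockup lower bound, with illustrative names and amounts (the real accounting would live in CoW Chain state and be driven by light-client verified settlement proofs).

```python
class UnifiedCollateral:
    """Unified solver bond on CoW Chain, logically tracked per settlement chain."""

    def __init__(self, total: int):
        self.total = total                        # bond posted once on CoW Chain
        self.locked: dict[int, int] = {}          # chain_id -> currently locked amount

    def available(self) -> int:
        return self.total - sum(self.locked.values())

    def lock(self, chain_id: int, amount: int) -> bool:
        """Reserve collateral for a settlement on `chain_id` (no asset transfer)."""
        if amount > self.available():
            return False                          # solver would be overcommitted
        self.locked[chain_id] = self.locked.get(chain_id, 0) + amount
        return True

    def unlock(self, chain_id: int, amount: int) -> None:
        """Called once a valid settlement proof is verified for that chain."""
        self.locked[chain_id] -= amount

def min_lockup(t_exec: float, t_finality: float) -> float:
    """Lower bound on lockup duration: T_exec + T_finality, as argued above."""
    return t_exec + t_finality

bond = UnifiedCollateral(total=100_000)
assert bond.lock(chain_id=1, amount=60_000)          # settlement on Ethereum
assert not bond.lock(chain_id=100, amount=50_000)    # would exceed available collateral
bond.unlock(chain_id=1, amount=60_000)               # proof verified, back to the pool
```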
## Solver Bond Reduction
The current CoW Protocol architecture requires solvers to provide substantial bonds ($500K in stablecoins + 1.5M COW) to participate, creating a significant barrier to entry. The core of this problem is that capital sitting locked in a bond has an opportunity cost. This could be brought down significantly by combining two concepts: user-specified insurance (also discussed [here](https://github.com/cowprotocol/research/issues/7)) and fixing solver collaterals. User-specified insurance means users indicate in their order how much bond they expect from solvers. As discussed before, a user who posts an order with an extremely large insurance figure is trying to force solvers to lock a disproportionate slice of collateral, potentially crowding out smaller competitors. To neutralise this behaviour, a cost-based cap can be applied to the user-specified insurance, where $\text{insurance} \le \kappa \times \text{orderNotional}$ (e.g. $\kappa = 0.6$). An alternative, or perhaps complementary, approach is a built-in economic penalty that the protocol adds to each solver's bid so that using up a lot of collateral (because users asked for large insurance) makes the bid look less attractive. This uses a funding-cost penalty coefficient $\lambda$ (e.g. $\lambda = 0.0005$), which is simply a conversion factor decided by CoW DAO that translates one dollar of collateral a solver must freeze into an equivalent negative adjustment in the auction score. The score would then be:
$\text{score} = \text{grossPrice} - \lambda \times \text{(total insurance in batch)}$
So, if two solvers quote the same trade price, the one that needs less frozen collateral gets a higher score and wins. A user who asks for huge insurance forces every solver to lock more capital; that increases the term $\lambda \times \text{insurance}$ and therefore reduces the solver's score unless the solver raises the trade price to offset the penalty. In effect, the extra insurance cost is pushed back onto the user as a worse execution price, so users will only request large insurance when they truly value the extra protection.
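A worked sketch of the cap and penalty with purely illustrative numbers ($\kappa$, $\lambda$, prices, and notionals are placeholders pending the calibration noted below):

```python
def capped_insurance(requested: float, order_notional: float, kappa: float = 0.6) -> float:
    """Cost-based cap: insurance <= kappa * orderNotional."""
    return min(requested, kappa * order_notional)

def score(gross_price: float, total_insurance: float, lam: float = 0.0005) -> float:
    """score = grossPrice - lambda * (total insurance frozen for the batch)."""
    return gross_price - lam * total_insurance

order_notional = 100_000
insurance = capped_insurance(requested=500_000, order_notional=order_notional)
print(insurance)                                              # capped at 60000.0

# Two solvers quote the same gross price; the bid that freezes less collateral wins.
print(score(gross_price=100_050, total_insurance=insurance))  # 100020.0
print(score(gross_price=100_050, total_insurance=20_000))     # 100040.0
```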
> Note: This is a high-level concept, precise figures must be calibrated through analysis.
>
Regarding the appropriate amount of collateral a solver should post for the settlement of a batch: in my view, it should be determined empirically using historical data, analysis, and testing rather than speculation. I'm deliberately leaving any numeric formula out of this draft; the figure should emerge from continuous measurement and adjustment rather than from a one-off theoretical guess.
## Executor Market Approach
To further reduce solver bonds and let smaller teams compete on pure strategy, without tying up large bonds or facing slashing risk, we can introduce an open executor market: solvers focus on producing the best batch, while specialised executors, already bonded, race to underwrite and settle those batches. This slashes the capital barrier for new solvers, widens competition, and lets collateral be pooled where it's most efficient. In this model, the protocol splits responsibilities between two specialised roles:
- **Solvers**: Focus on finding optimal solutions, optimising for surplus generation without managing settlement execution or maintaining bonds for execution guarantees
- **Executors**: Specialise in reliable cross-chain settlement execution, maintaining bonds, and managing on-chain settlement risk.
In this model each bid is submitted together with a commitment from a pre-bonded executor. Before validators see any bids, the executor posts the required collateral and signs a commitment to settle the corresponding batch within its validity window. That signed commitment is bundled into the solver’s bid, so validators can verify in one step that the proposal is fully execution backed.
A minimal flow for this approach can be:
- The solver finds optimal solutions for a batch in the auction, along with a fee split that the solver and executor would share, and posts them into a solution mempool.
- An executor commits to a solver's batch, locks collateral, and signs a commitment for that batch.
- Solver submits to the auction.
- Validator scores bids, highest score wins immediately.
- Winning executor performs the on-chain transactions and, once finality proofs are posted, its collateral is released (or slashed if anything goes wrong).
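To make the packaging concrete, here is a sketch of how a bid could carry the executor's signed commitment so validators can check that it is execution-backed in one step; field names and the signature check are placeholders for the real commitment format and signature verification.

```python
from dataclasses import dataclass

@dataclass
class ExecutorCommitment:
    executor_id: str
    batch_hash: str            # hash of the solver's proposed batch
    collateral_locked: int     # amount the executor pre-locked for this batch
    validity_window: int       # last block/epoch by which settlement must land
    signature: str             # executor's signature over the fields above

@dataclass
class SolverBid:
    solver_id: str
    batch_hash: str
    surplus: float             # score component produced by the solver's solution
    fee_split: float           # share of rewards going to the executor
    commitment: ExecutorCommitment

def is_execution_backed(bid: SolverBid, required_collateral: int,
                        verify_sig=lambda commitment: True) -> bool:
    """Single validator-side check that the bid carries a matching, funded commitment."""
    commitment = bid.commitment
    return (commitment.batch_hash == bid.batch_hash
            and commitment.collateral_locked >= required_collateral
            and verify_sig(commitment))
```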
Although this approach sharpens competition among solvers, it has multiple drawbacks too:
- Once a solver publishes a bundle, another solver could copy it, attach their own signature, and undercut the fee split, diverting the reward without doing any work.
- If the market moves against the solution, the executor, having locked in a fixed cut of the proceeds, faces incentive misalignment: it has already committed to a percentage of the rewards and may end up with reduced rewards or even bear a loss.
- Without posting costs or rate limits, a solver can flood the mempool with meaningless solutions.
The approach offers significant benefits in lowering entry barriers and enabling solver specialisation, but the associated challenges of solution plagiarism, incentive misalignment, and mempool spam introduce operational risks. While promising, it should be adopted only after developing protective mechanisms to address these specific vulnerabilities, as right now the drawbacks seem to outweigh the benefits.
# Final thoughts
This architectural proposal represents a rather ambitious yet achievable path toward decentralisation for CoW Protocol. What's particularly compelling to me is that many components could be initially bootstrapped in a single Actively Validated Service (AVS), allowing for incremental deployment without immediately establishing a full validator set.
The most challenging part about researching and writing about this architecture was its breadth and complexity. There are so many moving parts that it becomes particularly difficult to know where to begin and how to coordinate the numerous interdependent components.
As a side note, I also wanted to dedicate a section to auction design, but given that the [combinatorial auction proposal](https://forum.cow.fi/t/cip-draft-moving-from-batch-auction-to-the-fair-combinatorial-auction/2967) is already well-developed with thorough testing and benchmarking, there isn't much to contribute to its design at this stage. The proposal represents impressive work by the CoW Protocol team, showing significant improvements over the current batch auction mechanism. The benchmark results are particularly compelling: a 33% increase in order throughput and a 25% increase in solver rewards while maintaining fair price distribution among traders.
Given the huge scope, the best course is an incremental rollout: land the highest-impact feature first, watch it under real traffic, then layer in the deeper consensus and proof pieces only once the numbers look good. That keeps each release small enough to audit, limits risk, and lets us learn from each release before moving on.
Looking ahead, the next logical steps would involve deeper analysis of each architectural component, developing wireframes to visualise system interactions (mostly UML diagrams) and creating prototype algorithms to validate the core mechanisms before implementation begins.
Thanks for taking the time to read this proposal. I look forward to feedback and happy to clarify any point or dive deeper into specific areas. PEACE ✌️
Below is a small subset of curated resources I found especially insightful and that can help guide the design of the proposed protocol.
- https://arxiv.org/pdf/1809.09044
- https://github.com/succinctlabs/sp1-tendermint-example
- https://github.com/informalsystems/malachite?tab=readme-ov-file
- https://developers.openibc.com/docs/concepts/polymer/zkmint
- https://github.com/argumentcomputer/zk-light-clients/tree/dev/ethereum
- https://arxiv.org/pdf/2503.05338
- https://eth2book.info/capella/part2/building_blocks/signatures/
- https://arxiv.org/pdf/2503.15380
- https://docs.unichain.org/whitepaper.pdf
- https://arxiv.org/pdf/2401.10369
- https://hackmd.io/@0x3114/S1r0M-0Nyl
- https://eprint.iacr.org/2024/263.pdf
- https://eprint.iacr.org/2024/1516.pdf
- https://threesigma.xyz/blog/amm/mastering-amm-order-books-and-intents#Addressing-Risks-e187950daf5b