This work can be seen as a type of hierarchical consensus in which each sub-component implements causal broadcast instead of total order broadcast.
Establishing a total order of transactions is one of the main scalability bottlenecks in blockchain networks. There have been numerous attempts at scaling blockchains, exemplified by the Lightning Network, optimistic and ZK rollups, sharding, and many more. Here we outline an approach to scaling built upon the observation that a total ordering is not required for every application looking to achieve consensus faster than its base chain.
After Flashbots successfully isolated the centralizing effects of MEV to the block builder role in Gasper PoS using a PBS design, the next goal became to decentralize the block builder role itself. In an attempt to do so, the MEVM was released: a modification of the EVM with precompiles to support MEV use cases, all running on top of a network of Trusted Execution Environments (TEEs). This architecture lets users send bids to an encrypted mempool and have those bids processed by MEV applications written as smart contracts in Builder Solidity. By offering a higher-level blockchain where these programs live, users are guaranteed that the rules of the orderflow auction they grant permission to are verifiably run over their bids.
One of the most powerful parts of the MEVM design is that components that are common between OFAs, Solver Networks, RFQs, and Block Building can all be reused. Below we can see an example of what an early MEVM ecosystem might look like and how the components interact.
Credible auctions on SUAVE consist of three parts:
Using the last two pieces, a user should be able to construct a fraud proof in the event that their bid was not included.
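The three parts aren't enumerated here, but for illustration assume they include a signed, order-independent commitment to the bid set the auction ran over, plus the published result; a non-inclusion fraud proof is then essentially a recomputation check. A minimal Python sketch, where all names and the commitment scheme are my assumptions rather than SUAVE's actual design:

```python
import hashlib

def bid_set_commitment(bids: set) -> str:
    """Order-independent commitment: hash of the sorted per-bid digests."""
    digests = sorted(hashlib.sha256(b).hexdigest() for b in bids)
    return hashlib.sha256("".join(digests).encode()).hexdigest()

def proves_exclusion(my_bid: bytes, revealed_bids: set, signed_commitment: str) -> bool:
    """Fraud proof sketch: the operator's signed commitment opens to a bid set
    that does not contain my bid (a submission receipt would be checked elsewhere)."""
    return (bid_set_commitment(revealed_bids) == signed_commitment
            and my_bid not in revealed_bids)
```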
One example of an MEVM-powered orderflow auction is a Walrasian auction, "a type of simultaneous auction where each agent calculates its demand for the good at every possible price and submits this to an auctioneer. The price is then set so that the total demand across all agents equals the total amount of the good." [1]
To translate this example onto SUAVE we simply need to write our auction logic into a smart contract in Solidity, deploy it, and then users can start sending bids to the chain to be incorporated into the auction. Because the auction does better with more inputs (not sure this is strictly true) and users want censorship resistance, they can store those encrypted transactions onchain to be used once enough bids have been aggregated.
💡 Observation 1: In this framing, the ordering of bids in the auction doesn't matter. Or, equivalently, similar mechanism designers need not consider ordering in their design: whether your transaction was first or last in the batch, your demand curve is identical.
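To make Observation 1 concrete, here is a minimal Python sketch of the clearing step (illustrative only; a real SUAVE implementation would live in a Builder Solidity contract, and the linear demand curves below are an assumption):

```python
from typing import Callable, List

def clearing_price(demands: List[Callable[[float], float]],
                   supply: float,
                   lo: float = 0.0, hi: float = 1e6) -> float:
    """Find the price where total demand equals supply, via bisection.

    Assumes each demand curve is non-increasing in price, so a unique
    crossing exists in [lo, hi].
    """
    for _ in range(100):
        mid = (lo + hi) / 2
        total = sum(d(mid) for d in demands)  # a sum over a *set* of bids
        if total > supply:
            lo = mid  # demand too high -> raise the price
        else:
            hi = mid
    return (lo + hi) / 2

# Each bidder submits a demand curve; the batch is order-independent:
bids = [lambda p: max(0.0, 10 - p), lambda p: max(0.0, 6 - 0.5 * p)]
assert clearing_price(bids, supply=8.0) == clearing_price(bids[::-1], supply=8.0)
```

Because the clearing price is a function of the set of demand curves, any broadcast primitive that agrees on the set of bids is sufficient.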
This observation, that our system needs consensus on a set, not on a single ordering, will form the basis for all designs mentioned in this document. Another way to say this is that we do not require Total Order Broadcast, instead we can use one of many other options [2]:
- FIFO Order Broadcast: messages from the same sender are delivered in the order in which they were broadcast.
- Causal Order Broadcast: if a message m1 causally precedes another message m2, then no correct process delivers m2 before m1. It respects the causal order but not necessarily the total order.

Of the above broadcast types, the Reliable Broadcast variant of each should be used. In reliable broadcast, the goal is to ensure that if a correct process broadcasts a message, then all correct processes will eventually deliver it. It doesn't guarantee any specific order of delivery.
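For intuition, here's a minimal Python sketch of eager reliable broadcast in a crash-fault setting (`network.send_to_all` is an assumed transport primitive, not a specific library). Each process re-echoes a message the first time it delivers it, so if any correct process delivers m, every correct process eventually does, with no ordering guarantee:

```python
class ReliableBroadcastProcess:
    """Eager reliable broadcast: re-echo on first delivery.

    Crash-fault sketch; `network` is an assumed transport exposing
    send_to_all(sender_id, msg). Delivery order is unspecified.
    """
    def __init__(self, pid, network):
        self.pid = pid
        self.network = network
        self.delivered = set()

    def broadcast(self, msg):
        self.network.send_to_all(self.pid, msg)

    def on_receive(self, msg):
        if msg in self.delivered:
            return  # already delivered; drop the duplicate
        self.delivered.add(msg)
        self.network.send_to_all(self.pid, msg)  # echo so others deliver too
        self.deliver(msg)

    def deliver(self, msg):
        print(f"process {self.pid} delivered {msg!r}")  # no order guarantee
```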
Lastly, it should be noted that this design choice forces a strong opinion on all future auctions living on SUAVE, which should not be taken lightly.
For brevity I have moved prior art to the end but will highlight the key ideas from each:
Narwhal [3]:
Heterogeneous Narwhal [4]:
A large inspiration for Heterogeneous Narwhal is the validator topology in Cosmos and the Cosmos Hub, where some nodes validate for multiple networks and others for a single network.
Hierarchical Consensus [5]:
Before getting into designs let's go through an example distributed block building flow that we can use as a basis to compare across designs. This flow starts with users sending in bids and ends with blocks being sent to validators.
Our example setup consists of three chains: SUAVE, UniX, and CowSwap, with n_SUAVE, n_UniX, and n_CowSwap validators respectively. A validator on UniX is denoted v_u1, and could be equivalent to any v_i for i <= n_SUAVE (that is, a subnet validator may also be a SUAVE main-chain validator).

Transactions/Bids/Confidential Compute Requests:
Here is a view from the current SUAVE chain of how a progression of txns like this would go:
Note: One thing this diagram does a poor job of is differentiating confidential compute requests, which can, for example, trigger sending a block to a validator. The effects of a confidential computation that results in a block being sent aren't necessarily registered only once the tx responsible for the request lands onchain: the block will be sent to a validator as soon as an executor node runs the request, and whatever callback that function returns will eventually make its way to the chain.
I am calling this a rollup because it's pushing updates back to the main chain with those arrows, but the key point is that it's only pushing summaries of what was available, not the order in which things were received.
Notice that mev-share bids can still go onto the main chain. Ideally subnets are completely optional.
This satisfies our goal of a setup where ΔU < ΔC < ΔS, but is it as fast as possible?
Remember that Observation 1 implies the ordering of bids within a bucket or slot doesn't actually matter, since the user's mechanism should be ordering-independent. Thus we may be wasting communication rounds coming to agreement on an ordering.
ToDo: Create chart similar to below with communication complexity of Narwhal compared to PBFT and Gasper.
The problem with the security of checkpointing is that you get safety, but the use case we have in mind doesn't require safety. Because we are influencing how other chains work, as soon as the next block is produced it doesn't matter if we retroactively look back and say "oh hey, this was invalid" the way rollups do.
A couple of thoughts here:
This is the same as the prior approach except that UniX and CowSwap validators would also take part in voting on blocks proposed by SUAVE main-chain validators; that is, all validators are SUAVE main-chain validators, but some take on extra duties.
The advantage of this approach, though, is that we don't fragment our validator set's security for the main chain.
Maybe. I'm not sure.
A hierarchical vector clock is well suited to modern message-passing distributed systems that make use of highly hierarchical communication networks.
Cons of this approach:
We could potentially use an idea similar to mempool IDs in the ERC-4337 bundler spec, but this doesn't answer the stake question.
The metadata associated with each mempool that a bundler supports is documented and stored on IPFS (a copy is also suggested to be submitted to the eth-infinitism GitHub repo). The IPFS hash of this file is called the mempool-id, and it is used as the subscription topic for bundlers. The proposed structure of the mempool metadata is as follows:
```yaml
chainId: '1'
entryPointContract: '0x0576a174d229e3cfa37253523e645a78a0c91b57'
description: >-
  This is the default/canonical mempool, which will be used by most bundlers
  on Ethereum Mainnet
minimumStake: '0.0'
```
The mempool-id of the canonical mempool is TBD (IPFS hash of the yaml/JSON file).
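As a sketch of the idea (using SHA-256 as a stand-in for the real IPFS CID computation, and a hypothetical metadata dict mirroring the YAML above):

```python
import hashlib
import json

# Hypothetical mempool metadata mirroring the spec's YAML fields.
metadata = {
    "chainId": "1",
    "entryPointContract": "0x0576a174d229e3cfa37253523e645a78a0c91b57",
    "description": "This is the default/canonical mempool ...",
    "minimumStake": "0.0",
}

# Canonicalize, then hash; the real spec uses the IPFS CID of the file.
canonical = json.dumps(metadata, sort_keys=True).encode()
mempool_id = hashlib.sha256(canonical).hexdigest()

# Bundlers would subscribe to the gossip topic named by this id.
print(f"subscribe to topic: {mempool_id}")
```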
Questions:
- Can anyone deploy mempool IDs?
- How do users know the safety of the mempool?
- How do
Directed acyclic graphs (DAGs) feature in several recently proposed consensus protocols (Danezis et al. 2022; Keidar et al. 2021; Spiegelman et al. 2022). [4]
Narwhal claim: The leader is the bottleneck
Narwhal ideas:
The key difference with Heterogeneous Narwhal is that we can have varying validator sets for the underlying certificates. This of course comes at a risk to liveness, but ideally the liveness of a "higher order" subnet is not affected by an underlying subnet being offline.
Is this a difference? In our SUAVE use case, the transactions whose availability the mempool is coming to consensus on, and the resulting computation from the auction, don't need to be run on the SUAVE chain. They just need to end up in a block sent to a validator on a different domain. Although eventually I think we should.
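To sketch what "varying validator sets for underlying certificates" could look like mechanically, here is a hedged Python illustration of per-subnet quorum checking (the subnet layout, the 2f+1 thresholds, and the certificate shape are my assumptions, not Heterogeneous Narwhal's actual definitions):

```python
from typing import Dict, Set

def quorum(n: int) -> int:
    """BFT quorum for n = 3f + 1 validators."""
    f = (n - 1) // 3
    return 2 * f + 1

def certificate_valid(signers: Set[str],
                      subnets: Dict[str, Set[str]],
                      required: Set[str]) -> bool:
    """A block touching `required` subnets needs a 2f+1 quorum of
    signatures from *each* of those subnets' validator sets."""
    return all(
        len(signers & subnets[s]) >= quorum(len(subnets[s]))
        for s in required
    )

subnets = {
    "SUAVE": {f"v{i}" for i in range(4)},   # v0..v3
    "UniX":  {"v0", "v1", "u0", "u1"},      # overlaps with SUAVE
}
sigs = {"v0", "v1", "v2", "u0"}
print(certificate_valid(sigs, subnets, required={"SUAVE", "UniX"}))  # True
```

Under this framing, a block scoped only to live subnets can still gather its quorums even if some other subnet is offline, which is exactly the liveness isolation we want.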
Central Narwhal concepts
System Assumptions
Narwhal Mempool Abstraction
- write(d, b): stores a block b with its digest d.
- valid(d, c(d)): checks whether a certificate c(d) is valid.
- read(d): returns the block b stored under d.
- read_causal(d): returns the set of blocks B with a transitive happened-before relationship to b.

Algorithm Description
```
function MAIN_LOOP(validator v):
    WHILE TRUE DO
        TRANSACTION_ACCUMULATION(v)    // batch incoming transactions into a block
        CERTIFICATE_ACCUMULATION(v)    // collect availability certificates from peers
        IF COUNT(v.certificate_list) ≥ 2f + 1 THEN
            v.r ← v.r + 1              // 2f+1 certificates unlock the next round
            BLOCK_CREATION(v)          // the new block references those certificates
        END IF
        BLOCK_VALIDATION(v)            // check and store blocks received from peers
        CERTIFICATE_FORMATION(v)       // aggregate acknowledgements into certificates
        CERTIFICATE_HANDLING(v)
        GARBAGE_COLLECTION(v)          // drop data from rounds that are settled
    END WHILE
END FUNCTION
```
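To make the abstraction concrete, here is a minimal Python sketch of the four-operation interface (the in-memory dicts, explicit parent tracking, and placeholder certificate check are my assumptions; the paper specifies these cryptographically):

```python
from typing import Dict, Optional, Set

class NarwhalMempool:
    """In-memory sketch of Narwhal's block store abstraction."""

    def __init__(self):
        self.blocks: Dict[str, bytes] = {}      # digest -> block
        self.parents: Dict[str, Set[str]] = {}  # digest -> parent digests

    def write(self, d: str, b: bytes, parent_digests: Set[str]) -> None:
        """Store block b under digest d, remembering its causal parents."""
        self.blocks[d] = b
        self.parents[d] = parent_digests

    def valid(self, d: str, cert: object) -> bool:
        """Placeholder: the real check verifies 2f+1 validator signatures."""
        return d in self.blocks and cert is not None

    def read(self, d: str) -> Optional[bytes]:
        """Return the block stored under digest d, if any."""
        return self.blocks.get(d)

    def read_causal(self, d: str) -> Set[str]:
        """Digests of all blocks happened-before d (transitive closure)."""
        seen, stack = set(), [d]
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            stack.extend(self.parents.get(cur, set()))
        return seen
```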
Properties

- Integrity: all invocations of read(d) by honest parties return the same value.
- Block-Availability: if read(d) is invoked after write(d, b) by an honest party, it returns b.
- Containment: for every block b' in the set B returned by read_causal(d), the set returned by read_causal(d') is a subset of B.
- 2/3-Causality: read_causal(d) returns a set B that contains at least 2/3 of the blocks written before write(d, b).
- 1/2-Chain Quality: at least half of the blocks in read_causal(d) were written by honest parties.
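Using the in-memory sketch above, the Containment property can be spot-checked (purely illustrative):

```python
m = NarwhalMempool()
m.write("a", b"block-a", set())
m.write("b", b"block-b", {"a"})
m.write("c", b"block-c", {"b"})

# Containment: the causal history of an ancestor is a subset of ours.
assert m.read_causal("b") <= m.read_causal("c")
```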
The present paper is rather about heterogenizing existing protocols, choosing DAG-based consensus as an example that goes together with heterogeneous Paxos (I. Sheff et al.). Along the way, we shall describe how heterogenization applies differently to matters of availability and integrity (I. C. Sheff et al. 2019):

- availability: all relevant transaction data is certified to be available;
- integrity: the available data is unaltered, e.g., no double spends, equivocation, etc.
As running example, let us consider two base ledgers with proof-of-stake.
The validators set out to build a single shared Narwhal dag that comprises the
information of two separate “native” Narwhal dags. The rationale is that then
blocks may contain cross-chain transactions that can be executed atomically.
Naïve superposition of Narwhal DAGs comes at the price of risks to liveness.
Now, as a first thought, we can restrict anchor blocks to those that satisfy the conditions for both base ledgers; in fact, we obtain a safe implementation of a combined Narwhal mempool. However, apart from obvious questions about skewed incentives for the validators that only hold stake of one of the base ledgers, the “perfect” safety guarantees of the naïve superposition come at the price of risks for liveness: roughly, superimposing Narwhal dags leads to a “less live” dag, relative to each one of the base ledgers.
https://docs.filecoin.io/basics/interplanetary-consensus/hierarchical-consensus
Q: What are examples of algorithms that implement causal broadcast?
A: Causal broadcast is a communication primitive that ensures messages are delivered respecting their causal order. Several algorithms have been proposed to implement causal broadcast in distributed systems. Here are some notable ones:
Vector Clocks:
Logical Clocks (Lamport Timestamps):
Birman-Schiper-Stephenson Protocol:
Triggered Message Sequence Charts (TMSC):
Dependency Sequences:
Hierarchical Vector Clocks:
These algorithms and protocols provide mechanisms to ensure that if a message m1 causally precedes another message m2, then no correct process delivers m2 before m1. The choice of a particular algorithm or protocol often depends on the specific requirements of the distributed system and the trade-offs between time and space complexity.
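As an illustration of the first approach, here is a compact Python sketch of causal delivery with vector clocks (single-receiver view, buffering messages until their dependencies are met; a hierarchical variant would nest one clock per level of the topology):

```python
class CausalDelivery:
    """Vector-clock causal delivery for n processes.

    A message from sender s stamped with vector clock msg_vc is deliverable
    once we have delivered exactly msg_vc[s]-1 prior messages from s and at
    least msg_vc[j] messages from every other process j (its dependencies).
    """
    def __init__(self, n: int, my_id: int):
        self.n, self.my_id = n, my_id
        self.vc = [0] * n   # messages delivered per sender
        self.pending = []   # (sender, msg_vc, payload) buffer

    def on_receive(self, sender: int, msg_vc: list, payload):
        self.pending.append((sender, msg_vc, payload))
        self._try_deliver()

    def _deliverable(self, sender: int, msg_vc: list) -> bool:
        return msg_vc[sender] == self.vc[sender] + 1 and all(
            msg_vc[j] <= self.vc[j] for j in range(self.n) if j != sender
        )

    def _try_deliver(self):
        progress = True
        while progress:  # delivering one message may unblock others
            progress = False
            for item in list(self.pending):
                sender, msg_vc, payload = item
                if self._deliverable(sender, msg_vc):
                    self.vc[sender] += 1
                    self.pending.remove(item)
                    print(f"delivered {payload!r} from {sender}")
                    progress = True
```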
[1] : https://en.wikipedia.org/wiki/Walrasian_auction
[2] : https://www.doc.ic.ac.uk/~ac3419/notes/year3/distributed_algorithms/03_fct_broadcast.pdf
[3] : https://arxiv.org/pdf/2105.11827.pdf
[4] : https://github.com/anoma/research/blob/master/distributed-systems/heterogeneous-narwhal/HeterogeneousNarwhal.pdf
[5] : https://research.protocol.ai/publications/hierarchical-consensus-a-horizontal-scaling-framework-for-blockchains/delarocha2022.pdf