# Dec 5th, Updates on the Validium design
## Updated Verkle Tree design
There are several aspects to a light-client-friendly state management system: ease of insertion, ease of query, and ease of inclusion verification. Considering that
1. we want ease of inclusion verification,
2. most of the leaves of a particular user will likely sit under a few sequencers, and
3. we want reasonably easy insertion that can be done on L1,

we can split the problem and apply optimizations to each part.
Instead of having Verkle branches per sequencer and calculating the root on L1, we can have **separate Verkle trees** per Sequencer.
This comes with a little more work on querying and inclusion verification, but that is likely tolerable under the assumption that there exists a full node willing to provide inclusion proofs.
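To make the layout concrete, here is a minimal sketch, assuming each sequencer keeps its own Verkle tree off-chain and commits only its root to a dedicated L1 storage slot, so that a leaf is addressed by (sequencer, key). The type and function names below are illustrative, not an existing API.

```ts
// Illustrative data model for the per-sequencer Verkle tree layout.
// Nothing here is an existing library API; it only mirrors the design above.

interface SequencerTree {
  sequencerId: string;   // sequencer address / identifier
  verkleRoot: string;    // current root of this sequencer's own Verkle tree
  l1StorageSlot: string; // L1 slot where this root is committed
}

// A leaf is addressed by (sequencer, key): first pick the sequencer's tree,
// then query that tree (or a full node serving it) for the key.
interface LeafRef {
  sequencerId: string;
  key: string;
}

function locateTree(
  trees: Map<string, SequencerTree>,
  ref: LeafRef,
): SequencerTree {
  const tree = trees.get(ref.sequencerId);
  if (tree === undefined) {
    throw new Error(`unknown sequencer: ${ref.sequencerId}`);
  }
  return tree;
}
```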
On the light client side, the verification will be
- verify the L1 state root
- verify the Verkle inclusion proofs obtained from a full node
However, if there are 100 sequencers, the worst case for a light client is: to verify 100 leaves, it must also verify that all 100 sequencer roots are actually stored in their respective L1 storage slots by checking them against the L1 state root. (And Ethereum, the L1 we're assuming, uses a Merkle Patricia Trie, so those proofs would be quite large.)
That said, **Ethereum is transitioning to Verkle Trees as well, so this sequencer-root verification can also be compressed into a single Verkle proof.**
We can start building on top of Merkle Ethereum, and as soon as Verkle Ethereum ships, we can gain efficiency immediately.
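A rough sketch of the light-client path above, under Merkle Ethereum (one storage proof per sequencer root): `verifyStorageProof` and `verifyVerkleProof` are hypothetical stand-ins for an L1 proof verifier and a Verkle proof library, not real calls. Once Verkle Ethereum ships, the per-sequencer storage proofs should collapse into a single proof against the L1 state root.

```ts
// Worst-case light-client verification: one L1 storage proof per sequencer
// root, plus one Verkle inclusion proof per sequencer tree. The two verify*
// helpers are hypothetical placeholders, not real library calls.

declare function verifyStorageProof(
  l1StateRoot: string,
  storageProof: Uint8Array,
  expectedSlotValue: string, // the sequencer root we expect in that slot
): boolean;

declare function verifyVerkleProof(
  verkleRoot: string,
  proof: Uint8Array,
  leaves: string[],
): boolean;

interface PerSequencerProof {
  sequencerRoot: string;    // root claimed for this sequencer's tree
  storageProof: Uint8Array; // proves the root sits in its L1 storage slot
  verkleProof: Uint8Array;  // proves the leaves are under sequencerRoot
  leaves: string[];
}

function verifyLeaves(
  l1StateRoot: string,         // already verified against L1 consensus
  proofs: PerSequencerProof[], // worst case: one entry per sequencer (e.g. 100)
): boolean {
  return proofs.every(
    (p) =>
      verifyStorageProof(l1StateRoot, p.storageProof, p.sequencerRoot) &&
      verifyVerkleProof(p.sequencerRoot, p.verkleProof, p.leaves),
  );
}
```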
## Possibly higher censorship resistance than Ethereum, due to the absence of MEV or any similarly exponential centralization vector
**It is becoming increasingly clear, [as the curtains of social media platforms get lifted](https://twitter.com/mtaibbi/status/1598822959866683394), that speech censorship is as important as financial censorship.**
Economies of scale exist for Validium sequencers, but they are presumably a less severe centralization vector than MEV is for Ethereum. One can imagine that relatively _cheap_ countermeasures, such as publishing out-of-the-box open-source Sequencers and related materials, will go a meaningful way toward sufficient decentralization.
## Lazily applied business logic
Only having a state of verified signatures/zkps won't be sufficient to build apps; there needs to be some logic that interprets the verified data.
Moreover, since we assume duplicate data can exist, a scheme for resolving conflicting states is necessary.
Fortunately, state conflicts are a common, well-studied problem in distributed systems.
For example, [Farcaster employs](https://github.com/farcasterxyz/protocol#4-replication) a state management method that resolves conflicts between messages in its protocol.
Moreover, we can expect ORM-like abstraction layers (e.g., MUD) to be built on top of this Validium.
Therefore, we can conclude that ordering/conflict resolution is out of scope for the Validium; it should be up to the apps to decide how to interpret the data.
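As one illustration of app-level interpretation (not something the Validium itself prescribes), here is a last-write-wins merge over verified messages keyed by `(user, key)`, with a hash tiebreak, loosely in the spirit of Farcaster's replication rules; all names below are made up for the sketch.

```ts
// App-level conflict resolution sketch: last-write-wins per (user, key),
// with the lexicographically smaller hash winning ties. The Validium only
// guarantees that each message's signature/zkp has been verified.

interface VerifiedMessage {
  user: string;
  key: string;       // app-defined key, e.g. a profile field
  value: string;
  timestamp: number; // claimed by the user and covered by the signature
  hash: string;      // content hash of the message
}

function applyMessage(
  state: Map<string, VerifiedMessage>,
  msg: VerifiedMessage,
): void {
  const id = `${msg.user}:${msg.key}`;
  const current = state.get(id);
  const wins =
    current === undefined ||
    msg.timestamp > current.timestamp ||
    (msg.timestamp === current.timestamp && msg.hash < current.hash);
  if (wins) {
    state.set(id, msg);
  }
  // Duplicates and older messages are simply dropped.
}
```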
**Isn't lazy execution prone to DoS attacks?**
Well, it is indeed a DoS vector. Even though it is arguably less severe than simply putting everything on IPFS, since signatures/zkps are verified, there still need to be some precautions against DoS attacks; DoS tolerance at the app level is the likely pathway (I won't go into this too much here, since I think we'll learn a lot while actually building).
## Data availability
We can envision all sequencers hosting data (both the signatures/zkps and the attested raw content) over the IPFS protocol. However, there is nothing stopping sequencers from simply not publishing the data to IPFS.
To distribute the data as widely as possible, **users can submit data to multiple full nodes/relayers, but only send the transaction to a single sequencer.** Then, as soon as the transaction is included in the Verkle tree, other nodes can start providing inclusion proofs as well. Full nodes/relayers can require that the data be accompanied by a transaction receipt signed by the sequencer.
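A sketch of the relayer-side check implied above: raw content is accepted only when it comes with a receipt signed by a known sequencer over the transaction and content hashes. `recoverSigner` is a hypothetical helper (e.g. an EIP-191-style signature recovery), and the receipt fields are assumptions for the sketch.

```ts
// Relayer-side acceptance check: only store data that comes with a
// sequencer-signed receipt binding the data hash to an included transaction.
// recoverSigner is a hypothetical placeholder for signature recovery.

declare function recoverSigner(message: string, signature: string): string;

interface SequencerReceipt {
  txHash: string;    // transaction the sequencer included
  dataHash: string;  // hash of the raw content being uploaded
  sequencer: string; // address of the signing sequencer
  signature: string; // sequencer's signature over (txHash, dataHash)
}

function acceptData(
  knownSequencers: Set<string>,
  receipt: SequencerReceipt,
  uploadedDataHash: string,
): boolean {
  if (!knownSequencers.has(receipt.sequencer)) return false;
  if (receipt.dataHash !== uploadedDataHash) return false;
  const signer = recoverSigner(
    `${receipt.txHash}:${receipt.dataHash}`,
    receipt.signature,
  );
  return signer === receipt.sequencer;
}
```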