<span style="color:orange">[update 2024.4.22] We are hosting a followup workshop at ICML: Agentic Markets (agenticmarkets.xyz)</span>.
Goals
After cryptoXai.wtf, we are moving the discussion around crypto and AI in a more focused direction: commitment devices. Specifically, we are hosting a small, invite-only gathering/whiteboarding workshop.
From Asimov's Laws to Ethereum’s Protocol: [Re]searching the intersection where crypto meets AI alignment.
An Unfinished Treasure Map
Both the cryptoeconomics research community and the AI safety / new cyber-governance / existential risk community are trying to tackle what is fundamentally the same problem: How can we regulate a very complex and very smart system with unpredictable emergent properties using a very simple and dumb system whose properties, once created, are inflexible?
- Vitalik Buterin, Why Cryptoeconomics and X-Risk Researchers Should Listen to Each Other More (2016)
Over the past few years, there have been many explorations of combining crypto and AI to solve practical problems, e.g., MEV and commitment races, credible auctions via blockchains, conditional information disclosure and programmable privacy, federated learning, zkml, and identity. Recently, transactions on Ethereum have become more like agentic intents (e.g., account abstraction, smart transactions), and protocols like SUAVE have arisen to turn the adversarial on-chain bot war into coordinated execution markets that satisfy human preferences.
Here in Zuzalu, we attempt to explore their intersection from first principles. During an evening whiteboarding session at the Pi-rate Ship pop-up hackerhouse, we, a group of humans, started by brainstorming the core concepts that underpin the foundations of both fields. We arrived at a collective mindmap for Crypto X AI, taking inspiration from the MEV mindmap: undirected traveling salesman. This exploration leads us on a continuing journey into a future where crypto mechanisms become increasingly conscious, and AI plays a transformative role in prediction and alignment.
This is an expansion on the Flashbots research prize announcement thread and a followup thread.
[Update on Jul 25] The page now includes takeaways and results of the hackathon.
For any questions, contact sxysun on Twitter or Telegram @sxysun
The Prize
Flashbots is running a research partnership with augmenthack.xyz, offering 15k Research Grants (FRPs) to high-quality projects with a research angle coming out of the hackathon that are willing to engage in in-depth R&D after the event.
Since this is a research grant, your project doesn't have to be a traditional hackathon project; it can take the form of a research-athon (think of the experiment section of a paper; something like grid-world games is a decent start!). Here are some previous FRP research projects that we supported, to give you a flavor of what we like: projects that ask deep questions.
This is a work-in-progress document. It is NOT the canonical/official SUAVE spec, only a personal interpretation with some AI-MEV flavored research (in section "Cognition and Complexity") on top. The ideas in this doc are inspired by discussions with Barnabé Monnot and based on the original ideas by Phil Daian.
For questions, contact sxysun.
Introduction
SUAVE is a permissionless credible commitment device (PCCD) that programs and privately settles higher-order commitments for other PCCDs.
Manual
The goal of this spec is to answer: "starting only with basic theory, how can we derive the SUAVE design and demonstrate that it is the necessary design?" and "if you take this design choice out or modify it, here is how things would go wrong."
The goal is also to be clear about "what are the open problems that we don't know how to solve, but here is the direction and problem statement."
Behold, the enigmatic programmable privacy! A concept cited far and wide, yet misunderstood like an elusive unicorn. Let us embark on a quest to shed light upon its true nature and discern the impacts and pitfalls that lurk beneath the surface.
We present a precise definition for programmable privacy after noticing that, despite community-wide citation and reference, there is no shared understanding of what programmable privacy actually is.
The only existing written definition is the one from Secret Network. There are also some floating descriptions in the suave/mev-share posts and notion/groupchat folklore, but none of them is satisfactory. Moreover, since programmable privacy was mentioned as a concept in the mev-share post, many teams have started abusing the term on the assumption that "it means something cool," even though nobody has a precise idea of what it is (and most people have a wrong one).
Notations
commitment type: $\mathcal{C}$
commitment (inductive) constructors: $K$
information space: $I$
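To make the notation above concrete, here is a minimal Python sketch (all identifiers are our own illustrative choices, not part of the formal definition): a commitment inhabiting $\mathcal{C}$ is modeled as a rule from the information space $I$ to actions, and `if_then_else` plays the role of one inductive constructor $K$.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative stand-ins for the notation (hypothetical names):
Information = frozenset  # an element of the information space I, as a set of observed facts
Action = str

@dataclass(frozen=True)
class Commitment:
    """An inhabitant of the commitment type C: a rule mapping observed information to an action."""
    rule: Callable[[frozenset], str]

def if_then_else(pred: Callable[[frozenset], bool], then_act: str, else_act: str) -> Commitment:
    """One possible inductive constructor K: build a conditional commitment."""
    return Commitment(rule=lambda info: then_act if pred(info) else else_act)

# A commitment that settles only if a bid was observed:
c = if_then_else(lambda info: "bid_received" in info, "settle", "refund")
```

Under this toy reading, "programmability" is the expressiveness of the constructor set $K$; "privacy" concerns which parts of $I$ the rule is allowed to observe.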
We provide a friendly intro to the concept of {Monarch, Moloch, Mafia}EV.
Or 3EV, or $\Sigma \text{EV}$ (both 3 and $\Sigma$ are a sideways "M", and $\Sigma$ also denotes a sum type).
For a detailed, formalized description of 3EV, refer to:
https://hackmd.io/@sxysun/this-is-mev/edit
https://hackmd.io/@sxysun/semantics-lattice/edit
https://www.youtube.com/watch?v=8qPpiMDz_hw
https://docs.google.com/presentation/d/1MyGRlZTHzppFYfEF04cCf0DbkokNTs6BZLeLqV99F7A/edit#slide=id.g163c2a140da_0_99
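The sum-type reading can be written down literally in code; a Python sketch (the enum name and values are ours):

```python
from enum import Enum

class SigmaEV(Enum):
    """ΣEV as a literal sum type: MEV partitioned into three disjoint variants,
    following the {Monarch, Moloch, Mafia}EV taxonomy introduced above."""
    MONARCH = "MonarchEV"
    MOLOCH = "MolochEV"
    MAFIA = "MafiaEV"
```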
MEV has largely been a field where engineering drives science; it is now time for science to drive engineering. We present a formalization of MEV and a theory/taxonomy built on it.
Setting the scene
Objects
In our universe of objects, there exist:
a virtual machine (domain) $\Xi$ that executes state transitions
a mechanism $M$ that lives on domain $\Xi$
multiple agents, $p_1, p_2, \dots, p_n \in P$
an agent, the coordinator, $c$. $c$ is the one who executes mechanism $M$.
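These objects can be sketched in Python (a toy model under our own assumptions; class names and the sequential-application mechanism are illustrative, not the paper's construction):

```python
from dataclasses import dataclass, field
from functools import reduce
from typing import Callable, Dict, List

State = Dict[str, int]
Transition = Callable[[State], State]

@dataclass
class Domain:
    """The virtual machine Xi: holds state and executes state transitions."""
    state: State = field(default_factory=dict)
    def execute(self, t: Transition) -> None:
        self.state = t(self.state)

@dataclass
class Mechanism:
    """A mechanism M living on domain Xi: turns agents' reported transitions
    into one aggregate state transition."""
    aggregate: Callable[[List[Transition]], Transition]

@dataclass
class Coordinator:
    """The coordinator c: the agent who executes mechanism M on the domain."""
    def run(self, m: Mechanism, domain: Domain, reports: List[Transition]) -> None:
        domain.execute(m.aggregate(reports))

# Toy instantiation: the mechanism applies agents' transitions in order.
seq = Mechanism(aggregate=lambda ts: lambda s: reduce(lambda acc, t: t(acc), ts, s))
d = Domain(state={"x": 0})
Coordinator().run(seq, d, [lambda s: {**s, "x": s["x"] + 1},
                           lambda s: {**s, "x": s["x"] * 2}])
```

The interesting design space lives in `aggregate`: how $M$ orders and filters the agents' transitions is exactly where MEV arises.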
h/t @yahgwai
Crypto has focused on the wrong direction for the past few years by spending too much effort on scaling.
Why? Because we've come up with the wrong explanation for why bad UX in crypto happens.
If we look at any system that has real value, the number of users (aggregated by value of economic activity) in that system should increase exponentially. But this assumes the UX of the system grows to support the user load.
Our hypothesis has been that gas (or its fancier name, blockspace), a commodity used for computing in the execution layer, is the limiting factor. However, this is not true: we know that even today over 65% of the activity on Uniswap is arbitrage and the other 35% comes from a few big market makers, so MEV constitutes a major part of the activity.
TLDR
We define the notion of semantics lattices. Using this notion, we devise an incentive-compatible mechanism for efficient MEV preference aggregation. We prove this mechanism's correctness (i.e., it is private-value incentive compatible and welfare-maximizing) and give an algorithm for the coordinator to execute this mechanism (i.e., do blockbuilding). Since VCG mechanisms have collusion-resistance and computational issues, we discuss how additional fee markets on top of the mechanism can mitigate those problems and therefore aid efficient allocation of MEV.
Furthermore, with the notion of semantics lattice defined, we can easily devise a logic or language framework to describe MEV at a higher abstraction level than the specification semantics we used to model MEV, and therefore flexibly borrow existing results in those areas to reason about MEV at the right level of abstraction.
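As a reference point for the welfare-maximization and incentive-compatibility claims, here is a generic VCG sketch over a finite outcome set (the textbook Clarke pivot rule, not the blockbuilding algorithm of this note; valuations and outcomes below are made-up numbers):

```python
def vcg(valuations, outcomes):
    """Generic VCG: pick the welfare-maximizing outcome, then charge each agent
    the externality it imposes on the others (Clarke pivot payments)."""
    def welfare(outcome, exclude=None):
        return sum(v[outcome] for a, v in valuations.items() if a != exclude)
    chosen = max(outcomes, key=welfare)
    payments = {}
    for a in valuations:
        best_without_a = max(welfare(o, exclude=a) for o in outcomes)
        payments[a] = best_without_a - welfare(chosen, exclude=a)
    return chosen, payments

# Two searchers with conflicting preferred block orderings "A" and "B":
chosen, pay = vcg({"p1": {"A": 3, "B": 0}, "p2": {"A": 0, "B": 2}}, ["A", "B"])
```

Here p1 wins its preferred ordering and pays p2's foregone value, while p2 (whose presence changes nothing) pays zero; truth-telling is a dominant strategy, which is the property the mechanism above inherits.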
Specification semantics lattice
Definition
A specification semantics lattice $S$ is parametrized by a logic (a language with a semantics of truth) $L$ (which in turn might be parametrized over some free variable $x$). Every element of $S$ is the set of all semantically equivalent sentences in $L$, e.g., $\{\texttt{true}, \texttt{true} \wedge \texttt{true}, \dots\}$.
$\top$ is defined to be the set of all propositions semantically equivalent to $\texttt{true}$, and $\bot$ the set of all propositions semantically equivalent to $\texttt{false}$.
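A tiny concrete instance, taking $L$ to be propositional logic over two variables: each lattice element is the class of sentences sharing a truth table, $\top$ is the class of $\texttt{true}$, $\bot$ the class of $\texttt{false}$, and the order is semantic entailment (a sketch; the helper names are ours):

```python
from itertools import product

VARS = ("x", "y")

def truth_table(sentence: str):
    """Denotation of a sentence of L: its truth value under every assignment.
    Two sentences are semantically equivalent iff their tables coincide."""
    return tuple(
        bool(eval(sentence, {"__builtins__": {}}, dict(zip(VARS, vals))))
        for vals in product((False, True), repeat=len(VARS))
    )

TOP = truth_table("True")    # the class ⊤
BOT = truth_table("False")   # the class ⊥

def leq(a: str, b: str) -> bool:
    """Lattice order: a <= b iff a semantically entails b (so ⊤ is the maximum)."""
    return all((not va) or vb for va, vb in zip(truth_table(a), truth_table(b)))
```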
Peter's note: OCC with deterministic aborts requires nodes to all behave the same when the fee market charges fees for aborts; plus, classical OCC is not designed for distributed systems. Primary market and secondary market (MEV) for transaction preferences. L2 sequencers/block building/latency.
Parallelization is basically nodes doing a first round of blockbuilding (kicking irrelevant nodes out of the graph), and then grinding multidimensional knapsack on the remaining transactions.
- EIP-1559 parallelizability market: 1559 is better if we assume there are few edge cases to the n-dimensional knapsack problem, i.e., if bundles are independent then 1559 is like a naive auction algorithm, which actually works
- Is parallelization as one dimension of the fee market like merging all parallelization-related resources together, which is sub-optimal and less refined? Yes, if we consider the local view. But if we consider the global view, the conflicts are not present in this design; if so, then we are essentially encouraging users to predict where the MEV is and avoid it, but isn't this already done by per-account fees? So essentially the parallelization fee market is the same as per-account fees!
- parallelization is like binary blockbuilding
- parallelized auctions = OFA separation of txn batches = internalized by a block builder
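The "binary blockbuilding" / batch-separation intuition above can be sketched as a greedy partition of transactions into conflict-free batches, where transactions touching disjoint state can execute (or be auctioned) in parallel (a toy model under our own assumptions; real builders must also handle ordering and fees):

```python
def parallel_batches(touched: dict) -> list:
    """Greedily pack txs into batches whose members touch disjoint state keys.
    `touched` maps a tx id to the set of state keys it reads or writes."""
    batches = []
    for tx, keys in touched.items():
        for batch in batches:
            # A tx joins a batch only if it conflicts with none of its members.
            if all(keys.isdisjoint(touched[other]) for other in batch):
                batch.append(tx)
                break
        else:
            batches.append([tx])
    return batches

# t1 and t2 touch disjoint accounts and can run in parallel; t3 conflicts with both.
result = parallel_batches({"t1": {"a"}, "t2": {"b"}, "t3": {"a", "b"}})
```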
Fair ordering is like: those that do not benefit from the positive-sum searching anyway need to be rewarded with lower latency, but they cannot really be, because MEV is not a priori, MEV is a posteriori. So unless they know beforehand (opt in), they cannot. Arbitrum is really like enshrining latency/privacy as its designated social choice function that is better than any other social choice function.
Through the lens of MEV, how do the optimization choices in the EVM impact, and how are they impacted by, protocol design? We discuss the tradeoffs of several parallelism proposals to increase EVM execution speed (for block building, private-information bundle merging, and zk-rollups) and show why they are subject to "the conjecture of MEV conservation."