--- title: "Anoma Research Day Talk Synoposes" author: nyc --- ## Anoma Research Day Talk Synoposes ### #1 **Title** Realizing intents with a resource model **Abstract** What _is_ an intent? This talk will describe what Anoma's intent-centric architecture actually looks like from a programming perspective. I will start with a quick recap of the research literature on voluntary conditional commitments and program equilibria. A direct translation of this model runs into four main challenges: termination / fix-point finding, function introspection, structural vs. nominal identity, and information flow control compatibility, each of which I will describe in turn. I will then introduce Anoma's resource model, including the concepts of logic, data, balance, and partial transactions, and outline how it exploits the unique properties of our design context to address all of these challenges. A few simple examples will be described to provide an intuition. Next, I will sketch the model of information flow control which we have found to be most applicable, and give several examples of how this resource model can act as a programmable runtime, preserving information flow properties defined by the user. Finally, I will outline a few open research questions and areas of potential collaboration. **Extended Synopsis** - Introduction - Icebreaker and inspiration: [Andrew Miller's tweet](https://twitter.com/socrates1024/status/1655325271048822786) - What this talk is about, and what it is not about - In a sense, this talk is about VM design, but we may mean something different by "VM" than you might think. - Brief architecture history tour - The von Neumann architecture (basis for your computer) consists of a processing unit (CPU), instruction set, registers/stack, volatile memory, non-volatile storage. - The Ethereum virtual machine is (basically) a von Neumann architecture with some minor changes for the DLT context. - von Neumann architectures are designed to _execute programs sequentially_. - Intent-centric architectures are designed to _match conditional commitments atomically_. - This is a different design space. von Neumann architectures are concerned with how programs get _executed_, intent-centric architectures are concerned with how commitments get _matched_. How the commitments themselves are interpreted is mostly an orthogonal decision (could be done on a von Neumann architecture). - The asterisk: Part of the EVM is relevant here: message calls and sequential scheduling. We'll come back to this later. - Review of the research literature - [Program equilibria](https://www.sciencedirect.com/science/article/abs/pii/S0899825604000314) (2004), [Voluntary conditional commitments](https://www.tau.ac.il/~samet/papers/commitments.pdf) (2010) - What if we just sent around conditional commitment functions? - Basic model: commitment functions, central executor. - Four challenges - Termination / fix-point finding: higher-order commitments compute each player's commitments from the other players' commitments. In order for this process to _eventually_ terminate, you need randomness, and even then it's not very DoS-resistant. Randomness adds a security assumption and creates a disadvantageous tradeoff axis between welfare maximisation and execution efficiency. - Function introspection: program equilibria depend on being able to read and _reason about properties of_ other players' conditional commitments. This is possible in practice but very non-trivial (requires e.g. 
**Extended Synopsis**

- Introduction
  - Icebreaker and inspiration: [Andrew Miller's tweet](https://twitter.com/socrates1024/status/1655325271048822786)
  - What this talk is about, and what it is not about
    - In a sense, this talk is about VM design, but we may mean something different by "VM" than you might think.
  - Brief architecture history tour
    - The von Neumann architecture (the basis for your computer) consists of a processing unit (CPU), an instruction set, registers/stack, volatile memory, and non-volatile storage.
    - The Ethereum virtual machine is (basically) a von Neumann architecture with some minor changes for the DLT context.
    - von Neumann architectures are designed to _execute programs sequentially_.
    - Intent-centric architectures are designed to _match conditional commitments atomically_.
    - This is a different design space. von Neumann architectures are concerned with how programs get _executed_; intent-centric architectures are concerned with how commitments get _matched_. How the commitments themselves are interpreted is mostly an orthogonal decision (it could be done on a von Neumann architecture).
    - The asterisk: part of the EVM is relevant here, namely message calls and sequential scheduling. We'll come back to this later.
- Review of the research literature
  - [Program equilibria](https://www.sciencedirect.com/science/article/abs/pii/S0899825604000314) (2004), [Voluntary conditional commitments](https://www.tau.ac.il/~samet/papers/commitments.pdf) (2010)
  - What if we just sent around conditional commitment functions?
  - Basic model: commitment functions, central executor (see the sketch above).
- Four challenges
  - Termination / fixed-point finding: higher-order commitments compute each player's commitments from the other players' commitments. In order for this process to _eventually_ terminate, you need randomness, and even then it's not very DoS-resistant. Randomness adds a security assumption and creates a disadvantageous tradeoff axis between welfare maximisation and execution efficiency.
  - Function introspection: program equilibria depend on being able to read and _reason about properties of_ other players' conditional commitments. This is possible in practice but very non-trivial (it requires e.g. full dependent types), and even then would be difficult to coordinate, since you don't know ahead of time what properties other players will want proofs of.
  - Nominal vs. structural identities: the typical game-theoretic model assumes that the set of players is known in advance, and that each player knows every other player by name (index). This does not match the typical distributed-systems interaction context, where the set of players is unknown and where individuals typically want to interact on the basis of capabilities (e.g. holding token X), not identity.
  - Unclear information flow: in the default model where the commitments are "executed" somewhere, it's unclear how to do complex information flow control (e.g. restrictions on who learns about which commitment when) without replacing the "somewhere" with a TEE, FHE, etc. Even if you're willing to accept the security assumptions, that strategy will not scale, because it requires logically centralised ordering even for commitments which don't need it.
- Introducing Anoma's resource logic (Taiga)
  - One weird trick: discontinuous atomic jumps in configuration space (diagram here). The game theory literature typically assumes that actions are incremental - make a commitment, run commitments one by one until termination, etc. - so configuration space is explored incrementally, with only one player acting at a time. But the power of the DLT context is that - without extra security assumptions - multiple players can take an action at the same time, so we can simply jump between equilibria.
  - Basic structure: resources (logic, static data, dynamic data, quantity); partial transactions; validity; full transactions; balance (a concrete sketch follows this synopsis).
  - How are these four challenges addressed?
    - Termination & introspection: separation of computation from verification, discontinuous jumps
    - Nominal vs. structural identities: interaction on the basis of resources (capabilities)
    - Information flow: clear validity conditions for sub-transitions on an inclusive basis
  - A few examples to provide an intuition: Prisoner's Dilemma, token swaps.
- Towards a substrate for information flow control
  - Brief context from the literature: [Viaduct](https://www.cs.cornell.edu/andru/papers/viaduct/viaduct-tr.pdf) (2021)
  - Taiga is not a high-level language for information flow control. Rather, it is a _runtime_ which could be programmed by such a language to execute in a way which preserves information flow properties.
  - Hierarchy of cryptographic primitive practicality, "least powerful primitive" rule.
  - Examples of information flow substrate:
    - Solver selection and partial solving
    - Threshold decryption and batching (a.k.a. Penumbra-on-Taiga)
    - Threshold FHE for aggregate statistics
- Conclusion, open research questions, and areas of potential collaboration
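To make the "Basic structure" bullet above concrete, here is a deliberately simplified Python sketch of resources and partial transactions composing into a balanced full transaction, using the token-swap example from the outline. The field names, the accept-everything resource logic, and the balance rule are illustrative simplifications, not Taiga's actual encoding.

```python
# Deliberately simplified sketch of the resource model described above: resources
# carry a logic, static data, dynamic data, and a quantity; partial transactions
# consume and create resources; a full transaction must balance to zero per
# denomination. Field names and the validity rule are illustrative only and
# elide Taiga's actual encoding, proofs, and resource-logic checks.

from dataclasses import dataclass
from collections import Counter
from typing import Callable

@dataclass
class Resource:
    logic: Callable          # predicate that must accept the enclosing transaction
    static_data: str         # e.g. the token denomination ("A", "B")
    dynamic_data: str        # e.g. the current owner
    quantity: int

@dataclass
class PartialTransaction:
    consumed: list           # resources destroyed by this partial transaction
    created: list            # resources created by this partial transaction

    def balance(self) -> Counter:
        """Net change per denomination: created minus consumed quantities."""
        delta = Counter()
        for r in self.created:
            delta[r.static_data] += r.quantity
        for r in self.consumed:
            delta[r.static_data] -= r.quantity
        return delta

def imbalance(ptxs) -> dict:
    """A full transaction can settle only if the composed balance is zero for
    every denomination (and every resource logic accepts, omitted here)."""
    total = Counter()
    for ptx in ptxs:
        for denom, delta in ptx.balance().items():
            total[denom] += delta
    return {d: q for d, q in total.items() if q != 0}  # empty dict == balanced

def accept(_tx) -> bool:     # placeholder resource logic that accepts anything
    return True

# Token swap: Alice gives up 1 A and wants 2 B; Bob gives up 2 B and wants 1 A.
alice = PartialTransaction(consumed=[Resource(accept, "A", "alice", 1)],
                           created=[Resource(accept, "B", "alice", 2)])
bob = PartialTransaction(consumed=[Resource(accept, "B", "bob", 2)],
                         created=[Resource(accept, "A", "bob", 1)])

print(imbalance([alice]))       # {'B': 2, 'A': -1}: Alice's intent alone cannot settle
print(imbalance([alice, bob]))  # {}: composed, the swap balances and can settle atomically
```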
### #2

**Title**

The edge of MEV: switching costs and the slow game

**Abstract**

This talk will outline some of our recent research work to characterise what we call the "slow game". Systems are valuable if and only if asset issuers and users choose to use them, but because issuers and users don't want to be online all of the time, they typically delegate custody and execution to a separate class of operators. This class of operators includes validators, searchers, builders, gossip nodes, etc. - everyone you might think of as "extracting MEV". I will argue that users and issuers face a coordination problem: if they do not coordinate, operators (who set prices) can extract almost as much value as users and issuers gain in welfare from using a shared system in the first place. In order to avoid too much extraction by operators, users and issuers must coordinate. This "slow game" already exists, but only in discussions, ad-hoc governance decisions, "social slashing", etc. I will describe how one might go about codifying it directly in the protocol, what users and issuers can do to credibly demonstrate their ability to coordinate, how the timescale of that coordination will affect the value which can be extracted, and what a plausible end-state equilibrium might look like.
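To put rough numbers on the coordination problem sketched in the abstract: the toy model below compares what operators can extract from uncoordinated users with what they can extract when users hold a periodic "slow game" vote that ejects operators charging above an agreed cap. All parameters are invented for illustration; this is not a calibrated model of any real system.

```python
# Toy numbers illustrating the coordination argument in the abstract above.
# All parameters are invented for illustration; this is not a calibrated model.

N_USERS = 1_000
BENEFIT = 10.0       # welfare each user gains per period from using the system
OPERATOR_COST = 1.0  # operator's true cost of service per user per period
PERIODS = 365        # horizon, e.g. one year of daily periods

def uncoordinated_extraction():
    """Without coordination, each user keeps transacting as long as the fee is
    below their individual benefit, so the revenue-maximising fee sits just
    under BENEFIT and operators capture almost all of the welfare."""
    fee = BENEFIT - 0.01
    return (fee - OPERATOR_COST) * N_USERS * PERIODS

def coordinated_extraction(vote_every: int, agreed_cap: float = 2.0):
    """With a periodic 'slow game' vote, users eject any operator whose fee
    exceeded the agreed cap since the last vote. The operator can still
    over-charge in the window before a single vote (and then lose the users),
    so the extractable surplus above the cap shrinks as votes become more
    frequent."""
    steady = (agreed_cap - OPERATOR_COST) * N_USERS * PERIODS
    one_shot_overcharge = (BENEFIT - agreed_cap) * N_USERS * vote_every
    return steady, one_shot_overcharge

print(f"no coordination: {uncoordinated_extraction():,.0f}")
for interval in (1, 30, 90):
    steady, epsilon = coordinated_extraction(vote_every=interval)
    print(f"vote every {interval:>2} periods: steady {steady:,.0f}, "
          f"max one-off deviation before ejection {epsilon:,.0f}")
```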
**Extended Synopsis**

- Introduction
  - Icebreaker & inspiration: discussions with Nikete
  - What this talk is about, and what it is not about
  - Slapdash MEV taxonomy: four categories
    - Supply/demand asymmetry and switching costs (users must agree on an ordering location)
    - Information flow asymmetry (most of the current "bad MEV" is in this category)
    - Delegation of computation (most of the current "good MEV" is in this category)
    - Cross-domain information flow and clock differences (interesting, but a topic for a different talk)
  - This talk is primarily about the _first_ category.
- Why you should care about the slow game
  - Reasons why you might want to be concerned about this:
    - Operators are coordinated (often via explicitly created mechanisms).
    - Operators set prices (and since they can coordinate to do this, we should expect them to do so).
    - Users/issuers must agree on a logical ordering location (at least per kind of interaction).
    - Suppose that users/issuers _don't_ coordinate. The rational incentive for users who benefit from the system is to keep using it as long as their benefits exceed the costs charged by operators - so operators can charge just up to the total welfare gain from the system in the first place before users will stop using it.
    - This has _absolutely nothing to do with information flow asymmetry_. TEEs, threshold encryption, etc. do _not_ help at all. Even if the value isn't public, operators can easily discover what fees they can extract by testing transaction demand at higher and lower prices.
  - Reasons why we don't see this right now:
    - Operators are too nice. They go to your conferences, go to your parties, and post about their Ethereum home validator setups on Twitter.
    - Most people still run the default software.
  - Users and issuers make the system valuable by choosing to use it - but if they want to actually benefit, they need to coordinate to prevent operators from extracting all of the welfare.
- The slow game right now
  - Social coordination, Twitter, bridging assets elsewhere.
  - In short: messy, with high switching costs.
- Concretizing the slow game: resource controllers
  - Brief recap of Anoma's resource logic model.
  - Addition of the _controller list_ (technically a DAG, but that detail isn't relevant here).
    - The first controller in the list is the user/issuer.
  - What does a controller have?
    - Custody
    - A liveness assumption
  - What do users want, and what can they do?
    - Users want to delegate to operators, but want them not to charge "too much" - i.e., users want to set prices.
    - In order to do this, users need to credibly demonstrate to operators that they will fork away / stop using operators if operators charge too much.
    - Users can periodically demonstrate a credible ability to coordinate. This doesn't need to be fast, but it needs to be credible.
  - This can be done socially, of course - but we _already have_ a nice credible coordination mechanism: consensus.
- What if users _just run consensus_?
  - This will be inefficient (slow), because there are many users, but that's quite alright!
  - Users' consensus needs to do very little:
    - Potentially set prices
    - Agree not to use some particular operator which is extracting too much
    - Demonstrate the ability to coordinate, sovereign control over networks, etc.
  - It could run daily, monthly, or potentially even less frequently - but there is a relation between the frequency and the epsilon that an operator can potentially extract.
  - It helps to have the "option" to run consensus.
  - Technical details
    - We probably don't want a proposer.
    - Users include aggregatable data in their votes (price levels, commitments, etc.) - see the sketch at the end of this synopsis.
    - Heterogeneous tokens can potentially be dealt with in a Stellar consensus / Heterogeneous Paxos style system.
  - We are doing this anyway!
    - Just not with a protocol, which makes the threat much less credible, subject to propaganda, etc.
    - A protocol (tech + culture) will make the threat credible.
- The slow game is the edge of MEV
  - Think about the network topology:
    - Users/issuers are on the "edge".
    - Operators are in the "middle".
  - If the users coordinate, they set the price levels, and the MEV is returned to the edges. If the operators coordinate and the users don't, the MEV flows to the center. Coordination is an "information sink".
- End-state equilibrium
  - Prices decline over time (hardware costs), but still with some margin for operators.
  - Operators and users are both coordinated.
  - Periodic consensus ritual :)
- Conclusion, open research questions, and areas of potential collaboration
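Finally, a sketch of the "aggregatable data in votes" idea from the Technical details above: every user's vote carries a fee cap and a list of operators to avoid, and anyone can combine the votes deterministically without a proposer. The specific aggregation rules here (median fee cap, two-thirds threshold for avoiding an operator) are invented for illustration; the actual slow-game protocol is an open design question.

```python
# Illustrative sketch of proposer-less vote aggregation for the slow game.
# The aggregation rules (median price cap, two-thirds threshold for avoiding
# an operator) are invented for illustration, not a specified protocol.

from dataclasses import dataclass, field
from statistics import median
from collections import Counter

@dataclass
class Vote:
    voter: str
    max_fee: float                                        # highest fee this user will tolerate
    avoid_operators: list = field(default_factory=list)   # operators to stop using

def aggregate(votes):
    """Combine votes deterministically: any party that aggregates the same
    vote set reaches the same result, so no proposer is needed."""
    if not votes:
        return None, set()
    agreed_cap = median(v.max_fee for v in votes)
    threshold = 2 * len(votes) / 3
    tallies = Counter(op for v in votes for op in set(v.avoid_operators))
    blacklisted = {op for op, n in tallies.items() if n > threshold}
    return agreed_cap, blacklisted

votes = [
    Vote("alice", max_fee=1.0, avoid_operators=["op7"]),
    Vote("bob", max_fee=2.0, avoid_operators=["op7"]),
    Vote("carol", max_fee=1.5),
    Vote("dave", max_fee=1.2, avoid_operators=["op7", "op3"]),
]

cap, blacklist = aggregate(votes)
print(cap, blacklist)  # 1.35 {'op7'}: the agreed fee cap and the operators to route around
```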