# CryptoXai.wtf
*From Asimov's Laws to Ethereum’s Protocol: [Re]searching the intersection where crypto meets AI alignment.*
![](https://i.imgur.com/aopPZqt.png)
# An Unfinished Treasure Map
> Both the cryptoeconomics research community and the AI safety / new cyber-governance / existential risk community are trying to tackle what is fundamentally the same problem: How can we regulate a very complex and very smart system with unpredictable emergent properties using a very simple and dumb system whose properties once created are inflexible.
> *- Vitalik Buterin, [Why Cryptoeconomics and X-Risk Researchers Should Listen to Each Other More](https://medium.com/@VitalikButerin/why-cryptoeconomics-and-x-risk-researchers-should-listen-to-each-other-more-a2db72b3e86b) (2016)*
Over the past few years, there have been many explorations of combining crypto and AI to solve practical problems, e.g. [MEV](https://tinyurl.com/mevccd) and [commitment races](https://www.lesswrong.com/posts/brXr7PJ2W4Na2EW2q/the-commitment-races-problem), [credible auctions via blockchains](https://arxiv.org/abs/2301.12532), [conditional information disclosure](https://arxiv.org/pdf/2204.03484.pdf) and [programmable privacy](https://hackmd.io/@sxysun/progpriv), [federated learning](https://arxiv.org/pdf/2205.07855.pdf), [zkml](https://github.com/ddkang/zkml), and [identity](https://thedefiant.io/worldcoin-ai-privacy-orb). Recently, transactions on Ethereum have become more like agentic intents (e.g., [account abstraction](https://ethereum.org/en/roadmap/account-abstraction/), [smart transactions](https://www.youtube.com/watch?v=dS3NRPu5L9s)), and protocols like [SUAVE](https://hackmd.io/@sxysun/suavespec) have arisen to turn the [adversarial on-chain bot war](https://www.paradigm.xyz/2020/08/ethereum-is-a-dark-forest) into coordinated execution markets that [satisfy human preferences](https://writings.flashbots.net/the-future-of-mev-is-suave).
Here in Zuzalu, we attempt to explore their intersection from first principles. During an evening whiteboarding session at a *Pi-rate Ship* pop-up hackerhouse, we, a group of humans, started by brainstorming the core concepts that underpin the foundations of both fields. We arrived at a [collective mindmap](https://app.excalidraw.com/l/AlhZ6I3InuW/ZIerhzsyeW) for Crypto X AI, taking inspiration from the [MEV mindmap: undirected traveling salesman](https://link.excalidraw.com/l/AlhZ6I3InuW/41auleMfa6K). This exploration leads us on a continuing journey into a future where crypto mechanisms become increasingly conscious, and AI plays a transformative role in prediction and alignment.
<div style="display: flex; justify-content: center;">
<figure style="display: flex; flex-flow: column; max-width:600px">
<img style="" src="https://i.imgur.com/Rh3BQd1.png">
<figcaption style="font-size:14px; text-align">This is just the beginning of our journey exploring the convergence of crypto and AI alignment research... If you would like to contribute to our collective mindmap, ping @sxysun1 on Twitter.</figcaption>
</figure>
</div>
# Agenda (WIP)
**Time:** 11:00 - 19:30 (GMT+2) on Sunday, May 7, 2023
**Location:** The Lighthouse, Zuzalu
**Livestream:** zuzalu.streameth.org
**Disclaimer:** No finality on the agenda yet, see you in [MEV-time](https://hackmd.io/@sxysun/suavespec). ;)
**Pre-game:** Wanna have your questions answered by the speakers? Want the event to focus on what you are interested in? [**Make yourself heard**](https://www.notion.so/flashbots/CryptoXai-wtf-Zuzalu-Sunday-7th-9587c86ee7ed4ddeadbd862dc13a0c7f?pvs=4)!
## Chapter 1. Putting X-Risk in Perspective
`11:00-11:30` **AI Capabilities and Our Greatest Fears: A Collective Timeline** ([slides](https://drive.google.com/file/d/1W7cKNJFlF3FdkcTLxSLQ-eg567egCUa0/view?usp=share_link))([Recording](https://zuzalu.streameth.org/session/633)) - *[Daniel Kang](https://ddkang.github.io)*
[- Add & vote on questions!](https://pol.is/5bekwwcwba)
* **Abstract:** Discussions of risks from AI are often emotionally charged, making it difficult to have a common understanding of what the concrete risks are and what capabilities will lead to these risks. In this talk, Daniel will provide an opinionated view on the history and future of these capabilities/risks. The goal will be to align the workshop attendees on concrete scenarios to discuss.
* **Background Reading:**
* [A brief (opinionated) view on the history of capabilities](https://ourworldindata.org/brief-history-of-ai)
* [An overall explainer of risks](https://www.nytimes.com/2023/05/01/technology/ai-problems-danger-chatgpt.html)
* [How the EU is thinking about risks](https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai)
`11:30-12:00` **AI Dystopias vs. AI Utopias** ([slides](https://drive.google.com/file/d/1dh2U4IIH4fOWUywAvKxIDHEBYxyfOcg_/view?usp=sharing))([Recording](https://zuzalu.streameth.org/session/637)) - *[Vitalik Buterin](https://twitter.com/VitalikButerin)*
[- Add & vote on questions!](https://pol.is/487cj3j5ii)
* **Abstract:** Laying out hypothetical scenarios of dystopian and utopian outcomes of AI development on humans, and potential paths as to how to get there.
* Dystopias: FOOM Doom; a medium-speed descent into madness; lock-in (totalitarianism, stagnation, civilizational decline); and the human Moloch as what might cause the above
* Utopias: Fun theory; CEV and other theories of aggregating human preferences; "Minimal AI government" ideas; and Human coordination as what might cause the above
`12:00-12:30` **Eventually We All Die** ([slides](https://drive.google.com/file/d/1z3Ll7_mbhxYH3leKfI02NBZAYvgj13rZ/view?usp=sharing)) - *[Nikete](https://twitter.com/nikete)*
[- Add & vote on questions!](https://pol.is/7b7vbcejcr)
* **Abstract:** *Competing existential risks, falsifiability, near-misses, and sharing the planet with cognitively more advanced agents. With a side of LoRA, markets and predictions.* Understanding the relative sizes of x-risks is crucial to guiding policy. Increasing AI capabilities have two effects here: they increase the probability that AI destroys us at some point, while potentially also preventing other things that would destroy us. Credibly signalling who understands the relative size of these two counteracting forces on human welfare is crucial for our future. This talk proposes an initial mechanism towards this in the form of fine-tunings of LLMs that reflect beliefs and are judged on their ability to predict tomorrow's news given today's (a toy scoring sketch follows the reading list below).
* **Background readings:**
* [Decision Scoring Rules](https://www.andrew.cmu.edu/user/coesterh/DecisionScoringRules.pdf)
* [Decision Rules and Decision Markets](https://www.cs.cmu.edu/~sandholm/decision%20rules%20and%20decision%20markets.AAMAS10.pdf)
* [The Singularity in Our Past Light-Cone](http://bactra.org/weblog/699.html)
* [Forecasting Future World Events with Neural Networks](https://arxiv.org/abs/2206.15474)
* [The A.I. Dilemma: Growth versus Existential Risk](https://web.stanford.edu/~chadj/existentialrisk.pdf)
* [LoRA: Low-Rank Adaptation of Large Language Models](https://arxiv.org/abs/2106.09685)
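To make the scoring idea concrete, here is a minimal toy sketch (our own illustration, not from the talk): each stand-in "fine-tune" assigns a probability to tomorrow's events given today's headlines and is judged by a log score. The model names, questions, and probabilities below are invented; a real version would use LLM fine-tunes (e.g., LoRA adapters) as the predictors.

```python
# Toy sketch: score belief-reflecting models on how well they predict tomorrow's news.
import math

def log_score(prob_assigned_to_outcome: float) -> float:
    """Proper scoring rule: reward the log-probability the model put on what happened."""
    return math.log(max(prob_assigned_to_outcome, 1e-12))

# Stand-ins for fine-tuned models: each maps today's headline to P(event happens tomorrow).
models = {
    "optimistic_finetune":  lambda headline: 0.8,
    "pessimistic_finetune": lambda headline: 0.3,
}

# (today's headline, did the predicted event actually occur the next day?)
resolved_questions = [
    ("Lab announces new frontier model", True),
    ("Regulator proposes compute caps", False),
]

for name, predict in models.items():
    total = 0.0
    for headline, happened in resolved_questions:
        p = predict(headline)
        total += log_score(p if happened else 1.0 - p)
    print(f"{name}: cumulative log score = {total:.3f}")
```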
`12:30-13:00` **The Limits of the Utility Function and the Structure of a New Science of Consciousness** ([slides](https://docs.google.com/presentation/d/1-VLV7mFXpxEPp0KzyDqV4-aOISZAsGvHj4TxjKnP4lk/edit)) - *[Mike Johnson](https://twitter.com/johnsonmxe)*
[- Add & vote on questions!](https://pol.is/7prhxcmcci)
* **Abstract:** Consciousness is a pre-scientific phenomenon similar to alchemy. What would it take to transition to a principled chemistry of phenomenology? This talk will survey “what kind of problem” consciousness is, what we might expect from a mature science of consciousness, and my solutions thus far. Implications for “what kind of thing humans are” and AI alignment will be discussed.
* **Background readings:**
* [Symmetry of Valence (STV) Primer](https://opentheory.net/2021/07/a-primer-on-the-symmetry-theory-of-valence/)
* [It From Bit]( https://opentheory.net/2022/04/it-from-bit-revisited/)
* [Principia Qualia (condensed version)](http://opentheory.net/Qualia_Formalism_and_a_Symmetry_Theory_of_Valence.pdf)
* [Principia Qualia (original version)](https://opentheory.net/PrincipiaQualia.pdf)
* [Making and breaking symmetries in mind and life](https://royalsocietypublishing.org/doi/10.1098/rsfs.2023.0015)
`13:00-14:00` **Lunch Break(out) Session: Lightning Talks**
**5min lightning talks - topics collected from event attendees**
* How to Overcome the Fear of AGI (Han)
* Making a Positive Singularity Incentive-Compatible (Roko)
* Levels of Defence in AI Safety (A. Turchin)
* AI-Directed: Adding a New Cultural Type to D. Riesman's *The Lonely Crowd* (R. Yager)
## Chapter 2. A Rational Hitchhiker's Guide to AI Alignment
*This chapter aims to provide a survey of the various approaches to AI alignment.*
`14:00-14:30` **An Overview of AI Alignment Research, 2008-2023** ([slides](https://docs.google.com/presentation/d/1OYXbWwkqzYUTLcxJaV-kkwCCpeB1qtnmjRMAQ_8UNDM/edit?usp=drivesdk)) - *[Jessica Taylor](https://twitter.com/jessicata)*
[- Add & vote on questions!](https://pol.is/4xacpkcbtj)
* **Abstract:** What progress has the AI alignment field made over time? How have its problems been formulated, reframed, and solved over time? What are some of the fundamental obstacles? This talk presents an overview of the history of the field and current lines of inquiry.
`14:30-15:00` **Seaside Chat: Alignment as a Layer in the Stack** *- [Rob Knight](https://twitter.com/rob_knight), [Nate Soares](https://twitter.com/so8res?s=21&t=lvFx_xo5OMp8SxFytrUdOg) ([Recording](https://zuzalu.streameth.org/session/640))*
[- Add & vote on questions!](https://pol.is/5sndhmedp5)
* **Abstract:** Brief introduction to the idea of "perspective-taking", trying to understand why someone has the opinion that they do
* Let's try to separate out alignment from other concerns, e.g. ethics, by analogy with other layer/stack systems in computing:
- TCP/IP 5-layer model (stacked protocols)
- File system stack
* How does an approach like reinforcement learning fail to ensure alignment?
* Can we solve this by fixing the "context" in which AI is deployed, e.g. reforming human society?
* What kind of approaches might work for alignment? How can people contribute?
* How might alignment failures become apparent as AI progresses along the capability curve?
`15:00-15:30` **Reform AI Alignment** - *[Scott Aaronson](https://www.scottaaronson.com) ([slides](https://docs.google.com/presentation/d/103Pn1AnJFAT0YF6jh3Rt2IL7uTOMv9db/edit?usp=drivesdk&ouid=117509835856332636687&rtpof=true&sd=true))([Recording](https://zuzalu.streameth.org/session/642))*
[- Add & vote on questions!](https://pol.is/4cm33htk9d)
* **Abstract:** I'll share some thoughts about AI safety, shaped by a year's leave at OpenAI to work on the intersection of AI safety and theoretical computer science.
`15:30-16:00` **A Bold Attempt At Alignment: Open Agency Architecture** ([slides](https://docs.google.com/presentation/d/12TqbJVwZ5Rq6UhwtK8VRT4rEjTiWzfNsYkBePWSGXdw/edit))([Recording](https://zuzalu.streameth.org/session/644)) - *[Deger Turan](https://twitter.com/degerturann)*
[- Add & vote on questions!](https://pol.is/2ypubadny7)
* **Abstract:** Open Agency Architecture is a bold theory and proposal for AI alignment that requires a massive and wide-ranging formal-modeling enterprise that integrates into a global world-model. OAA systems do not deploy the trained ML system itself, but instead aim to constrain powerful ML systems to deploy verifiably aligned, less powerful outputs (a toy sketch of this propose-verify-deploy pattern follows the background reading below). Our plan is to develop OAA by iterating on smaller, domain-specific applications that can find immediate use as institutional decision-making tools and provide OAA with feedback from different academic disciplines and expert networks in an international collaboration.
* **Background reading:**
* [The open agency model](https://www.lesswrong.com/posts/5hApNw5f7uG8RXxGS/the-open-agency-model)
* [An open agency architecture for safe transformative AI](https://www.lesswrong.com/posts/pKSmEkSQJsCSTK6nH/an-open-agency-architecture-for-safe-transformative-ai)
* [Davidad's bold plan for alignment](https://www.lesswrong.com/posts/jRf4WENQnhssCb6mJ/davidad-s-bold-plan-for-alignment-an-in-depth-explanation)
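The sketch below is our own illustration of the pattern the abstract describes, not the OAA specification: a powerful "proposer" suggests candidate policies, a separate checker verifies each one against an explicit safety spec, and only a verified, bounded artifact is ever deployed. All names here (`Policy`, `proposer`, `safety_spec`) are hypothetical, and the "verification" is a stand-in for formal checking against a world-model.

```python
# Hedged sketch of a propose-verify-deploy pipeline (illustrative only).
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Policy:
    name: str
    max_actuation: float   # crude proxy for how powerful the deployed artifact is

def proposer(task: str) -> List[Policy]:
    """Stand-in for a powerful ML system: proposes candidate policies for a task."""
    return [Policy("aggressive_plan", max_actuation=10.0),
            Policy("conservative_plan", max_actuation=0.5)]

def verified(policy: Policy, spec: Callable[[Policy], bool]) -> bool:
    """Stand-in for formal verification against a world-model / safety spec."""
    return spec(policy)

def deploy_pipeline(task: str, spec: Callable[[Policy], bool]) -> Optional[Policy]:
    # The trained proposer itself is never deployed; only a checked, bounded output is.
    for candidate in proposer(task):
        if verified(candidate, spec):
            return candidate
    return None  # nothing met the spec, so nothing is deployed

safety_spec = lambda p: p.max_actuation <= 1.0
print(deploy_pipeline("route city traffic", safety_spec))
```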
## Chapter 3. Moral Characters of Crypto
*This chapter aims to build a mental model of blockchains, cryptoeconomic mechanisms, and cryptography, focusing on their ability to align and coordinate agents.*
`16:00-16:20` **It's Too Late... MEV has Already Achieved AGI** ([slides](https://docs.google.com/presentation/d/1tNrwc7aCTeeKSitXNuqZAMBqLEyHKKVOSlcfF0UIctk/edit?usp=sharing))([Recording](https://zuzalu.streameth.org/session/645)) - *[Phil Daian](https://twitter.com/phildaian)*
[- Add & vote on questions!](https://pol.is/9cttz427ev)
* **Abstract**: In this talk, I will discuss the parallels between MEV alignment and AI alignment. First, I will give a brief introduction to MEV as a primitive for representing complex coordination games. I will argue that the MEV ecosystem represents a synthetic consciousness of fundamentally unaligned and often robotic actors, whose local incentives drive them to a common outcome. I posit several learnings and opportunities from the intersection of MEV and AI, including the ability to use cryptocurrencies as a hyper-realistic and ultra-adversarial sandbox to test agent modeling axioms. I will claim that privacy and decentralization are the key to an aligned future, and that we must align around these topics as humans as well.
`16:20-17:00` **Intelligence beyond Commitment Devices** ([slides](https://tinyurl.com/ai-pccd))([Recording](https://zuzalu.streameth.org/session/646)) - *[Xinyuan Sun](https://twitter.com/sxysun1)*
[- Add & vote on questions!](https://pol.is/8cxjrxmskn)
* **Abstract:** A major value proposition of cryptoeconomic mechanisms is that users can trustlessly collaborate by making credible commitments about their actions. We discuss ways in which crypto-enforced credible commitments may mitigate human-AI coordination failures, and demonstrate the limits and tradeoffs of those commitment devices in mitigating intelligence alignment risks. We show how, surprisingly, the problem of mitigating the negative externalities of commitment devices in crypto (i.e., MEV) is the same as the problem of cooperative AI and a large part of AI alignment (a toy program-equilibrium sketch follows the background readings below).
* **Background readings:**
* [Foundations of cooperative AI](https://www.cs.cmu.edu/~conitzer/FOCALAAAI23.pdf)
* [Commitment games](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/11/2010-A_Commitment_Folk_Theory.pdf)
* [Program equilibria](https://arxiv.org/pdf/2208.07006.pdf)
* [Reasoning about knowledge](https://www.cs.rice.edu/~vardi/papers/book.pdf)
* [Ethereum is a game-changing technology](https://medium.com/@virgilgr/ethereum-is-game-changing-technology-literally-d67e01a01cf8)
* [Crypto as credible commitment devices](https://crediblecommitments.wtf/)
* [Maximal Extractable Value (MEV) from commitment devices](https://tinyurl.com/mevccd)
* [Speed of common knowledge in commitment devices | SUAVE](https://hackmd.io/@sxysun/suavespec)
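To make the commitment-device idea concrete, here is a toy illustration (our own, in the spirit of the program-equilibria reading above): players submit programs that can read each other's source code, and a program that cooperates only against an exact copy of itself makes mutual cooperation an equilibrium of the one-shot prisoner's dilemma. The bot names and payoff numbers are invented.

```python
# Toy program equilibrium: programs that condition on each other's source code.
import inspect

PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def clique_bot(opponent_source: str) -> str:
    """Cooperate only against an exact copy of myself, else defect."""
    return "C" if opponent_source == inspect.getsource(clique_bot) else "D"

def defect_bot(opponent_source: str) -> str:
    return "D"

def play(p1, p2):
    s1, s2 = inspect.getsource(p1), inspect.getsource(p2)
    m1, m2 = p1(s2), p2(s1)
    return PAYOFFS[(m1, m2)], PAYOFFS[(m2, m1)]

print(play(clique_bot, clique_bot))   # (3, 3): mutual cooperation is stable
print(play(clique_bot, defect_bot))   # (1, 1): defectors gain nothing against the committed bot
```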
`17:00-17:15` **Open Agency Architecture Meets MEV: Collective Q&A** - *[Phil Daian](https://twitter.com/phildaian)*, *[Xinyuan Sun](https://twitter.com/sxysun1)*, *[Deger Turan](https://twitter.com/degerturann)*
[- Add & vote on questions!](https://pol.is/27rbsmft8a)
`17:15-17:30` **Using Cryptography to Prevent Human Defectors in World War AGI** ([slides](https://docs.google.com/presentation/d/17zYXO3gNUET_HLsOO24cWaFquT1BMAFPnZIJ_XZ0mis/edit?usp=sharing))([Recording](https://zuzalu.streameth.org/session/650)) - *[Barry Whitehat](https://twitter.com/barrywhitehat)*
[- Add & vote on questions!](https://pol.is/2eebkcpzxx)
* **Abstract:** The crypto community has thought a lot about how to build collusion-resistant mechanisms. The basic idea is to make bribery pointless by making it impossible to prove that you did the thing you would be bribed for. If we combine this with proof of individuality and proof of possession of a private key, we can make it impossible for an AI to bribe humans to defect (a rough sketch of the key-switching idea follows the background reading below).
* **Background Reading:**
* https://vitalik.ca/general/2019/04/03/collusion.html
* https://ethresear.ch/t/minimal-anti-collusion-infrastructure/5413
* https://vitalik.ca/general/2019/10/01/story.html
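Below is a very rough sketch of the anti-collusion idea, loosely modelled on the MACI design linked above; it is our own simplification, and the `Message`, `tally`, and key strings are hypothetical stand-ins. In the real design messages are encrypted to a coordinator and validity is enforced with zero-knowledge proofs; the point illustrated here is only that a voter can switch keys, so any "receipt" shown to a briber can be silently overridden and proves nothing about the final tally.

```python
# Toy sketch: key changes make vote receipts worthless to a briber.
from dataclasses import dataclass

@dataclass
class Message:
    voter_id: str
    signing_key: str   # stand-in for the voter's current public key
    payload: str       # either a vote ("A"/"B") or "KEYCHANGE:<new_key>"

def tally(messages):
    """Coordinator replays messages in order; only messages signed with the
    voter's current key count, and a key change revokes the old key."""
    current_key, final_vote = {}, {}
    for m in messages:
        key = current_key.setdefault(m.voter_id, m.signing_key)
        if m.signing_key != key:
            continue                      # signed with a revoked key: ignored
        if m.payload.startswith("KEYCHANGE:"):
            current_key[m.voter_id] = m.payload.split(":", 1)[1]
        else:
            final_vote[m.voter_id] = m.payload
    return final_vote

# The voter "proves" to a briber that they voted A, then rotates keys and votes B.
msgs = [
    Message("alice", "pk1", "A"),             # receipt shown to the briber
    Message("alice", "pk1", "KEYCHANGE:pk2"), # quietly switch keys
    Message("alice", "pk2", "B"),             # the vote that actually counts
]
print(tally(msgs))  # {'alice': 'B'}
```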
`17:30-17:45` **What ZK can do for us: Privately Authenticating Real People (without third parties) and Auditing ML** ([slides](https://drive.google.com/file/d/11xOoAUDb3aUK5wvAdkKAFgiPXONDnjaY/view?usp=share_link))([Recording](https://zuzalu.streameth.org/session/649)) - *[Daniel Kang](https://ddkang.github.io)*
[- Add & vote on questions!](https://pol.is/6ns2mc5ffw)
* **Abstract:** Zero-knowledge proofs have made amazing advances in proving arbitrary computation, but real-world uses have mostly been limited to zkEVMs. In this talk, I will describe how to use zero-knowledge proofs to interact with the real world. I'll start by describing how to authenticate real people and media (videos, images, audio) _without_ needing to trust third parties when combined with attested sensors. With open standards, we also don't need to rely on specific hardware vendors. I'll also describe how to audit ML deployments (an illustrative audit flow follows the reading list below). As a case study, I'll describe how to audit the Twitter algorithm. The same technology can also be used to audit providers such as OpenAI.
* **Reading list**:
* [Verifying the Twitter algorithm](https://medium.com/@danieldkang/empowering-users-to-verify-twitters-algorithmic-integrity-with-zkml-65e56d0e9dd9)
* [Fighting deepfakes](https://medium.com/@danieldkang/zk-img-fighting-deepfakes-with-zero-knowledge-proofs-9b76c23e3789)
* [Using the zkml framework](https://medium.com/@danieldkang/bridging-the-gap-how-zk-snarks-bring-transparency-to-private-ml-models-with-zkml-e0e59708c2fcJ)
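The flow below is our own stand-in, not the zkml API: the provider publicly commits to its model weights, and an auditor later checks that each claimed output is consistent with the committed model. In a real deployment the "reveal the weights and recompute" step is replaced by a zk-SNARK proof of inference, so the weights stay private; the function names (`commit`, `predict`, `audit`) and the toy linear model are invented for illustration.

```python
# Illustrative commit-then-audit flow for an ML deployment (hash-reveal stand-in for a ZK proof).
import hashlib, json

def commit(weights) -> str:
    """Publish a binding commitment to the model (here: a plain hash)."""
    return hashlib.sha256(json.dumps(weights, sort_keys=True).encode()).hexdigest()

def predict(weights, x: float) -> float:
    """Toy 'model': a single linear unit."""
    return weights["w"] * x + weights["b"]

# Provider side: commit once, then serve predictions.
weights = {"w": 2.0, "b": -1.0}
public_commitment = commit(weights)
claimed_output = predict(weights, x=3.0)

# Auditor side: given (x, claimed_output, commitment), check consistency.
def audit(weights_revealed, x, claimed, commitment) -> bool:
    return commit(weights_revealed) == commitment and predict(weights_revealed, x) == claimed

print(audit(weights, 3.0, claimed_output, public_commitment))                # True: honest provider
print(audit({"w": 9.0, "b": 0.0}, 3.0, claimed_output, public_commitment))   # False: swapped model caught
```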
`17:45-18:00` **Tea Break**
* The 6-Month Moratorium: Game Theory and Decision Theory
## Chapter 4. The Art of Abstraction
`18:00-18:30` **Using the Veil of Ignorance to Align AI Systems with Principles of Justice** ([slides](https://docs.google.com/presentation/d/1VKVF-gUnZJl7pH9gYvekN03vor_e4OZt4PYupwQic7Y/edit?usp=sharing))([Recording](https://zuzalu.streameth.org/session/664)) - *[Saffron Huang](https://twitter.com/saffronhuang)*
[- Add & vote on questions!](https://pol.is/5m5jziczr6)
* **Abstract:** The philosopher John Rawls proposed the Veil of Ignorance (VoI) as a thought experiment to identify fair principles for governing a society. Here, we apply the VoI to an important governance domain: artificial intelligence (AI). In five incentive-compatible studies (N = 2,508), including two preregistered protocols, participants choose principles to govern an AI assistant from behind the veil: that is, without knowledge of their own relative position in the group. Compared to participants who have this information, we find a consistent preference for a principle that instructs the AI assistant to prioritize the worst-off. Neither risk attitudes nor political preferences adequately explain these choices. Instead, they appear to be driven by elevated concerns about fairness: Without prompting, participants who reason behind the VoI more frequently explain their choice in terms of fairness, compared to those in the Control condition. Moreover, we find initial support for the ability of the VoI to elicit more robust preferences: In the studies presented here, the VoI increases the likelihood of participants continuing to endorse their initial choice in a subsequent round where they know how they will be affected by the AI intervention and have a self-interested motivation to change their mind. These results emerge in both a descriptive and an immersive game. Our findings suggest that the VoI may be a suitable mechanism for selecting distributive principles to govern AI.
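A tiny worked example (ours, with invented numbers, not from the paper) of the choice participants face: two candidate principles for an AI assistant dividing a fixed benefit among three users, where "maximise total welfare" and "prioritise the worst-off" (maximin) pick different allocations.

```python
# Utilitarian vs. maximin choice over toy allocations of a benefit among three users.
allocations = {
    "favour_the_productive": [8, 3, 1],   # total 12, worst-off gets 1
    "level_the_floor":       [4, 4, 3],   # total 11, worst-off gets 3
}

utilitarian = max(allocations, key=lambda k: sum(allocations[k]))
maximin     = max(allocations, key=lambda k: min(allocations[k]))

print("maximise total       ->", utilitarian)   # favour_the_productive
print("prioritise worst-off ->", maximin)       # level_the_floor
```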
`18:30-19:00` **Yudkowsky vs. Plato: can language models possess knowledge?** ([slides](https://www.icloud.com/keynote/066uNOonke1QaqLmsnlMXpEng#CoT-zuzalu-final))([Recording](https://zuzalu.streameth.org/session/666)) *- [Tarun Chitra](https://twitter.com/tarunchitra)*
[- Add & vote on questions!](https://pol.is/6ub79chx6p)
* **Abstract:** Language models have ruined our ability to maintain a clean separation between man and machine — RIP Turing Test. On the other hand, other areas of computer science, such as interactive proofs and ZK, have very 'clean' notions of knowledge built into their definitions. The type of 'knowledge' in zero knowledge exists in a particular sense — it can only 'exist' if a polynomial-time algorithm generated it, and it can only be 'stolen' if you have exponential compute resources. This dichotomy between the lack of "knowledge" in LLMs versus the formal and clear definition of "knowledge" in ZK suggests that we might be able to import some lessons about 'knowledge' from ZK to LLMs (a rough statement of the ZK notion appears after the reading list below). In this talk, we'll go through the epistemological concerns related to this question and try to provide some ideas for how LLMs can display possession of knowledge to each other.
* **Reading List:**
- [Initial Blog Post](https://hackmd.io/@pinged/zk-and-llms)
- [Chain of Thought Prompting Elicits Reasoning in Large Language Models (initial Google paper on CoT)](https://arxiv.org/abs/2201.11903)
- [Language Models (mostly) Know What They Know](https://arxiv.org/abs/2207.05221)
- [Justin Thaler's ZK book, Section 7.4, Knowledge Soundness](https://people.cs.georgetown.edu/jthaler/ProofsArgsAndZK.html)
- [AI safety via debate](https://arxiv.org/abs/1805.00899)
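For readers who want the ZK-side notion pinned down: one common, rough form of knowledge soundness from the proof-systems literature (see the Thaler chapter above; exact quantifiers and slack vary by definition) says a prover "knows" a witness exactly when one can be efficiently extracted from it.

```latex
% Rough statement of knowledge soundness with knowledge error \kappa:
% there is an efficient extractor that recovers a witness from any convincing prover.
\[
\exists\, \text{PPT extractor } \mathcal{E} \;\; \forall\, \text{provers } P^{*}:\quad
\Pr\big[(x, w) \in \mathcal{R} \;:\; w \leftarrow \mathcal{E}^{P^{*}}(x)\big]
\;\geq\;
\Pr\big[\langle P^{*}, V\rangle(x) = \text{accept}\big] \;-\; \kappa(|x|)
\]
```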
`19:00-19:30` **Seaside Chat: can artificial intelligence produce art?** *- [Sarah Meyohas](https://sarahmeyohas.com/) ([Recording](https://zuzalu.streameth.org/session/669))*
[- Add & vote on questions!](https://pol.is/4emc4xe8ku)
* **Abstract:** In this talk, we will delve into the fascinating world of AI-generated images, exploring the intricate relationship between AI and artistic authorship. We'll examine whether searching for an image can truly be considered as creating it, and discuss the characteristics of a medium that influence the strength of authorship claims. Further, we'll investigate how the semantics of language as a tool for image synthesis impacts the end results and consider whether AI is inadvertently codifying "style" in its creations. Along the way, we'll ponder if AI-generated art evokes a sense of nostalgia by design, and ultimately, address the burning question: Can AI truly produce art?
*Special thanks to Vitalik Buterin, Michael Johnson, Rob Knight, Nate Soares, Xinyuan Sun (Sxysun), Barry Whitehat, George Zhang and Zuzalu friends of the Pi-rate Ship. If you have any proposed topics, or would like to speak at or attend the .wtf unconference, please ping [sxysun](https://twitter.com/sxysun1) or [T.I.N.A.](https://twitter.com/tzhen) via Twitter DM.*
---
### Snacks for Thoughts...
Blockchains enable trustless collaboration via cryptographic and crypto-economic primitives. These primitives allow users to delegate their decision-making to smart contracts (algorithmic agents). And consensus on commitments makes this delegation common knowledge, thus [shifting equilibria](https://medium.com/@virgilgr/ethereum-is-game-changing-technology-literally-d67e01a01cf8#:~:text=Ethereum%20is%20an%20unprecedented%20arena,external%20authority%20to%20enforce%20rules).
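To make "shifting equilibria" concrete, here is a minimal toy sketch (ours, with invented payoffs and function names) of how a contract-enforced escrow changes the rational outcome of a one-shot exchange: without the commitment, the buyer expects the seller to pocket the payment and never pays; once payment is routed through an escrow that releases only on delivery, trade becomes the equilibrium.

```python
# Toy example: a credible, commonly known commitment (an escrow contract) shifts the equilibrium.

def seller_best_response(paid: bool, escrow: bool) -> bool:
    """Does the seller ship? Shipping costs 1; the released payment is worth 3."""
    if escrow:
        return True      # shipping (+3 - 1) beats not shipping (0), since funds only release on delivery
    return False         # payment is already in hand either way, so save the shipping cost

def buyer_best_response(escrow: bool) -> bool:
    """Does the buyer pay, anticipating the seller's best response?"""
    ships = seller_best_response(paid=True, escrow=escrow)
    return ships         # paying is only worthwhile if the goods will actually arrive

for escrow in (False, True):
    pays = buyer_best_response(escrow)
    ships = seller_best_response(pays, escrow) if pays else False
    print(f"escrow={escrow}: buyer pays={pays}, seller ships={ships}")
# escrow=False: no trade happens; escrow=True: the mutually beneficial trade occurs
```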
AIs, as complex algorithmic agents that may or may not exhibit agentic behavior, can lead to undesirable equilibria for humans, and many have even predicted that they will bring [horrible destruction](https://intelligence.org/2022/06/10/agi-ruin/) upon humans very quickly. Can existing coordination technologies like crypto help us avoid such outcomes?
CryptoXAI delves into the coordination and alignment aspects of AI and crypto. After all, crypto's potential lies in its ability to act as a coordination device through the use of [credible commitments](https://collective.flashbots.net/t/whiteboard-an-overview-of-credible-commitment-devices/926), e.g., global payments, public-goods funding, democratized financial access. How does crypto, as an alignment/commitment device, compare with popular alignment approaches such as [decision theories](https://wiki.lesswrong.com/wiki/Updateless_decision_theory?_ga=2.22276914.826386532.1559783077-36269370.1469250194) or [open-sourcing](https://intelligence.org/files/ProgramEquilibrium.pdf) AI’s [source code](https://www.sciencedirect.com/science/article/abs/pii/S0899825604000314)? Does it make more sense to align AIs by combining [functional decision theory](https://arxiv.org/abs/1710.05060) with [cryptographic commitments](https://medium.com/@virgilgr/ethereum-is-game-changing-technology-literally-d67e01a01cf8) about the AI's actions instead of allowing arbitrary access to source code (which could cause [programmable privacy](https://hackmd.io/@sxysun/progpriv) issues)? But even if AIs can coordinate using cryptographic/crypto-economic commitments, can the exercise of those commitments be interpretable? What does the tradeoff space of those approaches look like?
Will AIs use crypto to coordinate amongst themselves to improve their equilibrium payoffs? Will the equilibria they coordinate on align with human social values? Can crypto as a commitment device be used to align AIs and humans? After all, some [argue](https://www.lesswrong.com/posts/zB3ukZJqt3pQDw9jz/ai-will-change-the-world-but-won-t-take-it-over-by-playing-3) that AIs are still far from gaining agency and will stay in the ["tool of humans"](https://gwern.net/tool-ai) range for a long time. If that's the case, will the coordination and alignment of AIs just end up being a shadow of the coordination and alignment of humans? What unique properties would the projection of this shadow have? And if it does end up being human alignment, is it possible for crypto to exercise its coordination magic on humans to work on building AIs together (e.g., solving the data-privacy training problem using some variant of [orderflow auctions](https://www.youtube.com/watch?v=9je9VheLZpw))? How about using crypto to coordinate humans in the period leading up to AGI, or to make the online world more secure against AGI?
What about AGIs? How fast will AIs gain agency? Does agency require [consciousness](https://futureoflife.org/podcast/on-consciousness-qualia-and-meaning-with-mike-johnson-and-andres-gomez-emilsson/)? Does the development of AGIs lead to a world where there is [one dominant Advanced AGI](https://www.overcomingbias.com/p/foom-updatehtml)? Will AGIs have arbitrary preferences that make their alignment impossible? Will access to privacy technology change what AGI could do (after all, access to source code would then be impossible)?
What kinds of human values can crypto, as a commitment device, align AIs to that weren't reachable with existing approaches like various [decision theories](https://www.lesswrong.com/posts/szfxvS8nsxTgJLBHs/ingredients-of-timeless-decision-theory)? What are the limitations of commitment devices in their coordination of agents to reach human-valued outcomes? Can crypto learn from AI on how to best coordinate and trustlessly cooperate?