# Provable gods and autonomous worlds

Two trends are shaking up the gaming world: artificial intelligence and autonomous worlds. LLMs offer a new way out of the content creation rat race by reducing the marginal cost of content creation (see ChatGPT playing Ultima Online). By leveraging the decentralization and trustlessness properties of blockchains, games can also create new affordances and live on without any single centralized entity. Think of an evergreen game of "r/place" running entirely on the blockchain, that no one can turn off. Zero-knowledge proofs and validity proofs could be a third affordance, giving game builders a way to offload computation to end clients and trust the output, obfuscate information, and run trustless P2P multiplayer systems with local consensus rather than executing the game on a centralized server.

We had a fun thought experiment during the Bitkraft summit that mixes all three technologies. Buckle up.

Picture an autonomous world called Gaia. The game is a simulation of a planet whose physics systems run directly on the blockchain. The game is made of a few systems, such as `weather`, `population`, etc. The systems tick forward every block and different parameters of the planet evolve: temperature, number of humans, number of trees, and so on.

How do we make sure Gaia generates drama, or enough "fun", for people to care about a planet simulation? (I'd argue not much is needed, since people are already watching marble racing, but that's another topic.) Let's introduce a game mechanic where players get to decide what happens to the planet, similarly to Twitch Plays Pokémon. You'd have hundreds of players competing to get an outcome they believe is interesting: crashing an asteroid into the planet, raising the temperature by a few degrees and seeing what happens. What if we asked an AI to choose the outcome of certain functions? Let's call these AI gods. The world contract would reference a `god_list` that can be called upon by players.
Let's say we have two competing gods: an orderly god trying to take care of the planet, and a chaotic god trying to set it on fire. Players can use the `pray(god_id, amount)` system to tip the scales in favour of their god winning the next theological event round. (Yep, we're now doing onchain pagan offerings.)

Let's say the chaotic god has won the most prayers through the sheer power of malice in the player community. This does not bode well for Gaia. There is now a bounty to execute the chaotic god's code (which will most likely never run onchain), produce an output, such as "a meteor now crashes on the planet and kills all dinosaurs", and a validity proof that the god's code was actually run properly. (You'd verify this by checking the LLM's output and proof against the signature of the proper version of the god's code; the god registry would list the public keys of the different gods.) The `verify` function would take in the final output and the signature, ensure the signature matches the expected god's key, and verify the validity proof. Should we choose not to use validity proofs, you'd have no guarantee the god's code was properly executed. You wouldn't be sure your prayers were heard.

The chaotic god has decided to crash a meteor into the planet. The decision to "crash a meteor" would be translated into actionable changes, such as killing half of the planet's denizens and raising the temperature by 50°C. The planet will then take a certain amount of time to recover, until the next praying round.

This is an example for one function, but we could go even further. We want autonomous worlds, free from the control of any given human group. Let's say we're playing Dungeons and Dragons onchain. The fun of Dungeons and Dragons is doing things that are off-script: telling your game master you're going to seduce the dragon you're trying to find, and the GM telling you to roll for charisma, for example.
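To make the flow concrete, here's a hedged Python sketch of the prayer round and the `verify` step. The `pray`, `verify`, and registry names come from the post; the digest-based "signature" and the stubbed `proof_ok` flag are stand-ins I'm inventing for illustration, where a real design would use actual public-key signatures and a ZK proof verifier:

```python
import hashlib
from collections import defaultdict

# --- prayer round: players tip the scales toward their god ---
prayers = defaultdict(int)


def pray(god_id: str, amount: int) -> None:
    prayers[god_id] += amount


def round_winner() -> str:
    return max(prayers, key=lambda g: prayers[g])


# --- god registry and verification (all stand-ins) ---
def god_key(code: bytes) -> str:
    # Stand-in for a public key: a digest of the god's code version.
    return hashlib.sha256(code).hexdigest()


GOD_REGISTRY = {
    "orderly": god_key(b"orderly-god-code-v1"),
    "chaotic": god_key(b"chaotic-god-code-v1"),
}


def sign(key: str, output: str) -> str:
    # Toy signature binding an LLM output to a god's registered key.
    return hashlib.sha256((key + output).encode()).hexdigest()


def verify(god_id: str, output: str, signature: str, proof_ok: bool) -> bool:
    """Accept the output only if the signature matches the registered
    god's key AND the validity proof checks out (stubbed as a bool)."""
    key = GOD_REGISTRY.get(god_id)
    if key is None or not proof_ok:
        return False
    return signature == sign(key, output)
```

Without the `proof_ok` leg, anyone holding the key material could sign an output they never actually computed, which is exactly the "were my prayers heard?" gap that validity proofs close.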
Basically, any game that could run with a dungeon or game master could plausibly be managed by an AI. So why not ask one to run the game? Running these AIs and producing the validity proofs would be incentivized by players directly paying for them: you put up a bounty for someone to run the AI's code and produce a validity proof that releases the bounty. This would allow for more expressive composability and emergent gameplay outside of any given group's control.

There are some possible issues, obviously. One could brute-force the AI's outputs by trying multiple times until they get a favourable outcome. MEV meets AI-powered autonomous worlds. In that case you could probably mitigate this through economic incentives: the first person to post a valid output earns the reward, so an attacker wouldn't have time to run multiple simulations before a legitimate actor.

As it stands, provable LLMs are still far off. The Giza team and others are at the forefront of this, and I'm convinced we'll be able to provably train models and generate AI outputs thanks to the advance of provable languages such as Cairo and Noir. I have no clue how far we are from having such things running in production, but it's pretty funny to picture autonomous world denizens praying to their provable gods.

If your autonomous world doesn't start a cult, is it really autonomous?

Thank you David Amor, Tim and Hilmar for the fun discussion.
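As a closing footnote, the "first valid output wins" mitigation could look like this single-process toy (names hypothetical; onchain, the ordering would come from block inclusion rather than call order):

```python
from typing import Optional


class GodBounty:
    """Toy bounty: the first submission that passes verification claims
    the reward, shrinking an attacker's window to re-roll the LLM for a
    favourable outcome."""

    def __init__(self, reward: int):
        self.reward = reward
        self.claimed_by: Optional[str] = None

    def submit(self, submitter: str, output: str, valid: bool) -> bool:
        # `valid` stands in for the signature + validity-proof checks.
        if self.claimed_by is not None or not valid:
            return False
        self.claimed_by = submitter
        return True
```

An attacker re-rolling for a favourable outcome burns time on extra proof runs, so an honest prover posting the first valid result should usually beat them to the claim.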