# Musings on an Agential Universe
I am a computer scientist working in the field of replicated state machine protocols, and our research arrived at an interesting protocol design that might have some relation to effects we see in physics.
## Replicated State Machines
Replicated state machine protocols allow a set of spatially distributed computers to agree on an evolving ledger state. They are usually used to provide a decentralized infrastructure for users to exchange financial assets (e.g. Bitcoin).
### Double Spend Problem
The reason why we need sophisticated protocols and cannot just send funds the same way we send e-mails is the so-called "double spend problem":
While physical coins can only be spent once, digital coins can be spent several times by crafting multiple conflicting transactions containing different instructions on how to modify a shared ledger state.
#### Traditional Solution: Distributed Turing Machines
Traditionally, this is solved by making these protocols operate like distributed Turing machines: we agree on a starting ledger state, we agree on a growing list of transactions, and then we apply them in order to our copy of the shared ledger state to eventually arrive at the same result. If a transaction tries to spend funds that have already been spent, we simply reject it.
While this is actually a pretty simple solution, it does have a problem: since we check the validity of transactions by applying them in order, we are limited to executing state changes one by one on a single CPU, which severely limits the computational throughput of these kinds of systems.
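To make this execution model concrete, here is a minimal sketch in Python (the ledger representation and transaction format are simplified assumptions for illustration, not the data model of any particular protocol):

```python
# Minimal sketch of the classical approach: every replica applies the same
# ordered list of transactions and rejects double spends along the way.

def apply_in_order(genesis, ordered_transactions):
    """Apply transactions one by one, rejecting any that overspend."""
    state = dict(genesis)
    for sender, receiver, amount in ordered_transactions:
        if state.get(sender, 0) < amount:
            continue  # attempted double spend / overspend: simply rejected
        state[sender] -= amount
        state[receiver] = state.get(receiver, 0) + amount
    return state

# Every replica that starts from the same genesis state and agrees on the same
# ordering deterministically arrives at the same final state.
genesis = {"alice": 10}
transactions = [("alice", "bob", 10), ("alice", "carol", 10)]  # second one conflicts
print(apply_in_order(genesis, transactions))  # {'alice': 0, 'bob': 10}
```

The strict ordering is exactly what forces the one-by-one execution mentioned above.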
#### Our Solution: Superpositions and Retrocausal Collapse
In an attempt to solve these problems, our research yielded a [new and different way to track the ledger state of a distributed system](https://arxiv.org/abs/2205.01345) that is based on tracking the causal structure of state changes in an evolving hypergraph and creating superpositioned versions of the ledger state in the presence of double spends (transactions touching the same object).
Two transactions sending the same funds to different people (Alice and Bob) result in two superpositioned versions of the ledger state: one where Alice received the money and one where Bob received the money.
The network then globally collapses this branchial space of possibilities into the version that was seen first by the majority of network participants. Since this information only becomes available with a slight delay (it takes time to collect non-local knowledge), the collapse happens in the form of a retrocausal selection process.
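A toy sketch of the idea (deliberately much simpler than the protocol described in the paper; the branch names, votes, and majority rule are illustrative assumptions):

```python
# Two transactions spend the same coin: one branch pays Alice, one pays Bob.
# Both versions are kept in superposition until non-local information arrives.
from collections import Counter

branches = {
    "A": {"alice": 1},  # version of reality in which Alice received the coin
    "B": {"bob": 1},    # version of reality in which Bob received the coin
}

# First-seen reports collected from network participants; they arrive with a
# delay, which is why the selection feels retrocausal: the winning branch is
# only fixed after the conflicting possibilities already exist.
first_seen = ["A", "B", "A", "A", "B"]

winner, _ = Counter(first_seen).most_common(1)[0]
collapsed_ledger = branches[winner]  # the branchial space collapses to one version
print(winner, collapsed_ledger)      # A {'alice': 1}
```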
##### Non-Locality
Since the collapse is only related to the causal structure of events in the hypergraph, it is inherently non-local and contextual, which means that if we create two conflicting transactions:
- One that sends coins to Bob and his Dad,
- and one that sends the same coins to Alice and her Mum,
then this results in two conflicting versions of reality:
- One where Bob and his Dad received the coins
- and one where Alice and her Mum received the coins.
Collapsing this space of possibilities towards the version where Bob received a coin automatically also chooses the version where his dad received one, despite the fact that Bob and his father might be spatially separated (they are part of the same version).
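Expressed in the same toy style as above (all names made up), the non-locality simply falls out of the fact that a branch is always selected as a whole:

```python
# Each conflicting transaction creates one branch containing *all* of its
# outputs, so choosing a branch settles the outcome for every receiver in it
# at once, no matter how far apart those receivers are.
branches = {
    "bob_branch":   {"bob": 1, "bobs_dad": 1},
    "alice_branch": {"alice": 1, "alices_mum": 1},
}

chosen = "bob_branch"    # decided e.g. by the majority rule sketched earlier
print(branches[chosen])  # {'bob': 1, 'bobs_dad': 1} - Dad's coin is fixed too
```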
## Similarities To Physics
To me as a non-physicist, this sounded at least analogous to things I had heard about quantum mechanics, so I was interested to learn whether anybody had ever explored similar ideas in this context.
### Stephen Wolfram: Multiway Causal Graphs
The first person I found who was exploring similar ideas was Stephen Wolfram, who uses exactly the same data structures as we do in his physics project and calls them multiway causal graphs.
While there are a lot of similarities (he even mentions us a few times in his blog posts), there are also some subtle differences:
- we use an object-centric model (to mimic a virtual universe with digital assets) while he uses more abstract "graph rewriting systems", but the emerging graph with its branchial space of possible evolutions is identical
- we use a collapse of the branchial space to arrive at a single causally consistent evolution, while he is looking for systems that automatically converge back to the same result, which he calls "causal invariance"
An example of causal invariance would be Alice and Bob each ordering and eating a pizza from the same pizzeria with the disputed coin in their respective branch. The macroscopic outcome would then be the same - the 10€ would again end up with a single entity (the pizzeria) - despite having gone through conflicting evolutions temporarily.
:::info
**Note:** Of course the state is not 100% identical, because in one version Alice is hungry and in the other one Bob is hungry, but I think you get the idea.
:::
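For completeness, here is what causal invariance looks like in the same toy ledger style (the amounts and the "everyone buys pizza" behaviour are of course made-up assumptions):

```python
# The coin is double-spent towards either Alice or Bob, but in both branches
# the receiver immediately spends it at the same pizzeria, so the conflicting
# evolutions reconverge to the same macroscopic final state (ignoring hunger).

def evolve_branch(receiver):
    state = {"alice": 0, "bob": 0, "pizzeria": 0}
    state[receiver] += 10    # the disputed 10 EUR land with Alice OR Bob...
    state[receiver] -= 10    # ...who orders a pizza...
    state["pizzeria"] += 10  # ...so the money ends up with the pizzeria either way
    return state

print(evolve_branch("alice") == evolve_branch("bob"))  # True: no collapse needed
```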
### Ruth Kastner: Transactional Interpretation of Quantum Mechanics
The next person I talked to who was exploring ideas with a significant overlap was Ruth Kastner, with her transactional interpretation of quantum mechanics (TIQM).
It doesn't just use similar terminology (transactions); it also gives an almost exact description of our solution, where "persistent reality" emerges as a retrocausal selection process over a space of superpositioned possibilities that forms a realm of "proto-time" or computational exploration.
Even the fact that the algorithm has to traverse the superpositioned graph in the opposite direction (backwards in time) to reach and persist the accepted changes upon collapse resonates very well with TIQM.
## Engineering Perspective
Despite these really mind-blowing similarities, it still felt like the TIQM was missing a crucial point, namely the reason **why** the universe would operate like that.
While the transactional interpretation is the result of trying to make sense of the observations of quantum mechanics and is therefore somewhat constrained to making educated guesses, our protocol is the result of a multi-year engineering effort where each design decision serves a very specific purpose, so it might be interesting to try to derive some additional insights from it.
### Minimizing Non-Local Loss Functions
It is pretty obvious that the universe is not trying to resolve conflicts introduced by malicious users from the outside, but it might still use a similar execution model that fans out into a search space to select the first transaction that is optimal in some regard, because what we are actually doing with our protocol is to:
Introduce a novel form of **optimized distributed computation** where every state change of the distributed system **minimizes a non-local loss function** (involving two or more spatially separated entities).
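A hedged sketch of what such a non-local loss could look like in the toy setting (the validators, sighting times, and aggregation rule are all illustrative assumptions):

```python
# Each conflicting branch is scored by information that is distributed across
# several spatially separated validators (here: the local times at which they
# first saw the branch), and the network settles on the branch with the lowest
# aggregate score - a loss that no single location can evaluate on its own.

first_seen_times = {
    "A": {"v1": 3, "v2": 5, "v3": 4},  # made-up local sighting times
    "B": {"v1": 4, "v2": 6, "v3": 6},
}

def non_local_loss(sightings):
    return sum(sightings.values())  # depends on state held by multiple parties

winner = min(first_seen_times, key=lambda b: non_local_loss(first_seen_times[b]))
print(winner)  # "A" (loss 12 vs 16)
```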
### Randomness and Computational Irreducibility
The fact that the state of possible receivers is undetermined at the time of emission makes this problem **computationally irreducible**.
:::info
**Computational irreducibility** is a term that Stephen Wolfram uses to describe systems or processes that cannot be simplified or accelerated beyond their natural course (you need to perform all steps of the computation to arrive at the result).
*An example would be the search for prime numbers, where the next prime can only be "found" but not predicted.*
:::
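As a tiny code illustration of that prime example (just a sketch):

```python
# "Found, not predicted": the only general way to obtain the next prime after
# n is to actually run the search, candidate by candidate.

def is_prime(k):
    return k > 1 and all(k % d for d in range(2, int(k ** 0.5) + 1))

def next_prime(n):
    candidate = n + 1
    while not is_prime(candidate):  # no shortcut around performing the steps
        candidate += 1
    return candidate

print(next_prime(89))  # 97
```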
The randomness and probabilistic outcomes of measurements in quantum mechanics might simply be the result of solving such an unpredictable optimization problem that depends on the state of at least two parties that are not able to exchange information instantaneously.
### Universe as a Computational Agent
If the universe does indeed engage in a non-local optimization process and has a specific loss function, then this could have far-reaching implications:
- The universe itself would be a computational entity with an actual goal.
- It would provide an active computational substrate rather than a passive stage.
- If classical reality is an optimized form of some underlying computational process, then reality is to some degree holographic.
Considering the importance of minimizing non-local loss functions for the field of machine learning and their role in the emergence of intelligent behavior, I was wondering if the TIQM might eventually be pointing in a direction that could connect quantum mechanics to the emergence of life.
### What does the Universe want?
If we assume that the TIQM has the purpose of finding the first "compatible receiver" that is able to form a handshake with the emitter, then this essentially describes a universe where information always flows along the path that takes "the least time" (predicting Fermat's principle and the principle of stationary action).
The "loss function" in this context is the inverse of the total amount of information exchanged or interactions realized within a given number of computational cycles that it takes the offer wave to propagate.
By minimizing this loss function (or equivalently, maximizing information exchange), the universe is essentially allowing its contained computational entities to explore all possible wirings while always selecting the one that first leads to a transfer of information.
This sounds like a really fascinating model of computation that, instead of minimizing the loss function at a specific point (e.g. at the output layer of a neural network), fractally self-optimizes the interactions themselves, leading to computations that try to maximize their computational complexity.
This loss function is extremely minimal while at the same time being essentially infinitely dimensional. The universe seems to be engaging in the creation of computational circuits of ever-increasing complexity through the simple mechanism of a brute-force tree search.
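To make the "least time" idea a bit more tangible, here is a deliberately simplified sketch (the graph, the delays, and the notion of "compatible receiver" are assumptions for illustration): the offer wave explores every wiring in parallel and the first compatible receiver that can complete a handshake is the one that gets realized.

```python
import heapq

# Possibility space: node -> list of (neighbour, propagation delay).
graph = {
    "emitter": [("a", 2), ("b", 1)],
    "a": [("receiver1", 2)],
    "b": [("receiver2", 4), ("receiver1", 1)],
}
compatible_receivers = {"receiver1", "receiver2"}

def first_handshake(graph, source, compatible):
    """Expand the possibility space in order of elapsed time and stop at the
    first compatible receiver reached (everything slower stays unrealized)."""
    frontier, visited = [(0, source)], set()
    while frontier:
        elapsed, node = heapq.heappop(frontier)
        if node in visited:
            continue
        visited.add(node)
        if node in compatible:
            return node, elapsed
        for neighbour, delay in graph.get(node, []):
            heapq.heappush(frontier, (elapsed + delay, neighbour))
    return None, None

print(first_handshake(graph, "emitter", compatible_receivers))  # ('receiver1', 2)
```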
## Early Simulation Results
Looking at things from an engineering point of view has the benefit that we can actually simulate things instead of having to rely on our intuition.
I am still at an extremely early stage and am currently trying to acquire the necessary knowledge to extend my simulations, but even in a very limited toy model, gravity and inertia already seem to be emergent phenomena.
### Emergence of Gravity and Inertia
At first I was quite perplexed by this but after thinking about it, it kind of makes sense:
1. The space between objects is not completely empty but filled with uncollapsed wave functions looking for a receiver (zero-point energy).
2. If emitters are isotropically distributed in space, then a non-accelerating object floating freely in space will receive interactions from all sides equally.
3. An object that is accelerating, however, is moving away from emitters in one direction and towards emitters in the other direction, leading to asymmetries in the number of interactions (due to the formation of a relativistic horizon).
4. Similarly, two objects that are close to each other mutually shield each other from receiving transactions from their respective side of the cosmos, leading to a lowered number of interactions on their facing sides and a net push of the two objects towards each other.
The lowered interaction rate due to this shielding effect is what we call "gravitational time dilation".
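The 1-D toy below is not my actual simulation, just a compressed illustration of the shielding argument (the emitter model and the shadow probability are completely made up): rays hit a test object from both sides, but some of the rays coming from the direction of its partner are absorbed by the partner first, so the test object feels a net push towards it that weakens with distance.

```python
import random

random.seed(0)

def net_push(separation, n_rays=100_000):
    """Average net momentum on the test object (positive = towards the partner)."""
    net = 0
    for _ in range(n_rays):
        direction = random.choice([-1, +1])  # isotropic emitters in 1-D
        if direction == +1:
            net += 1  # ray from the open side pushes the object towards its partner
        else:
            # ray from the partner's side may have been absorbed by the partner;
            # the shadow probability 1 / (1 + separation) is a made-up toy choice
            if random.random() >= 1.0 / (1.0 + separation):
                net -= 1
    return net / n_rays

for distance in (1, 2, 4, 8):
    print(distance, round(net_push(distance), 3))  # the push weakens with distance
```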
### Mike McCulloch: Quantized Inertia
Interestingly, we again find an existing theory called Quantized Inertia (by Mike McCulloch) that beautifully describes the emergence of gravity and inertia due to shielding and horizon effects.
What is particularly interesting is the fact that his theory is also based on the TIQM, because without it an accelerating object would not be able to retrocausally create a horizon in the past.
## Conclusion
While this entire line of thought is of course to some degree speculative, it strongly points in the direction of a noetic universe that can still be understood in a somewhat mechanistic way.
Instead of saying "the universe is fine-tuned for life", we should perhaps rather say it is "fine-tuning" (by manifesting that version of the possibility space where most things happen in the least amount of time), and we should equally consider the possibility that we mistake "local unpredictability" for randomness.
## Next Steps
Encouraged by the early results, I am currently exploring ways to improve my simulations:
- Use the GPU to increase the performance for larger universes.
- Explore different computational models for the embedded computational entities.
- Get a better understanding of molecular bonds.
- Find a way to mimic this "molecular composability" of computational entities.
Currently, Yves Lafont's interaction nets look like the most promising model, and while looking for a runtime that I could potentially retrofit, I found the work of Victor Taelin (an AI researcher who happens to work on ideas of [expanding a space of possible computational evolutions in superposition](https://x.com/VictorTaelin/status/1819208143638831404) from which he retrocausally chooses the right one upon reaching some condition - this really starts to feel like synchronicity).
Since I have a full-time job and still need to get a better understanding of certain aspects of physics, I will most probably only make relatively slow progress on the simulation in the coming months, but sharing these ideas is at least a first step.
## Postamble
I am aware of the fact that the discussed examples are massive simplifications and cannot directly be compared to the high-dimensional Hilbert space of quantum mechanics, but I think they are sufficient to understand the idea of exploring different computational evolutions in parallel.
Considering that we can build airplanes without feathers, I believe that we don't necessarily need to mimic all the details of reality to develop a visceral understanding of what is going on.
There is a lot more that could be said but I think it makes sense to keep the document as concise as possible.
#### Cheesy Quote
*I believe if there's any kind of God it wouldn't be in any of us, not you or me but just this little space in between. If there's any kind of magic in this world it must be in the attempt of understanding someone sharing something. I know, it's almost impossible to succeed but who cares really? The answer must be in the attempt.*
*- Julie Delpy*