# Scoring agency using LLMs and simulated DMs.
## Concept.
Imagine a conversation between two people over DMs (direct messages). The conversation has gone on for years and carries a lot of context.
Now imagine we simulate this conversation with an LLM: a ChatGPT model is tasked with predicting future messages in the thread.
The LLM functions like [Deep Blue's chess computer](https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)) - it simulates all possible conversational paths between the two people (or at least as many as are [tractable](https://en.wikipedia.org/wiki/Tractable)).
Now, this DM is ongoing. Every message the LLM hasn't yet seen can be scored by its prediction error: how surprised the model is by what was actually written.
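A minimal sketch of that scoring step. Here a toy bigram word model stands in for the LLM (a real setup would use an actual model's token probabilities), and a message's score is its average surprisal in bits - higher means less predictable. The function names and smoothing choice are illustrative assumptions, not a prescribed implementation.

```python
import math
from collections import Counter, defaultdict

def train_predictor(history):
    """Toy bigram next-word model trained on past messages.
    Stands in for the LLM that has 'seen' the conversation so far."""
    counts = defaultdict(Counter)
    for msg in history:
        words = ["<s>"] + msg.lower().split()
        for prev, cur in zip(words, words[1:]):
            counts[prev][cur] += 1
    return counts

def surprisal_score(counts, message, vocab_size=10_000):
    """Average surprisal (bits per word) of a new message under the model.
    Higher = less predictable = more 'agency' by this metric."""
    words = ["<s>"] + message.lower().split()
    total = 0.0
    for prev, cur in zip(words, words[1:]):
        c = counts[prev]
        # Laplace smoothing so unseen words still get nonzero probability
        p = (c[cur] + 1) / (sum(c.values()) + vocab_size)
        total += -math.log2(p)
    return total / (len(words) - 1)

# A message the model has seen variants of scores lower than a novel one.
model = train_predictor(["good morning", "good morning to you"])
predictable = surprisal_score(model, "good morning")
surprising = surprisal_score(model, "purple manatee aerodynamics")
```

With a real LLM you would swap the bigram probabilities for the model's per-token log-probabilities over the unseen messages; the averaging step stays the same.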
This is a fun test of agency, free will and determinism. If you can have a chat about something the LLM didn't predict, you are moving forward! Reverse that entropy, baby.
Potentially you could turn this into a reverse-entropy Ponzi scheme, in which people are rewarded for unpredictable content, in proportion to its surprisal score.
*The most entertaining outcome is the most likely.*