# PI Migration & Mint Recovery
PI finished mint `184384` successfully. That last mint also performed the usual end-of-mint garbage collection, freeing up `~1200 KB` as usual.
Once PI is migrated to a new process, mint recovery will involve having the new process ingest synthetic messages that represent the missed mints; these can be processed almost as usual. Before that, PI's logic needs adjustments: PI must not attempt to communicate with the Delegation Oracle, because the Oracle is no longer in the state it was `n` mints ago and would answer incorrectly. PI must also be able to mint retroactively based on synthetic messages, i.e. messages sent by you, not by the usual processes.
## Migration
The following issues can be used as guides in the migration procedure
- [Aug 2025 issue](https://github.com/Autonomous-Finance/permaweb-index/issues/38) -> We initially thought a migration would be necessary, but we managed without one; the initial plan is still useful as a reference, though
- [Sep 2025 issue](https://github.com/Autonomous-Finance/permaweb-index/issues/44) -> Actual migration from the previous `rxxU4g-..` to the current PI process
Migration will be a bit simpler this time because the custodian already keeps all the funds.
Broadly speaking, migration involves:
- creating a new process
- setting global state to exactly the state PI had when it went OOM
  - it's ***almost*** what we see in the Info message response, but I think you may have to set up PI in an emulator, bring it up to the healthy state right after the last mint, and then send a `Get-State` message to obtain the full data (would have to check 📝)
  - the Get-State handler is probably already on the handlers list; if not, it needs to be added (see the sketch after this list); see [here](https://github.com/Autonomous-Finance/permaweb-index/blob/main/scripts/process-migration/apply-state-pi.lua)
- `Eval`'ing the latest source code
- New deployment via GitHub Workflow: the process address must be updated in `state-processes.prod.yaml` (from `H1I09hGlSlqrvlQid4zBp...` to the new PI address)
- Setting up all other processes that expect PI to be `H1I09hGlSlqrvlQid4zBp-lleynE8bNo2Ep1u8xq0fQ`, so that they now recognize the new PI: Mint Reporter, Delegation Oracle, FLP factory, FLPs, Custodian, PI Token
- Setting up all the frontends that expect PI to be `H1I09hGlSlqrvlQid4zBp-lleynE8bNo2Ep1u8xq0fQ`, so that they now recognize the new PI
  - Part of that is achieved by updating the provisioning script to use the new PI address
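In case the `Get-State` handler mentioned above turns out to be missing, here is a minimal sketch of what it could look like. This is an assumption-laden illustration, not the actual PI code: the set of globals to dump (`Balances`, `MintQueue`, ...) and the response action name are placeholders that should mirror whatever `apply-state-pi.lua` re-applies.

```lua
local json = require("json")

-- Hypothetical Get-State handler: dumps the globals that the migration
-- script re-applies on the new process. The table names below are
-- placeholders, not the actual PI globals.
Handlers.add(
  "get-state",
  Handlers.utils.hasMatchingTag("Action", "Get-State"),
  function(msg)
    ao.send({
      Target = msg.From,
      Action = "Get-State-Response",
      Data = json.encode({
        Balances = Balances,
        MintQueue = MintQueue,
        -- ...every other global that apply-state-pi.lua expects
      })
    })
  end
)
```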
## Mint Recovery
Usually PI can recover after a missed mint (or two) because it has a queue where it ingests all messages, grouped by yield cycle.
Our migrated PI will not have these messages, because the last healthy state we can read from it was before the mint following mint `184384` began. So that queue is empty.
All the **missed mints need to be replicated** now by going through all the missed messages, grouping them by yield cycle, and then working through each group (see the sketch after this list):
- send messages to PI
- observe it start the mint, interact with PI Token, mint everything, wrap up with Log-Removed-Data and Garbage Collection
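A rough sketch of that driver loop, assuming it runs from the recovery operator's own aos process and that the reconstructed messages have already been grouped into a hypothetical `MISSED_MINT_MESSAGES` table keyed by yield cycle (`NEW_PI_PROCESS_ID` and `MISSED_YIELD_CYCLES` are likewise assumed to be prepared beforehand):

```lua
-- Hypothetical recovery driver: replay each missed mint's messages in order.
for _, cycle in ipairs(MISSED_YIELD_CYCLES) do
  for _, m in ipairs(MISSED_MINT_MESSAGES[cycle]) do
    local out = { Target = NEW_PI_PROCESS_ID, Action = m.Action, Data = m.Data }
    for name, value in pairs(m.Tags or {}) do
      out[name] = value -- aos Send() turns extra fields into message tags
    end
    Send(out)
  end
  -- wait for PI to wrap up this mint (Log-Removed-Data + garbage collection)
  -- before moving on to the next yield cycle
end
```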
There are two tricky aspects to this, where **recovery logic must differ from regular mint logic**. The first we have already done once during a migration; the second we have never had to do before.
### 1. Mock Delegation Oracle data
When PI receives the first message related to a new mint, it usually asks the Delegation Oracle for `Get-Pi-Index-Weights`, and the answer is different for each mint. You can't let PI do this during the recovery, because it would obtain data that is not true in the context of the "old" mint being processed now. So you would have to reconstruct historically what the response to this message would have been during each particular missed mint being processed on the migrated PI, and build an exception into the part of PI's source code that asks the Delegation Oracle for the latest PI index weights. Something like:
```lua
-- hard-coded historical PI index weights, keyed by yield cycle
INDEX_WEIGHTS_PROJECT_BPS = {
  [123456] = { --[[ weights for this missed mint ]] },
  [234567] = { --[[ weights for this missed mint ]] },
  -- one entry for each missed mint
}

-- where we usually ao.send(...).receive() the PI index weights:
if INDEX_WEIGHTS_PROJECT_BPS[yieldCycle] ~= nil then
  -- use INDEX_WEIGHTS_PROJECT_BPS[yieldCycle] instead of asking the Delegation Oracle
end
```
So you would have to change the PI source code and redeploy with this adjustment.
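As a rough illustration of how that exception could slot into the existing flow, here is a hedged sketch; `getPiIndexWeights`, `yieldCycle`, `DELEGATION_ORACLE`, and the JSON response shape are assumed names rather than the actual PI identifiers:

```lua
local json = require("json")

-- Hypothetical wrapper around the usual Delegation Oracle call.
-- For missed mints, the hard-coded table above takes precedence.
local function getPiIndexWeights(yieldCycle)
  local mocked = INDEX_WEIGHTS_PROJECT_BPS[yieldCycle]
  if mocked ~= nil then
    return mocked -- recovery path: no message leaves the process
  end
  -- regular path: ask the Delegation Oracle as before
  local reply = ao.send({
    Target = DELEGATION_ORACLE,
    Action = "Get-Pi-Index-Weights"
  }).receive()
  return json.decode(reply.Data)
end
```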
### 2. Accept synthetic messages
All the messages sent during the recovery will be fake. They should mimic the original messages as closely as possible, but the senders cannot be the usual ones. The sender would have to be *0xRecoveryOperator* or whichever account you want to use to send these messages; it can be the process owner.
So, more specifically: the PI handlers list must be updated so that `Report.Mint` or `Credit-Notice` messages that would usually be recognized as mint-related messages, wherever there is a check like "does this come from the expected message sender?", are also recognized when they are sent by *0xRecoveryOperator*.
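A minimal sketch of what that relaxed sender check could look like, using hypothetical names (`RECOVERY_OPERATOR`, `MINT_REPORTER`, `handleReportMint`) rather than the actual PI identifiers:

```lua
-- assumed constant: the wallet used to send the synthetic recovery messages
RECOVERY_OPERATOR = "0xRecoveryOperator"

-- accept either the usual sender or the recovery operator
local function isTrustedMintSender(msg)
  return msg.From == MINT_REPORTER or msg.From == RECOVERY_OPERATOR
end

Handlers.add(
  "report-mint",
  function(msg)
    return msg.Action == "Report.Mint" and isTrustedMintSender(msg)
  end,
  handleReportMint -- existing mint handler, unchanged
)
```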
Just as with the mock Delegation Oracle data, these changes would best be made by updating the source code, committing to main, and having the workflow redeploy PI.
## Delaying recovery while AO Mint continues
If you can find a viable workflow to properly recover one missed mint, you will be able to scale it up relatively easily. After the migration, the most difficult part is setting up the source code for this exceptional handling with mock oracle data and synthetic mint messages. If recovering the `1st` missed mint takes `3 days`, recovering the next one may go as fast as `1 hour`. Setting everything up is difficult, but execution should be automated to a high degree and complete within hours.
When reviving a stuck PI process, it matters how many unprocessed mints accumulate, because the process queues up all ingested messages it cannot immediately process, and this queue loads up the process memory.
But in this case, you can take time to prepare the migration & recovery procedure. When you're ready, you will have `n` days of missed mints, but you should be able to recover all of them within a single day.
> ❗️ Recovery should occur with all processes being already wired up to the new PI, but **no new AO mint should occur during execution of the recovery**.
So, in order to be able to recover fully and correctly, all missed mints should be processed before the new PI process starts to receive messages from the next AO mint.
## Timing Migration vs Recovery
We suggest the following timing:
### 1. Prepare both migration and recovery in great detail
Obtain all the mint-related messages from the missed mints, grouped by yield cycle. The gateway should have all of them:
- Credit-Notices from AO Token
- Credit-Notices from FLP Tokens
- Report.Mint messages from the Mint Reporter
In case some of these messages were not pushed, you would have to identify and repush...
---
Find a way to extract the migration-relevant state from the dead PI. (`Get-State` message perhaps with hydration on an emulator)
---
Find a way to reconstruct the historical responses for Get-PI-Index-Weights. (hydration on emulator & time travel should work)
---
Write the custom source code for handling mints with mock data for pi index weights, as well as synthetic messages.
---
Run a couple of fake migrations & partial recoveries. Create a new PI process without wiring any other process up with it. See if you can:
- swiftly apply the state of the dead PI and deploy the source code using GH action
- have the new process ingest the synthetic messages related to the first missed mint and see if you can get the PI mint to kick off (PI sends `Log-Start-Process-Current` to itself)
These steps can safely be exercised without wiring up all the other processes with the new PI.
Once this works swiftly, the official migration & recovery should be doable within 1 day.
### 2. Halt AO mint
### 3. Migrate for Real & Wire Up Everything
### 4. Execute Recovery
### 5. Continue AO mint
## Root Cause
It would be very useful to learn why PI went out of memory, since the gateway shows no large-payload or computationally intensive message that may have caused the issue; actually, it shows no candidate message at all. The last incoming message from the day of going out of memory was an Info message, https://aolink.arweave.net/#/message/Kxx9qgJpwyKxtuZv9uVS-5F__3yC1yyDr3Hn8G2v5tI, and it looks like the process was still in a good state at that point.
Just saying - the reason for going out of memory **may repeat** as soon as the tedious migration & recovery work is finished.
## Action Plan For PI Recovery (16 December 2025 EDIT)
### Recover revived PI
- retrieve each missed mint's BPS (emulator)
- identify unpushed messages in all missed mints & repush
- make PR to PI
- containing the source code with special handling of 'Get-Pi-Index-Weights'
- verify with AF
- update source (merge PR)