# IBC Stores
The IBC module's stores must be part of the consensus state, because light clients need to verify the inclusion of IBC-related data in that state. To allow this, the following changes must be made.
## State Trees
The current state hash computation mechanism must be changed to be produced by a `rootTree` which includes the different state tree hashes as follows:
```go
rootTree.Update([]byte("app"), appTree.Root())
rootTree.Update([]byte("transactions"), transactionsTree.Root())
...
rootTree.Update([]byte("ibc"), ibcTree.Root())
stateHash := rootTree.Root()
```
Because an SMT's root hash does not depend on insertion order, the state hash calculated this way is deterministic.
It also makes it possible to verify that a specific tree's root hash was included in the state hash computation.
### IBC Store Tree
The entire IBC store must be added to the `TreeStore` module as an SMT, alongside the existing trees such as the `transactions` and `applications` trees. The IBC tree will differ slightly from the others, as it will also be used for retrieval.
Currently the state trees are created as follows:
```go
smt.NewSparseMerkleTree(nodeStore, sha256.New())
```
However, the IBC state tree should be created as follows:
```go
smt.NewSparseMerkleTree(nodeStore, sha256.New(), smt.WithValueHasher(nil))
```
This creates the same `*smt.SMT` type, but the tree will not hash values before hashing the node. As a result, `ibcTree.Get(key)` returns the original, un-hashed value rather than its hash.
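The effect of a `nil` value hasher can be illustrated with a toy key-value store that mimics the SMT's value-hashing behaviour. This is only a sketch of the concept: `toyStore` is hypothetical and stands in for the `*smt.SMT` configured with or without `smt.WithValueHasher(nil)`.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// toyStore mimics the SMT's value-hashing behaviour: if valueHasher is
// non-nil, values are hashed before being stored, so Get returns the
// digest rather than the original bytes. A nil hasher (analogous to
// smt.WithValueHasher(nil)) stores values verbatim, which is what the
// IBC tree needs in order to support retrieval.
type toyStore struct {
	valueHasher func([]byte) []byte
	data        map[string][]byte
}

func newToyStore(valueHasher func([]byte) []byte) *toyStore {
	return &toyStore{valueHasher: valueHasher, data: make(map[string][]byte)}
}

func (s *toyStore) Update(key, value []byte) {
	if s.valueHasher != nil {
		value = s.valueHasher(value)
	}
	s.data[string(key)] = value
}

func (s *toyStore) Get(key []byte) []byte {
	return s.data[string(key)]
}

func main() {
	hashing := newToyStore(func(v []byte) []byte {
		d := sha256.Sum256(v)
		return d[:]
	})
	raw := newToyStore(nil) // analogous to smt.WithValueHasher(nil)

	hashing.Update([]byte("clients/A"), []byte("clientState"))
	raw.Update([]byte("clients/A"), []byte("clientState"))

	fmt.Printf("hashing tree returns original: %t\n", string(hashing.Get([]byte("clients/A"))) == "clientState")
	fmt.Printf("raw tree returns original:     %t\n", string(raw.Get([]byte("clients/A"))) == "clientState")
}
```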
## Store Manager
The IBC host needs access to the different stores. As these are all backed by a single IBC state tree, the prefix applied to each key is used to differentiate the stores; the store manager therefore returns different views of the same IBC state tree. The logic below details how the store manager will interact with the IBC state tree to perform local operations:
```go
type Store struct {
	prefix coreTypes.CommitmentPrefix // []byte
	tree   *smt.SMT
}

func (s *Store) Get(key []byte) ([]byte, error) {
	prefixed := host.ApplyPrefix(s.prefix, key)
	return s.tree.Get(prefixed)
}

func (s *Store) Set(key, value []byte) error {
	prefixed := host.ApplyPrefix(s.prefix, key)
	return s.tree.Update(prefixed, value)
}

func (s *Store) Delete(key []byte) error {
	prefixed := host.ApplyPrefix(s.prefix, key)
	return s.tree.Delete(prefixed)
}

type StoreManager struct {
	stores map[string]*Store
}

func (sm *StoreManager) GetStore(prefix coreTypes.CommitmentPrefix) *Store {
	tree := smt.ImportSparseMerkleTree(
		treeStore.IBCTree.NodeStore,
		treeStore.IBCTree.Root(),
		sha256.New(),
		smt.WithValueHasher(nil),
	)
	store := &Store{
		prefix: prefix,
		tree:   tree,
	}
	sm.stores[string(prefix)] = store
	return store
}
```
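The key-space isolation that prefixing provides can be sketched with a toy example. Here `applyPrefix` is a hypothetical stand-in for `host.ApplyPrefix`, assumed to join the commitment prefix and key with a `/` separator; the underlying tree is modelled as a plain map.

```go
package main

import "fmt"

// applyPrefix is a hypothetical stand-in for host.ApplyPrefix; it is
// assumed here to join the commitment prefix and the key with "/".
func applyPrefix(prefix, key []byte) []byte {
	out := append([]byte{}, prefix...)
	out = append(out, '/')
	return append(out, key...)
}

func main() {
	// Two logical stores backed by the same underlying tree (a map here):
	// the prefix keeps their key spaces disjoint even for identical keys.
	tree := map[string][]byte{}

	clientKey := applyPrefix([]byte("clients"), []byte("connection-0"))
	connKey := applyPrefix([]byte("connections"), []byte("connection-0"))

	tree[string(clientKey)] = []byte("clientData")
	tree[string(connKey)] = []byte("connectionData")

	fmt.Println(len(tree))         // two distinct entries despite the same key
	fmt.Println(string(clientKey)) // clients/connection-0
}
```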
## Event Messages
In order for local changes to the IBC state tree to be included in the next block, and thus actually transition the network's state, they must be propagated throughout the network. To do this, the messaging system and bus will be utilised.
The following messages will be defined to transition the state:
```protobuf
// UpdateIBCStore defines a message representing the addition of a key/value pair to the IBC store
message UpdateIBCStore {
bytes prefix = 1;
bytes key = 2;
bytes value = 3;
}
// PruneIBCStore defines a message representing the removal of a key from the IBC store
message PruneIBCStore {
bytes prefix = 1;
bytes key = 2;
}
// IBCMessage defines the different types of IBC message that can be sent across the network
message IBCMessage {
oneof msg {
UpdateIBCStore update = 1;
PruneIBCStore prune = 2;
}
}
```
A second mempool will be created for IBC-related state changes. When a node receives one of these IBC messages, it passes the message to its IBC module, which adds the event to this mempool.
In its `CreateProposalBlock` method, the block producer will then reap this IBC event mempool, add the events to the relevant Postgres tables, and call the `ComputeStateHash` function, which updates the state trees by reading from those tables. This transitions the state of the IBC tree. Upon receiving the new block, each node applies it, along with the IBC events it contains, thereby reaching consensus on the IBC store's state.
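The dispatch of an `IBCMessage` into the IBC mempool can be sketched as follows. The structs here are plain-Go stand-ins for the generated protobuf types, and `ibcMempool` is a hypothetical simplification of the second mempool described above; the real implementation would use the generated `oneof` accessors.

```go
package main

import "fmt"

// Plain-struct stand-ins for the generated protobuf message types.
type UpdateIBCStore struct{ Prefix, Key, Value []byte }
type PruneIBCStore struct{ Prefix, Key []byte }

// ibcMempool is a toy model of the second mempool holding pending IBC events.
type ibcMempool struct {
	events []interface{}
}

// HandleIBCMessage mirrors the oneof dispatch: both variants are queued so
// the block producer can later reap them in CreateProposalBlock.
func (m *ibcMempool) HandleIBCMessage(msg interface{}) error {
	switch msg.(type) {
	case *UpdateIBCStore, *PruneIBCStore:
		m.events = append(m.events, msg)
		return nil
	default:
		return fmt.Errorf("unknown IBC message type: %T", msg)
	}
}

func main() {
	m := &ibcMempool{}
	_ = m.HandleIBCMessage(&UpdateIBCStore{Prefix: []byte("clients"), Key: []byte("k"), Value: []byte("v")})
	_ = m.HandleIBCMessage(&PruneIBCStore{Prefix: []byte("clients"), Key: []byte("k")})
	fmt.Println(len(m.events)) // 2
}
```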
### Postgres
The local Postgres DB will require new tables:
- `clientStates`
- `consensusStates`
- `connections`
- `channelEnds`
- `nextSequenceSend`
- `nextSequenceRecv`
- `nextSequenceAck`
- `commitments`
- `receipts`
- `acks`
These correspond to the different provable stores that compose the IBC state tree, and as such will be used (after being reaped from the mempool and inserted) to transition the state of the IBC store on a new block.
In the event of a `PruneIBCStore` message, the value for the given key will be set to `nil` in the Postgres DB. When the rows are read from Postgres and applied to the IBC state tree, a `nil` value causes the tree to call `ibcTree.Delete(key)`, pruning the key from the store.
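The nil-means-delete replay described above can be sketched as follows. The `row` type is a hypothetical stand-in for a record reaped from the Postgres tables, and the map stands in for the IBC state tree.

```go
package main

import "fmt"

// row is a toy stand-in for a record read from the Postgres tables: a nil
// value marks a key pruned by a PruneIBCStore message.
type row struct {
	key   []byte
	value []byte // nil => delete the key from the tree
}

// applyRows replays the rows onto a toy "tree" (a map), deleting on nil
// values and updating otherwise, mirroring how ComputeStateHash would
// transition the IBC state tree from the Postgres tables.
func applyRows(tree map[string][]byte, rows []row) {
	for _, r := range rows {
		if r.value == nil {
			delete(tree, string(r.key)) // ibcTree.Delete(key)
		} else {
			tree[string(r.key)] = r.value // ibcTree.Update(key, value)
		}
	}
}

func main() {
	tree := map[string][]byte{}
	applyRows(tree, []row{
		{key: []byte("commitments/1"), value: []byte("hash")},
		{key: []byte("commitments/1"), value: nil}, // pruned later
		{key: []byte("acks/1"), value: []byte("ack")},
	})
	fmt.Println(len(tree)) // 1
	_, ok := tree["commitments/1"]
	fmt.Println(ok) // false
}
```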
### Hosts Flow
The following flow must be followed by the host in order to interact with any IBC store related data:
1. Push IBC message to the event bus
2. Wait for the next block
3. Check new IBC store state
4. Verify changes have been included
5. If included proceed
6. If not included, wait or retry state transition
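The six steps above amount to a submit-and-verify loop. The sketch below models that loop with hypothetical stand-ins (`pushMessage`, `nextBlock`, `changeIncluded`) for the event bus, consensus, and store-query interfaces; it is not the actual host API.

```go
package main

import "fmt"

// submitWithRetry models the host flow: push the IBC message, wait for the
// next block, check whether the change was included, and retry (up to
// maxRetries) if it was not. The three function parameters are hypothetical
// stand-ins for the bus, consensus and store-query interfaces.
func submitWithRetry(pushMessage func(), nextBlock func(), changeIncluded func() bool, maxRetries int) bool {
	for attempt := 0; attempt <= maxRetries; attempt++ {
		pushMessage()         // 1. push the IBC message onto the event bus
		nextBlock()           // 2. wait for the next block
		if changeIncluded() { // 3-5. check the new store state; proceed if included
			return true
		}
		// 6. not included: fall through and retry the state transition
	}
	return false
}

func main() {
	height := 0
	includedAt := 2 // pretend the message is only included on the second attempt
	ok := submitWithRetry(
		func() {},
		func() { height++ },
		func() bool { return height >= includedAt },
		3,
	)
	fmt.Println(ok) // true
}
```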
## Learnings
- The cake is a lie
- We cannot act on the mempool until **after** the block has been committed; **only** then can we apply the block and enact the state changes proposed in the mempool
- Source, Valid, Truth
  - The **source** of any state change originates from the mempool
  - The **valid** state changes are put into Postgres
  - But the **truth** of the state is **always** found in the state trees
- A problem at the core of blockchains
  - If I propagate a message that is only picked up by <1/3 of the network, even if it is valid, it will **never** become part of a new block - it cannot gain the required 2/3+1 support
  - If I send a transaction to 20% of the network and those nodes then all go down, my transaction will never become part of a new block
- Validators should ideally be incentivised to include valid data in blocks, and punished for not doing so