---
tags: Meeting
---
# **Appetite Transition Analysis**
## Outline
[TOC]
## **Markov Chain Setup**
### **How many steps do I need to consider?**
> When I get the Transition Probability Matrix (hereafter TPM), I can first roughly estimate how many steps it takes to reach the stable state. Knowing this up front keeps the computer from wasting time on too many or too few steps (see the sketch below).
>
> *(figure: n-step transition probabilities for the three states)*

- As the figure above shows, all three states reach the stable state after three steps.
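To make the estimate concrete, here is a minimal sketch (with a made-up 3-state TPM, not the real data) that raises the TPM to successive powers and stops once one more step no longer changes the matrix:

```python=
import numpy as np

# Hypothetical 3-state TPM for illustration only (each row sums to 1)
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])

Pn = P.copy()
for step in range(1, 50):
    P_next = Pn @ P
    if np.max(np.abs(P_next - Pn)) < 1e-3:  # stable: P^(n+1) ~= P^n
        print(f"stable state reached after about {step} steps")
        break
    Pn = P_next
```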
### **Time information encoding in TPM**
> Next comes the Markov Chain part: the matrix has to carry the past information together with the present, which is simply the idea of raising the matrix to a power (matrix ** steps), as shown below.
>
> *(figure: the n-step TPM is the matrix power, $P^{(n)} = P^n$)*
>
> Once we know how to compute this, we can look at what our own computed TPMs look like, shown as heat maps below (a plotting sketch follows the list).
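As a minimal sketch, assuming the `get_TPM` used later in this note is just this matrix power wrapper:

```python=
import numpy as np

def get_TPM(mat: np.ndarray, step: int) -> np.ndarray:
    # n-step TPM: raise the 1-step TPM to the n-th power
    # (equivalent to chaining single-step transitions)
    return np.linalg.matrix_power(mat, step)
```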
- **Baseline**:
> *(heat maps: n-step TPMs for the Baseline group)*
- **Inhibition**:
> *(heat maps: n-step TPMs for the Inhibition group)*
- **Control**:
> *(heat maps: n-step TPMs for the Control group)*
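The heat maps above can be reproduced along these lines; a sketch assuming the hypothetical `P` and the `get_TPM` wrapper from the earlier snippets:

```python=
import matplotlib.pyplot as plt

nstep = 4
fig, axes = plt.subplots(1, nstep, figsize=(4 * nstep, 4))
for step, ax in enumerate(axes, start=1):
    im = ax.imshow(get_TPM(P, step), cmap='viridis', vmin=0, vmax=1)
    ax.set_title(f'{step}-step TPM')
    ax.set_xlabel('to state')
    ax.set_ylabel('from state')
fig.colorbar(im, ax=axes.ravel().tolist())
plt.show()
```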
### **The correlation between different TPMs**
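One plain way to quantify this (a sketch; not necessarily the exact method used) is the Pearson correlation between the flattened TPMs:

```python=
import numpy as np

def tpm_correlation(A: np.ndarray, B: np.ndarray) -> float:
    # Pearson correlation between two TPMs, treated as flat vectors
    return np.corrcoef(A.ravel(), B.ravel())[0, 1]

# e.g. tpm_correlation(TPM_baseline, TPM_inhibition)  # hypothetical names
```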
## Markov Chain Analysis (Random Walk)
:::warning
In this chapter, I will show two different ways of running the Markov Chain random walk.
:::
### Random Initial State Sample
```python=
import numpy as np

nstate = 5
ntest = 5000  # number of tests
nstep = 4     # walk 4 steps
state_0 = np.zeros((1, ntest, nstate))
states = [0, 1, 2, 3, 4]

# Initialize every test with an empirical distribution over the five states,
# built from 50 random draws in order to avoid rows like [0, 0, 0, ...]
for i in range(ntest):
    draws = np.random.choice(states, 50, replace=True)
    for s in draws:
        state_0[0][i][s] += 1
    state_0[0][i] /= len(draws)
```
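The same initialization can also be done without the Python loop; an equivalent vectorized sketch, drawing the 50 counts for every test at once from a multinomial:

```python=
counts = np.random.multinomial(50, np.ones(nstate) / nstate, size=ntest)
state_0 = (counts / 50.0)[np.newaxis, ...]  # shape (1, ntest, nstate)
```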
### Markov Result
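The helpers `markove_choose` and `markove_Mark` used by the two functions below are not defined in this note; a minimal sketch of what they might look like, assuming each one propagates a state distribution through a TPM and returns the sampled state one-hot encoded:

```python=
import numpy as np

def markove_choose(TPM_n: np.ndarray, state: np.ndarray) -> np.ndarray:
    # Propagate the distribution through the n-step TPM,
    # then sample one state and return it one-hot encoded
    prob = state @ TPM_n
    prob = prob / prob.sum()  # guard against rounding drift
    chosen = np.random.choice(len(prob), p=prob)
    one_hot = np.zeros_like(prob)
    one_hot[chosen] = 1.0
    return one_hot

def markove_Mark(TPM_1: np.ndarray, state: np.ndarray) -> np.ndarray:
    # Single 1-step update: the same sampling with the 1-step TPM
    return markove_choose(TPM_1, state)
```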
#### Mine (keep matrix)
```python=
def Result(mat: np.ndarray, nstate: int, ntest: int, nstep: int, state_0: np.ndarray):
    # Precompute every n-step TPM once
    TPM = np.ndarray((nstep, nstate, nstate))
    for step in range(nstep):
        TPM[step] = get_TPM(mat, step + 1)  # +1 because steps start from 1
    # For every test, sample the state at each step directly from the n-step TPM
    Result_Mat = np.ndarray((ntest, nstep, nstate))
    for i in range(ntest):
        for j in range(nstep):
            Result_Mat[i, j, :] = markove_choose(TPM[j], state_0[0][i])
    return Result_Mat
```
#### Mark (get state every step)
```python=
def Result_Mark(mat: np.ndarray, nstate: int, ntest: int, nstep: int, state_0: np.ndarray):
    # Walk one step at a time, always continuing from the previous step's state
    Result_Mat = np.ndarray((ntest, nstep, nstate))
    current = state_0.copy()  # copy so the caller's state_0 is not overwritten
    for i in range(ntest):
        for j in range(nstep):
            current[0][i] = markove_Mark(mat, current[0][i])
            Result_Mat[i, j, :] = current[0][i]
    return Result_Mat
```
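A hypothetical usage sketch, running both versions on the same inputs and comparing the average state occupancy at the final step:

```python=
# Hypothetical 5-state TPM: uniform transitions, for illustration only
P5 = np.full((nstate, nstate), 1 / nstate)

res_mine = Result(P5, nstate, ntest, nstep, state_0)
res_mark = Result_Mark(P5, nstate, ntest, nstep, state_0)

# Fraction of walks in each state at the last step, averaged over tests
print("Mine:", res_mine[:, -1, :].mean(axis=0))
print("Mark:", res_mark[:, -1, :].mean(axis=0))
```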