# Markov Chain

## Markov Processes

![](https://i.imgur.com/1XfggHC.png)

Whether a Stochastic Process is discrete ($X(t), t = 0,1,2,\cdots$) or has a continuous parameter time ($X(t), t \gt 0$), it is called a Markov Process as long as it satisfies:

$$P(X(t_n) \le x_n | X(t_1) = x_1, ..., X(t_{n-1}) = x_{n-1}) = P(X(t_n) \le x_n | X(t_{n-1}) = x_{n-1})$$

:::info
A Stochastic Process is best defined as a family of random variables, $\{X(t), t \in T\}$, defined over some index set or parameter space $T$. The set $T$ is sometimes also called the **time range**, and $X(t)$ denotes the **state of the process at time t**.
:::

Later in the course we will study [Birth-Death Processes](/oN9Xfs1HTvmVN726XA8jdA); a Birth-Death Process has the Markovian property, i.e., it satisfies the definition above. Most of the queues we will study this semester are built on Birth-Death Processes, so in Queueing Theory we can use a discrete-time Markov Chain or a continuous-time Markov Process to analyze the long-term behavior of various queues.

## Long-term Behavior of Markov Chain

### Terminology

The following does not focus on detailed mathematical proofs; it organizes the logic and clarifies how the various distributions relate to one another in the long run.

:::info
For a Markov Chain, there are three views of long-term behavior: first the stationary distribution, second the limiting distribution, and third ergodicity.
:::

First, [Communicating Classes](https://mpaldridge.github.io/math2750/S07-classes.html): states $i$ and $j$ **communicate** if each can reach the other through state transitions. If all states of a Markov Chain communicate with each other, the Markov Chain is **irreducible**.

Next, [Recurrence & Transience](https://mpaldridge.github.io/math2750/S09-recurrence-transience.html#thm:rectran), which describes how likely a state is to return to itself. Let $f_{jj} = \sum_{n = 1}^{\infty} f_{jj}^n$, where $f_{jj}^n$ is the probability of returning to state $j$ after $n$ transitions starting from state $j$. If $f_{jj} = 1$, state $j$ is a **recurrent state**; if $f_{jj} < 1$, state $j$ is a **transient state**.

Transient simply means that, in the long run, the chain is certain to leave state $j$ for other states and never come back; the expected number of times the chain visits state $j$ is finite. Conversely, recurrent means the chain is certain to return in the long run, and the expected number of visits to state $j$ is infinite.

When state $j$ is recurrent ($f_{jj} = 1$), we can define $m_{jj} = \sum_{n = 1}^{\infty} n f_{jj}^n$, the expected number of transitions for state $j$ to return to itself, also called the mean recurrence time. If $m_{jj} < \infty$, state $j$ is **positive recurrent**: the chain returns to it within a finite expected number of transitions. If $m_{jj} = \infty$, state $j$ is **null recurrent**: the chain will return to it eventually, but there is no telling how long that takes.

The period of a state is the **greatest common divisor (GCD)** of the possible numbers of transitions (there may be many) in which the chain can return from state $j$ to itself. If the period is $1$ the state is **aperiodic**; otherwise it is **periodic**.

A set of states that communicate with each other is a **Communicating Class**. If no state in the class can reach any state outside it (the chain keeps circulating inside the class), the class is a **closed Communicating Class**; otherwise it is **open**. Applying the same "closed" idea to a single state gives an **absorbing state**: once the Markov Chain enters it, it can never leave. (In a state diagram, a simple example is a node with only a one-way arrow pointing in.)

A Markov Chain may have several Communicating Classes; the states within a class all communicate with each other but not with other classes. **States in the same Communicating Class share the following properties: period, and being transient / null recurrent / positive recurrent.** Because states in the same Communicating Class share these properties, inspecting a single state, to see whether it is positive recurrent, null recurrent, or transient, tells us the property of the entire Communicating Class.

### Stationary Distribution

When a Communicating Class is **positive recurrent**, it has a unique stationary probability distribution:

$$\pi = (\pi_1, \pi_2, \pi_3, ..., \pi_n) = (\frac{1}{\mu_1}, \frac{1}{\mu_2}, ..., \frac{1}{\mu_n})$$

where $\mu_i$ is the expected return time of state $i$.

If the whole Markov Chain is irreducible, there is only one Communicating Class, so the Markov Chain has a unique Stationary Distribution. If a Communicating Class is null recurrent or transient, no stationary distribution exists.

A stationary process is independent of time; once we also account for the influence of the initial state (where the Markov Chain starts) on the long-term behavior, we get the limiting distribution introduced below.

:::info
Supplement: the following theorem helps determine whether a Markov Chain already known to be irreducible and aperiodic is positive recurrent. An irreducible, aperiodic Markov Chain is positive recurrent if there exists a nonnegative solution of $\sum_{j=0}^{\infty} p_{ij}x_j \le x_i - 1\ (i \neq 0)$ such that $\sum_{j=0}^{\infty} p_{0j}x_j \lt \infty$.
:::
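To make the relation $\pi_j = 1/\mu_j$ concrete, here is a minimal Python sketch (not from the original note; the 3-state transition matrix is made up for illustration): simulate the chain, estimate each state's mean recurrence time $\mu_j$, and compare the long-run fraction of visits to state $j$ with $1/\mu_j$.

```python
import numpy as np

# Hypothetical 3-state transition matrix (rows sum to 1); any
# irreducible, positive recurrent chain would do.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.4, 0.4]])

rng = np.random.default_rng(0)
n_steps = 200_000
state = 0
visits = np.zeros(3)                    # number of visits to each state
last_visit = {}                         # last step at which we saw each state
return_times = {j: [] for j in range(3)}

for t in range(n_steps):
    visits[state] += 1
    if state in last_visit:
        return_times[state].append(t - last_visit[state])
    last_visit[state] = t
    state = rng.choice(3, p=P[state])

for j in range(3):
    mu_j = np.mean(return_times[j])     # estimated mean recurrence time
    print(f"state {j}: visit fraction = {visits[j] / n_steps:.4f}, "
          f"1/mu_j = {1 / mu_j:.4f}")
# The two columns agree: pi_j = 1 / mu_j for a positive recurrent class.
```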
### Limiting Distribution

When a Communicating Class is **positive recurrent** and **aperiodic**, the limiting distribution exists and equals the stationary distribution. This means the influence of the initial state has vanished in the long run, exhibiting the lack-of-memory property.

Writing this "no influence" statement mathematically: $\lim_{m \to \infty} P_{ij}^m = \pi_j$, i.e., the probability that the process is in state $j$ in the long run does not depend on the starting state $i$. The limit of the matrix $P^m$ can be written as:

$$\lim_{m \to \infty} P^m = \begin{bmatrix} \pi_0 & \pi_1 & \pi_2 & \dots & \pi_n \\ \pi_0 & \pi_1 & \pi_2 & \dots & \pi_n \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \pi_0 & \pi_1 & \pi_2 & \dots & \pi_n \\ \end{bmatrix}$$

where $\pi_j = \frac{1}{\mu_j}$. For null recurrent or transient states, $\lim_{m \to \infty} P_{ij}^m = 0$.

If a Markov Chain has multiple Communicating Classes, each row (index $i$) of the limit matrix corresponds to the limiting distribution of the Communicating Class that state $i$ belongs to.

### Ergodicity

![](https://i.imgur.com/hHhMqoD.png)

The limiting distribution looks at the behavior of the state at **a single time point** $n$ far in the future, whereas ergodicity looks at the average behavior of the state **over a long stretch of time**.

A process is ergodic if, for every moment, Time Average = Ensemble Average, i.e.,

$$\lim_{T \to \infty} \frac{1}{T} \int_0^T (X_0(t))^n\, dt = E[X(t)^n],\ for\ all\ n$$

This means the moments of the process are independent of time (though not necessarily independent of the initial state), and that all moments are finite.

Ergodicity implies the following. Define $V_j(n) =$ total number of visits to state $j$ up to time $n$; then for an irreducible Markov Chain:

- If positive recurrent and irreducible, $\frac{V_j(n)}{n} \to \frac{1}{\mu_j}$ as $n \to \infty$ (almost surely).
- If null recurrent or transient, $\frac{V_j(n)}{n} \to 0$ (almost surely).

### Summary

![](https://i.imgur.com/MTiyI1Y.png)

:::success
A Markov Chain that is **irreducible** and **positive recurrent** has a stationary distribution; adding the condition that **all moments are finite** makes it ergodic; further adding **aperiodic** gives it a limiting distribution.

So the strength of these three conditions (the number of requirements each needs) is: limiting distribution > ergodic > stationary.
:::

Using the theorems and properties above, we can determine which properties a Markov Chain has (irreducible / positive recurrent, ...), and therefore whether the Limiting Distribution / stationary distribution exists; we can then solve $\pi P = \pi,\ \pi e = 1$ (discrete) or $PQ = 0,\ Pe = 1$ (continuous).

Nearly every Markovian Queue discussed in this course will have a solution (otherwise you could not compute $L, W, \dots$ and there would be nothing to examine), but understanding the analysis above will help if your future research involves Markovian Queues.

## Discrete-time Markov Chain

A process $\{X_n\}$ is called a discrete-time Markov Chain if it satisfies:

$$P(X(t_n) = x_n | X(t_1) = x_1, ..., X(t_{n-1}) = x_{n-1}) = P(X(t_n) = x_n | X(t_{n-1}) = x_{n-1})$$

where $X(t_i) = k$ means the Markov Chain is in state $k$ at time $t = i$.

$p_{ij}$ is the probability of jumping from state $i$ to state $j$ (transition probability):

$$p_{ij} = Pr(X_{n+1} = j | X_n = i)$$

Given a Markov Chain with state space $S = \{1, 2, \cdots, n\}$, all the transition probabilities together form a matrix $P$ (transition probability matrix):

$$P = \begin{bmatrix} p_{11} & p_{12} & p_{13} & \dots & p_{1n} \\ p_{21} & p_{22} & p_{23} & \dots & p_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ p_{n1} & p_{n2} & p_{n3} & \dots & p_{nn} \\ \end{bmatrix}$$

$P^m$ is the probability matrix of this Markov Chain after $m$ transitions, where $p_{ij}^m$ is the probability of moving from state $i$ to state $j$ in $m$ steps:

$$p_{ij}^m = P(X_{n+m} = j | X_n = i) = \sum_r p_{ir}^{m-k} p_{rj}^{k},\ for\ 0 < k < m$$

The state of a Markov Chain changes over time, but if its [long-term behavior is stable](#Long-term-Behavior-of-Markov-Chain), we can solve for its steady state probability: after the chain has run for a while, observing the history of visited states tells us the probability of being in each of the different states. We call these probabilities the steady state probability of the Markov Chain.

Let the steady state probability be $\pi = (\pi_1, \pi_2, \cdots, \pi_n)$, the long-run probabilities of the Markov Chain being in state $1$, state $2$, and so on. $\pi$ is obtained by solving the system:

$$\begin{cases} \pi P = \pi \\ \pi e = 1 \end{cases}$$

where $e = (1,1, \cdots, 1)$. The first equation says the steady state probabilities are stable: one more transition does not change them. The second says the probabilities sum to $1$.
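As a quick numerical illustration of solving $\pi P = \pi,\ \pi e = 1$, here is a minimal Python sketch; the transition matrix is a hypothetical example, and the normalization is stacked onto the linear system so it can be solved in one shot by least squares.

```python
import numpy as np

# Hypothetical 3-state transition matrix; any irreducible chain works.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.4, 0.4]])
n = P.shape[0]

# pi P = pi  <=>  pi (P - I) = 0; append the normalization pi e = 1
# and solve the stacked system by least squares.
A = np.vstack([(P - np.eye(n)).T, np.ones(n)])   # shape (n+1, n)
b = np.zeros(n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("steady state probability:", pi)           # entries sum to 1
print("check pi P == pi:", np.allclose(pi @ P, pi))
```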
## Continuous-time Markov Chain

:::info
In a Continuous-time Markov Process, the time the system spends in each state is an exponential random variable.
:::

Similar to the Discrete-time Markov Chain, assume state space $S = (1,2,3, \cdots, n)$ and define the transition matrix $Q$, called the generator matrix:

$$Q = \begin{bmatrix} -q_{11} & q_{12} & q_{13} & \dots & q_{1n} \\ q_{21} & -q_{22} & q_{23} & \dots & q_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ q_{n1} & q_{n2} & q_{n3} & \dots & -q_{nn} \\ \end{bmatrix}$$

where $q_{ij} \Delta t$ is the probability that the Markov Process (Chain) jumps from state $i$ to state $j$ within $(t, t+ \Delta t)$, and $q_{ii} = q_i = \sum_{j \neq i} q_{ij}$, so every row of $Q$ sums to $0$. The meaning of $-q_{ii}$ is explained with the system of equations below.

:::info
Given a state transition diagram, the procedure for writing down $Q$ is: fill in all the off-diagonal $q_{ij}$ first, then use row sum $= 0$ to fill in the diagonal.
:::

Let the steady state probability vector be $P = (p_1, p_2, \cdots, p_n)$ and $e = (1,1,1, \cdots, 1)$; the steady state probability is obtained from:

$$\begin{cases} PQ = 0 \\ Pe = 1\end{cases}$$

The first equation means: for any state $i$, the total probability of jumping out to other states $j \neq i$ equals the total probability of other states jumping in (the system has reached a stable, balanced condition). Intuitively, if they were not equal, with flow in differing from flow out, the state probability would still be changing.

![](https://i.imgur.com/m0mJdFT.png)

Writing "flow out equals flow in" as an equation:

$$p_{i}\, q_{ii} = p_i \sum_{j \neq i} q_{ij} = \sum_{j \neq i}p_j q_{ji}$$

where $\sum_{j \neq i} q_{ij} = q_{ii}$ explains why the $Q$ matrix is defined this way (row sum $= 0$). A numerical sketch follows below.
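The same recipe as in the discrete case works here. A minimal Python sketch, assuming a made-up generator matrix $Q$ (off-diagonal rates invented for illustration, diagonal filled so each row sums to 0), solving $PQ = 0,\ Pe = 1$:

```python
import numpy as np

# Hypothetical generator matrix; rows sum to 0 by construction.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])
n = Q.shape[0]

# P Q = 0 with P e = 1: stack the normalization row and solve.
A = np.vstack([Q.T, np.ones(n)])   # shape (n+1, n)
b = np.zeros(n + 1)
b[-1] = 1.0
p, *_ = np.linalg.lstsq(A, b, rcond=None)

print("steady state probability:", p)
print("check P Q == 0:", np.allclose(p @ Q, np.zeros(n)))
```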
## Embedded Markov Chain

* [wiki definition of Embedded Markov Chain](https://en.wikipedia.org/wiki/Markov_chain#Embedded_Markov_chain)
* [Difference between embedded chain and continuous-time Markov chain](https://math.stackexchange.com/questions/3172879/difference-between-embedded-chain-and-continuous-time-markov-chain)

![](https://i.imgur.com/spYaTkX.png)

> [image source](https://math.stackexchange.com/questions/3172879/difference-between-embedded-chain-and-continuous-time-markov-chain)

If observing a Continuous-time Markov Chain is like recording a video, then the Embedded Markov Chain is like taking photos (just a lot of them). If we observe the system state right after each state transition of a continuous-time Markov Chain and let $X_n$ be the $n$-th observed state, then $\{X_n, n = 1,2,3, \dots\}$ is called the embedded Markov Chain (a discrete-time Markov Chain).

In a continuous-parameter Markov Chain, we observe only the moments at which the system state changes (that is, we take photos); the result is a discrete-parameter Markov Chain, which we call the Embedded Markov Chain. Once we have all these photos, analyzing what they represent lets us define the transition probability $p_{ij}$, the probability of jumping from state $i$ to state $j$:

$$p_{ij} = \frac{q_{ij}}{\sum_{k \neq i} q_{ik}}$$

that is, the rate from state $i$ to state $j$ divided by the total rate of leaving state $i$. (Here $q_{ij}$ has the same definition as in the continuous-time Markov Chain.)

The holding time of each state is an exponential r.v. whose expected value (mean holding time) is

$$\frac{1}{\sum_{k \neq i} q_{ik}} = \frac{1}{q_i}$$

Taking [Birth-Death Processes](/oN9Xfs1HTvmVN726XA8jdA) as an example, $p_{ij}$ can be written as:

$$p_{ij} = \begin{cases} \frac{\lambda_i}{\lambda_i + \mu_i},\ if\ j = i +1 \\ \frac{\mu_i}{\lambda_i + \mu_i},\ if\ j = i - 1 \\ 1,\ if\ i = 0, j = 1 \\ 0,\ else \end{cases}$$

Collecting all $p_{ij}$ into a matrix $P$, we can solve the system of equations for the steady state probability $\pi = (\pi_1, \pi_2, ..., \pi_n)$:

$$\begin{cases} \pi P = \pi \\ \pi e = 1 \end{cases}$$

where $e = (1,1,...,1)$.

:::info
If the Embedded Markov Chain of a continuous-time Markov Chain is irreducible and positive recurrent, we say the continuous-time Markov Chain is also irreducible and positive recurrent.

To determine whether a stationary solution, limiting distribution, or ergodicity exists, we also need to know whether the chain is aperiodic. The test for aperiodicity: check whether the mean holding times of the Embedded Markov Chain are bounded; if so, the Continuous-time Markov Chain behaves as if it were aperiodic.
:::

## Embedded Markov Chain & Continuous-time Markov Chain

Having seen the definitions of the Embedded Markov Chain and the Continuous-time Markov Chain and how to solve each, let us compare the two and relate their solutions. $P_i$ is the solution obtained from the Continuous-time Markov Chain, and $\pi_i$ is the solution obtained from the Embedded Markov Chain; their meanings are:

:::info
$\pi_i$: represents the **ratio** of visits to state $i$ among all states.
$P_i$: represents the **time ratio** that the system stays at state $i$ in a long-term observation period.
:::

The Embedded Markov Chain photographs the system while the Continuous-time Markov Chain records a video, so the resulting values of $\pi$ and $P$ naturally differ, because the **viewpoints differ**; which viewpoint to use depends on what you need to observe.

The two are related by:

$$P_i = \frac{m_i \pi_i}{\sum_j m_j \pi_j}$$

where $m_i$ is the mean holding time at state $i$. Since $m_i$ is the average time spent in state $i$, combining this with the meanings of $\pi_i$ and $P_i$ above makes the formula intuitive (**time × ratio = time ratio**); a numerical sanity check appears in the sketch below.
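The following minimal Python sketch (again using a hypothetical generator matrix, not one from the notes) derives the embedded chain from $Q$, solves both chains, and verifies $P_i = m_i \pi_i / \sum_j m_j \pi_j$ numerically.

```python
import numpy as np

# Hypothetical generator matrix of a continuous-time chain.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])
n = Q.shape[0]
q = -np.diag(Q)                  # total rate out of each state, q_i
m = 1.0 / q                      # mean holding times, m_i = 1 / q_i

# Embedded chain: p_ij = q_ij / q_i off the diagonal, 0 on it.
P_emb = Q / q[:, None]
np.fill_diagonal(P_emb, 0.0)

def solve_left_null(A):
    """Solve x A = 0 with the entries of x summing to 1."""
    M = np.vstack([A.T, np.ones(A.shape[1])])
    b = np.zeros(A.shape[1] + 1)
    b[-1] = 1.0
    x, *_ = np.linalg.lstsq(M, b, rcond=None)
    return x

pi = solve_left_null(P_emb - np.eye(n))   # embedded chain: pi P = pi
P = solve_left_null(Q)                    # continuous time: P Q = 0

# Relation between the two solutions: time x ratio = time ratio.
print(P)
print(m * pi / np.sum(m * pi))            # matches P
```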
### A Simple Example

Consider an $M/M/1/3$ queue with arrival rate $\lambda$ and service rate $\mu$.

1. Please write down the generator matrix $Q$ for its continuous-time Markov Chain, which represents its system size (the number of customers in the system). (4%)

:::spoiler answer
$Q = \begin{pmatrix} -\lambda & \lambda & 0 & 0\\ \mu & -(\lambda+\mu) & \lambda & 0\\ 0 & \mu & -(\lambda+\mu) & \lambda\\ 0 & 0 & \mu & -\mu \end{pmatrix}$
:::

2. Please then solve its steady-state probability $p_n$ for $n=0,1,2,3$ using the generator matrix obtained in 1. (6%)

:::spoiler answer
let $P = (p_0, p_1, p_2, p_3), e = (1,1,1,1), \rho = \frac{\lambda}{\mu}$

solve:

$$\begin{cases} PQ = 0 \\ Pe = 1 \end{cases} \Longrightarrow \begin{cases} p_1 = \rho \cdot p_0 \\ p_2 = \rho^2 p_0 \\ p_3 = \rho^3 p_0 \\ p_0 = (1 + \rho + \rho^2 + \rho^3)^{-1} \end{cases}$$
:::

3. Derive the embedded Markov chain of this $M/M/1/3$ queue and obtain its steady state probability $\pi_n$ for $n=0,1,2,3$. (6%)

:::spoiler answer
The transition matrix

$$P = \begin{pmatrix} 0 & 1 & 0 & 0 \\ \frac{\mu}{\lambda + \mu} & 0 & \frac{\lambda}{\lambda + \mu} & 0 \\ 0 & \frac{\mu}{\lambda + \mu} & 0 & \frac{\lambda}{\lambda + \mu} \\ 0 & 0 & 1 & 0 \end{pmatrix}$$

let $\pi = (\pi_0, \pi_1, \pi_2, \pi_3), e = (1,1,1,1)$

solve:

$$\begin{cases} \pi P = \pi \\ \pi e = 1 \end{cases} \Longrightarrow \begin{cases} \pi_0 = \frac{\mu^2}{2\mu^2 + 2\lambda^2 + 2\lambda \mu}\\ \pi_1 = \frac{\mu(\lambda + \mu)}{2\mu^2 + 2\lambda^2 + 2\lambda \mu}\\ \pi_2 = \frac{\lambda(\lambda + \mu)}{2\mu^2 + 2\lambda^2 + 2\lambda \mu}\\ \pi_3 = \frac{\lambda^2}{2\mu^2 + 2\lambda^2 + 2\lambda \mu}\\ \end{cases}$$
:::

4. Please determine the mean holding time for states 0, 1, 2, and 3. (4%)

:::spoiler answer
let $m_i$ denote the mean holding time for state $i$.

$$\begin{cases} m_0 = \frac{1}{\lambda} \newline m_1 = \frac{1}{\lambda + \mu} \newline m_2 = \frac{1}{\lambda + \mu} \newline m_3 = \frac{1}{\mu} \end{cases}$$
:::

5. Use the results of $\pi_n$ in 3. and the mean holding time in 4. to derive the formula for the steady-state probability $p_n$. (Hint: the results should be the same as in 2.) (6%)

:::spoiler answer
Use

$$p_i = \frac{m_i \pi_i}{\sum_{j=0}^{3} m_j \pi_j}$$

Compute

$$\begin{aligned} p_0 &= \frac{m_0 \pi_0}{\sum_{j=0}^{3} m_j \pi_j} \newline &= \frac{\mu^2 \cdot \frac{1}{\lambda}}{\mu^2 \cdot \frac{1}{\lambda} + \mu(\lambda + \mu) \cdot \frac{1}{\lambda + \mu} + \lambda(\lambda + \mu) \cdot \frac{1}{\lambda + \mu} + \lambda^2 \cdot \frac{1}{\mu}} \newline &= \frac{1}{1 + \frac{\lambda}{\mu} + (\frac{\lambda}{\mu})^2 + (\frac{\lambda}{\mu})^3} \newline &= \frac{1}{1 + \rho + \rho^2 + \rho^3} \end{aligned}$$

then $p_1, p_2, p_3$ follow in exactly the same way, and after the dust settles the results match those in 2. (NOTE: on the exam you may not just write "likewise"; you must actually carry out the computation.)
:::

## Reference

- [Useful Markov Chain Math Note](https://mpaldridge.github.io/math2750/S17-continuous-time.html)
