# MLP

1. Sigmoid: $a = \dfrac {1}{1+e^{-z}}$
2. Softmax: $P_n = \dfrac {e^{a_n}}{\sum_j e^{a_j}}$

## Forward inference

for i = 1 to last:
    $z^i = W^i a^{i-1} + b^i$
    $a^i = \dfrac {1}{1+e^{-z^i}}$

## Adjusting parameters

For every parameter $x$, with learning rate $\eta$:
$x = x - \eta \dfrac {\partial J}{\partial x}$

## Flow (three-layer example)

$a^0 \to z^1 \to a^1 \to z^2 \to a^2 \to z^3 \to a^3 \to P \to J$

Loss function (cross-entropy; with one-hot $y$ it reduces to $-\log p_{\text{true}}$):
$J = -\sum_i y_i \log p_i$

## Backward

#### $\dfrac{\partial P}{\partial a}$:

$P_i = \dfrac{e^{a_i}}{\sum_j e^{a_j}}$

if $i \neq j$:
$\dfrac{\partial P_i}{\partial a_j} = \dfrac {e^{a_i} \cdot (-e^{a_j})}{(\sum{e^a})^2} = -\dfrac {e^{a_i}}{\sum{e^a}} \cdot \dfrac {e^{a_j}}{\sum{e^a}} = -P_iP_j$

if $i = j$:
$\dfrac{\partial P_i}{\partial a_i} = \dfrac {e^{a_i}}{\sum{e^a}} - \dfrac {(e^{a_i})^2}{(\sum{e^a})^2} = P_i(1-P_i)$

Therefore $\dfrac{\partial P}{\partial a}$:
$\begin{bmatrix} P_0(1-P_0) & -P_0P_1 & -P_0P_2\\ -P_1P_0 & P_1(1-P_1) & -P_1P_2 \\ -P_2P_0 & -P_2P_1 & P_2(1-P_2) \end{bmatrix}$

#### $\dfrac{\partial a}{\partial z}$:

$a = \dfrac{1}{1+e^{-z}}$

$\dfrac{\partial a}{\partial z} = a(1-a)$

#### $\dfrac{\partial J}{\partial W}$:

$\dfrac{\partial J}{\partial W} = \begin{bmatrix} \dfrac{\partial J}{\partial W_{00}} & \dfrac{\partial J}{\partial W_{01}} \\ \dfrac{\partial J}{\partial W_{10}} & \dfrac{\partial J}{\partial W_{11}} \\ \dfrac{\partial J}{\partial W_{20}} & \dfrac{\partial J}{\partial W_{21}} \end{bmatrix}$

$\begin{bmatrix} Z_0^i\\Z_1^i\\Z_2^i \end{bmatrix} = \begin{bmatrix} W_{00}^i & W_{01}^i\\ W_{10}^i & W_{11}^i\\ W_{20}^i & W_{21}^i \end{bmatrix} \begin{bmatrix} a_0^{i-1}\\a_1^{i-1} \end{bmatrix} + \begin{bmatrix} b_0^i\\b_1^i\\b_2^i \end{bmatrix}$

Since $\dfrac{\partial Z_m}{\partial W_{mn}} = a_n^{i-1}$, each entry is $\dfrac{\partial J}{\partial W_{mn}} = \dfrac{\partial J}{\partial Z_m}\,a_n^{i-1}$, an outer product:

$\begin{bmatrix} \dfrac{\partial J}{\partial W_{00}} & \dfrac{\partial J}{\partial W_{01}} \\ \dfrac{\partial J}{\partial W_{10}} & \dfrac{\partial J}{\partial W_{11}} \\ \dfrac{\partial J}{\partial W_{20}} & \dfrac{\partial J}{\partial W_{21}} \end{bmatrix} = \begin{bmatrix} \dfrac{\partial J}{\partial Z_0}\\ \dfrac{\partial J}{\partial Z_1}\\ \dfrac{\partial J}{\partial Z_2} \end{bmatrix} \begin{bmatrix} a_0^{i-1} & a_1^{i-1} \end{bmatrix}$, i.e. $\dfrac{\partial J}{\partial W} = \dfrac{\partial J}{\partial Z}\,(a^{i-1})^T$

Similarly, for the previous layer's activations:

$\begin{bmatrix} \dfrac{\partial J}{\partial a_0^{i-1}}\\ \dfrac{\partial J}{\partial a_1^{i-1}} \end{bmatrix} = \begin{bmatrix} W_{00} & W_{10} & W_{20}\\ W_{01} & W_{11} & W_{21} \end{bmatrix} \begin{bmatrix} \dfrac{\partial J}{\partial Z_0}\\ \dfrac{\partial J}{\partial Z_1}\\ \dfrac{\partial J}{\partial Z_2} \end{bmatrix}$, i.e. $\dfrac{\partial J}{\partial a^{i-1}} = (W^i)^T \dfrac{\partial J}{\partial Z^i}$

### Summary

#### Forward

$a^0 \leftarrow$ input $x$

for i = 1 to last:
    $Z^i = W^i a^{i-1} + b^i$
    $a^i = \sigma (Z^i)$

$P = \mathrm{Softmax}(a^{last})$
$\hat y = \arg\max(P)$

#### Backward

$J$: loss

1. $\dfrac {\partial J}{\partial P}$
2. $\dfrac {\partial J}{\partial a^{last}} = \dfrac {\partial P}{\partial a^{last}} \dfrac {\partial J}{\partial P}$

for i = last down to 1:
    $\dfrac {\partial J}{\partial Z^i} = \dfrac {\partial a^i}{\partial Z^i} \odot \dfrac {\partial J}{\partial a^i}$
    then compute $\dfrac {\partial J}{\partial W^i}$, $\dfrac {\partial J}{\partial b^i}$, and $\dfrac {\partial J}{\partial a^{i-1}}$ one by one

#### Training

for i = 1 to #Epoch:
    forward; backward;
    $W = W - \eta\dfrac{\partial J}{\partial W}$
    $b = b - \eta\dfrac{\partial J}{\partial b}$
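The softmax Jacobian derived above (diagonal $P_i(1-P_i)$, off-diagonal $-P_iP_j$, i.e. $\mathrm{diag}(P) - PP^T$) can be checked numerically. A minimal NumPy sketch, using a finite-difference comparison on an arbitrary input vector:

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())  # shift by max for numerical stability
    return e / e.sum()

def softmax_jacobian(a):
    # dP_i/da_j = P_i(1-P_i) if i == j, else -P_i P_j  =>  diag(P) - P P^T
    P = softmax(a)
    return np.diag(P) - np.outer(P, P)

# Finite-difference check of the closed form
a = np.array([0.3, -1.2, 2.0])
eps = 1e-6
num = np.zeros((3, 3))
for j in range(3):
    d = np.zeros(3)
    d[j] = eps
    num[:, j] = (softmax(a + d) - softmax(a - d)) / (2 * eps)

print(np.allclose(num, softmax_jacobian(a), atol=1e-8))
```

Each column of the Jacobian also sums to zero, since the probabilities always sum to 1 no matter how $a$ changes.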
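The forward/backward/training recipe above can be sketched end-to-end in NumPy. The layer sizes (2-4-3), learning rate, epoch count, and the single one-hot training sample are illustrative assumptions, not from the note; each gradient line maps onto one of the equations above.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

# Hypothetical 2-4-3 network (input dim 2, one hidden layer, 3 classes)
sizes = [2, 4, 3]
W = [rng.normal(0, 0.5, (sizes[i + 1], sizes[i])) for i in range(len(sizes) - 1)]
b = [np.zeros(sizes[i + 1]) for i in range(len(sizes) - 1)]

def forward(x):
    a = [x]
    for Wi, bi in zip(W, b):
        z = Wi @ a[-1] + bi       # Z^i = W^i a^{i-1} + b^i
        a.append(sigmoid(z))      # a^i = sigma(Z^i)
    P = softmax(a[-1])            # P = Softmax(a^last)
    return a, P

def backward(a, P, y):
    # dJ/da^last = (dP/da^last) dJ/dP; for J = -sum y_i log P_i
    # this collapses to P - y (softmax Jacobian applied to -y/P)
    dJda = (np.diag(P) - np.outer(P, P)) @ (-y / P)
    grads_W, grads_b = [], []
    for i in reversed(range(len(W))):
        dJdz = a[i + 1] * (1 - a[i + 1]) * dJda  # dJ/dZ = a(1-a) ⊙ dJ/da
        grads_W.append(np.outer(dJdz, a[i]))     # dJ/dW = dJ/dZ (a^{i-1})^T
        grads_b.append(dJdz)                     # dJ/db = dJ/dZ
        dJda = W[i].T @ dJdz                     # dJ/da^{i-1} = W^T dJ/dZ
    return grads_W[::-1], grads_b[::-1]

# Overfit a single (x, y) pair to exercise the loop; eta is the learning rate
x = np.array([0.5, -0.2])
y = np.array([0.0, 1.0, 0.0])  # one-hot label: class 1
eta = 1.0

_, P0 = forward(x)
loss_before = -np.sum(y * np.log(P0))
for epoch in range(500):
    a, P = forward(x)
    gW, gb = backward(a, P, y)
    for i in range(len(W)):
        W[i] -= eta * gW[i]
        b[i] -= eta * gb[i]

a, P = forward(x)
loss_after = -np.sum(y * np.log(P))
print(int(np.argmax(P)))
```

Note the design choice inherited from the note: the sigmoid is applied to the last layer as well, and softmax runs on $a^{last}$; a more common variant applies softmax directly to $Z^{last}$, which simplifies the output gradient to $P - y$ with respect to $Z$.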