---
title: Neural Networks in Practice
---

Neural Networks in Practice
===

## How Neural Networks Work

### A Simple Predictor

Our only clues:

| Example | Kilometers | Miles |
| -------- | -------- | -------- |
| 1 | 0 | 0 |
| 2 | 100 | 62.137 |

Let **mile = kilometer * C**, where kilometer = 100
Error = true value - computed value
- First, guess C = 0.5 => error = 12.137
- $\because$ error > 0 $\therefore$ C $\to$ 0.6 => error = 2.137
- $\because$ error > 0 $\therefore$ C $\to$ 0.7 => error = -7.863
- $\because$ error < 0 $\therefore$ we overshot, so increase by a smaller amount (e.g. 0.01) instead of 0.1 $\therefore$ C $\to$ 0.61

==Big error => big adjustment; small error => small adjustment==

:::warning
:zap: When you cannot know precisely how something is computed, try using a model to estimate how it works. In the example above, not knowing how to convert kilometers to miles, we use a linear function as the model and an adjustable slope ($C$) as its parameter.
:zap: How to improve the model: correct the parameter using the error obtained by comparing the model's output against known true examples.
:::

### A Simple Classifier

![](https://i.imgur.com/bRhd4JD.png =60%x)

Clues:

| Example | Width | Length | Insect |
| ------ | ------ | ------ | ------ |
| 1 | 3.0 | 1.0 | ladybird |
| 2 | 1.0 | 3.0 | caterpillar |

![](https://i.imgur.com/B3kMt6Q.png =35%x)

+ Assume a dividing line: $y=Ax$
    - choose $A=0.25$
    ![](https://i.imgur.com/WvJfQtC.png =40%x)
+ **Adjusting by the error**
Let $t$ be the desired target value, so **$t=(A+\Delta A)x$**
**Error**: $E = \text{target} - \text{actual output} = t-y=(A+\Delta A)x - Ax=(\Delta A)x$
$\Rightarrow$ **$\Delta A=\tfrac{E}{x}$**
Setting $A=A+\Delta A$ makes $y$ equal to $t$, giving the desired result.
![](https://i.imgur.com/czGb1P9.png =40%x)
    - Try $t=1.1$ <font color="#f00">(for the red point)</font>
    $\therefore \Delta A=(1.1-0.75)/3.0=0.1167$
    Let $A=A+\Delta A=0.25+0.1167=0.3667$
    $\Rightarrow$ <font color="#0EBBF7">$y=0.3667x$</font> gives the desired result
    - Try $t=2.9$ <font color="#01AD2B">(for the green point)</font>
    $\therefore \Delta A=(2.9-0.3667)/1.0=2.5333$
    Let $A=A+\Delta A=0.3667+2.5333=2.9$
    $\Rightarrow$ <font color="#7BD8F8">$y=2.9x$</font> gives the desired result
    - ![](https://i.imgur.com/gkalwwU.png =40%x)
    - But updating this way, the final line simply matches the last training example: ==the final "improved" line effectively throws away all earlier training examples and learns only from the most recent one.==
+ ==$\Delta A=L(\tfrac{E}{x})$==, where $L$ is the learning rate
    - choose $L=0.5$
    - Try $t=1.1$ <font color="#f00">(for the red point)</font>
    $\therefore \Delta A=\tfrac{1}{2}((1.1-0.75)/3.0)=0.0583$
    Let $A=A+\Delta A=0.25+0.0583=0.3083$
    $\Rightarrow$ <font color="#0EBBF7">$y=0.3083x$</font>, a moderated step toward the target
    - Try $t=2.9$ <font color="#01AD2B">(for the green point)</font>
    $\therefore \Delta A=\tfrac{1}{2}((2.9-0.3083)/1.0)=1.2958$
    Let $A=A+\Delta A=0.3083+1.2958=1.6042$
    $\Rightarrow$ <font color="#7BD8F8">$y=1.6042x$</font>, a moderated step toward the target
    - ![](https://i.imgur.com/ThUM6di.png =40%x)
    - This gives a better result: the final line is a compromise informed by both examples.

### Neurons

+ ==sigmoid function== (logistic function)
$S(x)=\cfrac{1}{1+e^{-x}}$
![](https://i.imgur.com/RU1o7wb.png)
+ node
![](https://i.imgur.com/yWjhB8W.png =60%x)

### ==Forward Propagation==

![](https://i.imgur.com/sObRUor.png =60%x)

+ $m$ nodes --> $n$ nodes
$z_i^{(j)}=\sum_{k=1}^m \omega_{ki}^{(j)} \, a_k^{(j-1)}$
$a_i^{(j)}=\mathrm{sigmoid}(z_i^{(j)})$
+ In matrix form:
$z^{(j)}=a^{(j-1)} \begin{pmatrix} \omega_{11} & \omega_{12} & \cdots & \omega_{1n}\\ \omega_{21} & \omega_{22} & \cdots & \omega_{2n}\\ \vdots & \vdots & \ddots & \vdots \\ \omega_{m1} & \omega_{m2} & \cdots & \omega_{mn}\\ \end{pmatrix}$, where $a^{(j-1)}=\begin{pmatrix} a_1^{(j-1)} & a_2^{(j-1)} & \cdots & a_m^{(j-1)} \end{pmatrix}$ and $z^{(j)}=\begin{pmatrix} z_1^{(j)} & z_2^{(j)} & \cdots & z_n^{(j)} \end{pmatrix}$
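To make the matrix form concrete, here is a minimal numpy sketch of one forward-propagation step. The layer sizes, inputs, and weight values below are made-up for illustration only:

```python
import numpy as np

def sigmoid(z):
    """Logistic function S(z) = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical layer: m = 3 nodes feeding into n = 2 nodes.
a_prev = np.array([0.9, 0.1, 0.8])   # a^(j-1): previous layer's outputs (1 x m)
W = np.array([[0.9, 0.2],            # omega_ki: weight from previous node k
              [0.3, 0.8],            #           to current node i (m x n)
              [0.4, 0.2]])

z = a_prev @ W        # z^(j) = a^(j-1) * W   (1 x n)
a = sigmoid(z)        # a^(j) = sigmoid(z^(j))
print(z, a)
```

Stacking several such layers, feeding each layer's `a` in as the next layer's `a_prev`, gives the complete forward pass.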
### ==Back Propagation==

![](https://i.imgur.com/NZBSTJ0.png =60%x)

Hidden layers have no target values, so the output error must be split up and carried backwards through the weights.

![](https://i.imgur.com/ROpTNYE.png =80%x)

+ $e_i^{(j-1)}=\sum_{k=1}^n \cfrac{\omega_{ik}}{\sum_{p=1}^m \omega_{pk}}\, e_k^{(j)}$
+ In matrix form:
$e^{(j-1)}=\begin{pmatrix} \cfrac{\omega_{11}}{\sum_{p=1}^m \omega_{p1}} & \cfrac{\omega_{12}}{\sum_{p=1}^m \omega_{p2}} & \cdots & \cfrac{\omega_{1n}}{\sum_{p=1}^m \omega_{pn}}\\ \cfrac{\omega_{21}}{\sum_{p=1}^m \omega_{p1}} & \cfrac{\omega_{22}}{\sum_{p=1}^m \omega_{p2}} & \cdots & \cfrac{\omega_{2n}}{\sum_{p=1}^m \omega_{pn}}\\ \vdots & \vdots & \ddots & \vdots \\ \cfrac{\omega_{m1}}{\sum_{p=1}^m \omega_{p1}} & \cfrac{\omega_{m2}}{\sum_{p=1}^m \omega_{p2}} & \cdots & \cfrac{\omega_{mn}}{\sum_{p=1}^m \omega_{pn}}\\ \end{pmatrix} e^{(j)}$, where $e^{(j-1)}=\begin{pmatrix} e_1^{(j-1)} \\ e_2^{(j-1)} \\ \vdots \\ e_m^{(j-1)} \end{pmatrix}$, $e^{(j)}=\begin{pmatrix} e_1^{(j)} \\ e_2^{(j)} \\ \vdots \\ e_n^{(j)} \end{pmatrix}$
+ A larger weight carries back a larger share of the output error. The denominators of these fractions are normalisation factors; if we ignore them, the expression simplifies to:
$e^{(j-1)}=\begin{pmatrix} \omega_{11} & \omega_{12} & \cdots & \omega_{1n}\\ \omega_{21} & \omega_{22} & \cdots & \omega_{2n}\\ \vdots & \vdots & \ddots & \vdots \\ \omega_{m1} & \omega_{m2} & \cdots & \omega_{mn}\\ \end{pmatrix} e^{(j)}$

### ==Gradient Descent==

+ One of the methods actually used to update the weights
+ Not necessarily the optimal way to compute the weight parameters, but as a simple and fast method it is used very often.
+ ==new $\omega_{j'k'}$ = old $\omega_{j'k'} - \alpha \frac{\partial E}{\partial \omega_{j'k'}}$==, where $\alpha$ is the learning rate
![](https://i.imgur.com/fOVw4I7.png)
+ $\frac{\partial E}{\partial \omega_{j'k'}^{(k-1)}}=\frac{\partial \sum_{p=1}^n (t_p^{(k)}-o_p^{(k)})^2}{\partial \omega_{j'k'}^{(k-1)}}$
$\because$ only $o_{k'}^{(k)}$ depends on $\omega_{j'k'}^{(k-1)}$
$\therefore \frac{\partial E}{\partial \omega_{j'k'}^{(k-1)}}=\frac{\partial (t_{k'}^{(k)}-o_{k'}^{(k)})^2}{\partial \omega_{j'k'}^{(k-1)}}=-2(t_{k'}^{(k)}-o_{k'}^{(k)})\,\frac{\partial o_{k'}^{(k)}}{\partial \omega_{j'k'}^{(k-1)}}=-2(t_{k'}^{(k)}-o_{k'}^{(k)})\,\frac{\partial}{\partial \omega_{j'k'}^{(k-1)}}\,\mathrm{sigmoid}\Big(\sum_{p=1}^m o_p^{(k-1)}\omega_{pk'}^{(k-1)}\Big)$
$\qquad\qquad = -2(t_{k'}^{(k)}-o_{k'}^{(k)})\;\mathrm{sigmoid}\Big(\sum_{p=1}^m o_p^{(k-1)}\omega_{pk'}^{(k-1)}\Big)\Big(1-\mathrm{sigmoid}\Big(\sum_{p=1}^m o_p^{(k-1)}\omega_{pk'}^{(k-1)}\Big)\Big)\,\frac{\partial \big(\sum_{p=1}^m o_p^{(k-1)}\omega_{pk'}^{(k-1)}\big)}{\partial \omega_{j'k'}^{(k-1)}}$
$\qquad\qquad = -(t_{k'}^{(k)}-o_{k'}^{(k)})\;\mathrm{sigmoid}\Big(\sum_{p=1}^m o_p^{(k-1)}\omega_{pk'}^{(k-1)}\Big)\Big(1-\mathrm{sigmoid}\Big(\sum_{p=1}^m o_p^{(k-1)}\omega_{pk'}^{(k-1)}\Big)\Big)\,o_{j'}^{(k-1)}$
(the constant factor 2 is dropped in the last step, since it can be absorbed into the learning rate $\alpha$)
$\therefore \Delta \omega^{(k-1)} = \alpha\, E^{(k)} * S^{(k)} * (1-S^{(k)}) \cdot o^{(k-1)}$, where $S^{(k)}=o^{(k)}$ is the vector of sigmoid outputs of layer $k$ and $*$ denotes element-wise multiplication
$\therefore \begin{pmatrix} \Delta\omega_{11} & \Delta\omega_{21} & \cdots & \Delta\omega_{n1}\\ \Delta\omega_{12} & \Delta\omega_{22} & \cdots & \Delta\omega_{n2}\\ \vdots & \vdots & \ddots & \vdots \\ \Delta\omega_{1n} & \Delta\omega_{2n} & \cdots & \Delta\omega_{nn}\\ \end{pmatrix} = \alpha \begin{pmatrix} E_1*S_1*(1-S_1) \\ E_2*S_2*(1-S_2) \\ \vdots \\ E_n*S_n*(1-S_n) \end{pmatrix} \begin{pmatrix} o_1 & o_2 & \cdots & o_n \end{pmatrix}$
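Tying the last two sections together, here is a minimal numpy sketch of one full training step for a single layer: a forward pass, back-propagation of the error (using the simplified, unnormalised variant), and the gradient-descent update $\Delta\omega = \alpha\,E*S*(1-S)\cdot o^T$. Note that, to match the $\Delta\omega$ matrix layout above, the weights here are stored with one row per *destination* node, so the forward step is `W @ o` and the back-propagation step is `W.T @ e`. All sizes and values are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

alpha = 0.3                            # learning rate

# Hypothetical 2-node hidden layer feeding a 2-node output layer.
o_hidden = np.array([[0.6], [0.4]])    # o^(k-1), column vector (m x 1)
W = np.array([[2.0, 1.0],              # row k' = weights into output node k'
              [3.0, 4.0]])

# Forward pass for this layer: o^(k) = sigmoid(sum_p omega_{pk'} o_p).
o_out = sigmoid(W @ o_hidden)

target = np.array([[0.9], [0.1]])      # t^(k)
e_out = target - o_out                 # E^(k)

# Back-propagate the error (unnormalised variant): e^(k-1) = W^T e^(k).
e_hidden = W.T @ e_out

# Gradient-descent update: an outer product of the scaled error
# E * S * (1 - S) with the incoming outputs o^(k-1).
delta_W = alpha * (e_out * o_out * (1 - o_out)) @ o_hidden.T
W += delta_W

print(e_hidden)                        # error apportioned to the hidden layer
print(W)                               # updated weights
```

Repeating this step over many training examples, and using `e_hidden` to update the preceding layer's weights in the same way, is the entire training loop.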