### Convolutional Neural Network

![](https://i.imgur.com/CzAFawK.png)

---

## 1. Perceptron

----

### Binary Classification Problem

Given a training data set $S = \{ \ (x^{i},y_i) \ | \ x^i\in \mathbb{R}^n, \ y_i \in \{-1,1\}\}$

$x^i \in A_{+}$ if and only if $y_i = 1$,
$x^i \in A_{-}$ if and only if $y_i = -1$.

![](https://i.imgur.com/BBVbuZW.png)

----

![](https://i.imgur.com/FjTXW2t.png)

----

### Rosenblatt's Perceptron

![](https://i.imgur.com/XGS4dcm.png)

Demo!

---

## 2. Neural Network

![](https://i.imgur.com/ucQaqQa.png)

----

### Neuron

![](https://i.imgur.com/QNzbTeW.png)

----

![](https://i.imgur.com/6bbyHuC.png)

$\vec{H}^{*}= \begin{bmatrix} \vec{H}^{*}_0\\ \vec{H}^{*}_1\\ \vec{H}^{*}_2 \end{bmatrix}= \begin{bmatrix} f(\vec{w_0} \cdot \vec{x})\\ f(\vec{w_1} \cdot \vec{x})\\ f(\vec{w_2} \cdot \vec{x}) \end{bmatrix}= f\Big( W \cdot \vec{x} \Big)$

----

### Feed Forward

![](https://i.imgur.com/139tyWq.png)

----

### Gradient Descent

Minimization problem: $\min_x L(x)$

![](https://i.imgur.com/bQJjSvb.jpg)

$$\Delta x^{i+1} := x^{i+1} - x^{i} = - \eta \, \nabla L(x^i)$$

----

### Backpropagation

----

![](https://i.imgur.com/ZDlNWRu.png)

$\frac{\partial L}{\partial V[i][j]}=\frac{\partial L}{\partial O[i]}\frac{\partial O[i]}{\partial V[i][j]}=\frac{\partial L}{\partial O[i]}H^{*}[j]$

----

$\frac{\partial L}{\partial O[k]}=\frac{\partial L}{\partial O^*[k]}\frac{\partial O^*[k]}{\partial O[k]}=-(T[k]-O^{*}[k])\,\sigma'(O[k])$

$\sigma'(O[k])=\sigma(O[k])(1-\sigma(O[k]))=O^*[k](1-O^*[k])$

$\Delta V[i][j]=-\eta\frac{\partial L }{\partial V[i][j]}=-\eta \frac{\partial L}{\partial O[i]} H^{*}[j]$

$\Delta c[k] = -\eta\frac{\partial L}{\partial c[k]}=-\eta\frac{\partial L}{\partial O[k]}$

----

![](https://i.imgur.com/Dbp5MGQ.png)

$\frac{\partial L}{\partial W[i][j]}=\frac{\partial L}{\partial H[i]}\frac{\partial H[i]}{\partial W[i][j]}=\frac{\partial L}{\partial H[i]}I[j]$

----

$\frac{\partial L}{\partial H[i]}=\frac{\partial L}{\partial H^{*}[i]}\frac{\partial H^{*}[i]}{\partial H[i]}=\Big( \sum_{l=0}^{2}\frac{\partial L}{\partial O[l]}\frac{\partial O[l]}{\partial H^{*}[i]}\Big)\sigma'(H[i])$

$=\Big( \sum_{l=0}^{2}\frac{\partial L}{\partial O[l]}\frac{\partial O[l]}{\partial H^{*}[i]}\Big)H^*[i](1-H^*[i])$

$=\Big( \sum_{l=0}^{2}\frac{\partial L}{\partial O[l]}V[l][i]\Big)H^*[i](1-H^*[i])$

$\Delta W[i][j]=-\eta\frac{\partial L}{\partial W[i][j]}=-\eta\frac{\partial L}{\partial H[i]}I[j]$

$\Delta b[k] = -\eta\frac{\partial L}{\partial b[k]}=-\eta\frac{\partial L}{\partial H[k]}$

---

## 3. CNN

![](https://i.imgur.com/3vcBqu5.png)

----

### Convolution

![](https://i.imgur.com/xECtpx7.png)

----

![](https://i.imgur.com/fQbVVWk.png)

[Demo](https://doodle-draw.herokuapp.com/conv)

----

### Pooling

![](https://i.imgur.com/for5kzW.png)

---

## 4. Demo

[Doodle Draw](https://doodle-draw.herokuapp.com/draw)

[Github](https://github.com/jarcomatolsavisch/doodle-drawing)

---

## 5. Resources

[Neural Networks Chart](https://towardsdatascience.com/the-mostly-complete-chart-of-neural-networks-explained-3fb6f2367464)
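---

## 6. Code Sketches

Short NumPy sketches of the ideas above. All data, layer sizes, and learning rates are illustrative, not taken from the slides.

----

### Perceptron (Section 1)

A minimal sketch of Rosenblatt's update rule: when a point is misclassified, pull the decision boundary toward it. The two clusters below are made-up example data.

```python
import numpy as np

# Sketch of Rosenblatt's perceptron for a data set
# S = {(x^i, y_i) | x^i in R^n, y_i in {-1, 1}} as in Section 1.
def train_perceptron(X, y, epochs=100):
    w = np.zeros(X.shape[1])   # weights
    b = 0.0                    # bias
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Misclassified when the score's sign disagrees with yi.
            if yi * (np.dot(w, xi) + b) <= 0:
                w += yi * xi   # pull the boundary toward xi
                b += yi
    return w, b

# Illustrative linearly separable clusters: A+ (label 1), A- (label -1).
X = np.array([[2.0, 2.0], [3.0, 1.5], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))  # should reproduce y
```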
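----

### Feed Forward (Section 2)

A sketch of one forward pass in the slides' notation (input $I$, hidden $H$ and $H^{*}$, output $O$ and $O^{*}$), assuming sigmoid activations throughout. Three hidden and three output units match the formulas above; the input size and random weights are arbitrary.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One forward pass through the two-layer network of Section 2.
rng = np.random.default_rng(0)
I = rng.normal(size=4)                         # example input vector
W = rng.normal(size=(3, 4)); b = np.zeros(3)   # input -> hidden
V = rng.normal(size=(3, 3)); c = np.zeros(3)   # hidden -> output

H = W @ I + b        # H[i]  = sum_j W[i][j] * I[j] + b[i]
H_star = sigmoid(H)  # H*[i] = sigma(H[i])
O = V @ H_star + c   # O[k]  = sum_j V[k][j] * H*[j] + c[k]
O_star = sigmoid(O)  # O*[k] = sigma(O[k])
print(O_star)
```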
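----

### Gradient Descent (Section 2)

The update $\Delta x^{i+1} = -\eta \, \nabla L(x^i)$ on an illustrative one-dimensional loss.

```python
# Plain gradient descent, x^{i+1} = x^i - eta * L'(x^i), on the
# illustrative loss L(x) = (x - 3)^2, whose minimum is at x = 3.
def gradient_descent(grad, x0, eta=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x = x - eta * grad(x)   # Delta x = -eta * dL/dx
    return x

print(gradient_descent(lambda x: 2.0 * (x - 3.0), x0=0.0))  # -> ~3.0
```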
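----

### Backpropagation (Section 2)

One training step implementing the $\Delta V$, $\Delta c$, $\Delta W$, $\Delta b$ formulas above, assuming the squared-error loss $L = \frac{1}{2}\sum_k (T[k]-O^{*}[k])^2$ that the derivation implies. Sizes, target, and learning rate are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
I = rng.normal(size=4)                         # input
T = np.array([0.0, 1.0, 0.0])                  # target
W = rng.normal(size=(3, 4)); b = np.zeros(3)
V = rng.normal(size=(3, 3)); c = np.zeros(3)
eta = 0.5                                      # learning rate

# Feed forward.
H = W @ I + b;      H_star = sigmoid(H)
O = V @ H_star + c; O_star = sigmoid(O)

# dL/dO[k] = -(T[k] - O*[k]) * O*[k] * (1 - O*[k])
dL_dO = -(T - O_star) * O_star * (1.0 - O_star)

# dL/dH[i] = (sum_l dL/dO[l] * V[l][i]) * H*[i] * (1 - H*[i])
dL_dH = (V.T @ dL_dO) * H_star * (1.0 - H_star)

# Delta V[i][j] = -eta * dL/dO[i] * H*[j];  Delta c[k] = -eta * dL/dO[k]
V += -eta * np.outer(dL_dO, H_star); c += -eta * dL_dO
# Delta W[i][j] = -eta * dL/dH[i] * I[j];   Delta b[k] = -eta * dL/dH[k]
W += -eta * np.outer(dL_dH, I);      b += -eta * dL_dH
```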
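----

### Convolution and Pooling (Section 3)

A naive stride-1, "valid"-padding 2-D convolution and $2\times2$ max pooling; the image and kernel are made up, and real frameworks implement both operations far more efficiently.

```python
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Each output pixel is a dot product of the kernel with a patch.
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool2d(x, size=2):
    oh, ow = x.shape[0] // size, x.shape[1] // size
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Keep only the strongest activation in each window.
            out[i, j] = x[i*size:(i+1)*size, j*size:(j+1)*size].max()
    return out

image = np.arange(36, dtype=float).reshape(6, 6)
edge = np.array([[1.0, 0.0, -1.0]] * 3)   # simple vertical-edge kernel
print(max_pool2d(conv2d(image, edge)))    # conv -> 4x4, pool -> 2x2
```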
{"metaMigratedAt":"2023-06-16T05:15:49.332Z","metaMigratedFrom":"YAML","title":"CNN (Convolution Neural Network)","breaks":true,"slideOptions":"{\"theme\":\"moon\",\"transition\":\"convex\"}","contributors":"[{\"id\":\"8fc668f4-0592-4518-bd67-b0c63754b314\",\"add\":5087,\"del\":1775}]"}
    334 views