---
tags: 短腿土撥鼠的長高計畫之瘋狂跳跳跳繩
title: Machine learning
---

# Machine learning

- machine learning ≈ looking for a function
    - use the machine's computing power to find the target function
- ex.
    - Speech Recognition
        - ![](https://i.imgur.com/bnpX3vt.png)
    - Image Recognition
        - ![](https://i.imgur.com/8D44Hpy.png)
    - Playing Go
        - ![](https://i.imgur.com/RNFxzEv.png)

---

## Different types of functions

- Regression
    - the function outputs a scalar
    - ![](https://i.imgur.com/r9mYHQ0.png)
- Classification
    - given options (**classes**), the function outputs the correct one
    - ![](https://i.imgur.com/6cxI58l.png)
    - ![](https://i.imgur.com/Ymay6Nq.png)
- Structured Learning
    - create something with structure (image, document)

---

## How to find the function for machine learning

- Example: predicting YouTube view counts

1. Function with Unknown Parameters
    - ![](https://i.imgur.com/52DlkT4.png)
    - this function is only a guess for now; the guess will be adjusted later
    - model
        - a function with unknown parameters
        - here, $y=b+Wx_1$
    - feature
        - the input x~1~
    - weight
        - a parameter multiplied by a feature is called a weight; here, W
    - bias
        - a parameter not multiplied by a feature is called a bias; here, b
2. Define Loss from Training Data
    - what is Loss
        - Loss is a function of the parameters, L(b, w)
        - the value of Loss tells us how good or bad a given choice of values for the unknown parameters is
    - label
        - the true value
        - ![](https://i.imgur.com/KunOnFf.png)
    - use the training data to compute the error (e) between the predicted value (y) and the true value
        - ![](https://i.imgur.com/0HW80vr.png)
    - then compute the mean error
        - <font color =red>the larger L is, the worse the parameters</font>
        - ![](https://i.imgur.com/gCzr3we.png)
        - ![](https://i.imgur.com/iB0ARP2.png)
    - Error surface
        - a contour plot of the losses computed for different parameter values is called the error surface
3. Optimization
    - find the W and b that minimize L
    - ![](https://i.imgur.com/42hHnax.png)
    - Gradient Descent (assume only one unknown parameter)
        - (Randomly) pick an initial value W~0~
        - ![](https://i.imgur.com/bojaFb7.png)
        - Compute $\left.\dfrac{\partial L}{\partial w}\right|_{w=w^0}$
        - ![](https://i.imgur.com/WCZ6imJ.png)
        - ![](https://i.imgur.com/OUvqSqb.png)
        - ![](https://i.imgur.com/kPn3wC1.png)
        - how far W~0~ moves toward W~1~ is determined by the slope and η
            - η is set by hand; the larger it is, the larger each update and the faster learning proceeds
            - any parameter we set ourselves is called a hyperparameter
        - ![](https://i.imgur.com/EZrxai9.png)
        - update W iteratively
        - the process eventually stops for one of two possible reasons:
            1. the derivative becomes 0
            2. the user-set limit on the number of updates is reached
        - ![](https://i.imgur.com/67ooIUo.png)
        - a local minimum is the W that gives the smallest Loss found so far
        - the global minimum is the W, over the whole range of W, that gives the smallest Loss
    - Gradient Descent (two unknown parameters)
        - (Randomly) pick initial values W~0~ and b~0~
        - Compute
            - ![](https://i.imgur.com/gJeaRN4.png)
            - ![](https://i.imgur.com/3j727ol.png)

---

## Revising the model

- ![](https://i.imgur.com/X1tZL6O.png)
- the data show a 7-day cycle, so the view counts of the previous 7 days should be included in the model
- ![](https://i.imgur.com/4xBZ5ej.png)
- considering even more days
    - ![](https://i.imgur.com/wzg3PMj.png)
    - when adding more days no longer lowers L', considering more days can no longer improve the model's accuracy
- models that produce a prediction by computing feature × weight + bias are called linear models

---

# Linear models are too simple

![](https://i.imgur.com/J7PP85J.png)

- a linear model lacks flexibility: no choice of b and w can turn the blue line into the red line
- model bias
    - a limitation that comes from the model itself is called model bias

![](https://i.imgur.com/rI5nW8c.png)

- the red line can be seen as a constant term plus a set of blue functions (each blue function outputs one value when the input, i.e. the x-axis value, is below some threshold, and another value when it is above)

---

![](https://i.imgur.com/Odm4fqY.png)
![](https://i.imgur.com/zrqs6aY.png)
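The gradient-descent loop from the Optimization section can be sketched in code. This is a minimal illustration, assuming a one-feature model $y=b+Wx_1$ and mean squared error as the loss; the data, learning rate η, and update limit below are made-up example values, not from the lecture.

```python
def gradient_descent(xs, ys, eta=0.05, steps=5000):
    """Fit y = b + w*x by gradient descent on the mean squared error
    L(b, w) = (1/n) * sum((y - (b + w*x))^2)."""
    w, b = 0.0, 0.0  # (randomly) picked initial values w^0, b^0
    n = len(xs)
    for _ in range(steps):  # stop at the user-set limit on updates
        # partial derivatives dL/dw and dL/db at the current (w, b)
        dw = sum(-2 * x * (y - (b + w * x)) for x, y in zip(xs, ys)) / n
        db = sum(-2 * (y - (b + w * x)) for x, y in zip(xs, ys)) / n
        # move each parameter against its gradient, scaled by eta (η)
        w -= eta * dw
        b -= eta * db
    return w, b
```

On synthetic data generated from y = 1 + 2x, the loop recovers w ≈ 2 and b ≈ 1; a larger η takes bigger steps per update, at the risk of overshooting the minimum.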
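The idea that the red curve equals a constant plus a set of blue functions can also be sketched. A minimal illustration, assuming each blue function is a clipped-linear ramp (flat, then sloped, then flat again); the constant, slopes, and breakpoints here are invented for the example.

```python
def blue_function(x, c, b, w):
    """One 'blue' function: outputs 0 below a threshold, ramps up
    as b + w*x in between, and saturates at the constant c above."""
    return max(0.0, min(c, b + w * x))

def red_curve(x):
    """The 'red' piecewise-linear curve: a constant term
    plus a sum of blue functions."""
    return 0.5 + blue_function(x, 2.0, 0.0, 1.0) + blue_function(x, 3.0, -4.0, 2.0)
```

Each blue function contributes one "bend", so summing enough of them (plus the constant) can trace out any piecewise-linear shape a plain linear model cannot.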