4.2 Backward Differentiation

Supplement: Forward Differentiation

  • Perturb each parameter by a tiny amount ($f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$) and recompute the loss of the entire network
  • Drawback: when the network is large (many layers, many neurons per layer), the computational cost becomes prohibitive
  • e.g., a 2-layer neural network (using MNIST):
    • sample_size = 28*28 pixels, batch = 500, hidden layer: 100 neurons, output_layer: 10 neurons
    • Number of parameters:
      • From the book: 784*100 + 100 + 100*10 + 10 = 79,510
      • "2-layer neural network" here means the hidden layer plus the output layer.
      • The hidden layer's weight W is a 784x100 matrix and produces 100 output values.
      • The output layer's weight is a 100x10 matrix and produces 10 output values.
      • Total parameters: 784*100 + 100 + 100*10 + 10 = 79,510
      • Computations needed for a single update: 2 * 79,510 forward passes
  • As the number of layers grows, this method becomes far too inefficient to be practical

Forward differentiation must be repeated separately for every single parameter, so the computational load is far too large (see the sketch below).
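As a concrete illustration, here is a minimal NumPy sketch of this numerical (forward) approach. The names `numerical_gradient` and `loss_fn` are illustrative, not from the book; the point is only that the cost grows as two full forward passes per parameter.

```python
import numpy as np

def numerical_gradient(loss_fn, params, h=1e-4):
    """Central-difference gradient: two full forward passes per parameter.

    Assumes `params` is a contiguous array so ravel() returns a view
    that can be mutated in place.
    """
    grad = np.zeros_like(params)
    flat, g = params.ravel(), grad.ravel()
    for i in range(flat.size):
        orig = flat[i]
        flat[i] = orig + h
        loss_plus = loss_fn()    # forward pass over the whole network
        flat[i] = orig - h
        loss_minus = loss_fn()   # and another full forward pass
        flat[i] = orig           # restore the parameter
        g[i] = (loss_plus - loss_minus) / (2 * h)
    return grad

# For the MNIST example above: W1 alone has 784*100 = 78,400 entries,
# so one update would already need 2 * 78,400 forward passes for that
# matrix, and 2 * 79,510 for the whole network.
```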

4.2.1 Forward Computation, Backward Differentiation

  • Treat each layer as a vector-valued function of several variables
  • Forward computation (evaluation):
    • $x \to g(x) \to h(g(x)) \to k(h(g(x))) = f(x)$
  • Backward computation (differentiation):
    • $f'(h) = k'(h) \to f'(g) = k'(h)\,h'(g) \to f'(x) = k'(h)\,h'(g)\,g'(x)$
    • When moving from one layer to the next, the factors already computed are reused rather than recomputed, so this is much more efficient
  • Backpropagation (for a neuron with $z = wx + b$ and $a = \sigma(z)$; see the single-neuron sketch after this list):
    • $L'(w) = L'(a)\,a'(z)\,z'(w) = L'(a)\,\sigma'(z)\,x$
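A minimal sketch of this chain, assuming a single sigmoid neuron with a squared-error loss $L = \frac{1}{2}(a - y)^2$ (the loss choice and all variable names are illustrative, not the book's):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass: x -> z = w*x + b -> a = sigmoid(z) -> L = (a - y)^2 / 2
x, y = 0.5, 1.0
w, b = 0.8, 0.1
z = w * x + b
a = sigmoid(z)
L = 0.5 * (a - y) ** 2

# Backward pass: each chain-rule factor is computed once and reused
dL_da = a - y               # L'(a)
da_dz = a * (1.0 - a)       # sigma'(z), using the cached activation a
dL_dz = dL_da * da_dz       # shared factor
dL_dw = dL_dz * x           # L'(w) = L'(a) * sigma'(z) * x
dL_db = dL_dz * 1.0         # z'(b) = 1
```

Note how `dL_dz` is computed once and then reused for both `dL_dw` and `dL_db`; this reuse of cached forward values is exactly what makes the backward pass cheap.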

4.2.2 Computational Graphs

  • Represent the relationships between variables and functions as a flow diagram
  • node: a variable (a node may also store other variables)
  • edge: an operation

(Figure: example of a computational graph. Source: https://samaelchen.github.io/deep_learning_step2/2023_9_16)

  • A convenient, intuitive way to express the flow of a computation (a minimal node sketch follows)
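Most implementations (e.g. the style popularized by Deep Learning from Scratch) flip the labeling above and make each operation a node object that caches the variables flowing through it; the caching idea is the same either way. A minimal illustrative sketch, with `MulNode` as a hypothetical name:

```python
class MulNode:
    """A multiplication step on a computational graph.

    Caches the values that flowed in during forward() so that
    backward() can reuse them instead of recomputing anything.
    """
    def forward(self, x, y):
        self.x, self.y = x, y   # cache inputs (cf. "a node may store other variables")
        return x * y

    def backward(self, dout):
        # d(x*y)/dx = y and d(x*y)/dy = x, each scaled by the upstream gradient
        return dout * self.y, dout * self.x

node = MulNode()
z = node.forward(2.0, 3.0)     # forward: z = 6.0
dx, dy = node.backward(1.0)    # backward: dx = 3.0, dy = 2.0
```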

4.2.3 Gradient of the Loss with Respect to the Output

  • Conclusion: the gradient of the loss with respect to the output $Z$ is
    $\frac{1}{m}(F - Y)$
    where $F$ is the network's output, $Y$ the target, and $m$ the number of samples in the batch

1. Gradient of the Binary Cross-Entropy Loss with Respect to the Output

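The derivation figures are missing from the source, but the standard result can be reconstructed. Assuming a sigmoid output $a_i = \sigma(z_i)$ and a loss averaged over $m$ samples:

```latex
L = -\frac{1}{m}\sum_{i=1}^{m}\bigl[\,y_i \log a_i + (1 - y_i)\log(1 - a_i)\,\bigr],
\qquad a_i = \sigma(z_i)

\frac{\partial L}{\partial a_i} = \frac{1}{m}\cdot\frac{a_i - y_i}{a_i\,(1 - a_i)},
\qquad
\frac{\partial L}{\partial z_i}
  = \frac{\partial L}{\partial a_i}\,\sigma'(z_i)
  = \frac{1}{m}\,(a_i - y_i)
```

Because $\sigma'(z_i) = a_i(1 - a_i)$ cancels the denominator, the result collapses to the clean $\frac{1}{m}(F - Y)$ form stated in the conclusion above.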

2. Gradient of the Mean-Squared-Error Loss with Respect to the Output

  • Multi-sample (batch) case: the error is the squared Euclidean distance between the output vector and the target vector, averaged over the batch (mean squared error); written out below
  • The closer the output gets to the target, the smaller the gradient becomes
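Written out explicitly (assuming the common convention that includes a factor of $\frac{1}{2}$ so it cancels on differentiation):

```latex
L = \frac{1}{2m}\sum_{i=1}^{m} \lVert f_i - y_i \rVert^2,
\qquad
\frac{\partial L}{\partial f_i} = \frac{1}{m}\,(f_i - y_i)
```

The gradient is proportional to $f_i - y_i$, which is why it shrinks as the output approaches the target.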

3. Gradient of the Multi-Class Cross-Entropy Loss with Respect to the Output

  • Analogous to the binary cross-entropy case: with a softmax output, the gradient again reduces to $\frac{1}{m}(F - Y)$ (a numerical check follows)
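A quick NumPy check of that claim for softmax outputs; the setup (batch of 4, 10 classes, variable names) is illustrative:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))   # shift for numerical stability
    return e / e.sum(axis=1, keepdims=True)

def loss(Z, Y):
    """Multi-class cross-entropy averaged over the batch."""
    return -(Y * np.log(softmax(Z))).sum() / Z.shape[0]

rng = np.random.default_rng(0)
Z = rng.normal(size=(4, 10))                 # batch of 4 samples, 10 classes
Y = np.eye(10)[rng.integers(0, 10, size=4)]  # one-hot targets

analytic = (softmax(Z) - Y) / Z.shape[0]     # the claimed (1/m)(F - Y)

# Central-difference check of a single entry
h = 1e-5
Zp, Zm = Z.copy(), Z.copy()
Zp[0, 0] += h
Zm[0, 0] -= h
numeric = (loss(Zp, Y) - loss(Zm, Y)) / (2 * h)
print(np.isclose(numeric, analytic[0, 0]))   # True
```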

4.2.4 Backward Differentiation for a 2-Layer Neural Network

  • See the book, pp. 4-48 to 4-54, for the full derivation

1. Backward Differentiation for a Single Sample

2. Vectorized Backward Differentiation for Multiple Samples
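The book's pages aren't reproduced here, but a vectorized backward pass for the 784-100-10 network from the supplement might look as follows. This is a sketch under assumptions: the sigmoid hidden layer, softmax output, and all variable names are illustrative, not the book's code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
m = 500                                        # batch size from the MNIST example
X = rng.normal(size=(m, 784))                  # stand-in for a batch of images
Y = np.eye(10)[rng.integers(0, 10, size=m)]    # one-hot targets

W1, b1 = rng.normal(size=(784, 100)) * 0.01, np.zeros(100)
W2, b2 = rng.normal(size=(100, 10)) * 0.01, np.zeros(10)

# Forward pass; every intermediate value is kept for the backward pass
Z1 = X @ W1 + b1
A1 = sigmoid(Z1)
Z2 = A1 @ W2 + b2
F = softmax(Z2)

# Backward pass: one sweep produces all 79,510 parameter gradients
dZ2 = (F - Y) / m             # (1/m)(F - Y), the conclusion of 4.2.3
dW2 = A1.T @ dZ2              # 100 x 10
db2 = dZ2.sum(axis=0)
dA1 = dZ2 @ W2.T              # propagate back through the output layer
dZ1 = dA1 * A1 * (1.0 - A1)   # elementwise sigma'(Z1) = A1(1 - A1)
dW1 = X.T @ dZ1               # 784 x 100
db1 = dZ1.sum(axis=0)
```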

Question: what is the advantage of backward differentiation? (Why not use forward differentiation? Per the supplement above: forward differentiation costs roughly two forward passes per parameter, while backward differentiation reuses cached forward values to obtain every parameter's gradient in a single forward-plus-backward sweep.)