NTU Machine Learning HW3
===

## 1. Convolution

### Question



### Answer

Assume the dilation is $1$ in both the height and width directions.

\begin{equation}
H_{out} = \left\lfloor\dfrac{H_{in} + 2p_1 - k_1}{s_1}\right\rfloor + 1\\
W_{out} = \left\lfloor\dfrac{W_{in} + 2p_2 - k_2}{s_2}\right\rfloor + 1
\end{equation}

## 2. Batch Normalization

### Question



### Answer

\begin{equation}
\dfrac{∂l}{∂γ} = \sum_{i=1}^m\dfrac{∂l}{∂y_i}\hat{x}_i \\
\dfrac{∂l}{∂β} = \sum_{i=1}^m\dfrac{∂l}{∂y_i} \\
\dfrac{∂l}{∂\hat{x}_i} = \dfrac{∂l}{∂y_i}γ \\
\dfrac{∂l}{∂σ^2_B} = \sum^m_{i=1}\dfrac{∂l}{∂\hat{x}_i}(x_i - μ_B)\dfrac{-1}{2}(σ^2_B + ϵ)^{-3/2} \\
\dfrac{∂l}{∂μ_B} = \left(\sum_{i=1}^m\dfrac{∂l}{∂\hat{x}_i}\dfrac{-1}{\sqrt{σ^2_B + ϵ}}\right) + \dfrac{∂l}{∂σ^2_B}\dfrac{\sum_{i=1}^m-2(x_i - μ_B)}{m} \\
\dfrac{∂l}{∂x_i} = \dfrac{∂l}{∂\hat{x}_i}\dfrac{1}{\sqrt{σ^2_B + ϵ}} + \dfrac{∂l}{∂σ^2_B}\dfrac{2(x_i - μ_B)}{m} + \dfrac{∂l}{∂μ_B}\dfrac{1}{m}
\end{equation}

## 3. Softmax and Cross Entropy

### Question



### Answer

\begin{equation}
\dfrac{∂L_t}{∂z_t} = -\sum_{i=1}\dfrac{∂\, y_i\log(\hat{y}_i)}{∂z_t} \\
= -\sum_{i=1}y_i\dfrac{∂\log(\hat{y}_i)}{∂z_t} \\
= -\sum_{i=1}y_i\dfrac{1}{\hat{y}_i}\dfrac{∂\hat{y}_i}{∂z_t} \\
= -\dfrac{y_t}{\hat{y}_t}\dfrac{∂\hat{y}_t}{∂z_t} - \sum_{i\neq{t}}\dfrac{y_i}{\hat{y}_i}\dfrac{∂\hat{y}_i}{∂z_t} \\
= -\dfrac{y_t}{\hat{y}_t}\hat{y}_t(1 - \hat{y}_t) - \sum_{i\neq{t}}\dfrac{y_i}{\hat{y}_i}(-\hat{y}_i\hat{y}_t) \\
= -y_t + y_t\hat{y}_t + \sum_{i\neq{t}}y_i\hat{y}_t \\
= -y_t + \sum_{i=1}y_i\hat{y}_t \\
= -y_t + \hat{y}_t\sum_{i=1}y_i \\
= \hat{y}_t - y_t
\end{equation}

The last step uses $\sum_{i=1}y_i = 1$, since $y$ is a one-hot label.

## 4. Adaptive learning rate based optimization

### Question



### Answer

#### (a)

Derive $m^t$ by expanding the recursion (with $m^0 = 0$):

\begin{equation}
m^t = β_1m^{t-1} + (1 - β_1) g^t \\
= β_1m^{t-1} + g^t - β_1g^t \\
= β_1^2m^{t-2} + β_1g^{t-1} - β^2_1g^{t-1} + g^t - β_1g^t \\
= \ldots \\
= \sum_{i=1}^t \left(β_1^{t-i}g^i - β_1^{t-i+1}g^i\right) \\
= \sum_{i=1}^t (β_1^{t-i} - β_1^{t-i+1})g^i \\
= (1 - β_1)\sum_{i=1}^tβ_1^{t-i}g^i
\end{equation}

Derive $v^t$ the same way (with $v^0 = 0$):

\begin{equation}
v^t = β_2v^{t-1} + (1 - β_2)(g^t)^2 \\
= β_2v^{t-1} + (g^t)^2 - β_2(g^t)^2 \\
= \ldots \\
= \sum_{i=1}^t \left(β_2^{t-i}(g^i)^2 - β_2^{t-i+1}(g^i)^2\right) \\
= \sum_{i=1}^t (β_2^{t-i} - β_2^{t-i+1})(g^i)^2 \\
= (1 - β_2)\sum_{i=1}^tβ_2^{t-i}(g^i)^2
\end{equation}

From the two expressions above, $A=1-β_1,B_i=β_1^{t-i},C=1-β_2, D_i=β_2^{t-i}$.

#### (b)

If $β_1=0$ and $β_2\to1$, then $m^t = g^t$ and $v^t = \displaystyle\lim_{β_2\to1}(1 - β_2)\sum_{i=1}^tβ_2^{t-i}(g^i)^2$.

Substitute into the bias-corrected estimates $\hat{m}^t$ and $\hat{v}^t$: $\hat{m}^t=\dfrac{m^t}{1-β_1^t}=g^t$, and since $\dfrac{1-β_2}{1-β_2^t}=\dfrac{1}{1+β_2+\cdots+β_2^{t-1}}\to\dfrac{1}{t}$ and $β_2^{t-i}\to1$ as $β_2\to1$, we get $\hat{v}^t=\displaystyle\lim_{β_2\to1}\dfrac{v^t}{1-β_2^t}=\dfrac{1}{t}\sum_{i=1}^t(g^i)^2$.

Substituting these results into the Adam update rule with the annealed learning rate $η_t = ηt^{-1/2}$, the update becomes $w^t=w^{t-1} - \dfrac{ηt^{-1/2}}{\sqrt{\frac{1}{t}\sum_{i=1}^{t}(g^i)^2}}g^t = w^{t-1} - \dfrac{η}{\sqrt{\sum_{i=1}^{t}(g^i)^2}}g^t$, which is exactly the Adagrad update rule. Numerical sanity checks for all four answers are sketched in the appendix below.
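## Appendix: Numerical sanity checks

The sketches below are not part of the submitted answers; they only check the derivations above numerically. All shapes, seeds, and values are made up for illustration.

### 1. Convolution output size

A minimal check of the output-size formula from Section 1 against PyTorch's `nn.Conv2d` (dilation left at its default of 1; the channel counts and spatial sizes are arbitrary):

```python
import torch
import torch.nn as nn

# Output-size formula from Section 1 (dilation = 1):
#   out = floor((in + 2*p - k) / s) + 1
def conv_out_size(size_in, pad, kernel, stride):
    return (size_in + 2 * pad - kernel) // stride + 1

# Hypothetical example values, not from the assignment.
H_in, W_in = 32, 28
k1, k2, s1, s2, p1, p2 = 5, 3, 2, 1, 2, 1

conv = nn.Conv2d(in_channels=3, out_channels=8,
                 kernel_size=(k1, k2), stride=(s1, s2), padding=(p1, p2))
out = conv(torch.randn(1, 3, H_in, W_in))

assert out.shape[2] == conv_out_size(H_in, p1, k1, s1)  # H_out = 16
assert out.shape[3] == conv_out_size(W_in, p2, k2, s2)  # W_out = 28
print(out.shape)  # torch.Size([1, 8, 16, 28])
```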
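### 2. Batch Normalization backward

A sketch that implements the backward formulas from Section 2 for a single feature over a batch of $m$ samples and compares them with autograd. The loss $l = \sum_i y_i^2$ is an arbitrary stand-in; any scalar loss would do:

```python
import torch

torch.manual_seed(0)
m, eps = 16, 1e-5
x = torch.randn(m, dtype=torch.float64, requires_grad=True)
gamma = torch.tensor(1.5, dtype=torch.float64, requires_grad=True)
beta = torch.tensor(0.3, dtype=torch.float64, requires_grad=True)

# Forward pass for one feature, following the question's notation.
mu = x.mean()
var = ((x - mu) ** 2).mean()              # biased batch variance
x_hat = (x - mu) / torch.sqrt(var + eps)
y = gamma * x_hat + beta

loss = (y ** 2).sum()                     # arbitrary scalar loss l
dy, dx_auto, dgamma_auto, dbeta_auto = torch.autograd.grad(
    loss, [y, x, gamma, beta])

# Manual backward pass using the formulas from Section 2.
dgamma = (dy * x_hat).sum()
dbeta = dy.sum()
dx_hat = dy * gamma
dvar = (dx_hat * (x - mu)).sum() * (-0.5) * (var + eps) ** (-1.5)
dmu = (dx_hat * (-1.0 / torch.sqrt(var + eps))).sum() \
    + dvar * (-2.0 * (x - mu)).sum() / m
dx = dx_hat / torch.sqrt(var + eps) + dvar * 2.0 * (x - mu) / m + dmu / m

assert torch.allclose(dgamma, dgamma_auto)
assert torch.allclose(dbeta, dbeta_auto)
assert torch.allclose(dx, dx_auto)
print("manual BN gradients match autograd")
```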
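### 3. Softmax and Cross Entropy gradient

A quick check that the gradient of the cross-entropy loss with respect to the logits is $\hat{y} - y$, as derived in Section 3 (the class count and label index are arbitrary):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n = 5                                   # number of classes (arbitrary)
z = torch.randn(n, dtype=torch.float64, requires_grad=True)
y = torch.zeros(n, dtype=torch.float64)
y[2] = 1.0                              # one-hot ground-truth label

y_hat = F.softmax(z, dim=0)
loss = -(y * torch.log(y_hat)).sum()    # cross entropy, as in Section 3
loss.backward()

# The derivation predicts dL/dz = y_hat - y.
assert torch.allclose(z.grad, y_hat.detach() - y)
print(z.grad)
```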
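### 4. Adam reduces to Adagrad

A numerical check of part (b) of Section 4: with $β_1 = 0$, $β_2$ very close to $1$, and the annealed learning rate $η_t = ηt^{-1/2}$, the Adam step matches the Adagrad step. The gradient sequence is random, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50
g = rng.normal(size=T)            # made-up gradient sequence g^1 ... g^T
eta = 0.1
beta2 = 1 - 1e-8                  # beta2 -> 1; beta1 = 0 so m^t = g^t

v = 0.0
for t in range(1, T + 1):
    gt = g[t - 1]
    v = beta2 * v + (1 - beta2) * gt ** 2
    v_hat = v / (1 - beta2 ** t)                            # Adam bias correction
    adam_step = -(eta / np.sqrt(t)) * gt / np.sqrt(v_hat)   # eta_t = eta / sqrt(t)
    adagrad_step = -eta * gt / np.sqrt((g[:t] ** 2).sum())
    assert np.isclose(adam_step, adagrad_step)

print("Adam (beta1=0, beta2->1, eta_t=eta/sqrt(t)) matches Adagrad")
```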