Plug and Play AI (PnPAI)
===

GitHub: https://github.com/timcsy/PnPAI

Implements some commonly used models that can be plugged directly into other, more complex frameworks.

NN
--

CNN
---

Notes
---

### PyTorch

- Loss functions
	- Perform Maximum Likelihood Estimation (MLE)
	- Minimize the cross entropy $H(p_{data}, p_\theta)$: encode $p_{data}$ using $p_\theta$; the shorter the code length, the better
	- $\theta^* = \arg\min\limits_\theta H(p_{data}, p_\theta) = \arg\min\limits_\theta E_{(x,y) \sim p_{data}}[-\log p_\theta (y|x)]$
	- Regression problems (e.g. linear regression)
		- Assume the output distribution $p_\theta (y|x)$ is Gaussian
		- The corresponding final activation function is the identity
		- The cross entropy simplifies to $\displaystyle E_{(x,y) \sim p_{data}}[\frac{1}{2}(y - \hat O_\theta (x))^2]$
		- nn.MSELoss (Mean Squared Error)
			- Assumes the model output is $\hat O_\theta (x)$
			- $\displaystyle E_{(x,y) \sim p_{data}}[\frac{1}{2}(y - \hat O_\theta (x))^2]$
	- Classification problems
		- Assume the output distribution $p_\theta (y|x)$ is Categorical or Multinomial
		- The corresponding final activation function is Softmax
		- The cross entropy simplifies to $E_{(x,y) \sim p_{data}}[-\log Softmax(\hat O_\theta (x))]$
		- Terminology
			- logits $\hat O_\theta (x)$: the model's final output, before any activation function
				- $p_\theta (y|x) = Softmax(\hat O_\theta (x))$
				- $\hat O_\theta (x) = logit(p_\theta (y|x))$
				- In short: the output before it has been normalized into probabilities
			- log likelihood: the likelihood $p_\theta (y|x)$ with a log applied
				- $z = \log p_\theta (y|x) = \log Softmax(\hat O_\theta (x))$
		- nn.NLLLoss (Negative Log-Likelihood)
			- Assumes the model output is the log likelihood $z$
			- $E_{(x,y) \sim p_{data}}[-z]$
		- nn.LogSoftmax (Log Softmax)
			- $z = \log Softmax(\hat O_\theta (x))$
		- nn.CrossEntropyLoss (Cross Entropy)
			- Assumes the model output is the logits $\hat O_\theta (x)$
			- $E_{(x,y) \sim p_{data}}[-\log Softmax(\hat O_\theta (x))]$
			- Combines nn.LogSoftmax and nn.NLLLoss
			- PyTorch expects the ground-truth y as class indices, not one-hot vectors
- Tools
	- [tqdm: A Fast, Extensible Progress Bar for Python and CLI](https://github.com/tqdm/tqdm)
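The classification-loss relationships above can be checked numerically: `nn.CrossEntropyLoss` applied to raw logits should match `nn.NLLLoss` applied to the output of `nn.LogSoftmax`, with targets given as class indices. A minimal sketch (the logit values and batch size here are made up for illustration):

```python
import torch
import torch.nn as nn

# Hypothetical logits O_theta(x) for a batch of 3 samples, 4 classes.
logits = torch.tensor([[2.0, 0.5, -1.0, 0.1],
                       [0.3, 1.2, 0.7, -0.5],
                       [-0.2, 0.0, 2.5, 1.1]])

# Ground truth as class *indices*, not one-hot vectors.
target = torch.tensor([0, 1, 2])

# nn.CrossEntropyLoss consumes logits directly...
ce = nn.CrossEntropyLoss()(logits, target)

# ...and is equivalent to nn.LogSoftmax followed by nn.NLLLoss:
# z = log Softmax(O_theta(x)), loss = E[-z] at the target index.
log_probs = nn.LogSoftmax(dim=1)(logits)
nll = nn.NLLLoss()(log_probs, target)

assert torch.allclose(ce, nll)
```

Note that `nn.NLLLoss` does not take the log itself; it only gathers and negates, so passing raw logits or probabilities into it (instead of log-probabilities) silently gives wrong values.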