# 0304

## Current Idea of New Network Architecture

* Architecture
  * Compute the mutual information between each pair of latent vectors produced by the first-layer spatial filter, and minimize it
  * $Loss = L_{MI} + L_{CE}$ (MI = mutual information, CE = cross entropy)
  * [GRL (Gradient Reversal Layer)](https://arxiv.org/abs/1505.07818): simply multiplies the back-propagated gradient by $-1$
  * [MINE (Mutual Information Neural Estimator)](https://arxiv.org/abs/1801.04062): uses **gradient ascent** to maximize an objective function $F$ parameterized by a neural network
    * The original paper seeks the tightest lower bound $I_\Theta(X;Z)$ on the true mutual information $I(X;Z)$ -> taking the negative turns this into seeking the tightest upper bound (?)
    * No implementation of our own yet ([current reference implementation](https://github.com/MasanoriYamada/Mine_pytorch))
    * The forward input seems inappropriate
  * [Deep Infomax](https://arxiv.org/abs/1808.06670): estimates the mutual information between a network's input and output
  * Current performance (mean of 10 repeats)
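The GRL mentioned above is small enough to sketch directly. A minimal version in PyTorch (an assumed framework choice, since the linked MINE reference implementation is in PyTorch): identity on the forward pass, gradient multiplied by -1 on the backward pass.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Gradient Reversal Layer: identity in forward,
    flips the sign of the incoming gradient in backward."""

    @staticmethod
    def forward(ctx, x):
        # Pass the input through unchanged.
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Multiply the back-propagated gradient by -1.
        return -grad_output

# Tiny demonstration: the gradient of sum(x) would normally be all ones.
x = torch.ones(3, requires_grad=True)
y = GradReverse.apply(x).sum()
y.backward()
print(x.grad)  # tensor([-1., -1., -1.])
```

Placed between a feature extractor and an auxiliary head, this makes the extractor *maximize* whatever loss that head minimizes, which is how the linked DANN paper uses it.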
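The Donsker-Varadhan bound that MINE maximizes, $I_\Theta(X;Z) = \mathbb{E}_{p(x,z)}[T_\theta] - \log \mathbb{E}_{p(x)p(z)}[e^{T_\theta}]$, can be sketched as below. This is an illustrative PyTorch statistics network, not the paper's exact setup: the hidden size and the batch-shuffling trick for drawing marginal samples are assumptions.

```python
import torch
import torch.nn as nn

class MineEstimator(nn.Module):
    """Statistics network T_theta for the Donsker-Varadhan lower bound."""

    def __init__(self, dim_x, dim_z, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_x + dim_z, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, x, z):
        n = x.size(0)
        # Joint samples: paired (x_i, z_i). Marginal samples: shuffle z
        # within the batch to break the pairing.
        z_shuffled = z[torch.randperm(n)]
        t_joint = self.net(torch.cat([x, z], dim=1))
        t_marg = self.net(torch.cat([x, z_shuffled], dim=1))
        # DV bound: E_joint[T] - log E_marg[exp(T)]
        # (log-mean-exp written stably as logsumexp - log n).
        return t_joint.mean() - (torch.logsumexp(t_marg, dim=0).squeeze()
                                 - torch.log(torch.tensor(float(n))))

torch.manual_seed(0)
est = MineEstimator(dim_x=4, dim_z=4)
x = torch.randn(32, 4)
z = x + 0.1 * torch.randn(32, 4)  # correlated toy data
mi_lower_bound = est(x, z)  # scalar; train T_theta by gradient ascent on it
```

To use it as the $L_{MI}$ penalty above, the main network would *minimize* this estimate while the statistics network maximizes it; the GRL is one way to fold both directions into a single backward pass.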
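The combined loss $Loss = L_{MI} + L_{CE}$ over all latent pairs can be sketched as follows. Everything here is hypothetical scaffolding: the weighting coefficient `lam`, and the stub `corr_sq`, which is only a squared-correlation proxy standing in for a real MI estimator such as MINE.

```python
import itertools
import torch
import torch.nn.functional as F

def pairwise_mi_penalty(latents, mi_estimate):
    """Sum an MI estimate over all unordered pairs of latent vectors."""
    return sum(mi_estimate(a, b) for a, b in itertools.combinations(latents, 2))

def corr_sq(a, b):
    # Stub dependence measure for illustration only: squared Pearson
    # correlation of the flattened latents (NOT true mutual information).
    a = a.flatten() - a.mean()
    b = b.flatten() - b.mean()
    return (torch.dot(a, b) / (a.norm() * b.norm() + 1e-8)) ** 2

torch.manual_seed(0)
logits = torch.randn(8, 4)                     # stand-in classifier output
labels = torch.randint(0, 4, (8,))             # stand-in targets
latents = [torch.randn(8, 16) for _ in range(3)]  # stand-in filter outputs
lam = 0.1  # hypothetical weight on the MI term
loss = F.cross_entropy(logits, labels) + lam * pairwise_mi_penalty(latents, corr_sq)
```

Swapping `corr_sq` for a trained `MineEstimator` would recover the intended $L_{MI}$ term.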