# DeepFool

###### tags: `paper` `attack` `evasion attack` `deepfool` `fgsm`

## Source

- https://zhuanlan.zhihu.com/p/220421894
- https://blog.csdn.net/c9Yv2cf9I06K2A9E/article/details/108067350
- Methods compared against in the paper's experiments: (https://zhuanlan.zhihu.com/p/217683614) (https://hackmd.io/5fVvCATDSsafz3Bcb6CmpQ)
- Notes on robustness: (https://towardsdatascience.com/why-robustness-is-not-enough-for-safety-and-security-in-machine-learning-1a35f6706601)

### 3.3 Extension to the $l_p$ norm



Applying [Hölder's inequality](https://en.wikipedia.org/wiki/H%C3%B6lder%27s_inequality#Vector-valued_functions):



At first it is not obvious why $|w'_{\hat{l}}|^{q-1}$ and $\mathrm{sign}(w'_{\hat{l}})$ appear in $r_i$ (it looks like a scalar bound being turned into a vector). This follows from the equality condition of Hölder's inequality: the bound is attained exactly when $|r_i|^p \propto |w'_i|^q$ with the sign of each $r_i$ matching that of $w'_i$, i.e. when $r \propto |w'_{\hat{l}}|^{q-1} \odot \mathrm{sign}(w'_{\hat{l}})$ elementwise.

The vector form of Hölder's inequality states

$\lVert fg \rVert_{1} \le \lVert f \rVert_{p}\lVert g \rVert_{q}$

$\frac{ \lVert fg \rVert_{1} }{ \lVert g \rVert_{q} } \le \lVert f \rVert_{p}$

Substituting $f \rightarrow r$, $g \rightarrow w'_k$ (so that $|f'_k| = |{w'_k}^T r| \le \lVert w'_k \odot r \rVert_1$ on the decision boundary) and letting $p \rightarrow \infty$, hence $q \rightarrow 1$:

$\frac{ |f'_k| }{ \lVert w'_k \rVert_{1} } \le \lVert r \rVert_{\infty}$

- L-BFGS, mentioned in ref. 18 of the paper:
  - [A plain-language walkthrough of that paper's method](https://blog.csdn.net/kearney1995/article/details/79661429)
  - [Alink漫谈(十一):线性回归 之 L-BFGS优化](https://www.cnblogs.com/rossiXYZ/p/13289634.html#alink%E6%BC%AB%E8%B0%88%E5%8D%81%E4%B8%80-%EF%BC%9A%E7%BA%BF%E6%80%A7%E5%9B%9E%E5%BD%92-%E4%B9%8B-l-bfgs%E4%BC%98%E5%8C%96)
  - [On first-order derivatives: the Jacobian matrix](https://zh.wikipedia.org/wiki/%E9%9B%85%E5%8F%AF%E6%AF%94%E7%9F%A9%E9%98%B5)
- CNN models mentioned in the paper's experiments:
  - [卷積神經網絡 CNN 經典模型 — LeNet、AlexNet、VGG、NiN with Pytorch code](https://medium.com/ching-i/%E5%8D%B7%E7%A9%8D%E7%A5%9E%E7%B6%93%E7%B6%B2%E7%B5%A1-cnn-%E7%B6%93%E5%85%B8%E6%A8%A1%E5%9E%8B-lenet-alexnet-vgg-nin-with-pytorch-code-84462d6cf60c)

## 4 Experimental results



### 4.1 Setup



- Why not just use $\lVert\hat{r}(x)\rVert_2$ directly? (Presumably to normalize for the scale of each input?)
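The closed-form $l_p$ perturbation from §3.3, $r \propto |w'|^{q-1} \odot \mathrm{sign}(w')$, can be sanity-checked numerically against the Hölder lower bound. A minimal sketch (the function name and the toy values of $w'$ and $f'$ are illustrative, not from the paper):

```python
import numpy as np

def min_lp_perturbation(w, f_prime, p):
    """Smallest-l_p perturbation r reaching the affine boundary w^T r = |f'|,
    built from the Hölder equality condition: r ∝ |w|^(q-1) * sign(w)."""
    q = p / (p - 1.0)                               # dual exponent, 1/p + 1/q = 1
    scale = abs(f_prime) / np.sum(np.abs(w) ** q)   # |f'| / ||w||_q^q
    return scale * np.abs(w) ** (q - 1) * np.sign(w)

# Toy affine classifier difference: gradient w' and score gap f'.
w = np.array([1.0, -2.0, 3.0])
f_prime = 0.6
p = 3.0
q = p / (p - 1.0)

r = min_lp_perturbation(w, f_prime, p)

# The perturbation reaches the boundary: w^T r = |f'| ...
assert np.isclose(w @ r, abs(f_prime))
# ... and attains the Hölder lower bound |f'| / ||w'||_q on ||r||_p.
assert np.isclose(np.linalg.norm(r, p), abs(f_prime) / np.linalg.norm(w, q))
```

As $p \rightarrow \infty$ the exponent $q - 1 \rightarrow 0$, so the formula degenerates to $r \propto \mathrm{sign}(w')$, the FGSM-style sign perturbation.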