# Decision-based adversarial attacks: Reliable attacks against black-box machine learning models

###### tags: `paper` `attack` `evasion attack` `boundary attack` `adversarial`

## Source

- [[Paper summary] Boundary Attack](https://zhuanlan.zhihu.com/p/377633699)
- [Maximum entropy model](https://luweikxy.gitbook.io/machine-learning-notes/linear-model/maximum-entropy-model#%E7%9B%B4%E8%A7%82%E7%90%86%E8%A7%A3%E6%9C%80%E5%A4%A7%E7%86%B5)
- [Survey of recent attack and defense methods](https://zhuanlan.zhihu.com/p/135374750)
- [Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks](https://www.usenix.org/conference/usenixsecurity19/presentation/demontis)
  - [Chinese write-up](https://blog.csdn.net/qq_26130991/article/details/109516099)
- [NIPS 2018 untargeted-track winner (GitHub)](https://github.com/luizgh/avc_nips_2018)
- [Delving into Transferable Adversarial Examples and Black-box Attacks](https://blog.csdn.net/qq_35414569/article/details/82383788?ops_request_misc=%257B%2522request%255Fid%2522%253A%2522162615837816780271525811%2522%252C%2522scm%2522%253A%252220140713.130102334..%2522%257D&request_id=162615837816780271525811&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~all~baidu_landing_v2~default-1-82383788.pc_search_result_before_js&utm_term=%E6%94%BB%E5%87%BB%E8%BF%81%E7%A7%BB%E6%80%A7)
- [Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses](https://arxiv.org/abs/1811.09600)
  - Improves this paper's boundary attack; won the [NIPS 2018]() untargeted track
  - [HackMD notes](https://hackmd.io/719H2FT3RFu9N_Jh7J_F4Q)
- [An Efficient Pre-processing Method to Eliminate Adversarial Effects](https://arxiv.org/pdf/1905.08614.pdf)
  - Applies WebP compression to the image, then flips it left-right
- Adversarial examples are transferable
  - https://zhuanlan.zhihu.com/p/217683614
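The core idea of the boundary attack referenced above can be sketched as a simple random walk: start from a point the model already misclassifies, take a random perturbation, then step back toward the original input, keeping only moves that remain adversarial. The sketch below is a simplified illustration, not the paper's full algorithm (the real attack projects the perturbation onto the hypersphere around the original and adapts `delta`/`eps` from the acceptance rate); `is_adversarial` is a hypothetical stand-in for a black-box label query.

```python
import numpy as np

rng = np.random.default_rng(0)

def is_adversarial(x):
    # Hypothetical black-box decision oracle: only the predicted label
    # (here, "misclassified or not") is observable, never gradients.
    return x.sum() > 10.0

def boundary_attack(original, starting_point, steps=200, delta=0.1, eps=0.1):
    """Minimal sketch of a decision-based boundary attack: random
    perturbation scaled to the current distance, then a contraction
    toward the original, accepting only moves that stay adversarial."""
    x = starting_point.copy()
    for _ in range(steps):
        # 1. Random perturbation, scaled relative to the current distance
        #    (the full attack projects this step orthogonally).
        direction = rng.normal(size=x.shape)
        direction /= np.linalg.norm(direction)
        candidate = x + delta * np.linalg.norm(x - original) * direction
        # 2. Contract toward the original to shrink the perturbation.
        candidate = candidate + eps * (original - candidate)
        # 3. Keep the move only if the model still misclassifies it.
        if is_adversarial(candidate):
            x = candidate
    return x

original = np.zeros(16)        # the clean input
start = np.full(16, 2.0)       # an initial adversarial point (sum = 32 > 10)
adv = boundary_attack(original, start)
```

Each accepted step strictly reduces the distance to the original (the contraction outweighs the bounded random step), so the walk slides along the decision boundary toward the clean input while every kept iterate stays misclassified.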