# AI Security Study Group
# Resources:
1. [CleverHans blog](https://www.cleverhans.io/)
2. https://www.youtube.com/watch?v=ZmkU1YO4X7U
3. [Intro to ML Safety course](https://course.mlsafety.org/#media-popup)
4. [Adversarial Robustness: Theory and Practice tutorial](https://adversarial-ml-tutorial.org/)
5. [MLSec YouTube channel](https://www.youtube.com/c/MLSec)
6. [PRALab (Pattern Recognition and Applications Lab)](https://github.com/pralab)
7. [unica-mlsec/mlsec (Machine Learning Security course, University of Cagliari)](https://github.com/unica-mlsec/mlsec)
8. [Trusted-AI/adversarial-robustness-toolbox](https://github.com/Trusted-AI/adversarial-robustness-toolbox)
9. [mitre/advmlthreatmatrix](https://github.com/mitre/advmlthreatmatrix)
10. [tensorflow/privacy](https://github.com/tensorflow/privacy)
11. [cleverhans-lab/cleverhans](https://github.com/cleverhans-lab/cleverhans)
# Papers:
1. [Towards Deep Learning Models Resistant to Adversarial Attacks](https://arxiv.org/abs/1706.06083) (the PGD attack; see the sketch after this list)
2. [The Limitations of Deep Learning in Adversarial Settings](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7467366)
3. [Towards Evaluating the Robustness of Neural Networks](https://arxiv.org/abs/1608.04644)
4. [Fast is better than free: Revisiting adversarial training](https://arxiv.org/abs/2001.03994)
5. [A Zest of LIME: Towards Architecture-Independent Model Distances](https://openreview.net/forum?id=OUz_9TiTv9j)
6. [Data-Free Adversarial Distillation](https://arxiv.org/pdf/1912.11006.pdf)
7. [Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks](https://arxiv.org/pdf/1511.04508.pdf)
8. [Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks.](https://evademl.org/squeezing/)
9. [Stolen Memories: Leveraging Model Memorization for Calibrated White-Box Membership Inference](https://www.usenix.org/system/files/sec20-leino.pdf)
10. [Bag of Tricks for Adversarial Training](https://arxiv.org/abs/2010.00467)
11. [Certified Adversarial Robustness via Randomized Smoothing](https://arxiv.org/abs/1902.02918)
12. [Consistency Regularization for Certified Robustness of Smoothed Classifiers](https://arxiv.org/abs/2006.04062)
13. [How Should Pre-Trained Language Models Be Fine-Tuned Towards Adversarial Robustness?](https://openreview.net/pdf?id=pl2WX3riyiq)
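The method at the core of paper 1 is the projected gradient descent (PGD) attack: repeated signed-gradient steps on the loss, projected back into an L∞ ball around the clean input. Below is a minimal PyTorch sketch; the epsilon, step size, and iteration count are illustrative assumptions, not values prescribed by the paper.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L-infinity PGD attack: iterated FGSM steps projected back
    into the eps-ball around the clean input (Madry et al.)."""
    # Random start inside the eps-ball, as used in the paper.
    x_adv = x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)
    x_adv = torch.clamp(x_adv, 0, 1)

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # Ascend the loss, then project onto the eps-ball and valid pixel range.
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
            x_adv = torch.clamp(x_adv, 0, 1)
        x_adv = x_adv.detach()
    return x_adv
```

Adversarial training, as studied in papers 1, 4, and 10, then minimizes the training loss on these perturbed inputs instead of the clean ones.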
# Libraries:
1. [Trusted-AI/adversarial-robustness-toolbox](https://github.com/Trusted-AI/adversarial-robustness-toolbox) (see the usage sketch after this list)
2. [bethgelab/foolbox](https://github.com/bethgelab/foolbox)
3. [BorealisAI/advertorch](https://github.com/BorealisAI/advertorch)
4. [advboxes/AdvBox](https://github.com/advboxes/AdvBox)
5. [SecML](https://secml.readthedocs.io/en/v0.15/)
6. [DSE-MSU/DeepRobust](https://github.com/DSE-MSU/DeepRobust)
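Most of these libraries follow the same workflow: wrap a trained model in a framework-specific estimator, instantiate an attack, and call it on a batch of inputs. The sketch below uses the Adversarial Robustness Toolbox (library 1) with a toy PyTorch model; the architecture, input shape, and random data are assumptions made purely for illustration.

```python
import numpy as np
import torch
import torch.nn as nn

from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Toy convolutional classifier standing in for a real model (assumption).
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Flatten(),
    nn.Linear(16 * 28 * 28, 10),
)

# Wrap the model in an ART estimator so attacks can query gradients.
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Craft FGSM adversarial examples for a batch of (assumed) MNIST-like inputs.
x = np.random.rand(8, 1, 28, 28).astype(np.float32)
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x)
print(classifier.predict(x_adv).argmax(axis=1))
```

The other libraries expose analogous attack classes, so the same pattern carries over with minor API differences.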
# Compilations:
1. [A School for all Seasons on Trustworthy Machine Learning](https://trustworthy-machine-learning.github.io/)
# Model Zoo:
1. [RobustBench](https://github.com/RobustBench/robustbench) (see the loading sketch below)
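RobustBench doubles as a model zoo: pretrained checkpoints from its leaderboards can be pulled by name. A minimal loading sketch follows; the entry name is just one example from the CIFAR-10 L∞ leaderboard, and fetching it requires network access.

```python
from robustbench.utils import load_model

# Download a pretrained robust classifier from the CIFAR-10 L-infinity leaderboard
# (the model name here is one example entry, chosen for illustration).
model = load_model(model_name='Carmon2019Unlabeled',
                   dataset='cifar10',
                   threat_model='Linf')
model.eval()  # an ordinary PyTorch module, usable as a target for the attack sketches above
```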