| Work | Paper | Task | Resources | Year |
|---|---|---|---|---|
| L-BFGS | Intriguing properties of neural networks [pdf] | Classification | [github(unofficial)] | 2013 |
| FGSM | Explaining and Harnessing Adversarial Examples [pdf] | Classification | [github(unofficial)] | 2014 |
| JSMA | The Limitations of Deep Learning in Adversarial Settings [pdf] | Classification | [github(unofficial)] | 2016 |
| DeepFool | DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks [pdf] | Classification | [github] [github(unofficial)] | 2016 |
| PGD | Towards Deep Learning Models Resistant to Adversarial Attacks [pdf] | Classification | [github(MNIST)] [github(CIFAR10)] [github(unofficial)] | 2017 |
| C&W | Towards Evaluating the Robustness of Neural Networks [pdf] | Classification | [official github] [ART] [Torchattacks] [Foolbox] | 2017 |
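
The gradient-based attacks above share a common core: take the gradient of the loss with respect to the input and step in a direction that increases it. Below is a minimal PyTorch sketch of FGSM (one signed-gradient step) and PGD (iterated FGSM with projection back onto the epsilon-ball); it assumes a differentiable classifier over images in [0, 1], and the function names and default budgets (eps = 8/255) are illustrative, not taken from any of the linked repos.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 8 / 255) -> torch.Tensor:
    """One-step FGSM (Goodfellow et al., 2014): move each pixel by
    epsilon in the direction of the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    # Clamp back to the valid [0, 1] image range.
    return x_adv.clamp(0.0, 1.0).detach()


def pgd_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
               epsilon: float = 8 / 255, alpha: float = 2 / 255,
               steps: int = 10) -> torch.Tensor:
    """PGD (Madry et al., 2017): random start inside the L-inf ball,
    then iterated FGSM steps, each projected back onto the ball."""
    x = x.clone().detach()
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project onto the epsilon-ball around x, then the pixel range.
        x_adv = (x + (x_adv - x).clamp(-epsilon, epsilon)).clamp(0.0, 1.0)
    return x_adv.detach()
```

In practice the libraries linked in the table (Torchattacks, Foolbox, ART) ship tested implementations of these and the other attacks listed above.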

| Work | Paper | Resources | Year |
|---|---|---|---|
| CIFAR10.1 | Do CIFAR-10 Classifiers Generalize to CIFAR-10? [pdf] | [github] | 2018 |
| ImageNetV2 | Do ImageNet Classifiers Generalize to ImageNet? [pdf] | [github] | 2019 |
| ObjectNet | ObjectNet: A large-scale bias-controlled dataset for pushing the limits of object recognition models [pdf] | [official site] | 2019 |
| ImageNet-A, ImageNet-O | Natural Adversarial Examples [pdf] | [github] | 2021 |
| ImageNet-Vid-Robust, YTBB-Robust | Do Image Classifiers Generalize Across Time? [pdf] | [download link] | 2021 |
| ImageNet-R | The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization [pdf] | [github] | 2021 |
| COCO-O | COCO-O: A Benchmark for Object Detectors under Natural Distribution Shifts [pdf] | [github] | 2023 |
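
These benchmarks are drop-in test sets: a model trained on the original distribution is evaluated unchanged on the shifted data. A minimal sketch of such an evaluation in PyTorch, assuming the benchmark has been downloaded into an ImageFolder-style directory (the path below is hypothetical, and one caveat applies: benchmarks like ImageNet-R cover only a subset of the 1,000 ImageNet classes, so the folder indices must be remapped to the model's label space before comparing):

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader


@torch.no_grad()
def evaluate(model, data_dir: str, device: str = "cuda",
             batch_size: int = 64) -> float:
    """Top-1 accuracy on a shifted test set laid out as one folder per
    class (the ImageFolder convention used by most of these releases)."""
    tf = T.Compose([
        T.Resize(256), T.CenterCrop(224), T.ToTensor(),
        # Standard ImageNet normalization for torchvision pretrained models.
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
    loader = DataLoader(ImageFolder(data_dir, transform=tf),
                        batch_size=batch_size, num_workers=4)
    model.eval().to(device)
    correct = total = 0
    for x, y in loader:
        # NOTE: assumes folder indices align with the model's output
        # classes; subset benchmarks need an explicit index remapping here.
        preds = model(x.to(device)).argmax(dim=1).cpu()
        correct += (preds == y).sum().item()
        total += y.numel()
    return correct / total


# Hypothetical usage with a pretrained classifier:
# acc = evaluate(models.resnet50(weights="IMAGENET1K_V1"), "imagenet-r/")
```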