Towards Deep Learning Models Resistant to Adversarial Attacks

15 Feb 2018 (modified: 07 Apr 2024) · ICLR 2018 Conference Blind Submission · Readers: Everyone
Abstract: Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against a well-defined class of adversaries. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. They also suggest robustness against a first-order adversary as a natural security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models.
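To make the robust-optimization view concrete, the following is a minimal PyTorch sketch of adversarial training against a projected gradient descent (PGD) adversary, i.e., the kind of first-order adversary the abstract refers to. The perturbation radius `epsilon`, step size `alpha`, and number of PGD steps are illustrative assumptions, not the paper's exact settings.

```python
# Hedged sketch of PGD adversarial training under an l_inf threat model.
# epsilon, alpha, and the number of PGD steps are assumptions chosen for
# illustration; they are not taken from the paper.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=0.3, alpha=0.01, steps=40):
    """Inner maximization: find a perturbation delta with ||delta||_inf <= epsilon
    that (approximately) maximizes the classification loss."""
    delta = torch.empty_like(x).uniform_(-epsilon, epsilon).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        # Ascend along the gradient sign, then project back onto the l_inf ball.
        delta = (delta + alpha * grad.sign()).clamp(-epsilon, epsilon)
        delta = delta.detach().requires_grad_(True)
    # Keep the adversarial example in the valid input range [0, 1].
    return (x + delta).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y):
    """Outer minimization: update the weights on adversarially perturbed inputs."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The two functions mirror the saddle-point structure described in the abstract: the attack approximately solves the inner maximization over allowed perturbations, and the training step performs the outer minimization over model parameters on the resulting worst-case inputs.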
TL;DR: We provide a principled, optimization-based re-examination of the notion of adversarial examples, and develop methods that produce models that are adversarially robust against a wide range of adversaries.
Keywords: adversarial examples, robust optimization, ML security
Code: [MadryLab/mnist_challenge](https://github.com/MadryLab/mnist_challenge) · [56 community implementations on Papers with Code](https://paperswithcode.com/paper/?openreview=rJzIBfZAb)
Data: [CIFAR-10](https://paperswithcode.com/dataset/cifar-10)
Community Implementations: [31 code implementations on CatalyzeX](https://www.catalyzex.com/paper/arxiv:1706.06083/code)