Abstract: Adversarial examples are malicious inputs designed to fool machine learning models.
They often transfer from one model to another, allowing attackers to mount black-box
attacks without knowledge of the target model's parameters.
Adversarial training is the process of explicitly training a model on adversarial
examples, in order to make it more robust to attack or to reduce its test error
on clean inputs.
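To make the idea concrete, here is a minimal sketch of one common form of adversarial training, built around the single-step fast gradient sign method (FGSM). The PyTorch framing, the equal clean/adversarial loss weighting, and the epsilon value are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    # Single-step attack: move each input element by eps in the direction
    # of the sign of the loss gradient with respect to the input.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y, eps=8 / 255):
    # One optimization step on a mix of clean and adversarial examples.
    # The 50/50 weighting and eps are placeholder choices for illustration.
    x_adv = fgsm(model, x, y, eps)
    loss = 0.5 * F.cross_entropy(model(x), y) + 0.5 * F.cross_entropy(model(x_adv), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```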
So far, adversarial training has primarily been applied to small problems.
In this research, we apply adversarial training to ImageNet.
Our contributions include:
(1) recommendations for how to successfully scale adversarial training to large models and datasets,
(2) the observation that adversarial training confers robustness to single-step attack methods,
(3) the finding that multi-step attack methods are somewhat less transferable than single-step attack
methods, so single-step attacks are the best for mounting black-box attacks,
and
(4) resolution of a "label leaking" effect that causes adversarially trained models to perform
better on adversarial examples than on clean examples, because the adversarial
example construction process uses the true label and the model can learn to
exploit regularities in the construction process.
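Since the abstract attributes label leaking to the use of the true label during adversarial example construction, one natural mitigation, sketched below as an assumption-laden illustration rather than the paper's exact procedure, is to build the perturbation from the model's own predicted label instead:

```python
import torch
import torch.nn.functional as F

def fgsm_without_label_leaking(model, x, eps):
    # Construct the adversarial example from the model's most likely
    # prediction rather than the ground-truth label, so the perturbation
    # cannot encode information about the true label.
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    y_pred = logits.argmax(dim=1)          # predicted label, not the true one
    loss = F.cross_entropy(logits, y_pred)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()
```

With a construction like this, an adversarially trained model can no longer appear stronger on adversarial inputs than on clean ones simply by exploiting regularities tied to the true label.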
Conflicts: google.com, openai.com
Keywords: Computer vision, Supervised Learning
Community Implementations: [8 code implementations](https://www.catalyzex.com/paper/arxiv:1611.01236/code)