Certifying Some Distributional Robustness with Principled Adversarial Training

15 Feb 2018 (modified: 04 Jan 2019) · ICLR 2018 Conference Blind Submission
Abstract: Neural networks are vulnerable to adversarial examples, and researchers have proposed many heuristic attack and defense mechanisms. We address this problem through the principled lens of distributionally robust optimization, which guarantees performance under adversarial input perturbations. By considering a Lagrangian penalty formulation of perturbing the underlying data distribution in a Wasserstein ball, we provide a training procedure that augments model parameter updates with worst-case perturbations of training data. For smooth losses, our procedure provably achieves moderate levels of robustness with little computational or statistical cost relative to empirical risk minimization. Furthermore, our statistical guarantees allow us to efficiently certify robustness for the population loss. For imperceptible perturbations, our method matches or outperforms heuristic approaches.
TL;DR: We provide a fast, principled adversarial training procedure with computational and statistical performance guarantees.
Keywords: adversarial training, distributionally robust optimization, deep learning, optimization, learning theory
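The procedure the abstract describes alternates an inner maximization, which perturbs each training point to approximately maximize the loss minus a Wasserstein-distance penalty, with an ordinary gradient step on the perturbed data. A minimal PyTorch sketch of one such step follows; the function name `wrm_step`, the squared-Euclidean penalty, and all hyperparameters (`gamma`, `ascent_steps`, `lr_z`) are illustrative assumptions, not the authors' reference implementation.

```python
import torch

def wrm_step(model, loss_fn, opt, x, y, gamma=1.0, ascent_steps=15, lr_z=0.1):
    """One adversarial training step (sketch): find an approximate worst-case
    perturbation z of the batch x under the Lagrangian penalty
    gamma * ||z - x||^2, then update model parameters on the perturbed batch."""
    z = x.clone().detach().requires_grad_(True)
    for _ in range(ascent_steps):
        # Inner maximization: gradient ascent on loss(theta; z) - gamma * ||z - x||^2.
        obj = loss_fn(model(z), y) - gamma * ((z - x) ** 2).sum()
        grad = torch.autograd.grad(obj, z)[0]
        with torch.no_grad():
            z += lr_z * grad
    # Outer minimization: standard gradient step on the worst-case data.
    opt.zero_grad()
    loss = loss_fn(model(z.detach()), y)
    loss.backward()
    opt.step()
    return loss.item()
```

Intuitively, a larger `gamma` penalizes large perturbations more heavily, corresponding to a smaller Wasserstein ball; the paper's guarantees for smooth losses hold when the penalty is strong enough that the inner problem becomes strongly concave, so the ascent steps converge quickly.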
