Adversarially Robust Training through Structured Gradient Regularization

27 Sept 2018 (modified: 05 May 2023) · ICLR 2019 Conference Blind Submission
Abstract: We propose a novel data-dependent structured gradient regularizer to increase the robustness of neural networks against adversarial perturbations. Our regularizer can be derived as a controlled approximation from first principles, leveraging the fundamental link between training with noise and regularization. It adds very little computational overhead during learning and is simple to implement generically in standard deep learning frameworks. Our experiments provide strong evidence that structured gradient regularization can act as an effective first line of defense against attacks based on long-range correlated signal corruptions.
Keywords: Adversarial Training, Gradient Regularization, Deep Learning
TL;DR: We propose a novel data-dependent structured gradient regularizer to increase the robustness of neural networks against adversarial perturbations.
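The abstract notes that the regularizer "is simple to implement generically in standard deep learning frameworks." As a rough illustration of that claim only, the sketch below penalizes input gradients weighted by a data-dependent structure matrix. The penalty form g^T Sigma g, the choice of Sigma as an empirical input covariance, and the weight 0.1 are assumptions made for illustration, not the paper's exact formulation.

```python
# Hedged sketch of a data-dependent structured gradient penalty in PyTorch.
# This is NOT the paper's derived regularizer; it only illustrates the general
# recipe of weighting input gradients by a structure matrix Sigma.

import torch
import torch.nn as nn
import torch.nn.functional as F


def structured_gradient_penalty(model, x, y, sigma):
    """Return (task loss, g^T Sigma g) with g = d loss / d x per example.

    sigma: (d, d) positive semi-definite structure matrix; here assumed to be
    an empirical covariance of the (flattened) training inputs.
    """
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    # create_graph=True so the penalty itself can be backpropagated through.
    (g,) = torch.autograd.grad(loss, x, create_graph=True)
    g = g.flatten(start_dim=1)                      # (batch, d)
    penalty = torch.einsum("bi,ij,bj->b", g, sigma, g).mean()
    return loss, penalty


# Toy usage on random data (shapes only; not the paper's experimental setup).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.randn(16, 1, 28, 28)
y = torch.randint(0, 10, (16,))
flat = x.flatten(start_dim=1)
sigma = (flat.T @ flat) / flat.shape[0]             # data-dependent structure
loss, penalty = structured_gradient_penalty(model, x, y, sigma)
total = loss + 0.1 * penalty                        # 0.1 is an arbitrary weight
total.backward()
```

The extra cost is one additional backward pass through the inputs per step, which is consistent with the abstract's claim of low computational overhead.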