Towards Defending Multiple Adversarial Perturbations via Gated Batch Normalization

28 Sept 2020 (modified: 05 May 2023)
ICLR 2021 Conference Blind Submission
Readers: Everyone
Keywords: adversarial examples, multiple adversarial perturbation types, adversarial robustness
Abstract: There is now extensive evidence demonstrating that deep neural networks are vulnerable to adversarial examples, motivating the development of defenses against adversarial attacks. However, existing adversarial defenses typically improve model robustness against individual perturbation types. Some recent methods improve robustness against attacks in multiple $\ell_p$ balls, but their performance against each perturbation type is still far from satisfactory. To better understand this phenomenon, we propose the multi-domain hypothesis, which states that different types of adversarial perturbations are drawn from different domains. Guided by this hypothesis, we propose Gated Batch Normalization (GBN), a novel building block for deep neural networks that improves robustness against multiple perturbation types. GBN consists of a gated sub-network and a multi-branch batch normalization (BN) layer: the gated sub-network separates different perturbation types, and each BN branch handles a single perturbation type, learning domain-specific statistics for input transformation. Features from the different branches are then aligned as domain-invariant representations for the subsequent layers. We perform extensive evaluations on MNIST, CIFAR-10, and Tiny-ImageNet, demonstrating that GBN outperforms previous defenses against multiple perturbation types (i.e., $\ell_1$, $\ell_2$, and $\ell_\infty$ perturbations) by large margins of 10–20%.
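The abstract describes the mechanism only at a high level. Below is a minimal, hypothetical PyTorch sketch of one way such a layer could look; the class name `GatedBatchNorm2d`, the global-average-pooled gate, the soft mixing rule at inference, and the shared affine parameters are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GatedBatchNorm2d(nn.Module):
    """Illustrative GBN-style layer (a sketch, not the paper's code):
    one BN branch per perturbation domain (e.g. clean, l1, l2, linf) learns
    domain-specific statistics; a gate predicts soft domain assignments; and
    shared affine parameters act as the domain-invariant alignment step."""

    def __init__(self, num_features, num_domains=4):
        super().__init__()
        # Per-domain BN branches; affine is disabled because scale/shift is shared.
        self.branches = nn.ModuleList(
            [nn.BatchNorm2d(num_features, affine=False) for _ in range(num_domains)]
        )
        # Gate: global average pool -> linear -> softmax over domains.
        self.gate = nn.Linear(num_features, num_domains)
        # Shared (domain-invariant) affine parameters.
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))

    def forward(self, x, domain=None):
        if domain is not None:
            # Training: the perturbation type of the batch is assumed known,
            # so only that branch normalizes x and updates its statistics.
            out = self.branches[domain](x)
        else:
            # Inference: mix all branches by the gate's soft domain weights.
            w = torch.softmax(self.gate(x.mean(dim=(2, 3))), dim=1)    # (B, D)
            outs = torch.stack([bn(x) for bn in self.branches], dim=1)  # (B, D, C, H, W)
            out = (w[:, :, None, None, None] * outs).sum(dim=1)
        return out * self.weight[None, :, None, None] + self.bias[None, :, None, None]
```

In this reading, the domain label is available during training (it is known which attack generated each batch), so each perturbation type is routed to its own branch; at test time the gate must infer the domain and blend the branch outputs.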
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
One-sentence Summary: To defend against multiple adversarial perturbations, we propose the multi-domain hypothesis and, guided by it, Gated Batch Normalization (GBN), a novel building block for DNNs that improves robustness against multiple perturbation types.
Reviewed Version (pdf): https://openreview.net/references/pdf?id=YGbdoCcnm