Towards Defending Multiple $\ell_p$-Norm Bounded Adversarial Perturbations via Gated Batch Normalization

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission · Readers: Everyone
Keywords: adversarial examples, multiple $\ell_p$-norm bounded adversarial perturbations, adversarial robustness
Abstract: There is extensive evidence that deep neural networks are vulnerable to adversarial examples, which motivates the development of defenses against adversarial attacks. Existing adversarial defenses typically improve model robustness against a single, specific perturbation type. In practice, however, adversaries are likely to generate multiple types of perturbations. Some recent methods improve model robustness against adversarial attacks in multiple $\ell_p$ balls, but their performance against each perturbation type is still far from satisfactory. We observe that different $\ell_p$ bounded adversarial perturbations induce different statistical properties that can be separated and characterized by the statistics of Batch Normalization (BN). We thus propose Gated BN (GBN) to adversarially train a perturbation-invariant predictor for defending against multiple $\ell_p$ bounded adversarial perturbations. GBN consists of a multi-branch BN layer and a gated sub-network. Each BN branch in GBN handles one perturbation type, so that the normalized outputs are aligned for learning a perturbation-invariant representation. Meanwhile, the gated sub-network is designed to separate inputs perturbed by different perturbation types. We perform an extensive evaluation of our approach on MNIST, CIFAR-10, and Tiny-ImageNet, and demonstrate that GBN outperforms previous defense proposals against multiple perturbation types (i.e., $\ell_1$, $\ell_2$, and $\ell_{\infty}$ perturbations) by large margins of 10-20%.
One-sentence Summary: To defend against multiple $\ell_p$ adversarial examples, we propose the multi-domain hypothesis and, guided by it, Gated Batch Normalization (GBN), a novel method that improves the robustness of DNNs against multiple $\ell_p$ adversarial examples.
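
The abstract only sketches GBN's structure, so a minimal PyTorch sketch of the core idea may help: one BN branch per perturbation type, routed by a known perturbation label during adversarial training and by a learned gate at inference. The class name `GatedBatchNorm2d`, the linear gate over pooled features, and the soft-mixing inference path are illustrative assumptions based on the description above, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedBatchNorm2d(nn.Module):
    """Sketch of a Gated BN layer: one BN branch per perturbation type
    (e.g., l1, l2, l_inf), plus a gate that assigns inputs to branches."""

    def __init__(self, num_features, num_perturbation_types=3):
        super().__init__()
        # One BN branch per perturbation type, each with its own statistics.
        self.branches = nn.ModuleList(
            nn.BatchNorm2d(num_features) for _ in range(num_perturbation_types)
        )
        # A small gate that scores each branch from spatially pooled features.
        self.gate = nn.Linear(num_features, num_perturbation_types)

    def forward(self, x, branch_idx=None):
        if branch_idx is not None:
            # Training: the perturbation type of the batch is known, so
            # route the whole batch through the matching BN branch.
            return self.branches[branch_idx](x)
        # Inference: the gate predicts a soft assignment over branches,
        # and the normalized outputs are mixed accordingly.
        weights = F.softmax(self.gate(x.mean(dim=(2, 3))), dim=1)   # (B, K)
        outs = torch.stack([bn(x) for bn in self.branches], dim=1)  # (B, K, C, H, W)
        return (weights[:, :, None, None, None] * outs).sum(dim=1)
```

In this sketch, the per-branch BN statistics capture the distinct statistical properties that each $\ell_p$ perturbation induces, while the gate plays the role of the paper's gated sub-network that separates inputs by perturbation type.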