Improving Robustness with Optimal Transport based Adversarial Generalization

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission · Readers: Everyone
Keywords: Optimal Transport, Adversarial Machine Learning, Adversarial Training
Abstract: Deep nets have proven to be brittle against crafted adversarial examples. One of the main reasons is that the representations of adversarial examples gradually diverge from those of benign examples as they are propagated through the higher layers of a deep net. To remedy this susceptibility, it is natural to mitigate this divergence. In this paper, leveraging the richness and rigor of optimal transport (OT) theory, we propose an OT-based adversarial generalization technique that strengthens the classifier against adversarial examples. The main idea of our proposed method is to examine a specific Wasserstein (WS) distance between the adversarial and benign joint distributions on an intermediate layer of a deep net, which can further be interpreted, from a clustering view of OT, as a generalization technique. More specifically, by minimizing the WS distance of interest, an adversarial example is pushed toward the cluster of benign examples sharing the same label in the latent space, which strengthens the generalization ability of the classifier on adversarial examples. Our comprehensive experiments against state-of-the-art adversarial training and latent-space defense approaches indicate the significant superiority of our method under specific attacks of various distortion sizes. The results demonstrate improvements in robust accuracy of up to $5\%$ against the PGD attack on CIFAR-100 over the SOTA methods.
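The abstract's core operation is minimizing a Wasserstein distance between batches of adversarial and benign latent features. As an illustration only (this is not the authors' implementation, and the function name, entropic regularization, and uniform weights are assumptions), an entropic-regularized Sinkhorn approximation of such a distance between two empirical feature batches can be sketched as:

```python
import numpy as np

def sinkhorn_distance(X, Y, eps=0.1, n_iters=200):
    """Entropic-regularized OT cost between two empirical distributions,
    supported on the rows of X and Y with uniform weights.

    X: (n, d) benign latent features; Y: (m, d) adversarial latent features.
    """
    n, m = X.shape[0], Y.shape[0]
    # Pairwise squared-Euclidean cost matrix C[i, j] = ||X_i - Y_j||^2
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    # Gibbs kernel for entropic regularization
    K = np.exp(-C / eps)
    a = np.full(n, 1.0 / n)  # uniform source weights
    b = np.full(m, 1.0 / m)  # uniform target weights
    u, v = np.ones(n), np.ones(m)
    # Sinkhorn iterations: alternate scaling to match the marginals
    for _ in range(n_iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]  # approximate transport plan
    return float((P * C).sum())     # transport cost under the plan
```

In a training loop, this quantity (computed on an intermediate layer's features) would be added to the classification loss so that adversarial features are pulled toward benign features of the same class; a differentiable framework would be needed for backpropagation, which this NumPy sketch omits.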