Structure-Aware Stabilization of Adversarial Robustness with Massive Contrastive Adversaries

ICDM 2021
Abstract: Recent research indicates that the impact of adversarial perturbations on deep learning models is reflected not only in the alteration of predicted labels but also in the distortion of the data structure in the representation space. Significant improvement in a model's adversarial robustness can be achieved by correcting this structure-aware representation distortion. Current methods generally rely on one-to-one representation alignment or on triplet information between positive and negative pairs. In this paper, however, we show that the representation structure of natural and adversarial examples cannot be captured well or stably if we focus only on a localized range of contrastive examples. To achieve better and more stable adversarial robustness, we propose to adjust the adversarial distortion of the representation structure using Massive Contrastive Adversaries (MCA). Inspired by Noise-Contrastive Estimation (NCE), MCA exploits contrastive information by employing m negative instances. Compared with existing methods, our method recruits a much wider range of negative examples per update, so the representation relationship between natural and adversarial examples can be captured better and more stably. Theoretical analysis shows that the proposed MCA inherently maximizes a lower bound of the mutual information (MI) between the representations of natural and adversarial examples. Empirical experiments on benchmark datasets demonstrate that MCA achieves better and more stable intra-class compactness and inter-class divergence, which in turn induces better adversarial robustness.
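The abstract describes an NCE-inspired contrastive objective that pulls each adversarial representation toward its natural counterpart while pushing it away from m negatives, which is the standard InfoNCE construction whose negation lower-bounds the mutual information between the two views. Below is a minimal illustrative sketch of such a loss, not the authors' released code: it assumes PyTorch and that the m negatives are simply the other natural examples in the same mini-batch; the function name and temperature value are hypothetical.

```python
# Minimal sketch of an InfoNCE-style loss between natural and adversarial
# representations (illustrative only; in-batch negatives play the role of
# the m negative instances).
import torch
import torch.nn.functional as F


def contrastive_adversarial_loss(z_nat: torch.Tensor,
                                 z_adv: torch.Tensor,
                                 temperature: float = 0.1) -> torch.Tensor:
    """z_nat, z_adv: (batch, dim) representations of natural examples and
    their adversarial counterparts. Row i of z_adv is pulled toward row i
    of z_nat (the positive pair) and pushed away from all other rows of
    z_nat (the negatives)."""
    z_nat = F.normalize(z_nat, dim=1)
    z_adv = F.normalize(z_adv, dim=1)

    # Similarity of every adversarial representation to every natural one.
    logits = z_adv @ z_nat.t() / temperature  # shape: (batch, batch)

    # The positive for row i sits in column i; the remaining columns are
    # the negatives.
    targets = torch.arange(z_nat.size(0), device=z_nat.device)

    # Cross-entropy over (1 positive + m negatives) is the InfoNCE
    # objective; minimizing it maximizes a lower bound on the mutual
    # information between the two sets of representations.
    return F.cross_entropy(logits, targets)
```

In practice this term would be combined with a standard classification loss on the adversarial examples during adversarial training, so that label accuracy and representation-structure alignment are optimized jointly.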