Aliasing coincides with CNNs vulnerability towards adversarial attacks

Published: 02 Dec 2021, Last Modified: 05 May 2023 | AAAI-22 AdvML Workshop Short Paper
Keywords: Adversarial Attacks, Nyquist-Shannon, Aliasing, CNNs, Sampling
TL;DR: Our analysis of different robust and non-robust models shows that robust models exhibit less aliasing in their downsampling layers than standard-trained models.
Abstract: Many commonly well-performing convolutional neural network models have been shown to be susceptible to input data perturbations, indicating low model robustness. Adversarial attacks are specifically optimized to reveal model weaknesses by generating small, barely perceivable image perturbations that flip the model prediction. Robustness against such attacks can be gained, for example, by using adversarial examples during training, which effectively reduces the measurable attackability of the model. In contrast, research analyzing the source of a model's vulnerability is scarce. In this paper, we analyze adversarially trained, robust models in the context of a particularly suspect network operation, the downsampling layer, and provide evidence that robust models have learned to downsample more accurately and suffer significantly less from aliasing than baseline models.
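The abstract rests on the Nyquist-Shannon argument that strided downsampling layers can alias frequency content above the post-downsampling Nyquist limit. As a rough, self-contained illustration (not taken from the paper), the NumPy sketch below shows how naive stride-2 subsampling misrepresents a high-frequency 1-D signal, while a simple low-pass blur applied beforehand (in the spirit of anti-aliased pooling) strongly attenuates it; the signal frequency and blur kernel are illustrative choices, not values from the paper.

```python
import numpy as np

# Minimal 1-D illustration of aliasing in strided downsampling.
# A cosine at 0.45 cycles/sample lies above the Nyquist frequency
# of the stride-2 output grid (0.25 cycles/sample), so naive
# subsampling folds it down to a spurious low frequency.
n = 64
t = np.arange(n)
high_freq = np.cos(2 * np.pi * 0.45 * t)

# Naive stride-2 subsampling (what a stride-2 conv or pool effectively does).
naive = high_freq[::2]

# Low-pass (binomial blur) filter first, then subsample.
kernel = np.array([1.0, 2.0, 1.0]) / 4.0
blurred = np.convolve(high_freq, kernel, mode="same")
antialiased = blurred[::2]

# The aliased copy carries almost all of its original energy in the
# naive path, but is heavily attenuated after the blur.
print("energy, naive subsample:     ", np.sum(naive ** 2))
print("energy, blur then subsample: ", np.sum(antialiased ** 2))
```

Running the sketch, the blurred-then-subsampled signal retains only a small fraction of the energy of the naively subsampled one, which is the kind of discrepancy one can probe at the downsampling layers of standard versus adversarially trained models.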