Keywords: procedural noise, Gabor noise, adversarial machine learning, universal adversarial perturbations
TL;DR: Deep Convolutional Networks used for image classification are sensitive to Gabor noise patterns, i.e., small structured changes to the input cause large changes to the output.
Abstract: Deep Convolutional Networks (DCNs) have been shown to be sensitive to Universal Adversarial Perturbations (UAPs): input-agnostic perturbations that fool a model on large portions of a dataset. These UAPs exhibit interesting visual patterns, but this phenomenon is, as yet, poorly understood. Our work shows that visually similar procedural noise patterns also act as UAPs. In particular, we demonstrate that different DCN architectures are sensitive to Gabor noise patterns. This behaviour, its causes, and its implications deserve further in-depth study.
Community Implementations: 1 code implementation (CatalyzeX): https://www.catalyzex.com/paper/sensitivity-of-deep-convolutional-networks-to/code