What does an Adversarial Color look like?

Published: 18 Oct 2022, Last Modified: 05 May 2023 · SVRHM Poster
Keywords: color perception, adversarial robustness
TL;DR: We visualize and characterize what color-based adversarial attacks look like for simple fully connected and convolutional neural networks.
Abstract: The short answer: it depends! The long answer is that this dependence is modulated by several factors, including the architecture, dataset, optimizer, and initialization. In general, this modulation likely arises because artificial perceptual systems are best suited to tasks aligned with their level of compositionality, so when these systems are optimized for a global task such as average color estimation rather than object recognition (which is compositional), different representations emerge in the optimized networks. In this paper, we first assess the novelty of our experiment and define what an adversarial example is in the context of the color estimation task. We then run controlled experiments varying four neural network hyper-parameters: 1) the architecture, 2) the optimizer, 3) the dataset, and 4) the weight initialization. Generally, we find that a fully connected network's attack vector is sparser than a compositional CNN's, although the SGD optimizer makes the attack vector less sparse regardless of architecture. We also find that a CNN's attack vector is more consistent across datasets, and we confirm that the CNN is more robust to adversarial color attacks. Altogether, this paper presents a first computational, qualitative exploration of the adversarial perception of color in simple neural network models, re-emphasizing that studies of adversarial robustness and vulnerability should extend beyond object recognition.
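To make the setup concrete, here is a minimal sketch of what an attack of this kind could look like: a one-step FGSM-style perturbation against a small PyTorch network that regresses an image's average RGB color. The architecture, input size, `eps` value, and the `fgsm_color_attack` helper are illustrative assumptions, not the paper's actual models or attack procedure.

```python
# Minimal sketch (assumptions: a toy PyTorch MLP, an FGSM-style attack, MSE loss).
# The paper's exact architectures, datasets, and attack are not reproduced here.
import torch
import torch.nn as nn

# Hypothetical fully connected network regressing an image's average RGB color.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 256),
    nn.ReLU(),
    nn.Linear(256, 3),  # predicted (R, G, B) mean
)

def fgsm_color_attack(model, image, target_color, eps=0.03):
    """One-step FGSM perturbation against an average-color regressor."""
    image = image.clone().requires_grad_(True)
    loss = nn.functional.mse_loss(model(image), target_color)
    loss.backward()
    # Step in the input-gradient direction that increases the regression error.
    adv = (image + eps * image.grad.sign()).clamp(0.0, 1.0)
    return adv.detach()

# Usage: attack a random image whose ground truth is its per-channel mean.
x = torch.rand(1, 3, 32, 32)
y = x.mean(dim=(2, 3))          # true average color
x_adv = fgsm_color_attack(model, x, y)
print((x_adv - x).abs().max())  # perturbation bounded by eps
```

The sign of the input gradient gives the per-pixel direction that most increases the color-regression error; visualizing that perturbation is, roughly, what an "adversarial color" looks like for such a toy regressor.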