Open Source Links: https://circle-group.github.io/research/AdversarialSuperposition/
Keywords: Foundational work, Understanding high-level properties of models
Other Keywords: superposition, adversarial vulnerability
TL;DR: This paper shows that adversarial attacks exploit the geometric arrangements of superposed features in neural network representations, revealing a tradeoff between representation capacity and adversarial vulnerability.
Abstract: Fundamental questions remain about why adversarial examples arise in neural networks. In this paper, we argue that adversarial vulnerability can emerge from *efficient* information encoding in networks. Specifically, we show that superposition - where networks represent more features than they have dimensions - creates arrangements of latent representations that adversaries can exploit. We demonstrate that adversarial perturbations exploit interference between superposed features, making attack patterns predictable from the feature arrangement. Our framework provides a mechanistic explanation for two known phenomena: the transferability of adversarial attacks between models with similar training regimes, and class-specific vulnerability. In synthetic settings with precisely controlled superposition, we establish that superposition *suffices* to create adversarial vulnerability. We then demonstrate that these findings persist in a ViT trained on CIFAR-10. Together, these results reveal that adversarial vulnerability can be a byproduct of networks' representational compression, rather than of flaws in the learning process or non-robust inputs.
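To make the mechanism described in the abstract concrete, here is a minimal sketch (not the authors' code) of a toy linear model that stores more features than it has dimensions, plus a perturbation built along the interference direction between two superposed features. The feature count, circular arrangement, and the `eps` and `target` values are illustrative assumptions, not details from the paper.

```python
# Toy superposition sketch: n_features > n_dims, so feature directions must
# overlap (interfere) in the bottleneck. A small latent-space step along the
# interference direction between two features flips which feature reads out
# strongest, illustrating how superposition geometry creates attack directions.
import numpy as np

n_features, n_dims = 6, 2            # more features than dimensions -> superposition

# Arrange feature directions evenly on the 2D circle (a common toy setup).
angles = 2 * np.pi * np.arange(n_features) / n_features
W = np.stack([np.cos(angles), np.sin(angles)])   # shape (n_dims, n_features)

def encode(x):
    return W @ x        # project feature activations into the low-dim bottleneck

def decode(h):
    return W.T @ h      # read features back out; overlap between columns causes interference

# A "clean" input activating only feature 0.
x = np.zeros(n_features)
x[0] = 1.0
print("clean decode:   ", np.round(decode(encode(x)), 2))   # feature 0 wins, but neighbours leak

# Perturbation along the interference direction between feature 0 and a
# hypothetical target feature: a small step is enough to flip the readout.
eps, target = 0.6, 1
attack_dir = W[:, target] - W[:, 0]
attack_dir /= np.linalg.norm(attack_dir)
h_adv = encode(x) + eps * attack_dir
print("perturbed decode:", np.round(decode(h_adv), 2))
print("winning feature after perturbation:", int(np.argmax(decode(h_adv))))
```

Because the attack direction is determined entirely by the geometry of the superposed features (the columns of `W`), any model that learned a similar arrangement would be fooled by the same perturbation, which is the intuition behind the transferability claim in the abstract.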
Submission Number: 137