A Rate-Distortion Theory of Adversarial Examples

Angus Galloway, Anna Golubeva, Graham W. Taylor

Sep 27, 2018 · ICLR 2019 Conference Blind Submission
  • Abstract: The generalization ability of deep neural networks (DNNs) is intertwined with model complexity, robustness, and capacity. By establishing an equivalence between a DNN and a noisy communication channel, we characterize generalization and fault tolerance for unbounded adversarial attacks in terms of information-theoretic quantities. Invoking rate-distortion theory, we suggest that excess capacity is a significant cause of vulnerability to adversarial examples.
  • Keywords: adversarial examples, information bottleneck, robustness
  • TL;DR: We argue that excess capacity is a significant cause of susceptibility to adversarial examples.
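
The abstract invokes rate-distortion theory without stating the underlying quantity. For reference, the textbook rate-distortion function is sketched below; the notation (source X, reconstruction X̂, distortion measure d, distortion budget D, mutual information I) is standard and assumed here, not taken from the submission, and how the paper maps these quantities onto DNN capacity and adversarial perturbations is not specified in this abstract.

```latex
% Standard (Shannon) rate-distortion function; notation is conventional,
% not drawn from the abstract above. R(D) is the minimum rate, in bits per
% source symbol, needed to represent X within expected distortion D.
\begin{equation*}
  R(D) \;=\; \min_{\; p(\hat{x}\mid x) \,:\; \mathbb{E}\!\left[ d(X,\hat{X}) \right] \le D \;} I\!\left(X;\hat{X}\right)
\end{equation*}
```

The intuition relevant to the TL;DR is that a model with capacity well above R(D) can encode input details that a distortion-tolerant representation would discard; the abstract argues this excess is tied to adversarial vulnerability.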