Adversarial Robustness as a Prior for Learned Representations

25 Sept 2019 (modified: 22 Oct 2023) · ICLR 2020 Conference Blind Submission · Readers: Everyone
TL;DR: Representations learned by robust neural networks align more closely with our idealization of representations as high-level feature extractors, enabling representation inversion as well as direct feature visualization and manipulation.
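The approximate invertibility claimed in the TL;DR can be illustrated with a short sketch: starting from noise, one optimizes an input whose representation matches that of a target image. This is a minimal, hypothetical illustration rather than the authors' exact procedure; `rep_model` (a network truncated at its representation layer), the step count, and the learning rate are all assumptions.

```python
import torch

def invert_representation(rep_model, target_x, steps=200, lr=0.1):
    """Find an input whose representation matches that of target_x.

    rep_model is a placeholder for a feature extractor (e.g. a network
    truncated at its penultimate layer); it is not the authors' API.
    """
    with torch.no_grad():
        target_rep = rep_model(target_x)   # representation to invert
    x = torch.rand_like(target_x)          # start from random noise
    x.requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Squared distance between current and target representations.
        loss = (rep_model(x) - target_rep).pow(2).sum()
        loss.backward()
        opt.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)             # keep pixels in a valid range
    return x.detach()
```

Per the paper's claim, running this with a robustly trained `rep_model` should recover an input perceptually close to the target image, whereas a standard model's representation generally does not invert this way.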
Abstract: An important goal in deep learning is to learn versatile, high-level feature representations of input data. However, standard networks' representations seem to possess shortcomings that, as we illustrate, prevent them from fully realizing this goal. In this work, we show that robust optimization can be re-cast as a tool for enforcing priors on the features learned by deep neural networks. Representations learned by robust models address the aforementioned shortcomings and make significant progress towards learning a high-level encoding of inputs. In particular, these representations are approximately invertible, while allowing for direct visualization and manipulation of salient input features. More broadly, our results point to adversarial robustness as a promising avenue for improving learned representations.
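Concretely, the robust optimization the abstract refers to is adversarial training: the model is fit to worst-case perturbations of each batch, with the inner maximization approximated by projected gradient descent (PGD). Below is a generic PyTorch sketch of that min-max objective under an assumed L2 threat model and placeholder hyperparameters (`eps`, `step`, `iters`); for the authors' actual training setup, see the repository linked below.

```python
import torch
import torch.nn.functional as F

def pgd_l2(model, x, y, eps=0.5, step=0.1, iters=7):
    """L2-bounded PGD: approximate the inner maximization of the robust objective."""
    delta = torch.zeros_like(x)
    for _ in range(iters):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            # Ascend along the normalized gradient, then project onto the eps-ball.
            g = grad.flatten(1)
            g = g / g.norm(dim=1, keepdim=True).clamp_min(1e-12)
            delta = delta + step * g.view_as(x)
            norms = delta.flatten(1).norm(dim=1).view(-1, *([1] * (x.dim() - 1)))
            delta = delta * (eps / norms.clamp_min(1e-12)).clamp(max=1.0)
    return delta

def robust_step(model, opt, x, y):
    """One outer step: fit the model on the adversarially perturbed batch."""
    delta = pgd_l2(model, x, y)
    opt.zero_grad()
    F.cross_entropy(model(x + delta), y).backward()
    opt.step()
```

In this framing, the perturbation budget `eps` acts as the strength of the prior: training against the inner maximizer discourages the network from relying on features that small input perturbations can flip.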
Code: https://github.com/snappymanatee/robust-learned-representations
Keywords: adversarial robustness, adversarial examples, robust optimization, representation learning, feature visualization
Community Implementations: [4 code implementations](https://www.catalyzex.com/paper/arxiv:1906.00945/code) (via CatalyzeX)