Towards Understanding the Invertibility of Convolutional Neural Networks

Anna C. Gilbert, Yi Zhang, Kibok Lee, Yuting Zhang, Honglak Lee

Nov 04, 2016 (modified: Jan 16, 2017) · ICLR 2017 conference submission
  • Abstract: Several recent works have empirically observed that Convolutional Neural Nets (CNNs) are (approximately) invertible. To understand this approximate invertibility phenomenon and how to leverage it more effectively, we focus on a theoretical explanation and develop a mathematical model of sparse signal recovery that is consistent with CNNs with random weights. We establish an exact connection between a particular model-based compressive sensing problem (and its recovery algorithms) and CNNs with random weights. We show empirically that several learned networks are consistent with our mathematical analysis and then demonstrate that, even with such a simple theoretical framework, we can obtain reasonable reconstructions of real images. We also discuss the gaps between our model assumptions and CNNs trained for classification in practical scenarios.
  • Keywords: Deep learning, Theory
  • Conflicts: umich.edu, google.com
