Memorization in Overparameterized Autoencoders

Adityanarayanan Radhakrishnan, Mikhail Belkin, Caroline Uhler

May 17, 2019 · ICML 2019 Workshop Deep Phenomena · Blind Submission · readers: everyone
  • Keywords: Memorization, Autoencoders
  • TL;DR: We identify memorization as the inductive bias of interpolation in overparameterized fully connected and convolutional autoencoders.
  • Abstract: Interpolation of data in deep neural networks has become a subject of significant research interest. We prove that overparameterized single-layer fully connected autoencoders do not merely interpolate, but rather, memorize training data: they produce outputs in (a non-linear version of) the span of the training examples. In contrast to fully connected autoencoders, we prove that depth is necessary for memorization in convolutional autoencoders. Moreover, we observe that adding nonlinearity to deep convolutional autoencoders results in a stronger form of memorization: instead of outputting points in the span of the training images, deep convolutional autoencoders tend to output individual training images. Since convolutional autoencoder components are building blocks of deep convolutional networks, we envision that our findings will shed light on the important question of the inductive bias in overparameterized deep networks.
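The linear special case of the first claim can be checked numerically: a single-layer linear autoencoder trained by gradient descent from zero initialization maps any input into the span of the training examples, because every gradient update has the form (something) · Xᵀ with column vectors built from the training data. Below is a minimal NumPy sketch of this check; the dimensions, learning rate, and variable names are illustrative choices, not taken from the paper, and the nonlinear and convolutional results are of course not captured by this toy.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 3                      # ambient dimension >> number of training points
X = rng.standard_normal((d, n))   # training examples as columns

# Overparameterized single-layer linear autoencoder x -> W x, trained by
# gradient descent on 0.5 * ||W X - X||_F^2 starting from W = 0.
W = np.zeros((d, d))
lr = 0.01
for _ in range(5000):
    W -= lr * (W @ X - X) @ X.T

# Apply the trained autoencoder to a fresh test point.
z = rng.standard_normal(d)
out = W @ z

# Memorization in the linear sense: the output lies in the span of the
# training examples, so projecting onto span(X) leaves ~zero residual.
P = X @ np.linalg.pinv(X)         # orthogonal projector onto the column span of X
residual = np.linalg.norm(out - P @ out)
print(residual)
```

The residual is zero up to floating-point error for any test point, since by induction each update keeps the rows of W in the row space of Xᵀ.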