Overparameterized Neural Networks Can Implement Associative Memory

Anonymous

Sep 25, 2019 — ICLR 2020 Conference Blind Submission
  • Keywords: Associative Memory, Memorization and Recall, Attractors, Deep Autoencoders
  • TL;DR: We demonstrate that overparameterized neural networks trained using standard optimizers can memorize and recall individual data instances or sequences.
  • Abstract: Identifying computational mechanisms for memorization and retrieval is a long-standing problem at the intersection of machine learning and neuroscience. In this work, we demonstrate empirically that overparameterized deep neural networks trained using standard optimization methods provide a mechanism for memorization and retrieval of real-valued data. In particular, we show that overparameterized autoencoders store training examples as attractors, and thus can be viewed as implementations of associative memory, with retrieval performed by iterating the learned map. We study this phenomenon under a variety of common architectures and optimization methods and construct a network that can recall 500 real-valued images without any apparent spurious attractor states. Lastly, we demonstrate how the same mechanism allows encoding sequences, including movies and audio, instead of individual examples. Interestingly, this appears to provide an even more efficient mechanism for storage and retrieval than autoencoding single instances.
  • Code: https://drive.google.com/open?id=1yWcWeZZSQIeESeLJ4nnEQEeLe3U34-xo
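The retrieval mechanism described in the abstract — start from a corrupted input and repeatedly apply the network until it settles at a stored attractor — can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's trained autoencoder: it substitutes a hand-built contractive map (a softmax-weighted average over stored patterns, in the spirit of modern Hopfield-style memories) whose fixed points are the stored examples, so that iterating it recovers a pattern from a noisy query. The function name `retrieve`, the patterns, and the parameter `beta` are all illustrative assumptions.

```python
import numpy as np

def retrieve(x, patterns, beta=8.0, iters=50, tol=1e-6):
    """Recall a stored pattern by iterating a map whose attractors
    are the rows of `patterns` (illustrative stand-in for a trained
    autoencoder; NOT the paper's network).

    One step: x <- sum_i softmax(-beta * ||x - p_i||^2)_i * p_i
    """
    for _ in range(iters):
        d = ((patterns - x) ** 2).sum(axis=1)      # squared distances to stored patterns
        w = np.exp(-beta * (d - d.min()))           # stabilized softmax weights
        w /= w.sum()
        x_new = w @ patterns                        # contract toward the nearest pattern
        if np.linalg.norm(x_new - x) < tol:         # stop at a fixed point
            break
        x = x_new
    return x

rng = np.random.default_rng(0)
patterns = rng.standard_normal((5, 16))             # 5 stored "training examples"
noisy = patterns[2] + 0.1 * rng.standard_normal(16) # corrupted query
recovered = retrieve(noisy, patterns)
```

With `beta` large enough, the basin of attraction around each stored pattern is wide, and the iteration snaps the noisy query back to the original example — the same qualitative behavior the paper reports for overparameterized autoencoders trained with standard optimizers.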