Memorizing Transformers

Anonymous

Sep 29, 2021 (edited Nov 23, 2021) · ICLR 2022 Conference Blind Submission
  • Keywords: Transformer, architecture, memorization.
  • Abstract: Language models typically need to be trained or finetuned in order to acquire new knowledge, which involves updating their weights. We instead envision language models that can simply read and memorize new data at inference time, thus acquiring new knowledge immediately. In this work, we extend language models with the ability to memorize the internal representations of past inputs. Despite the fact that our implementation of memory is not differentiable, we demonstrate that an approximate $k$NN lookup into the memory improves language modeling across various benchmarks and tasks, including generic webtext (C4), math papers (arXiv), books (PG-19), code (Github), as well as formal theorems (Isabelle). We show that the performance steadily improves when we increase the size of memory up to 131k tokens. We also find that the model is capable of making use of newly defined functions and theorems during test time.
  • One-sentence Summary: We propose to use an external memory module to allow instant utilization of newly acquired knowledge.
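
To make the idea in the abstract concrete, below is a minimal NumPy sketch of attention augmented with a non-differentiable external memory of past (key, value) pairs and a top-$k$ lookup. It is not the authors' implementation: the names `KNNMemory` and `knn_augmented_attention`, the fixed blending weight, the FIFO eviction policy, and the use of exact (rather than approximate) kNN search are all assumptions made for clarity, and the local attention omits a causal mask.

```python
import numpy as np

class KNNMemory:
    """Stores past attention keys/values; returns the top-k nearest keys per query."""
    def __init__(self, dim, max_size=131_072):
        self.keys = np.empty((0, dim), dtype=np.float32)
        self.values = np.empty((0, dim), dtype=np.float32)
        self.max_size = max_size

    def add(self, keys, values):
        # Append new (key, value) pairs, evicting the oldest beyond max_size (FIFO).
        self.keys = np.concatenate([self.keys, keys])[-self.max_size:]
        self.values = np.concatenate([self.values, values])[-self.max_size:]

    def lookup(self, queries, top_k=32):
        # Exact dot-product search; a real system would use an approximate kNN index.
        scores = queries @ self.keys.T                      # (q, mem_size)
        idx = np.argsort(-scores, axis=-1)[:, :top_k]       # (q, top_k)
        return self.keys[idx], self.values[idx]             # (q, top_k, dim)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def knn_augmented_attention(q, k, v, memory, top_k=32, gate=0.5):
    """Blend local self-attention with attention over retrieved memory entries."""
    d = q.shape[-1]
    # Standard attention over the current context (no causal mask, for brevity).
    local = softmax(q @ k.T / np.sqrt(d)) @ v               # (q, dim)
    # Attention over the top-k retrieved (key, value) pairs for each query.
    mem_k, mem_v = memory.lookup(q, top_k)                  # (q, top_k, dim)
    mem_scores = softmax(np.einsum('qd,qkd->qk', q, mem_k) / np.sqrt(d))
    mem = np.einsum('qk,qkd->qd', mem_scores, mem_v)        # (q, dim)
    # A fixed scalar gate stands in for whatever learned combination the model uses.
    return gate * mem + (1.0 - gate) * local

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 64
    memory = KNNMemory(dim)
    # Populate the memory with keys/values from previously processed segments.
    memory.add(rng.standard_normal((1024, dim)).astype(np.float32),
               rng.standard_normal((1024, dim)).astype(np.float32))
    q = k = v = rng.standard_normal((8, dim)).astype(np.float32)
    print(knn_augmented_attention(q, k, v, memory).shape)   # (8, 64)
```

Because the memory is only read through a kNN lookup, no gradients flow into it, which matches the abstract's point that new knowledge can be added at inference time simply by appending representations of newly read text.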