Learning Associative Inference Using Fast Weight Memory

Published: 12 Jan 2021, Last Modified: 22 Oct 2023
Venue: ICLR 2021 Poster
Readers: Everyone
Keywords: memory-augmented neural networks, tensor product, fast weights
Abstract: Humans can quickly associate stimuli to solve problems in novel contexts. Our novel neural network model learns state representations of facts that can be composed to perform such associative inference. To this end, we augment the LSTM model with an associative memory, dubbed *Fast Weight Memory* (FWM). Through differentiable operations at every step of a given input sequence, the LSTM *updates and maintains* compositional associations stored in the rapidly changing FWM weights. Our model is trained end-to-end by gradient descent and yields excellent performance on compositional language reasoning problems, meta-reinforcement learning for POMDPs, and small-scale word-level language modelling.
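To illustrate the general mechanism the abstract describes (an LSTM whose slow weights write key-value associations into rapidly changing fast weights at every step), here is a minimal PyTorch sketch. It is an assumption-laden simplification, not the authors' FWM: the class name `FastWeightMemorySketch`, the layer names, and the rank-1 outer-product update are illustrative only, whereas the paper's FWM uses higher-order (tensor-product) associations and its own update and read-out rules (see the linked repository for the actual implementation).

```python
import torch
import torch.nn as nn

class FastWeightMemorySketch(nn.Module):
    """Illustrative LSTM with an outer-product fast-weight memory.

    Hypothetical simplification of the FWM idea: the slow (LSTM) weights
    emit a key, value, and write strength at each step; the fast weights
    are updated differentiably and queried to produce the output.
    """

    def __init__(self, input_size, hidden_size, key_size):
        super().__init__()
        self.lstm = nn.LSTMCell(input_size, hidden_size)
        # Slow weights produce a write key, a value, and a scalar write rate.
        self.to_write = nn.Linear(hidden_size, 2 * key_size + 1)
        self.to_query = nn.Linear(hidden_size, key_size)
        self.out = nn.Linear(hidden_size + key_size, hidden_size)
        self.key_size = key_size

    def forward(self, x_seq):
        # x_seq: (T, B, input_size)
        B = x_seq.size(1)
        h = x_seq.new_zeros(B, self.lstm.hidden_size)
        c = torch.zeros_like(h)
        F = x_seq.new_zeros(B, self.key_size, self.key_size)  # fast weights
        outputs = []
        for x in x_seq:
            h, c = self.lstm(x, (h, c))
            k, v, beta = self.to_write(h).split(
                [self.key_size, self.key_size, 1], dim=-1)
            # Differentiable write: rank-1 (outer-product) fast-weight update.
            F = F + torch.sigmoid(beta).unsqueeze(-1) * \
                torch.einsum('bi,bj->bij', v, k)
            # Associative read with a query generated by the slow weights.
            q = self.to_query(h)
            read = torch.einsum('bij,bj->bi', F, q)
            outputs.append(self.out(torch.cat([h, read], dim=-1)))
        return torch.stack(outputs)
```

Because both the write and the read are differentiable, the whole model can be trained end-to-end by gradient descent, as stated in the abstract.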
One-sentence Summary: We present a Recurrent Neural Network model which is augmented with an associative memory to generalise more systematically.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Supplementary Material: zip
Code: [ischlag/Fast-Weight-Memory-public](https://github.com/ischlag/Fast-Weight-Memory-public)
Data: [catbAbI LM-mode](https://paperswithcode.com/dataset/catbabi-lm-mode), [catbAbI QA-mode](https://paperswithcode.com/dataset/catbabi), [Penn Treebank](https://paperswithcode.com/dataset/penn-treebank), [WikiText-2](https://paperswithcode.com/dataset/wikitext-2), [bAbI](https://paperswithcode.com/dataset/babi-1)
Community Implementations: [6 code implementations on CatalyzeX](https://www.catalyzex.com/paper/arxiv:2011.07831/code)