Adaptive Memory Networks

02 Feb 2018 (modified: 10 Feb 2022) · ICLR 2018 Workshop Submission · Readers: Everyone
Keywords: Memory Networks, Dynamic Networks, Faster Inference, Reasoning, QA
TL;DR: Dynamic memory networks with faster inference
Abstract: We present Adaptive Memory Networks (AMN), which process input-question pairs to dynamically construct a network architecture optimized for lower inference times. AMN creates multiple memory banks that store entities from the input story and are used to answer the questions. Conditioned on the question, the model learns which entities in the input text are important and concentrates them within a single memory bank. At inference, only one or a few banks are used, creating a tradeoff between accuracy and performance. AMN is enabled by, first, a novel bank controller that makes discrete decisions with high accuracy and, second, the capabilities of dynamic frameworks (such as PyTorch) that allow dynamic network sizing and efficient variable mini-batching. Our results demonstrate that the model learns to construct a varying number of memory banks based on task complexity and achieves faster inference times on standard and modified bAbI tasks. We solve all bAbI tasks, using an average of 48% fewer entities on tasks containing excess, unrelated information.
Data: [bAbI](https://paperswithcode.com/dataset/babi-1)
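
The abstract names the key mechanisms (a bank controller that makes discrete routing decisions per entity, and dynamic creation of memory banks in PyTorch) without detailing them. The sketch below is not the authors' implementation; it is a minimal illustration under assumptions: the module names (`BankController`, `AdaptiveMemorySketch`) are hypothetical, and a straight-through Gumbel-softmax stands in for whichever discrete-decision mechanism the paper actually uses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BankController(nn.Module):
    """Hypothetical controller: scores each entity against the question and
    makes a (relaxed) discrete decision to keep it here or push it deeper."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(2 * dim, 2)  # logits for {stay, move to next bank}

    def forward(self, entities, question, tau=1.0):
        # entities: (num_entities, dim), question: (dim,)
        q = question.unsqueeze(0).expand_as(entities)
        logits = self.score(torch.cat([entities, q], dim=-1))
        # Straight-through Gumbel-softmax: discrete in the forward pass,
        # differentiable in the backward pass.
        decision = F.gumbel_softmax(logits, tau=tau, hard=True)
        return decision[:, 1]  # 1.0 where the entity moves to a deeper bank


class AdaptiveMemorySketch(nn.Module):
    """Illustrative only: keeps creating banks until no entity is routed onward."""
    def __init__(self, dim, max_banks=4):
        super().__init__()
        self.controller = BankController(dim)
        self.max_banks = max_banks

    def forward(self, entities, question):
        banks = []
        current = entities
        for _ in range(self.max_banks):
            move = self.controller(current, question)            # (n,)
            banks.append(current * (1.0 - move).unsqueeze(-1))   # entities kept in this bank
            keep_mask = move.bool()
            if keep_mask.sum() == 0:                             # nothing routed onward
                break
            current = current[keep_mask]                         # dynamic: next bank is smaller
        return banks


# Usage: 10 entity embeddings of size 32 and one question embedding.
model = AdaptiveMemorySketch(dim=32)
banks = model(torch.randn(10, 32), torch.randn(32))
print([b.shape for b in banks])
```

The point of the sketch is the inference-time tradeoff described in the abstract: because the number of banks is decided per input-question pair, an easy question can stop after the first bank while a harder one grows a deeper chain.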