Compositional Attention Networks for Machine Reasoning

15 Feb 2018 (modified: 07 Apr 2024) · ICLR 2018 Conference Blind Submission · Readers: Everyone
Abstract: We present Compositional Attention Networks, a novel fully differentiable neural network architecture designed to facilitate explicit and expressive reasoning. While many types of neural networks are effective at learning and generalizing from massive quantities of data, this model moves away from monolithic black-box architectures towards a design that provides a strong prior for iterative reasoning, enabling it to support explainable and structured learning, as well as generalization from a modest amount of data. The model builds on the great success of existing recurrent cells such as LSTMs: it sequences a single recurrent Memory, Attention, and Control (MAC) cell, and by careful design imposes structural constraints on the operation of each cell and on the interactions between cells, incorporating explicit control and soft attention mechanisms into their interfaces. We demonstrate the model's strength and robustness on the challenging CLEVR dataset for visual reasoning, achieving a new state-of-the-art 98.9% accuracy and halving the error rate of the previous best model. More importantly, we show that the new model is more computationally and data-efficient, requiring an order of magnitude less time and/or data to achieve good results.
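To make the abstract's description concrete, here is a minimal sketch of one step of a MAC-style recurrent cell: an explicit control state updated by soft attention over question words, a read step that attends over image regions, and a write step that updates memory. The unit structure, layer shapes, and names below are illustrative assumptions for exposition, not the authors' implementation (see the linked stanfordnlp/mac-network repository for the reference code).

```python
# Illustrative sketch of a single MAC-style reasoning step (PyTorch).
# Assumes one shared hidden dimension `dim`; simplified control/read/write units.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MACCell(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # Control unit: soft attention over question words selects the current operation.
        self.ctrl_query = nn.Linear(2 * dim, dim)
        self.ctrl_attn = nn.Linear(dim, 1)
        # Read unit: soft attention over image regions, conditioned on control and memory.
        self.read_mem = nn.Linear(dim, dim)
        self.read_kb = nn.Linear(dim, dim)
        self.read_attn = nn.Linear(dim, 1)
        # Write unit: integrates the retrieved information into the memory state.
        self.write = nn.Linear(2 * dim, dim)

    def forward(self, control, memory, question, words, knowledge):
        # control, memory, question: (B, d)
        # words: (B, L, d) contextual word states; knowledge: (B, R, d) image regions
        cq = self.ctrl_query(torch.cat([control, question], dim=-1))        # (B, d)
        logits = self.ctrl_attn(cq.unsqueeze(1) * words).squeeze(-1)        # (B, L)
        cv = F.softmax(logits, dim=-1)                                      # word attention
        control = torch.einsum('bl,bld->bd', cv, words)                     # new control

        inter = self.read_mem(memory).unsqueeze(1) * self.read_kb(knowledge)   # (B, R, d)
        rlogits = self.read_attn(inter * control.unsqueeze(1)).squeeze(-1)     # (B, R)
        rv = F.softmax(rlogits, dim=-1)                                     # region attention
        retrieved = torch.einsum('br,brd->bd', rv, knowledge)               # read result

        memory = self.write(torch.cat([retrieved, memory], dim=-1))        # new memory
        return control, memory
```

In the full network, a cell like this would be applied for a fixed number of reasoning steps with shared parameters, and the final memory state would feed an answer classifier.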
TL;DR: We present a novel architecture, based on dynamic memory, attention and composition for the task of machine reasoning.
Keywords: Deep Learning, Reasoning, Memory, Attention, VQA, CLEVR, Recurrent Neural Networks, Module Networks, Compositionality
Code: [stanfordnlp/mac-network](https://github.com/stanfordnlp/mac-network) · [9 community implementations](https://paperswithcode.com/paper/?openreview=S1Euwz-Rb) (Papers with Code)
Data: [CLEVR](https://paperswithcode.com/dataset/clevr), [CLEVR-Humans](https://paperswithcode.com/dataset/clevr-humans), [Talk2Car](https://paperswithcode.com/dataset/talk2car), [Visual Question Answering](https://paperswithcode.com/dataset/visual-question-answering)
Community Implementations: [9 code implementations](https://www.catalyzex.com/paper/arxiv:1803.03067/code) (CatalyzeX)