Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks

Published: 21 Dec 2018, Last Modified: 14 Oct 2024 · ICLR 2019 Conference Blind Submission
Abstract: Natural language is hierarchically structured: smaller units (e.g., phrases) are nested within larger units (e.g., clauses). When a larger constituent ends, all of the smaller constituents that are nested within it must also be closed. While the standard LSTM architecture allows different neurons to track information at different time scales, it does not have an explicit bias towards modeling a hierarchy of constituents. This paper proposes to add such an inductive bias by ordering the neurons; a vector of master input and forget gates ensures that when a given neuron is updated, all the neurons that follow it in the ordering are also updated. Our novel recurrent architecture, ordered neurons LSTM (ON-LSTM), achieves good performance on four different tasks: language modeling, unsupervised parsing, targeted syntactic evaluation, and logical inference.
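
To make the master-gate idea concrete, below is a minimal PyTorch sketch of one ON-LSTM-style cell update, written as a hedged reading of the abstract rather than the authors' exact implementation (see the linked repository for that). The `cumax` helper, the `ONLSTMCellSketch` class, the single fused projection, and all variable names are illustrative assumptions. The key point is that cumulative-softmax gates are monotone along the neuron ordering, so when a given neuron is erased or overwritten, all neurons that follow it in the ordering are treated the same way.

```python
# A minimal sketch of the master-gate idea described in the abstract, written in
# PyTorch. Names (cumax, ONLSTMCellSketch) and the single fused projection are
# illustrative assumptions, not the authors' exact implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def cumax(x, dim=-1):
    """Cumulative softmax: outputs rise monotonically from ~0 to ~1 along `dim`."""
    return torch.cumsum(F.softmax(x, dim=dim), dim=dim)


class ONLSTMCellSketch(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.hidden_size = hidden_size
        # Standard LSTM gates (i, f, o, g) plus the two master gates, from one projection.
        self.linear = nn.Linear(input_size + hidden_size, 6 * hidden_size)

    def forward(self, x, state):
        h_prev, c_prev = state
        i, f, o, g, mf, mi = self.linear(torch.cat([x, h_prev], dim=-1)).chunk(6, dim=-1)

        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)

        # Master gates are monotone along the neuron ordering: once a neuron is
        # erased (or written), so are all the neurons on one side of it.
        master_f = cumax(mf)           # rises toward 1: high-order neurons keep history
        master_i = 1.0 - cumax(mi)     # falls toward 0: low-order neurons take new input
        overlap = master_f * master_i  # region where the ordinary LSTM gates still act

        f_hat = f * overlap + (master_f - overlap)
        i_hat = i * overlap + (master_i - overlap)

        c = f_hat * c_prev + i_hat * g
        h = o * torch.tanh(c)
        return h, c


# Example: one step on random data.
cell = ONLSTMCellSketch(input_size=10, hidden_size=16)
h0 = torch.zeros(2, 16)
c0 = torch.zeros(2, 16)
h1, c1 = cell(torch.randn(2, 10), (h0, c0))
```

A cell like this can stand in for `nn.LSTMCell` inside an ordinary recurrence loop; the released code additionally appears to group hidden units into chunks for the master gates, which reduces the overhead of the extra gating.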
Keywords: Deep Learning, Natural Language Processing, Recurrent Neural Networks, Language Modeling
TL;DR: We introduce a new inductive bias that integrates tree structures in recurrent neural networks.
Code: [yikangshen/Ordered-Neurons](https://github.com/yikangshen/Ordered-Neurons) + [6 community implementations](https://paperswithcode.com/paper/?openreview=B1l6qiR5F7)
Data: [Penn Treebank](https://paperswithcode.com/dataset/penn-treebank)
Community Implementations: [8 code implementations](https://www.catalyzex.com/paper/ordered-neurons-integrating-tree-structures/code)