Recurrently Controlling a Recurrent Network with Recurrent Networks Controlled by More Recurrent Networks

28 Sept 2020 (modified: 05 May 2023) · ICLR 2021 Conference Withdrawn Submission · Readers: Everyone
Keywords: Deep Learning, Recurrent Neural Networks
Abstract: This paper explores the intriguing idea of recursively parameterizing recurrent networks. Put simply, this refers to recurrently controlling a recurrent network with recurrent networks that are themselves controlled by recurrent networks. The proposed architecture recursively parameterizes its gating functions, whereby the gating mechanisms of the X-RNN are controlled by instances of itself, called repeatedly in a recursive fashion. We postulate that this inductive bias provides modeling benefits for learning from inherently hierarchically structured sequence data. To this end, we conduct extensive experiments on recursive logic tasks (sorting, tree traversal, logical inference), sequential pixel-by-pixel classification, semantic parsing, code generation, machine translation, and polyphonic music modeling, demonstrating the broad utility of the proposed approach, which achieves promising and competitive results on all tasks.
One-sentence Summary: Having fun with RNNs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Reviewed Version (pdf): https://openreview.net/references/pdf?id=_5BiI1gl4V
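
To make the idea in the abstract concrete, here is a minimal sketch of recursively parameterized gating: an RNN cell whose forget and output gates are computed by child instances of the same cell, recursing down to a fixed depth with ordinary linear gates at the base case. This is an illustration of the stated idea, not the authors' implementation; the class name, depth-limited base case, state layout, and update equations are all assumptions.

```python
# Sketch: gates of an RNN cell are themselves smaller recurrent cells,
# recursing to a fixed depth. All names here are illustrative assumptions.
import torch
import torch.nn as nn


class RecursiveGatedCell(nn.Module):
    """An RNN cell whose forget and output gates are computed by child
    instances of the same cell, recursing down to depth 0."""

    def __init__(self, input_size, hidden_size, depth):
        super().__init__()
        self.depth = depth
        self.hidden_size = hidden_size
        self.candidate = nn.Linear(input_size + hidden_size, hidden_size)
        if depth > 0:
            # Gates are themselves recurrent cells, one level shallower.
            self.f_child = RecursiveGatedCell(input_size + hidden_size,
                                              hidden_size, depth - 1)
            self.o_child = RecursiveGatedCell(input_size + hidden_size,
                                              hidden_size, depth - 1)
        else:
            # Base case: plain linear gates, as in a standard gated RNN.
            self.f_lin = nn.Linear(input_size + hidden_size, hidden_size)
            self.o_lin = nn.Linear(input_size + hidden_size, hidden_size)

    def init_state(self, batch):
        h = torch.zeros(batch, self.hidden_size)
        if self.depth > 0:
            return (h, self.f_child.init_state(batch),
                    self.o_child.init_state(batch))
        return (h,)

    def forward(self, x, state):
        h = state[0]
        xh = torch.cat([x, h], dim=-1)
        cand = torch.tanh(self.candidate(xh))
        if self.depth > 0:
            # Each gate is produced by a child cell that carries its own
            # recurrent state across time steps.
            f_raw, f_state = self.f_child(xh, state[1])
            o_raw, o_state = self.o_child(xh, state[2])
            f, o = torch.sigmoid(f_raw), torch.sigmoid(o_raw)
            new_h = f * h + (1.0 - f) * cand
            return o * new_h, (new_h, f_state, o_state)
        f = torch.sigmoid(self.f_lin(xh))
        o = torch.sigmoid(self.o_lin(xh))
        new_h = f * h + (1.0 - f) * cand
        return o * new_h, (new_h,)


# Usage: unroll the cell over a sequence, threading the nested state.
cell = RecursiveGatedCell(input_size=8, hidden_size=16, depth=2)
xs = torch.randn(4, 10, 8)        # (batch, time, features)
state = cell.init_state(batch=4)
for t in range(xs.size(1)):
    out, state = cell(xs[:, t], state)
print(out.shape)                  # torch.Size([4, 16])
```

Note that in this naive sketch the number of child cells grows as 2^depth, so any practical variant would presumably share parameters across levels or keep the recursion depth small.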