State-Regularized Recurrent Networks

27 Sept 2018 (modified: 05 May 2023) · ICLR 2019 Conference Blind Submission · Readers: Everyone
Abstract: Recurrent networks are a widely used class of neural architectures. They have, however, two shortcomings. First, it is difficult to understand what exactly they learn. Second, they tend to work poorly on sequences requiring long-term memorization, despite having this capacity in principle. We aim to address both shortcomings with a class of recurrent networks that use a stochastic state transition mechanism between cell applications. This mechanism, which we term state-regularization, makes RNNs transition between a finite set of learnable states. We show that state-regularization (a) simplifies the extraction of finite state automata modeling an RNN's state transition dynamics, and (b) forces RNNs to operate more like automata with external memory and less like finite state machines.
Keywords: recurrent network, finite state machines, state-regularized, interpretability and explainability
TL;DR: We introduce a stochastic state transition mechanism for RNNs that simplifies deterministic finite automata (DFA) extraction, forces RNNs to operate more like automata with external memory, and yields better extrapolation behavior and interpretability.
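For concreteness, here is a minimal sketch of such a stochastic state transition layer in PyTorch. The module name, the dot-product similarity, and all hyperparameters are illustrative assumptions, not the paper's exact formulation: the cell output is mapped onto a finite set of learnable states via a temperature-controlled softmax, so a low temperature approximates hard, automaton-like transitions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StateRegularization(nn.Module):
    """Hypothetical sketch: soft transition onto k learnable centroid states.

    The recurrent cell's output is replaced by a convex combination of a
    finite set of learnable states, weighted by a softmax over similarities.
    """
    def __init__(self, hidden_size, num_states, temperature=1.0):
        super().__init__()
        # Finite set of learnable states (centroids).
        self.centroids = nn.Parameter(torch.randn(num_states, hidden_size))
        self.temperature = temperature

    def forward(self, h):
        # h: (batch, hidden_size), the output of the recurrent cell.
        # Similarity of the hidden state to each centroid.
        scores = h @ self.centroids.t() / self.temperature  # (batch, num_states)
        alpha = F.softmax(scores, dim=-1)
        # Expectation over the finite state set; lowering the temperature
        # pushes this toward a hard (automaton-like) state transition.
        return alpha @ self.centroids

# Usage: wrap a vanilla recurrent cell and regularize between cell applications.
cell = nn.GRUCell(16, 32)
reg = StateRegularization(hidden_size=32, num_states=8, temperature=0.5)
h = torch.zeros(4, 32)
for x in torch.randn(10, 4, 16):   # (seq_len, batch, input_size)
    h = reg(cell(x, h))
```

Because every hidden state is (approximately) one of a small number of centroids, the learned transition dynamics can be read off as a finite state automaton, which is what simplifies DFA extraction.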