Keywords: recurrent networks, reverse engineering, dynamical systems
TL;DR: We analyze recurrent networks trained on sentiment classification, and find that they all exhibit approximate line attractor dynamics when solving this task.
Abstract: Recurrent neural networks (RNNs) are a powerful tool for modeling sequential data. Despite their widespread usage, understanding how RNNs solve complex problems remains elusive. Here, we characterize how popular RNN architectures perform document-level sentiment classification. Despite their theoretical capacity to implement complex, high-dimensional computations, we find that trained networks converge to highly interpretable, low-dimensional representations. We identify a simple mechanism, integration along an approximate line attractor, and find this mechanism present across RNN architectures (including LSTMs, GRUs, and vanilla RNNs). Overall, these results demonstrate that surprisingly universal and human-interpretable computations can arise across a range of recurrent networks.
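The abstract does not include the authors' analysis code, but line-attractor structure of this kind is typically revealed by numerical fixed-point analysis of a trained RNN (in the style of Sussillo & Barak, 2013). The sketch below illustrates the idea under stated assumptions: it is not the paper's implementation, and the GRU cell, zero "background" input, optimizer settings, and speed tolerance are all illustrative choices. Starting from many initial hidden states, we minimize the speed q(h) = ½‖F(h, x₀) − h‖² of the recurrent update F under a fixed input; points where q ≈ 0 are approximate fixed points, and a line attractor appears when those points lie near a one-dimensional manifold.

```python
# Minimal sketch of numerical fixed-point analysis for an RNN cell.
# Assumptions (not from the paper): a GRU cell stands in for the trained
# network, the fixed input x0 is zero, and the tolerance 1e-6 defines
# "approximately fixed". In practice, load the trained cell and seed the
# search with hidden states sampled from held-out trajectories.
import torch

torch.manual_seed(0)
hidden_size, input_size, n_inits = 64, 32, 128

cell = torch.nn.GRUCell(input_size, hidden_size)
for p in cell.parameters():          # freeze the (trained) network weights
    p.requires_grad_(False)

x0 = torch.zeros(n_inits, input_size)  # fixed input held constant during search

# Initial guesses for fixed points; only h is optimized.
h = torch.randn(n_inits, hidden_size, requires_grad=True)
opt = torch.optim.Adam([h], lr=1e-2)

for step in range(2000):
    opt.zero_grad()
    # Speed of the dynamics at h: exactly zero at a fixed point of F.
    q = 0.5 * ((cell(x0, h) - h) ** 2).sum(dim=1)
    q.mean().backward()
    opt.step()

with torch.no_grad():
    q = 0.5 * ((cell(x0, h) - h) ** 2).sum(dim=1)
    fixed_pts = h[q < 1e-6]          # keep only the slowest (approximate) fixed points

# A line attractor shows up as one dominant singular value after centering,
# i.e., the fixed points lie close to a one-dimensional manifold.
centered = fixed_pts - fixed_pts.mean(dim=0)
svals = torch.linalg.svdvals(centered)
print(f"{len(fixed_pts)} approximate fixed points; top singular values:",
      svals[:3].tolist())
```

For a randomly initialized cell this search may find few slow points; the diagnostic is meaningful once the cell is a network actually trained on the sentiment task, where the recovered fixed points trace out the approximate line attractor along which evidence is integrated.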