Intelligible Language Modeling with Input Switched Affine Networks

29 Nov 2024 (modified: 21 Jul 2022). Submitted to ICLR 2017. Readers: Everyone
Abstract: The computational mechanisms by which nonlinear recurrent neural networks (RNNs) achieve their goals remain an open question. There exist many problem domains where intelligibility of the network model is crucial for deployment. Here we introduce a recurrent architecture composed of input-switched affine transformations, in other words, an RNN without any nonlinearity and with one set of weights per input. We show that this architecture achieves near-identical performance to traditional architectures on language modeling of Wikipedia text, for the same number of model parameters. It can obtain this performance with the potential for computational speedup compared to existing methods, by precomputing the composed affine transformations corresponding to longer input sequences. As our architecture is affine, we are able to understand the mechanisms by which it functions using linear methods. For example, we show how the network linearly combines contributions from the past to make predictions at the current time step. We show how representations for words can be combined in order to understand how context is transferred across word boundaries. Finally, we demonstrate how the system can be executed and analyzed in arbitrary bases to aid understanding.
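To make the architecture described in the abstract concrete, here is a minimal sketch of an input-switched affine update and of precomputing the composed affine map for a longer input sequence. It assumes a character vocabulary of size V, a hidden size H, and randomly initialized parameters; the names (W, b, W_out, step, compose) are illustrative and not taken from the paper's code.

```python
# Minimal sketch of an Input Switched Affine Network (ISAN), assuming a
# character vocabulary of size V and hidden size H; parameter names are
# illustrative placeholders, not the paper's actual implementation.
import numpy as np

V, H = 256, 64                              # vocabulary size, hidden dimension
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(V, H, H))   # one transition matrix per input symbol
b = rng.normal(scale=0.1, size=(V, H))      # one bias vector per input symbol
W_out = rng.normal(scale=0.1, size=(H, V))  # linear readout to next-character logits

def step(h, x):
    """Affine update: no nonlinearity, weights selected by the input symbol x."""
    return W[x] @ h + b[x]

def compose(A1, c1, A2, c2):
    """Compose two affine maps: applying (A1, c1) and then (A2, c2)
    equals the single affine map (A2 @ A1, A2 @ c1 + c2)."""
    return A2 @ A1, A2 @ c1 + c2

# Running the network step by step over a sequence...
seq = [104, 105]                            # e.g. the bytes of "hi"
h = np.zeros(H)
for x in seq:
    h = step(h, x)
logits = h @ W_out                          # next-character prediction

# ...gives the same state as applying one precomputed affine map for the whole
# sequence, which is the source of the potential speedup for frequent n-grams.
A, c = np.eye(H), np.zeros(H)
for x in seq:
    A, c = compose(A, c, W[x], b[x])
assert np.allclose(A @ np.zeros(H) + c, h)
```

Because every update is affine, the final state unrolls into a sum of terms, one per past input, which is what allows the linear decomposition of each input's contribution to the current prediction mentioned in the abstract.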
TL;DR: Input Switched Affine Networks combine intelligibility with performance for character-level language modeling.
Conflicts: cs.ox.ac.uk, google.com, stanford.edu, cs.uni.wroc.pl
Keywords: Natural language processing, Deep learning, Supervised Learning
