Learning Useful Representations of Recurrent Neural Network Weight Matrices

Published: 29 Nov 2023, Last Modified: 29 Nov 2023, NeurReps 2023 Poster
Keywords: Recurrent Neural Networks, Representation Learning
TL;DR: We learn useful representations of recurrent neural networks based on their weight matrices
Abstract: Recurrent Neural Networks (RNNs) are general-purpose parallel-sequential computers. The program of an RNN is its weight matrix. Directly analyzing this weight matrix, however, tends to be challenging. Is it possible to learn useful representations of RNN weights that facilitate downstream tasks? While the "Mechanistic Approach" directly 'looks inside' the RNN to predict its behavior, the "Functionalist Approach" analyzes its overall functionality---specifically, its input-output mapping. Our two novel Functionalist Approaches extract information from RNN weights by 'interrogating' the RNN through probing inputs. Our novel theoretical framework for the Functionalist Approach establishes conditions under which it can generate rich representations for determining the behavior of RNNs. We compare RNN weight representations generated by Mechanistic and Functionalist approaches by evaluating them on two downstream tasks. Our results show the superiority of Functionalist methods.
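To make the "Functionalist" idea concrete, below is a minimal sketch (not the authors' implementation) of probing-based representation extraction: instead of reading the weight matrices directly, a fixed set of probing input sequences is fed through the RNN, and the concatenated outputs serve as a representation of its weights. The vanilla tanh-RNN setup and all names (`make_probes`, `probe_embedding`, etc.) are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of a functionalist (probing-based) RNN weight representation.
# Assumption: a simple vanilla tanh RNN with input, recurrent, and readout weights.
import numpy as np

def rnn_forward(weights, inputs):
    """Run a vanilla tanh RNN over `inputs`; return the readout at every step.

    weights: dict with W_in (hidden x input), W_h (hidden x hidden), W_out (output x hidden).
    inputs:  array of shape (T, input_dim).
    """
    W_in, W_h, W_out = weights["W_in"], weights["W_h"], weights["W_out"]
    h = np.zeros(W_h.shape[0])
    outputs = []
    for x in inputs:
        h = np.tanh(W_in @ x + W_h @ h)   # recurrent state update
        outputs.append(W_out @ h)         # readout at this time step
    return np.stack(outputs)

def make_probes(num_probes, seq_len, input_dim, seed=0):
    """Sample a fixed set of probing input sequences, shared across all RNNs."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((num_probes, seq_len, input_dim))

def probe_embedding(weights, probes):
    """Functionalist representation: the RNN's concatenated responses to the probes."""
    responses = [rnn_forward(weights, p).ravel() for p in probes]
    return np.concatenate(responses)

# Usage: embed two randomly initialized RNNs and compare their probe responses.
rng = np.random.default_rng(1)
def random_rnn(hidden=16, input_dim=4, output_dim=2):
    return {"W_in": 0.3 * rng.standard_normal((hidden, input_dim)),
            "W_h": 0.3 * rng.standard_normal((hidden, hidden)),
            "W_out": 0.3 * rng.standard_normal((output_dim, hidden))}

probes = make_probes(num_probes=8, seq_len=10, input_dim=4)
emb_a = probe_embedding(random_rnn(), probes)
emb_b = probe_embedding(random_rnn(), probes)
print(emb_a.shape, np.linalg.norm(emb_a - emb_b))
```

Because the probing sequences are fixed, two RNNs can be compared purely through their input-output behavior, without inspecting their weight matrices directly.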
Submission Track: Extended Abstract
Submission Number: 76