Recurrent neural networks learn robust representations by dynamically balancing compression and expansion

Published: 02 Oct 2019 (Last Modified: 05 May 2023) · Real Neurons & Hidden Units @ NeurIPS 2019 Poster
TL;DR: Recurrent neural networks learn to expand or compress the dimensionality of their internal representations to match task demands, depending on the dynamics of the initial network.
Keywords: Recurrent Neural Network, Temporal Learning, Chaotic Dynamics, Dimensionality, Working Memory
Abstract: Recordings of neural circuits in the brain reveal extraordinary dynamical richness and high variability. At the same time, dimensionality reduction techniques generally uncover low-dimensional structures underlying these dynamics. What determines the dimensionality of activity in neural circuits? What is the functional role of dimensionality in behavior and task learning? In this work we address these questions using recurrent neural network (RNN) models. We find that, depending on the dynamics of the initial network, RNNs learn to increase and reduce dimensionality in a way that matches task demands. These findings shed light on fundamental dynamical mechanisms by which neural networks solve tasks with robust representations that generalize to new cases.
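As a rough illustration of the kind of dimensionality measurement involved (not the authors' code), the sketch below simulates a randomly initialized vanilla RNN and estimates the effective dimensionality of its hidden-state activity with the participation ratio. The network size, simulation length, and gain parameter g (which controls how rich or chaotic the initial dynamics are) are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (assumed setup, not the paper's code): simulate a randomly
# initialized vanilla RNN and estimate the dimensionality of its activity via
# the participation ratio PR = (sum_i lambda_i)^2 / sum_i lambda_i^2, where
# lambda_i are eigenvalues of the covariance of the hidden states.

rng = np.random.default_rng(0)

N = 200   # number of recurrent units (assumed)
T = 1000  # number of time steps (assumed)
g = 1.5   # gain of the random recurrent weights; larger g gives richer,
          # higher-dimensional dynamics in the classic random-network regime

W = g * rng.standard_normal((N, N)) / np.sqrt(N)  # random recurrent weights

h = 0.1 * rng.standard_normal(N)
states = np.empty((T, N))
for t in range(T):
    h = np.tanh(W @ h)  # autonomous update, no external input for simplicity
    states[t] = h

# Participation ratio of the hidden-state covariance spectrum.
X = states - states.mean(axis=0)
cov = X.T @ X / (T - 1)
eigvals = np.linalg.eigvalsh(cov)
pr = eigvals.sum() ** 2 / (eigvals ** 2).sum()

print(f"participation ratio (effective dimensionality): {pr:.1f} of {N} units")
```

Sweeping g above and below 1 in this sketch shows how the initial dynamics set the baseline dimensionality that training would then adjust toward task demands.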