Translation-equivariant Representation in Recurrent Networks with a Continuous Manifold of Attractors

Published: 31 Oct 2022, Last Modified: 26 Dec 2022, NeurIPS 2022 Accept
Keywords: Neural coding, Equivariant representation, Continuous attractor neural network, Lie group, Drosophila's heading system
TL;DR: A biologically plausible recurrent neural circuit model that implements equivariant stimulus representation and Lie group operator representation.
Abstract: Equivariant representation is necessary for the brain and artificial perceptual systems to faithfully represent a stimulus under (Lie) group transformations. However, it remains unknown how recurrent neural circuits in the brain represent the stimulus equivariantly, or how abstract group operators are represented neurally. The present study uses the one-dimensional (1D) translation group as an example to explore a general recurrent neural circuit mechanism for equivariant stimulus representation. We found that a continuous attractor network (CAN), a canonical neural circuit model, self-consistently generates a continuous family of stationary population responses (attractors) that represents the stimulus equivariantly. Inspired by the Drosophila compass circuit, we found that 1D translation operators can be represented by speed neurons added alongside the CAN, where the speed neurons' responses represent the moving speed (the 1D translation group parameter) and their feedback connections to the CAN represent the translation generator (the Lie algebra). We demonstrated that the network responses are consistent with experimental data. Our model demonstrates for the first time how recurrent neural circuitry in the brain can achieve equivariant stimulus representation.
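The mechanism sketched in the abstract can be illustrated with a minimal toy simulation. The following is not the paper's implementation; it is a sketch under assumed parameter values, in the style of standard divisive-normalization CANN models: a symmetric Gaussian ring kernel sustains a continuous family of bump attractors (the equivariant stimulus code), while an antisymmetric derivative kernel, gated by a scalar speed signal v, plays the role of the translation generator fed back by the speed neurons. The helper names (ring_dist, simulate) and all numerical settings are illustrative assumptions.

import numpy as np

# Minimal sketch of a 1D ring continuous attractor network (CAN) with
# divisive normalization. Parameter values are illustrative assumptions,
# not the paper's settings.

N = 128                                                 # neurons on the ring
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)   # preferred stimuli
a, J, k, tau, dt = 0.5, 1.0, 0.05, 1.0, 0.05            # assumed parameters
dx = 2 * np.pi / N                                      # grid spacing

def ring_dist(d):
    # Wrap angular differences onto (-pi, pi].
    return (d + np.pi) % (2 * np.pi) - np.pi

D = ring_dist(theta[:, None] - theta[None, :])
# Symmetric, translation-invariant connectivity: sustains the bump attractors.
W_sym = J * np.exp(-D**2 / (2 * a**2)) / (np.sqrt(2 * np.pi) * a)
# Antisymmetric derivative kernel: the translation generator (Lie algebra),
# whose gain is set by the speed neurons' activity.
W_asym = -D / a**2 * W_sym

def simulate(v, steps=400):
    # v is the speed-neuron drive (the 1D translation group parameter).
    u = np.exp(-ring_dist(theta)**2 / (2 * a**2))       # seed a bump at 0
    for _ in range(steps):
        r = u**2 / (1.0 + k * np.sum(u**2) * dx)        # divisive normalization
        u += dt / tau * (-u + (W_sym + v * W_asym) @ r * dx)
    return u

print("static peak:", theta[np.argmax(simulate(v=0.0))])   # bump stays put
print("moving peak:", theta[np.argmax(simulate(v=0.2))])   # bump translates

With v = 0 the bump is a stationary attractor and, by translation symmetry of the connectivity, can rest anywhere on the ring; a constant v > 0 shifts it at a proportional speed, mirroring how speed-neuron feedback moves the heading bump in the fly compass circuit.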