Learning and Shaping Manifold Attractors for Computation in Gated Neural ODEs

26 Sept 2022, 12:09 (modified: 09 Nov 2022, 02:12) · NeurReps 2022 Poster
Keywords: Computational Neuroscience, Continuous Attractor Geometry, Dynamical Systems, Differential Equations, Neural ODEs, Gating, Interpretability
TL;DR: Gated neural ODEs effectively learn interpretable, low-dimensional manifold geometries to solve continuous memory tasks.
Abstract: Understanding how the dynamics in biological and artificial neural networks implement the computations required for a task is a salient open question in machine learning and neuroscience. A particularly fruitful paradigm is computation via dynamical attractors, which is especially relevant for computations requiring complex memory storage of continuous variables. We explore the interplay of attractor geometry and task structure in recurrent neural networks. Furthermore, we are interested in finding low-dimensional effective representations which enhance interpretability. To this end, we introduce gated neural ODEs (gnODEs) and probe their performance on a continuous memory task. The gnODEs combine the expressive power of neural ordinary differential equations (nODEs) with the trainability conferred by gating interactions. We also discover that an emergent property of the gating interaction is an inductive bias for learning (approximate) continuous (manifold) attractor solutions, which are necessary to solve the continuous memory task. Finally, we show how reduced-dimensional gnODEs retain their modeling power while greatly improving interpretability, even allowing explicit visualization of the manifold attractor geometry.
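To make the idea of a gated neural ODE concrete, here is a minimal, hypothetical sketch. It assumes a common gating form from the literature, dh/dt = z(h, x) ⊙ (−h + f(h, x)), where the sigmoidal gate z multiplicatively scales the nODE vector field f; the paper's exact parameterization may differ, and all weight shapes and names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the paper)
N, D = 16, 2  # hidden state dimension, input dimension

# Randomly initialized weights for the vector field f and the gate z
Wf = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))
Uf = rng.normal(scale=1.0 / np.sqrt(D), size=(N, D))
Wz = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))
Uz = rng.normal(scale=1.0 / np.sqrt(D), size=(N, D))

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gnode_step(h, x, dt=0.01):
    """One forward-Euler step of an assumed gated dynamics:
    dh/dt = z(h, x) * (-h + f(h, x))."""
    z = sigmoid(Wz @ h + Uz @ x)   # gate in (0, 1): locally slows or speeds the flow
    f = np.tanh(Wf @ h + Uf @ x)   # candidate vector field (the nODE part)
    return h + dt * z * (-h + f)

# Integrate from the origin under a fixed input
h = np.zeros(N)
x = rng.normal(size=D)
for _ in range(100):
    h = gnode_step(h, x)
print(h.shape)  # (16,)
```

Where the gate saturates near zero, the state barely moves, which is one intuition for why gating can bias learning toward slow manifolds resembling continuous attractors.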