Symmetry-Regularized Learning of Continuous Attractor Dynamics

Published: 23 Sept 2025 · Last Modified: 29 Oct 2025 · NeurReps 2025 Poster · CC BY 4.0
Keywords: Neural dynamics, continuous attractors, symmetry regularization, Lie brackets, variational inference
TL;DR: We establish a principled approach for embedding symmetry priors into neural dynamical system learning, highlighting how exploiting geometric structure can improve both scientific insight and model generalization.
Abstract: Neural population dynamics exhibit rich geometric structure, yet prevailing computational models often overlook this by primarily accounting for variability in the data. We show that incorporating prior knowledge about dynamical symmetries yields efficient and interpretable models. Focusing on ring attractor dynamics (canonical circuits that are approximately equivariant under planar rotations), we introduce a symmetry-regularized variational state space model. Our method augments the standard variational objective with a symmetry penalty, encouraging the learned dynamical system to respect rotational invariance. We demonstrate that this regularization preserves predictive performance while yielding parsimonious models with interpretable latent dynamics. This framework establishes a principled approach for embedding symmetry priors into neural dynamical system learning, highlighting how exploiting geometric structure can improve both scientific insight and model generalization.
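To make the idea of a symmetry penalty concrete, the sketch below shows one plausible form such a regularizer could take for a planar latent space: an equivariance residual that measures how far a learned vector field `f` is from commuting with rotations, i.e. the average of ||f(Rz) - R f(z)||². This is an illustration under our own assumptions, not the paper's actual formulation (which, per the keywords, is stated via Lie brackets); the names `symmetry_penalty` and `ring_field` are hypothetical.

```python
import numpy as np

def rotation(theta):
    """2x2 planar rotation matrix R(theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def symmetry_penalty(f, zs, thetas):
    """Mean squared equivariance residual ||f(R z) - R f(z)||^2,
    averaged over sample latent states zs and rotation angles thetas.
    Zero iff f commutes with every sampled rotation."""
    total = 0.0
    for z in zs:
        for th in thetas:
            R = rotation(th)
            total += np.sum((f(R @ z) - R @ f(z)) ** 2)
    return total / (len(zs) * len(thetas))

# A rotation-equivariant toy vector field: radial flow toward the unit ring.
# Because it depends on z only through its norm (times z itself), it commutes
# with all planar rotations, so the penalty should vanish.
def ring_field(z):
    r = np.linalg.norm(z)
    return (1.0 - r) * z

rng = np.random.default_rng(0)
zs = rng.normal(size=(8, 2))
thetas = np.linspace(0.0, 2 * np.pi, 5, endpoint=False)
print(symmetry_penalty(ring_field, zs, thetas))  # ~0 (floating-point noise)
```

In a training loop, a term like this would simply be added (with a weight) to the variational objective, trading a small amount of data fit for dynamics that respect the rotational symmetry.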
Submission Number: 130