Dimensionality of population-level latent mechanisms encoding spatial representations

Published: 23 Sept 2025, Last Modified: 29 Oct 2025 · NeurReps 2025 Poster · CC BY 4.0
Keywords: Spatial navigation, path-integrating RNNs
Abstract: How does the brain efficiently encode space, and can this be achieved with low-dimensional neural codes? We address this question by developing a theory of spatial encoding for both continuous signals (the $(x,y)$ coordinates of space) and discrete signals (the firing of place cells). We show that discrete codes require high-dimensional latent variables to faithfully tile a spatial domain such as $\mathbb{R}^2$, whereas continuous codes can be realized with low-dimensional dynamical systems. To test this prediction, we train recurrent neural networks (RNNs) to perform path integration. RNNs trained on continuous spatial outputs develop low-dimensional latent codes, while those trained to reproduce discrete, place-cell–like responses yield high-dimensional latent dynamics. Since mammalian place cells form a discrete code that may reduce output noise, our results suggest that basis functions, \textit{i.e.}, population-level coding variables that optimally span space, are central to navigation, with the required spatial resolution setting the dimensionality of the neural code. This framework shifts attention from the tuning properties of individual neurons to the population-level latent representations that arise when solving the spatial encoding problem, thereby extending prior work on path integration and self-supervised navigation models.
Submission Number: 126
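
The comparison described in the abstract can be illustrated with a minimal sketch (not the authors' code): train an RNN to path-integrate random velocity inputs toward either a continuous $(x,y)$ readout or a discrete, place-cell–like readout, then compare the dimensionality of the hidden states via the participation ratio of the hidden-state covariance spectrum. All network sizes, trajectory statistics, place-field widths, and training details below are illustrative assumptions.

```python
# Minimal sketch of the continuous-vs-discrete path-integration comparison.
# Hyperparameters and trajectory statistics are illustrative, not the paper's.
import torch
import torch.nn as nn

torch.manual_seed(0)

T, B, H, N_PLACE = 50, 64, 128, 100   # timesteps, batch, hidden units, place cells

def simulate_trajectories(T, B):
    """Toy random-walk velocities and their integrated (x, y) positions."""
    vel = 0.02 * torch.randn(T, B, 2)
    pos = torch.cumsum(vel, dim=0) + 0.5          # start near the centre of [0,1]^2
    return vel, pos

centers = torch.rand(N_PLACE, 2)                  # hypothetical place-field centres

def place_cell_targets(pos, sigma=0.1):
    """Gaussian place-cell activations for each position (assumed tuning width)."""
    d2 = ((pos.unsqueeze(-2) - centers) ** 2).sum(-1)   # (T, B, N_PLACE)
    return torch.exp(-d2 / (2 * sigma ** 2))

def train(readout_dim, target_fn, epochs=200):
    """Train an RNN to map velocity sequences onto the chosen spatial readout."""
    rnn = nn.RNN(2, H)
    readout = nn.Linear(H, readout_dim)
    opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()),
                           lr=1e-3)
    for _ in range(epochs):
        vel, pos = simulate_trajectories(T, B)
        h, _ = rnn(vel)
        loss = ((readout(h) - target_fn(pos)) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return rnn

def participation_ratio(rnn):
    """PR = (sum of eigenvalues)^2 / sum of squared eigenvalues of the
    hidden-state covariance; a standard proxy for latent dimensionality."""
    with torch.no_grad():
        vel, _ = simulate_trajectories(T, 512)
        h, _ = rnn(vel)
        X = h.reshape(-1, H)
        X = X - X.mean(0)
        lam = torch.linalg.eigvalsh(X.T @ X / X.shape[0])
        return (lam.sum() ** 2 / (lam ** 2).sum()).item()

cont = train(2, lambda p: p)                      # continuous (x, y) readout
disc = train(N_PLACE, place_cell_targets)         # discrete place-cell readout
print("PR continuous:", participation_ratio(cont))
print("PR discrete:  ", participation_ratio(disc))
```

Under the paper's prediction, the continuous readout should yield a lower participation ratio than the place-cell readout; this toy setup only conveys the measurement and is not expected to reproduce the paper's quantitative results.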