Emergent Riemannian geometry over learning discrete computations on continuous manifolds

Published: 23 Sept 2025 · Last Modified: 28 Nov 2025 · NeurReps 2025 Proceedings · CC BY 4.0
Keywords: Riemannian geometry, neural manifolds, representations, learning dynamics, logic gates
TL;DR: We study how the Riemannian geometry of hidden representations in feedforward neural networks evolves as they learn logic-gate tasks with continuous inputs.
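To make the task setup concrete, here is a minimal sketch of a continuous-input logic-gate dataset (the thresholding scheme and the choice of XOR are illustrative assumptions, not necessarily the paper's exact construction): inputs are drawn uniformly from [0, 1]^2, and the label is a logic gate applied to each coordinate thresholded at 0.5.

```python
import jax
import jax.numpy as jnp

def make_xor_dataset(key, n=1024, threshold=0.5):
    """Continuous inputs in [0, 1]^2; label = XOR of the thresholded coordinates.
    Illustrative assumption -- the paper's exact task construction may differ."""
    x = jax.random.uniform(key, (n, 2))          # continuous input manifold
    bits = (x > threshold).astype(jnp.int32)     # discretise each input feature
    y = jnp.logical_xor(bits[:, 0], bits[:, 1])  # logical operation on the bits
    return x, y.astype(jnp.int32)

x, y = make_xor_dataset(jax.random.PRNGKey(0))
```

A network solving such a task must implement both functions named in the abstract below: discretisation of the continuous coordinates and a logical operation on the resulting bits.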
Abstract: Many tasks require mapping continuous input data (e.g. images) to discrete task outputs (e.g. class labels). Yet, how neural networks learn to perform such discrete computations on continuous data manifolds remains poorly understood. Here, we show that signatures of such computations emerge in the representational geometry of neural networks as they learn. By analysing the Riemannian pullback metric across layers of a neural network, we find that network computation can be decomposed into two functions: discretising continuous input features and performing logical operations on these discretised variables. Furthermore, we demonstrate that networks trained in different learning regimes (rich vs. lazy) develop contrasting metric and curvature structures, which affects their ability to generalise to unseen inputs. Overall, our work provides a geometric framework for understanding how neural networks learn to perform discrete computations on continuous manifolds.
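As a concrete illustration of the pullback-metric analysis (a minimal sketch, assuming a small tanh MLP and the Euclidean metric on the hidden layer; the paper's architecture and metric choices are not specified here): if phi maps inputs to a hidden layer, the pullback metric at input x is G(x) = J(x)^T J(x), where J is the Jacobian of phi at x.

```python
import jax
import jax.numpy as jnp

def hidden_rep(params, x):
    """Forward pass to a hidden layer of a small MLP (tanh nonlinearity assumed)."""
    h = x
    for W, b in params:
        h = jnp.tanh(h @ W + b)
    return h

def pullback_metric(params, x):
    """Pullback of the Euclidean metric on the hidden layer: G(x) = J(x)^T J(x)."""
    J = jax.jacobian(lambda z: hidden_rep(params, z))(x)  # shape (hidden_dim, input_dim)
    return J.T @ J                                        # shape (input_dim, input_dim)

# Hypothetical random weights, standing in for a trained network.
k1, k2 = jax.random.split(jax.random.PRNGKey(0))
params = [(0.5 * jax.random.normal(k1, (2, 16)), jnp.zeros(16)),
          (0.5 * jax.random.normal(k2, (16, 16)), jnp.zeros(16))]
G = pullback_metric(params, jnp.array([0.3, 0.7]))  # 2x2 metric tensor at one input
print(jnp.linalg.eigvalsh(G))                       # local stretching of input directions
```

The eigenvalues of G(x) measure how strongly the layer stretches each input direction at x; tracking this tensor across layers and over training is the kind of signature analysis the abstract describes.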
Poster PDF: pdf
Submission Number: 51