On The Geometry and Topology of Representations: the Manifolds of Modular Addition

Published: 26 Jan 2026, Last Modified: 02 Mar 2026
ICLR 2026 Poster
License: CC BY 4.0
Keywords: mechanistic interpretability, representation learning, geometry, topology, manifolds, universality, platonic representation hypothesis
TL;DR: We quantitatively and qualitatively show that the manifolds learned by neural networks trained on modular addition are universally the entire input space manifold or projections of it.
Abstract: The Clock and Pizza interpretations, associated with architectures differing in either uniform or learnable attention, were introduced to argue that different architectural designs can yield distinct circuits for modular addition. In this work, we show that this is not the case, and that both the uniform and learnable attention architectures implement the same algorithm via topologically and geometrically equivalent representations. Our methodology goes beyond the interpretation of individual neurons and weights. Instead, we identify all of the neurons corresponding to each learned representation and then study that collective group of neurons as one entity. This method reveals that each learned representation is a manifold that we can study using tools from topology. Based on this insight, we statistically analyze the learned representations across hundreds of circuits to demonstrate the similarity between learned modular addition circuits that arise naturally from common deep learning paradigms.
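The abstract's core idea, analyzing a group of neurons collectively as one manifold rather than interpreting neurons individually, can be illustrated with a minimal sketch. Everything here is an assumption for illustration (the modulus `p`, the 32-neuron width, and the synthetic "clock"-style embedding are not taken from the paper): we embed the inputs of Z/p isometrically into a higher-dimensional neuron space with small noise, then check via PCA that the collective activations trace a circle, the manifold of the input space.

```python
import numpy as np

p = 59  # illustrative modulus, not the paper's setting
rng = np.random.default_rng(0)

# Synthetic circle representation of Z/p, standing in for learned activations.
angles = 2 * np.pi * np.arange(p) / p
circle = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # shape (p, 2)

# Isometrically embed the circle into a 32-"neuron" space, plus small noise.
Q = np.linalg.qr(rng.normal(size=(32, 2)))[0]  # orthonormal columns
X = circle @ Q.T + 0.01 * rng.normal(size=(p, 32))

# Manifold-level analysis: PCA on the collective activations, not per-neuron.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
proj = Xc @ Vt[:2].T  # coordinates in the top-2 principal plane

# On a circle, distances from the centroid are (nearly) constant,
# and the top-2 singular values dominate the spectrum.
radii = np.linalg.norm(proj, axis=1)
print("radius spread:", round(float(radii.std() / radii.mean()), 4))
print("variance in top-2 PCs:", round(float((S[:2] ** 2).sum() / (S ** 2).sum()), 4))
```

A persistent-homology check (e.g. confirming a single long-lived 1-cycle, the signature of S^1) would be the topological analogue of this geometric test, but the PCA view already shows why treating the neuron group as one entity exposes structure invisible neuron-by-neuron.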
Supplementary Material: zip
Primary Area: interpretability and explainable AI
Submission Number: 4522