Interpreting deep neural networks trained on elementary p groups reveals algorithmic structure

Published: 13 Nov 2025, Last Modified: 25 Nov 2025 · TAG-DS 2025 Flash Talk · CC BY 4.0
Track: Full Paper (8 pages)
Keywords: Group theory, representation theory, homology, manifolds, metric geometry, algorithmic discovery
TL;DR: We present an exposition of a toy problem in which we use techniques from computational algebra to build a full empirical understanding of neurons and neural representations, and to recover the global algorithm composed from those representations.
Abstract: We interpret deep neural networks (DNNs) trained on elementary $p$-group multiplication, examining how our results bear on major deep learning hypotheses. Using tools from computational algebra and geometry, we perform analyses at multiple levels of abstraction, and fully characterize and describe: 1) \textit{the global algorithm} DNNs learn on this task---the multidimensional Chinese remainder theorem; 2) \textit{the neural representations}, which are 2-tori $\mathbb{T}^2$ embedded in $\mathbb{R}^4$ that encode coset structure; 3) \textit{the individual neuron activation patterns}, which activate solely on group coset structure. Furthermore, we find that neurons organize their activation strengths via the Lee metric. Overall, our work is an exposition toward understanding how DNNs learn group multiplications.
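For readers unfamiliar with the objects named in the abstract, the sketch below is an illustrative aid, not the authors' code: it shows the standard embedding of $\mathbb{Z}_p \times \mathbb{Z}_p$ onto a 2-torus $\mathbb{T}^2 \subset \mathbb{R}^4$ (one circle per coordinate) and the Lee metric on $\mathbb{Z}_p$. The names `torus_embedding` and `lee_distance` and the choice `p = 7` are hypothetical.

```python
import numpy as np

# Illustrative sketch only; not the paper's implementation.
p = 7  # a small prime, chosen for illustration

def torus_embedding(a, b, p=p):
    """Map (a, b) in Z_p x Z_p onto a 2-torus embedded in R^4,
    one unit circle per coordinate."""
    t1, t2 = 2 * np.pi * a / p, 2 * np.pi * b / p
    return np.array([np.cos(t1), np.sin(t1), np.cos(t2), np.sin(t2)])

def lee_distance(a, b, p=p):
    """Lee metric on Z_p: circular distance between two residues."""
    d = abs(a - b) % p
    return min(d, p - d)

# Group multiplication (addition in Z_p x Z_p) acts as a rotation of the torus.
x, y = (2, 5), (4, 1)
prod = ((x[0] + y[0]) % p, (x[1] + y[1]) % p)
print(torus_embedding(*prod))        # R^4 embedding of the product element
print(lee_distance(x[0], prod[0]))   # Lee distance moved in the first coordinate
```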
Supplementary Material: zip
Submission Number: 41