On the Complexity of Neural Computation in Superposition

18 Sept 2025 (modified: 12 Feb 2026) · ICLR 2026 Conference Desk Rejected Submission · CC BY 4.0
Keywords: Mechanistic Interpretability, Superposition, Polysemanticity, Theoretical Deep Learning, Computational Complexity, Lower Bounds, Feature Sparsity, Information Theory, Neural Circuits, Neural Network Compression, Boolean Logic.
TL;DR: We provide complexity bounds for computing in superposition: nearly tight lower and upper bounds on achievable superposition, a capacity of O(n^2/log n) features for n neurons, and an exponential resource gap between computing and representing features.
Abstract: Superposition, the ability of neural networks to represent more features than they have neurons, is increasingly seen as key to the efficiency of large models. This paper investigates the theoretical foundations of computing in superposition, establishing complexity bounds for explicit, provably correct algorithms. We present the first lower bounds for a neural network computing in superposition, showing that for a broad class of problems, including permutations and pairwise logical operations, computing $m'$ features in superposition requires at least $\Omega(\sqrt{m' \log m'})$ neurons and $\Omega(m' \log m')$ parameters. This places an explicit limit on how much one can sparsify or distill a model while preserving its expressivity, and complements empirical scaling laws by yielding the first subexponential bound on capacity: a network with $n$ neurons can compute at most $O(n^2 / \log n)$ features. Conversely, we provide a nearly tight constructive upper bound: logical operations such as pairwise AND can be computed using $O(\sqrt{m'} \log m')$ neurons and $O(m' \log^2 m')$ parameters. There is thus an exponential gap between the complexity of computing in superposition (the subject of this work) and that of merely representing features, which can require as few as $O(\log m')$ neurons by the Johnson-Lindenstrauss Lemma. Our work analytically establishes that the number of parameters is a good estimator of the number of features a neural network computes.
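To make the representation-only baseline concrete, here is a minimal sketch (our own illustration, not a construction from the paper) of Johnson-Lindenstrauss-style superposition: $m'$ features are embedded into $n \ll m'$ neurons via random near-orthogonal directions, and a sparse set of active features is read back by projection. The values of m_prime, n, the sparsity of 5, and the 0.5 threshold are arbitrary illustrative choices.

```python
# Sketch: representing m' features in n << m' neurons via a random
# JL-style embedding. Illustrates why *representation* alone is cheap
# (n grows only logarithmically in m'), in contrast to computation.
import numpy as np

rng = np.random.default_rng(0)
m_prime = 10_000   # number of features (illustrative)
n = 512            # number of neurons, n << m_prime (illustrative)

# Each feature gets a random unit-norm direction; distinct columns are
# nearly orthogonal, with inner products concentrating around O(1/sqrt(n)).
E = rng.standard_normal((n, m_prime)) / np.sqrt(n)

# Superpose a sparse set of active features into one n-dim activation.
active = rng.choice(m_prime, size=5, replace=False)
x = E[:, active].sum(axis=1)

# Read each feature out by projecting onto its direction; active features
# score near 1, inactive ones near 0 plus small interference noise.
scores = E.T @ x
recovered = np.flatnonzero(scores > 0.5)
print(sorted(active.tolist()), "->", sorted(recovered.tolist()))
```

The abstract's point is that this read-out baseline is exponentially cheaper than computing new features (e.g., pairwise ANDs) entirely within the compressed space, which the paper shows requires $\Omega(\sqrt{m' \log m'})$ neurons.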
Primary Area: interpretability and explainable AI
Submission Number: 13240