Growing Networks by Folding Manifolds at Mistakes

ICLR 2026 Conference Submission 18929 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Growing Neural Networks, Explainable Deep Learning, Representation Geometry, Manifold Folding, Ante-Hoc Interpretability, Parameter Efficiency
Abstract: Modern deep learning paradigms rely heavily on over-parameterized models, leading to excessive costs and limited interpretability. While growing neural networks (GrowNNs) offer a biologically inspired alternative by incrementally expanding architectures, existing methods lack theoretical grounding and often produce unstable, heuristic-driven growth. This paper proposes a geometric framework that interprets neural network growth as folding the learned representation manifolds to enhance model capacity. We theoretically establish that strategically adding neurons (equivalent to introducing geometric folds) at locations corresponding to systematic prediction mistakes optimally increases expressivity. Our method introduces: (1) a manifold-based growth strategy that identifies "typical mistakes" by clustering mis-predictions and folds the manifold at them; (2) a stable fine-tuning scheme using gradient-aligned initialization and folding-hyperplane regularization to ensure targeted correction of mistakes; (3) ante-hoc instance-level interpretability, where each grown neuron can be justified and explained by a specific mis-predicted data instance representing a model deficiency. Experiments on synthetic manifolds, MNIST, and CIFAR-10 demonstrate controlled capacity expansion, competitive parameter efficiency, and inherent explainability throughout the growth process.
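To make the growth step described in the abstract concrete, the following is a minimal sketch, not the authors' implementation: hidden representations of mis-predicted examples are clustered to find "typical mistakes", and one new neuron (a fold) is added per cluster, with its hyperplane oriented toward the cluster centroid. All names (`grow_layer`, `hidden_mispred`, `n_clusters`) are illustrative assumptions.

```python
# Hypothetical sketch of mistake-driven growth; names and the exact
# initialization rule are assumptions, not the paper's code.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

def grow_layer(layer: nn.Linear, hidden_mispred: torch.Tensor,
               n_clusters: int = 3) -> nn.Linear:
    """Return a copy of `layer` with `n_clusters` extra output neurons,
    each initialized from the centroid of one cluster of mistakes.

    `hidden_mispred`: hidden representations (inputs to `layer`) of the
    mis-predicted training examples, shape (num_mistakes, in_features).
    """
    # Cluster the mistakes to find "typical" ones (cluster centroids).
    centroids = KMeans(n_clusters=n_clusters, n_init=10).fit(
        hidden_mispred.detach().cpu().numpy()).cluster_centers_
    centroids = torch.as_tensor(centroids, dtype=layer.weight.dtype)

    # Widen the layer, copying the existing neurons unchanged.
    grown = nn.Linear(layer.in_features, layer.out_features + n_clusters)
    with torch.no_grad():
        grown.weight[: layer.out_features] = layer.weight
        grown.bias[: layer.out_features] = layer.bias
        # Orient each new folding hyperplane toward its mistake centroid;
        # the bias places the fold at the centroid itself.
        w_new = centroids / centroids.norm(dim=1, keepdim=True)
        grown.weight[layer.out_features:] = w_new
        grown.bias[layer.out_features:] = -(w_new * centroids).sum(dim=1)
    return grown
```

Under this reading, each added neuron is traceable to one cluster of mis-predicted instances, which is what gives the method its ante-hoc, instance-level explanations; the gradient-aligned initialization and hyperplane regularization from the abstract would refine the placement chosen here.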
Supplementary Material: zip
Primary Area: interpretability and explainable AI
Submission Number: 18929