A Recovery Guarantee for Sparse Neural Networks

ICLR 2026 Conference Submission 20790 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: compressed sensing, neural networks, model pruning, sparse weight recovery
TL;DR: We prove the first identifiability and recovery results for sparse ReLU neural networks, and show from-scratch recovery of sparse networks without first training a dense network.
Abstract: We prove the first guarantees of sparse recovery for ReLU neural networks, where the sparse network weights constitute the signal to be recovered. Specifically, we study structural properties of the sparse network weights for two-layer, scalar-output networks under which a simple iterative hard thresholding algorithm recovers these weights exactly, using memory that grows linearly in the number of nonzero weights. We validate this theoretical result with simple experiments on recovery of sparse planted MLPs, MNIST classification, and implicit neural representations. Experimentally, we find performance that is competitive with, and often exceeds, a high-performing but memory-inefficient baseline based on iterative magnitude pruning.
Primary Area: learning theory
Submission Number: 20790
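
Below is a minimal sketch of the kind of iterative hard thresholding (IHT) procedure the abstract describes, applied to fitting a two-layer, scalar-output ReLU network with at most k nonzero weights. It is an illustrative assumption of how such an algorithm could look, not the paper's actual algorithm: the function names, update rule, and hyperparameters are ours, and this version keeps dense weight arrays (the paper's memory-linear-in-k claim would require a sparse representation of the iterates).

```python
# Illustrative sketch only: gradient step on squared loss, then projection
# onto the set of k-sparse weight vectors. Not the paper's algorithm.
import numpy as np

def hard_threshold(w, k):
    """Keep the k largest-magnitude entries of w and zero out the rest."""
    out = np.zeros_like(w)
    if k > 0:
        idx = np.argpartition(np.abs(w), -k)[-k:]
        out[idx] = w[idx]
    return out

def iht_two_layer_relu(X, y, hidden, k, steps=1000, lr=1e-2, seed=0):
    """Fit y ~ a^T ReLU(W x) with at most k nonzero entries across (W, a)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=1.0 / np.sqrt(d), size=(hidden, d))
    a = rng.normal(scale=1.0 / np.sqrt(hidden), size=hidden)
    for _ in range(steps):
        H = np.maximum(X @ W.T, 0.0)                       # hidden activations, (n, hidden)
        r = H @ a - y                                      # residuals, (n,)
        grad_a = H.T @ r / n                               # gradient w.r.t. output weights
        grad_W = ((r[:, None] * (H > 0) * a).T @ X) / n    # gradient w.r.t. first layer
        a -= lr * grad_a
        W -= lr * grad_W
        # Project the concatenated weights back onto the k-sparse set.
        theta = hard_threshold(np.concatenate([W.ravel(), a]), k)
        W = theta[:W.size].reshape(W.shape)
        a = theta[W.size:]
    return W, a
```

The design choice to illustrate is the projection step: after every gradient update, all weights outside the top-k in magnitude are zeroed, so the iterates stay k-sparse throughout training rather than being pruned after a dense network has been trained, matching the "from-scratch recovery" framing in the TL;DR.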