Growing Winning Subnetworks, Not Pruning Them: A Paradigm for Density Discovery in Sparse Neural Networks
Keywords: Lottery ticket hypothesis; Growth-based density discovery; Path weight magnitude product
TL;DR: This paper introduces PWMPR, an iterative growth method that efficiently discovers dense-equivalent sparse neural networks without predefined density.
Abstract: The lottery ticket hypothesis suggests that dense networks contain sparse subnetworks that can be trained in isolation to match full-model performance. Existing approaches, including iterative magnitude pruning (IMP), dynamic sparse training, and pruning at initialization, either incur heavy retraining costs or assume the target density is fixed in advance. We introduce Path Weight Magnitude Product-biased Random growth (PWMPR), a constructive sparse-to-dense training paradigm that grows networks rather than pruning them, while automatically discovering their operating density. Starting from a sparse seed, PWMPR adds edges guided by path-kernel-inspired scores, mitigates bottlenecks via randomization, and stops when a logistic-fit rule detects plateauing accuracy. Experiments on CIFAR, TinyImageNet, and ImageNet show that PWMPR approaches the performance of IMP-derived lottery tickets (though at higher density) at substantially lower cost (~1.5× dense vs. 3–4× for IMP). These results establish growth-based density discovery as a promising paradigm that complements pruning and dynamic sparsity.
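To make the loop in the abstract concrete, below is a minimal, illustrative sketch of one plausible reading of its three ingredients: path-kernel-inspired edge scores, biased-random growth with a uniform component standing in for the bottleneck-mitigating randomization, and a logistic-fit stopping rule. All function names, hyperparameters (`eps`, `tol`), and fit details are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical helpers approximating the growth loop described in the abstract:
# score candidate edges -> grow a few by biased random sampling -> stop on plateau.
import numpy as np
from scipy.optimize import curve_fit

def path_magnitude_scores(weights, layer):
    """Path-kernel-inspired score for every candidate edge in layer `layer`
    of an MLP: (total |path product| reaching each input unit of the layer)
    times (total |path product| from each output unit to the network outputs).
    `weights[l]` has shape (fan_out, fan_in)."""
    up = np.ones(weights[0].shape[1])             # start from all network inputs
    for W in weights[:layer]:
        up = np.abs(W) @ up                       # accumulate upstream |path| sums
    down = np.ones(weights[-1].shape[0])          # start from all network outputs
    for W in reversed(weights[layer + 1:]):
        down = np.abs(W).T @ down                 # accumulate downstream |path| sums
    return np.outer(down, up)                     # score[v, u] for candidate edge u -> v

def sample_new_edges(scores, mask, n_new, eps=0.25, rng=None):
    """Grow `n_new` currently-absent edges, sampled with probability
    proportional to a mix of path score and a uniform component (`eps`,
    an assumed knob standing in for the randomization in the abstract)."""
    rng = rng or np.random.default_rng()
    absent = np.flatnonzero(~mask.ravel())        # indices of edges not yet grown
    s = scores.ravel()[absent]
    p = (1.0 - eps) * s / s.sum() + eps / len(absent)
    chosen = rng.choice(absent, size=min(n_new, len(absent)), replace=False, p=p)
    grown = mask.copy()
    grown.ravel()[chosen] = True
    return grown

def logistic(t, L, k, t0):
    # Three-parameter logistic curve modeling accuracy vs. growth round.
    return L / (1.0 + np.exp(-k * (t - t0)))

def has_plateaued(acc_history, tol=1e-3):
    """Fit a logistic curve to the accuracy history and stop once the
    fitted asymptote leaves less than `tol` headroom over the latest fit."""
    if len(acc_history) < 4:                      # too few points for a 3-param fit
        return False
    t = np.arange(len(acc_history), dtype=float)
    y = np.asarray(acc_history, dtype=float)
    try:
        (L, k, t0), _ = curve_fit(logistic, t, y,
                                  p0=[y.max(), 1.0, t[-1] / 2], maxfev=10000)
    except RuntimeError:                          # fit failed; keep growing
        return False
    return L - logistic(t[-1], L, k, t0) < tol

# Toy check of the stopping rule on a saturating accuracy history:
hist = [0.20, 0.45, 0.62, 0.70, 0.735, 0.748, 0.752, 0.7535]
print(has_plateaued(hist, tol=5e-3))  # expected: True once the curve flattens
```

One design note on the scoring sketch: accumulating the upstream and downstream sums with matrix-vector products over |W| keeps the all-paths sum tractable, whereas enumerating paths explicitly would be exponential in depth.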
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 22148