Sparse Training from Random Initialization: Aligning Lottery Ticket Masks using Weight Symmetry

Published: 11 Feb 2025, Last Modified: 06 Mar 2025, CPAL 2025 (Recent Spotlight Track), CC BY 4.0
Keywords: Lottery Ticket Hypothesis, sparse training, linear mode connectivity, weight symmetry, deep learning, deep neural networks, random initialization, git re-basin, optimization
TL;DR: Lottery ticket masks do not train well from a new random init. We show that permuting the mask to align with the new init's optimization basin yields a mask that trains better from that random init and approaches LTH generalization performance.
Abstract: The Lottery Ticket Hypothesis (LTH) suggests that there exist a sparse winning ticket mask and weights that achieve the same generalization performance as the dense model while using significantly fewer parameters. LTH finds such tickets by iteratively sparsifying and retraining within the pruned solution basin. However, this procedure is computationally expensive, and a winning ticket's sparsity mask does not generalize to other weight initializations. Recent work has suggested that Deep Neural Networks (DNNs) trained from random initialization find solutions within the same basin modulo weight symmetry, and has proposed a method to align trained models within the same basin. We propose permuting the winning ticket mask to align with the new optimization basin when performing sparse training from a different random initialization. We show that sparse training from random initialization with the permuted mask achieves significantly better generalization than sparse training naively with the non-permuted mask. We empirically demonstrate that our proposed method improves the generalization of LTH with the new random initialization on multiple datasets (CIFAR-10/100 and ImageNet) using VGG11 and ResNet-20/50 models of varying widths.
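
A minimal sketch (not the authors' code) of the core idea, shown for a single linear layer: weight matching in the git re-basin style finds the output-neuron permutation that best aligns the trained reference model (which produced the winning ticket) with a model trained from a new random init, and the same permutation is then applied to the ticket's mask before sparse training from that init. The names W_ref, W_new, mask_ref and the use of the Hungarian algorithm for matching are illustrative assumptions, not the paper's implementation.

import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
d_in, d_out = 8, 6

# Trained reference weights and the winning-ticket mask derived from them (toy stand-ins).
W_ref = rng.normal(size=(d_out, d_in))
mask_ref = (np.abs(W_ref) > np.quantile(np.abs(W_ref), 0.8)).astype(np.float32)

# Model trained from a new random init: assumed to lie in the same basin modulo a
# neuron permutation (plus some noise), per the weight-symmetry argument above.
perm_true = rng.permutation(d_out)
W_new = W_ref[perm_true] + 0.05 * rng.normal(size=(d_out, d_in))

# Weight matching: pick the permutation maximizing the similarity between
# new-model neurons and reference neurons (Hungarian algorithm on the similarity matrix).
similarity = W_new @ W_ref.T
_, perm = linear_sum_assignment(-similarity)  # perm[i] = reference neuron matched to new neuron i

# Permute the winning-ticket mask into the new init's basin and check the recovery.
mask_permuted = mask_ref[perm]
print("recovered the hidden permutation:", bool(np.array_equal(perm, perm_true)))
# mask_permuted would then be used for sparse training from the new random initialization.

In a full network the per-layer permutations are coupled (a layer's output permutation also permutes the next layer's input columns), so the layers are matched jointly rather than one at a time as in this toy example.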
Submission Number: 39