SparseSwaps: Tractable LLM Pruning Mask Refinement at Scale

ICLR 2026 Conference Submission 18945 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: pruning, llm, sparsity, wanda, sparsegpt, efficiency
Abstract: The resource requirements of neural networks can be significantly reduced through pruning, i.e., the removal of seemingly less important parameters. However, with the rise of LLMs, full retraining to recover pruning-induced performance degradation is often prohibitive, and classical approaches such as global magnitude pruning are suboptimal on Transformer architectures. State-of-the-art methods hence solve the layer-wise mask selection problem: finding a pruning mask that minimizes the per-layer pruning error on a small set of calibration data. Solving this problem to optimality using Integer Programming (IP) solvers is computationally infeasible not only because of i) the size of the search space, but also because ii) caching all intermediate values of the matrix multiplication needed to specify the optimization objective is already prohibitive. Existing approaches therefore rely on approximations or heuristics. In this work, we demonstrate that the mask selection problem can be made drastically more tractable at LLM scale. To that end, we leverage three key insights: a) enforcing equal sparsity levels per row decouples the rows without harming performance, b) the dimensionality of the problem can be reduced by leveraging the unitary invariance of the Frobenius norm objective and transforming the calibration data accordingly, and c) computing optimal 1-swaps (exchanging one kept and one pruned weight) can be realized efficiently. These insights enable us to implement a tractable and simple 1-swap algorithm that warm-starts from any pruning mask, runs efficiently on GPUs at LLM scale, and is essentially hyperparameter-free. We demonstrate that our approach reduces per-layer pruning error by up to 60% over Wanda (Sun et al., 2023) and consistently improves perplexity and zero-shot accuracy across state-of-the-art GPT architectures.
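To make the three insights concrete, the following minimal NumPy sketch illustrates one plausible reading of them for a single weight row. It is not the authors' implementation; the names `compress_calibration`, `row_error`, and `best_one_swap`, the random data, and the greedy loop are all illustrative assumptions. Insight (a) is reflected in refining one row independently, insight (b) in replacing the tall calibration matrix by its square QR factor (which leaves the Frobenius objective unchanged), and insight (c) in scoring all 1-swaps in closed form from the Gram matrix.

```python
# Illustrative sketch only -- hypothetical names, not the paper's code.
# One row w of a layer's weight matrix is refined independently
# (insight a: equal per-row sparsity decouples the rows).
import numpy as np

def compress_calibration(X):
    """Insight (b): ||X v|| depends on the (n, d) calibration matrix X only
    through X^T X, so X can be replaced by its square (d, d) R factor."""
    _, R = np.linalg.qr(X, mode="reduced")
    return R

def row_error(R, w, mask):
    """Per-row pruning error ||X w - X (mask * w)||^2, computed via R."""
    return float(np.sum((R @ (w * (1 - mask))) ** 2))

def best_one_swap(G, w, mask):
    """Insight (c): score every 1-swap (prune a kept weight j, restore a
    pruned weight k) in closed form from the Gram matrix G = X^T X = R^T R."""
    d = w * (1 - mask)                    # currently pruned part of the row
    g = G @ d                             # shared term for all swap deltas
    kept = np.flatnonzero(mask)
    pruned = np.flatnonzero(1 - mask)
    wj, wk = w[kept], w[pruned]
    # delta[j, k] = exact change in error if j is pruned and k is restored
    delta = (
        (2 * wj * g[kept])[:, None]
        - (2 * wk * g[pruned])[None, :]
        + (wj ** 2 * np.diag(G)[kept])[:, None]
        + (wk ** 2 * np.diag(G)[pruned])[None, :]
        - 2 * np.outer(wj, wk) * G[np.ix_(kept, pruned)]
    )
    j, k = np.unravel_index(np.argmin(delta), delta.shape)
    return delta[j, k], kept[j], pruned[k]

# Warm-start from any mask (here: 50% magnitude pruning) and greedily apply
# improving swaps; each swap preserves the per-row sparsity level.
rng = np.random.default_rng(0)
X, w = rng.standard_normal((256, 64)), rng.standard_normal(64)
mask = (np.abs(w) > np.median(np.abs(w))).astype(float)
R = compress_calibration(X)
G = R.T @ R
print("error before:", row_error(R, w, mask))
while True:
    dE, j, k = best_one_swap(G, w, mask)
    if dE >= -1e-12:                      # no improving swap left
        break
    mask[j], mask[k] = 0.0, 1.0
print("error after: ", row_error(R, w, mask))
```

Note that the closed-form `delta` only requires the d-by-d Gram matrix rather than the full calibration activations, which is one way the prohibitive caching problem (point ii above) could be sidestepped; how the paper batches this across rows and layers on GPU is not specified here.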
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 18945