A unified framework for Sparse plus Low-Rank Matrix Decomposition for LLMs

Published: 11 Feb 2025, Last Modified: 09 Mar 2025, CPAL 2025 (Proceedings Track) Oral, License: CC BY 4.0
Keywords: model compression, sparse plus low-rank, optimization, inference acceleration, 2:4 sparsity, hardware and system co-design
TL;DR: We minimize the local layer-wise reconstruction error using alternating minimization for sparse plus low-rank decomposition, without any approximation to the objective; prior work minimizes a relaxation of the introduced objective.
Abstract: The impressive capabilities of large foundation models come at the cost of substantial computing resources to serve them. Compressing these pre-trained models is of practical interest, as it can democratize their deployment across the machine learning community at large by lowering inference costs. A promising compression scheme is to decompose a foundation model's dense weights into a sum of sparse plus low-rank matrices. In this paper, we design a unified framework coined $\texttt{HASSLE-free}$ for (semi-structured) sparse plus low-rank matrix decomposition of foundation models. Our framework introduces the local layer-wise reconstruction error objective for this decomposition; we demonstrate that prior work solves a relaxation of this optimization problem; and we provide efficient and scalable methods to minimize the $\textit{exact}$ introduced objective. $\texttt{HASSLE-free}$ substantially outperforms state-of-the-art methods both in terms of the introduced objective and across a wide range of LLM evaluation benchmarks. For the Llama3-8B model with a 2:4-sparse component plus a rank-64 component, a compression scheme for which recent work demonstrates significant inference acceleration on GPUs, $\texttt{HASSLE-free}$ reduces test perplexity by $18$% on the WikiText-2 dataset and shrinks the gap (relative to the dense model) in the average of eight popular zero-shot tasks by $28$% compared to existing methods.
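The decomposition and the alternating scheme described in the abstract can be sketched in a few lines of PyTorch. This is a minimal illustration only: the function names are hypothetical, the 2:4 step is a plain magnitude-based projection, and the low-rank step is an unweighted truncated SVD, so it does not reproduce HASSLE-free's exact minimization of the activation-weighted layer-wise objective; it merely tracks that objective as an evaluation metric.

```python
import torch

def project_2to4(M):
    """Project onto 2:4 sparsity: in each group of 4 consecutive entries
    along the input dimension, keep the 2 largest in magnitude."""
    out_f, in_f = M.shape
    groups = M.reshape(out_f, in_f // 4, 4)
    # indices of the 2 smallest-magnitude entries in each group of 4
    _, idx = groups.abs().topk(2, dim=-1, largest=False)
    pruned = groups.clone()
    pruned.scatter_(-1, idx, 0.0)
    return pruned.reshape(out_f, in_f)

def sparse_plus_low_rank_sketch(W, X, rank=64, iters=20):
    """Alternating-minimization sketch for W ≈ S + L with S 2:4-sparse and
    rank(L) <= rank. W: (out_features, in_features) layer weights;
    X: (n_samples, in_features) calibration activations. The updates below
    minimize the unweighted proxy ||W - S - L||_F; HASSLE-free instead
    minimizes the activation-weighted layer-wise objective exactly."""
    L = torch.zeros_like(W)
    for _ in range(iters):
        # S-step: magnitude-based 2:4 projection of the residual (heuristic)
        S = project_2to4(W - L)
        # L-step: best rank-r approximation of the remaining residual
        U, sig, Vh = torch.linalg.svd(W - S, full_matrices=False)
        L = U[:, :rank] @ torch.diag(sig[:rank]) @ Vh[:rank, :]
    # local layer-wise reconstruction error ||X W^T - X (S + L)^T||_F
    err = torch.linalg.norm(X @ W.T - X @ (S + L).T)
    return S, L, err
```

In a deployment like the one the abstract describes, S would be stored in the hardware-friendly 2:4 format and L as two rank-64 factors, which is what enables the reported inference acceleration on GPUs.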
Submission Number: 87