Low-Rank is Required for Pruning LLMs

Published: 05 Mar 2025, Last Modified: 16 Apr 2025
Venue: SLLM
License: CC BY 4.0
Track: long paper (up to 4 pages)
Keywords: pruning, sparse plus low-rank, low-rank, few-shot learning, attention sink, outliers
Abstract: Post-training pruning without fine-tuning has emerged as an efficient method for compressing large language models for inference, offering a computationally cheaper alternative to other compression approaches. However, recent studies have revealed that, unlike quantization, pruning consistently degrades model performance as sparsity increases. We demonstrate that this degradation results from pruning's inability to preserve a low-rank structure in the model's weights, which is crucial for maintaining attention sinks. Furthermore, we show that these attention sinks play a key role in enabling the model to segment sequences, an essential mechanism for effective few-shot learning.
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 45
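
To make the "sparse plus low-rank" keyword concrete, the following is a minimal, generic sketch of approximating a weight matrix as a low-rank term plus a sparse residual, compared against plain magnitude pruning at the same sparsity. The function name, rank, and sparsity values are illustrative assumptions only and do not reflect the paper's actual method, hyperparameters, or results.

# Generic sparse-plus-low-rank weight approximation, W ~= L + S.
# Illustrative only; not the submission's method or settings.
import numpy as np

def sparse_plus_low_rank(W: np.ndarray, rank: int, sparsity: float):
    """Split W into a rank-`rank` part L (truncated SVD) plus a sparse
    residual S obtained by magnitude-pruning W - L to `sparsity`."""
    # Low-rank part: keep the top-`rank` singular triplets.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    L = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]

    # Sparse part: magnitude-prune the residual to the target sparsity.
    R = W - L
    threshold = np.quantile(np.abs(R), sparsity)
    S = np.where(np.abs(R) >= threshold, R, 0.0)
    return L, S

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((512, 512))  # stand-in for a weight matrix

    # Sparse + low-rank approximation at 90% sparsity with a rank-32 term.
    L, S = sparse_plus_low_rank(W, rank=32, sparsity=0.9)
    err_slr = np.linalg.norm(W - (L + S)) / np.linalg.norm(W)

    # Baseline: plain magnitude pruning at the same sparsity, no low-rank term.
    t = np.quantile(np.abs(W), 0.9)
    W_pruned = np.where(np.abs(W) >= t, W, 0.0)
    err_prune = np.linalg.norm(W - W_pruned) / np.linalg.norm(W)

    print(f"relative error, sparse+low-rank: {err_slr:.3f}; pruning only: {err_prune:.3f}")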