Olica: Efficient Structured Pruning of Large Language Models without Retraining

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We propose an efficient method for structured pruning of large language models that does not require retraining.
Abstract: Most existing structured pruning methods for Large Language Models (LLMs) require substantial computational and data resources for retraining to re-establish the correlations corrupted by pruning, making them prohibitively expensive. To address this, we propose an efficient pruning framework for LLMs called Orthogonal Neuron Decomposition and Linear Calibration (Olica), which eliminates the need for retraining. A key observation is that the multi-head attention (MHA) layer depends on two types of matrix products (i.e., ${\rm W}_q{\rm W}^{\top}_k$ and ${\rm W}_v{\rm W}^{\top}_o$). By treating these matrix products as unified entities and applying principal component analysis (PCA), we extract the most important information to compress LLMs without sacrificing accuracy or disrupting their original structure; consequently, retraining becomes unnecessary. Moreover, a fast decomposition method is devised that reduces the complexity of PCA by a factor of the square of the number of attention heads. Additionally, to mitigate the error accumulation problem caused by pruning the feed-forward network (FFN) layer, we introduce a linear calibration method that reconstructs the residual errors of a pruned layer using two low-rank matrices. By applying singular value decomposition (SVD) to the closed-form solution of a least-squares problem, these matrices are obtained without retraining. Extensive experiments show that the proposed Olica is efficient in terms of data usage, GPU memory, and running time, while delivering superior performance across multiple benchmarks.
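To make the MHA compression idea concrete, here is a minimal NumPy sketch of treating a per-head product ${\rm W}_v{\rm W}^{\top}_o$ as a single entity, decomposing it, and keeping only the top principal components as two smaller factors. This is an illustration under our own assumptions (a (d_model, d_head) layout and the names Wv, Wo, rank are hypothetical), not the paper's exact Olica procedure or its fast decomposition.

```python
import numpy as np

def compress_head(Wv, Wo, rank):
    """PCA/SVD-style compression of one attention head's value/output
    product, treating Wv @ Wo.T as a single matrix (hedged sketch).

    Wv, Wo : (d_model, d_head) weight slices for one head (assumed layout)
    rank   : number of principal components to keep (rank < d_head)
    """
    M = Wv @ Wo.T                       # (d_model, d_model), rank <= d_head
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    Wv_new = U[:, :rank] * S[:rank]     # absorb singular values into one factor
    Wo_new = Vt[:rank, :].T             # (d_model, rank)
    return Wv_new, Wo_new               # Wv_new @ Wo_new.T approximates Wv @ Wo.T
```

Note that this naive sketch decomposes a full d_model x d_model matrix; the paper's fast decomposition method is stated to reduce the PCA cost by the square of the number of attention heads, which this illustration does not attempt to reproduce.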
Lay Summary: Network pruning is a pivotal technique for reducing the complexity and accelerating the inference of large language models (LLMs) by removing redundant components (e.g., neurons), but conventional methods require substantial computational and data resources for retraining to restore the corrupted correlations. We propose an efficient pruning framework for LLMs that employs orthogonal neuron decomposition and linear calibration, applied to the multi-head attention (MHA) layer and the feed-forward network (FFN) layer of the transformer, respectively. By developing a fast decomposition method and leveraging the closed-form solution of the least-squares problem, our method is efficient in terms of data usage, GPU memory consumption, and running time. This enables a model with 70B parameters to be pruned on a single NVIDIA GeForce RTX 4090 GPU in less than an hour.
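The linear calibration step for the FFN side, i.e., fitting a correction to a pruned layer's residual error from a closed-form least-squares solution and then splitting it into two low-rank matrices via SVD, can be sketched as below. This is a hedged illustration rather than the paper's exact formulation; X, residual, rank, and the ridge term damp are hypothetical names and choices (the ridge term is added here only for numerical stability).

```python
import numpy as np

def linear_calibration(X, residual, rank, damp=1e-3):
    """Fit a low-rank linear correction for a pruned layer's residual error.

    X        : (n_tokens, d_in)  calibration activations fed to the pruned layer
    residual : (n_tokens, d_out) dense-layer output minus pruned-layer output
    rank     : rank of the correction (returned as two low-rank matrices A, B)
    damp     : ridge term for a stable closed-form least-squares solution
    """
    d_in = X.shape[1]
    # Closed-form (ridge) least squares: W = (X^T X + damp*I)^{-1} X^T residual
    W = np.linalg.solve(X.T @ X + damp * np.eye(d_in), X.T @ residual)
    # SVD truncation splits W into two low-rank factors
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]          # (d_in, rank)
    B = Vt[:rank, :]                    # (rank, d_out)
    return A, B                         # pruned_out + (x @ A) @ B corrects toward dense_out
```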
Primary Area: Deep Learning->Large Language Models
Keywords: Model compression, structured pruning, large language models, principal component analysis
Submission Number: 4251