Two-Stage Regularization-Based Structured Pruning for LLMs

ACL ARR 2025 July Submission 316 Authors

27 Jul 2025 (modified: 20 Aug 2025) · License: CC BY 4.0
Abstract: The deployment of large language models (LLMs) is largely hindered by their enormous parameter counts. Structured pruning has emerged as a promising solution. Prior structured pruning methods directly remove parameters deemed unimportant by certain metrics, which often causes knowledge loss and necessitates extensive retraining. To overcome this, we introduce **TRSP**: **T**wo-Stage **R**egularization-Based **S**tructured **P**runing for LLMs. Specifically, we multiply the output of each transformer layer by an initial learnable weight and iteratively learn these weights by adding their $\ell_1$-norm as a regularization term to the loss function, serving as the first-stage regularization. Subsequently, we apply additional regularization to the difference between the output and input of layers with smaller weights, encouraging the shift of knowledge to the preserved layers; this serves as the second-stage regularization. TRSP retains more knowledge and better preserves model performance than direct parameter elimination. Extensive experiments show that TRSP outperforms strong layer-wise structured pruning methods without requiring retraining. As a layer-wise pruning method, it delivers notable end-to-end acceleration, making it a promising solution for efficient LLM deployment.
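The sketch below illustrates the two-stage regularized objective as described in the abstract; it is not the authors' implementation. The class name `TRSPSketch`, the attribute `base_model.layers`, the per-layer scalars `layer_weights`, the coefficients `lambda1`/`lambda2`, and the `low_weight_idx` argument are all illustrative assumptions, and each layer is assumed to map hidden states to hidden states of the same shape.

```python
import torch
import torch.nn as nn

class TRSPSketch(nn.Module):
    """Minimal sketch of the two-stage regularization in the abstract.

    Assumptions (not from the paper's code): `base_model.layers` is an
    iterable of transformer layers, each taking and returning a hidden-state
    tensor of identical shape.
    """

    def __init__(self, base_model, num_layers, lambda1=1e-3, lambda2=1e-3):
        super().__init__()
        self.model = base_model
        # One learnable scalar per transformer layer, initialized to 1,
        # multiplied onto that layer's output.
        self.layer_weights = nn.Parameter(torch.ones(num_layers))
        self.lambda1 = lambda1  # strength of the stage-1 L1 penalty
        self.lambda2 = lambda2  # strength of the stage-2 output-input penalty

    def forward(self, hidden, low_weight_idx=None):
        shift_penalty = hidden.new_zeros(())
        for i, layer in enumerate(self.model.layers):
            out = self.layer_weights[i] * layer(hidden)
            # Stage 2: for layers already flagged as having small weights,
            # penalize how far the output drifts from the input, nudging
            # them toward identity so their knowledge migrates to the
            # preserved layers before they are removed.
            if low_weight_idx is not None and i in low_weight_idx:
                shift_penalty = shift_penalty + (out - hidden).pow(2).mean()
            hidden = out
        return hidden, shift_penalty

    def regularized_loss(self, task_loss, shift_penalty):
        # Stage 1: the L1 norm of the per-layer weights drives some of them
        # toward zero, identifying candidate layers for pruning.
        l1_term = self.layer_weights.abs().sum()
        return task_loss + self.lambda1 * l1_term + self.lambda2 * shift_penalty
```

In this reading, training first runs with only the L1 term to learn which layer weights shrink; those layers then populate `low_weight_idx`, the second penalty is switched on, and the flagged layers are finally dropped.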
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: Large Language Models, Pruning, Structured Pruning, Model Compression, Efficient Inference
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Approaches to low-resource settings, Approaches to low-compute settings (efficiency)
Languages Studied: English
Reassignment Request Area Chair: This is not a resubmission
Reassignment Request Reviewers: This is not a resubmission
A1 Limitations Section: This paper has a limitations section.
A2 Potential Risks: N/A
B Use Or Create Scientific Artifacts: Yes
B1 Cite Creators Of Artifacts: Yes
B1 Elaboration: Section 3, Section 4
B2 Discuss The License For Artifacts: Yes
B2 Elaboration: Section 3, Section 4
B3 Artifact Use Consistent With Intended Use: Yes
B3 Elaboration: Section 3, Section 4
B4 Data Contains Personally Identifying Info Or Offensive Content: Yes
B4 Elaboration: Section 4
B5 Documentation Of Artifacts: Yes
B5 Elaboration: Section 4
B6 Statistics For Data: Yes
B6 Elaboration: Section 4, Appendix
C Computational Experiments: Yes
C1 Model Size And Budget: Yes
C1 Elaboration: Section 4, Appendix
C2 Experimental Setup And Hyperparameters: Yes
C2 Elaboration: Section 4.7, Appendix
C3 Descriptive Statistics: No
C3 Elaboration: Descriptive statistics over multiple runs are not reported because the computational cost of multi-round experiments is prohibitive.
C4 Parameters For Packages: Yes
C4 Elaboration: Appendix
D Human Subjects Including Annotators: No
D1 Instructions Given To Participants: N/A
D2 Recruitment And Payment: N/A
D3 Data Consent: N/A
D4 Ethics Review Board Approval: N/A
D5 Characteristics Of Annotators: N/A
E Ai Assistants In Research Or Writing: No
E1 Information About Use Of Ai Assistants: N/A
Author Submission Checklist: Yes
Submission Number: 316