Mitigating Gradient Interference for Efficient Sparse Fine-Tuning of Large Language Models

ICLR 2025 Conference Submission 412 Authors

13 Sept 2024 (modified: 13 Oct 2024) · ICLR 2025 Conference Submission · CC BY 4.0
Keywords: Large language models, Sparse
Abstract: Large Language Model (LLM) sparsification plays a crucial role in model compression. Among existing methods, training-free approaches are highly efficient but often incur accuracy loss, while full fine-tuning requires substantial computational resources. Recent work has begun to explore sparse Parameter-Efficient Fine-Tuning (PEFT) methods, but without theoretical guidance. This study presents the first comprehensive theoretical framework for efficient sparse fine-tuning, addressing a critical gap in the literature. Specifically, we identify gradient conflict as the primary issue in sparse PEFT methods: masked pretrained weights and the corresponding PEFT weights pursue competing optimization objectives during fine-tuning, which can compromise model performance. We model this phenomenon theoretically and identify three key factors that influence the efficacy of fine-tuning in sparsified LLMs: (1) error introduced by weight norms, (2) error composition from PEFT structures, and (3) error accumulation during fine-tuning. Leveraging these insights, we propose a novel iterative sparse fine-tuning scheme that systematically addresses each factor. We alternate between sparsification and fine-tuning to mitigate the error accumulated in a single round of fine-tuning; we employ pooling instead of low-rank decomposition to reduce error composition from PEFT structures; and we apply normalization to PEFT modules during fine-tuning, bounding the error by limiting weight norms while preserving representational capacity. In addition, we use a Centered Kernel Alignment (CKA)-based information similarity assessment to adaptively allocate layer-wise sparsity and PEFT parameter budgets, accounting for layer-specific redundancy. Empirical evaluation on a 50\% sparse LLaMA-2 7B model demonstrates the superiority of our approach, achieving lossless compression.
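As a point of reference for the CKA-based allocation step mentioned in the abstract, below is a minimal sketch of the standard linear Centered Kernel Alignment similarity between two activation matrices (e.g., a layer's hidden states before and after sparsification). The function name and the use of hidden-state matrices as inputs are assumptions for illustration; the paper's exact similarity assessment and allocation rule may differ.

```python
import torch

def linear_cka(X: torch.Tensor, Y: torch.Tensor) -> torch.Tensor:
    """Linear Centered Kernel Alignment between two feature matrices.

    X, Y: (n_samples, d) activation matrices, e.g. hidden states of one layer
    computed with the dense and the sparsified model on the same inputs.
    Returns a similarity in [0, 1]; in an allocation scheme of this kind,
    layers whose activations change less under sparsification (higher CKA)
    could be assigned higher sparsity and fewer PEFT parameters.
    """
    # Center features across the sample dimension.
    X = X - X.mean(dim=0, keepdim=True)
    Y = Y - Y.mean(dim=0, keepdim=True)

    # Linear CKA: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    hsic = (Y.T @ X).norm(p="fro") ** 2
    norm_x = (X.T @ X).norm(p="fro")
    norm_y = (Y.T @ Y).norm(p="fro")
    return hsic / (norm_x * norm_y)
```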
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 412