GaLLoP: Gradient-based Sparse Learning on Low-Magnitude Parameters

Published: 18 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · License: CC BY 4.0
Keywords: Parameter-Efficient Fine-Tuning (PEFT), Sparse Fine-Tuning, Generalizability, Large Language Models (LLMs)
TL;DR: We develop a novel sparse fine-tuning algorithm that fine-tunes model parameters with the largest task gradients and smallest pre-trained magnitudes to enhance ID and OOD generalizability by preventing catastrophic forgetting and memorization.
Abstract: Sparse fine-tuning techniques adapt LLMs to downstream tasks by tuning only a sparse subset of model parameters. However, the effectiveness of sparse adaptation depends on optimally selecting the parameters to be fine-tuned. In this work, we introduce a novel sparse fine-tuning technique named **GaLLoP**: **G**radient-based Sp**a**rse **L**earning on **Lo**w-Magnitude **P**arameters, which fine-tunes only those model parameters that have the largest gradient magnitudes on downstream tasks and the smallest pre-trained magnitudes, intuitively prioritizing parameters that are highly task-relevant but minimally disruptive to pre-trained knowledge. Our experiments with LLaMA3 8B and Gemma 2B as base models show that GaLLoP consistently improves or matches the in-distribution and out-of-distribution performance of other leading parameter-efficient fine-tuning techniques, including LoRA, DoRA, and SAFT. Our analysis demonstrates that GaLLoP mitigates catastrophic forgetting and memorization of task data, as important pre-trained parameters remain unchanged, and stabilizes performance relative to other fine-tuning techniques, generalizing robustly across most random seeds.
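The selection criterion described in the abstract (large task gradient, small pre-trained magnitude) can be sketched as a sparse-mask computation. This is a minimal illustrative version: the ratio-based score and the `sparsity` parameter are assumptions for exposition, not necessarily the paper's exact rule.

```python
import numpy as np

def gallop_mask(weights, grads, sparsity=0.01, eps=1e-8):
    """Select a sparse set of parameters to fine-tune.

    Scores each parameter by |gradient| / (|pre-trained weight| + eps),
    favoring large task gradients and small pre-trained magnitudes,
    then keeps only the top `sparsity` fraction. The ratio-based score
    is one plausible way to combine the two criteria.
    """
    scores = np.abs(grads) / (np.abs(weights) + eps)
    k = max(1, int(sparsity * weights.size))
    # Threshold at the k-th largest score; ties may admit extras.
    thresh = np.partition(scores.ravel(), -k)[-k]
    return scores >= thresh

# Toy example: 4 parameters, keep the top 50%.
# Parameter 0 has a large gradient and tiny magnitude, so it ranks first.
w = np.array([0.01, 2.0, 0.05, 1.5])
g = np.array([0.9, 0.9, 0.001, 0.002])
mask = gallop_mask(w, g, sparsity=0.5)
# mask -> [True, True, False, False]
```

During fine-tuning, gradients would then be applied only where the mask is true, leaving all other pre-trained parameters frozen.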
Primary Area: foundation or frontier models, including LLMs
Submission Number: 11532