Abstract: We propose Block-wise Lottery Ticket Adaptation (BoLA), a novel, simple sparse fine-tuning framework designed to improve parameter efficiency when adapting large language models (LLMs) to new domains. Unlike conventional parameter-efficient fine-tuning (PEFT) methods such as LoRA and DoRA, which rely on dense adaptation, BoLA introduces a block-wise sparse selection mechanism that identifies and updates only the subset of parameters relevant to domain-specific learning. By combining lottery ticket-style search with block-level granularity, BoLA mitigates catastrophic forgetting and enables interpretable, efficient adaptation while remaining compatible with existing PEFT techniques. Experiments on math and commonsense reasoning benchmarks demonstrate that BoLA achieves performance competitive with LoRA and DoRA.
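To make the block-wise selection idea concrete, here is a minimal sketch of one plausible realization: a parameter update (e.g., a LoRA delta) is partitioned into square blocks, each block is scored, and only the top-scoring fraction is kept. The block size, the gradient-magnitude scoring rule, and the keep fraction below are illustrative assumptions; the abstract does not specify BoLA's actual search procedure.

```python
import torch

def block_mask(delta_w: torch.Tensor, grad: torch.Tensor,
               block: int = 16, keep_frac: float = 0.25) -> torch.Tensor:
    """Return a {0,1} mask that keeps the highest-scoring blocks.

    Assumed scoring rule (not from the paper): mean |gradient| per tile.
    """
    rows, cols = delta_w.shape
    assert rows % block == 0 and cols % block == 0
    # Score each (block x block) tile of the update matrix.
    scores = grad.abs().reshape(rows // block, block,
                                cols // block, block).mean(dim=(1, 3))
    k = max(1, int(keep_frac * scores.numel()))
    thresh = scores.flatten().topk(k).values.min()
    tile_mask = (scores >= thresh).float()
    # Expand the tile-level mask back to the full parameter shape.
    return tile_mask.repeat_interleave(block, 0).repeat_interleave(block, 1)

# Usage: mask the adapter update before it is applied to the frozen weight.
delta_w = torch.randn(64, 64)   # stand-in for a LoRA update B @ A
grad = torch.randn(64, 64)      # stand-in for accumulated gradients
mask = block_mask(delta_w, grad)
sparse_update = delta_w * mask  # only the selected blocks are updated
print(f"kept {mask.mean().item():.0%} of entries")
```

Masking at block rather than entry granularity is what keeps the selection interpretable and hardware-friendly; the sketch above applies the mask once, whereas a lottery ticket-style search would presumably re-score and re-select over the course of fine-tuning.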
Paper Type: Long
Research Area: Machine Learning for NLP
Research Area Keywords: generative models, transfer learning / domain adaptation
Contribution Types: NLP engineering experiment, Approaches to low-compute settings / efficiency
Languages Studied: English
Submission Number: 211