Keywords: generative models, domain adaptation, catastrophic forgetting
Abstract: We propose Block-wise Optimization for Lottery-Ticket Adaptation (BOLA), a simple sparse fine-tuning framework designed to improve parameter efficiency when adapting large language models to new domains. Unlike conventional parameter-efficient fine-tuning (PEFT) methods such as LoRA and DoRA, which densely update all of their adapter parameters, BOLA introduces a block-wise sparse selection mechanism that searches for and updates only the domain-relevant subsets of the parameters. By integrating lottery ticket-style search with block-level granularity, BOLA mitigates catastrophic forgetting and enables interpretable, efficient adaptation while remaining compatible with existing PEFT techniques. Experiments on math and commonsense reasoning benchmarks demonstrate that BOLA achieves performance competitive with LoRA and DoRA.
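The abstract's block-wise sparse selection could be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the block names, the gradient-magnitude scoring rule, and the `keep_ratio` parameter are all assumptions made for the example.

```python
import numpy as np

def select_blocks(grads, keep_ratio=0.25):
    """Rank parameter blocks by mean gradient magnitude and keep only
    the top fraction for fine-tuning; the remaining blocks stay frozen.
    Scoring rule and ratio are illustrative, not the paper's method."""
    scores = {name: np.abs(g).mean() for name, g in grads.items()}
    k = max(1, int(len(scores) * keep_ratio))
    ranked = sorted(scores, key=scores.get, reverse=True)
    return set(ranked[:k])

# Toy example: four hypothetical "blocks" with differing gradient scales.
rng = np.random.default_rng(0)
grads = {f"layer{i}": rng.normal(scale=s, size=64)
         for i, s in enumerate([0.1, 1.0, 0.01, 0.5])}
active = select_blocks(grads, keep_ratio=0.5)  # picks the 2 highest-signal blocks
```

A lottery ticket-style variant would iterate this selection during training rather than scoring once; the one-shot version above only conveys the block-level granularity.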
Paper Type: Long
Research Area: Summarization
Research Area Keywords: generative models, transfer learning / domain adaptation
Contribution Types: NLP engineering experiment, Approaches for low-compute settings - efficiency
Languages Studied: English
Submission Number: 909