Keywords: Continual Learning, Parameter-Efficient Fine-Tuning, Large Language Model, Low-Rank Adaptation, Lifelong Learning
TL;DR: We propose DEAL, a continual low-rank fine-tuning framework that enables efficient and privacy-preserving adaptation of large language models.
Abstract: Recent advancements in Large Language Models (LLMs) have emphasized the critical role of fine-tuning (FT) techniques in adapting LLMs to specific tasks, especially when retraining from scratch is computationally infeasible. Fine-tuning enables LLMs to leverage task- or domain-specific data, producing models that more effectively meet the requirements of targeted applications. However, conventional FT approaches often suffer from catastrophic forgetting and suboptimal data efficiency, limiting their real-world applicability. To address these challenges, this paper proposes DEAL, a novel framework that integrates Low-Rank Adaptation (LoRA) with a continual fine-tuning strategy. By incorporating knowledge retention and adaptive parameter update modules, the framework mitigates the limitations of existing FT methods while maintaining efficiency. Experiments on 15 diverse datasets show that DEAL consistently outperforms baseline methods, yielding substantial gains in both task accuracy and resource efficiency, and demonstrating its potential to advance continual adaptation in LLMs. The source code is publicly available at https://github.com/Applied-Machine-Learning-Lab/DEAL.
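For readers unfamiliar with the low-rank mechanism the framework builds on, the following is a minimal PyTorch sketch of a LoRA-wrapped linear layer: the pre-trained weight is frozen and only a small low-rank update B·A is trained. This illustrates generic LoRA only; DEAL's knowledge retention and adaptive parameter update modules live in the linked repository and are not reproduced here.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights

        # A is initialized small and random, B at zero, so training starts
        # from the unmodified pre-trained behavior.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Low-rank correction added on top of the frozen base projection.
        return self.base(x) + self.scaling * (x @ self.lora_A.T) @ self.lora_B.T
```

In a continual-learning setting, only `lora_A` and `lora_B` (a small fraction of the model's parameters) are updated per task, which is what makes per-task adaptation cheap enough to repeat across a task sequence.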
Primary Area: Deep learning (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs)
Submission Number: 16316