Dynamic Subset Tuning: Expanding the Operational Range of Parameter-Efficient Training for Large Language Models
Keywords: parameter-efficient training, fine-tuning, large language models, natural language processing
TL;DR: We introduce a parameter-efficient training approach for large language models that adapts models to downstream tasks by modifying a smaller subset of model parameters than popular approaches such as prompt tuning and LoRA.
Abstract: We propose a novel parameter-efficient training (PET) method for large language models that adapts models to downstream tasks by optimizing a small subset of the existing model parameters. Unlike in prior methods, this subset is not fixed in location; rather, the set of modified parameters evolves over the course of training. This dynamic parameter selection can yield good performance with far fewer parameters than existing methods. Our method also enables seamless scaling of the subset size across an arbitrary proportion of the total model size, whereas popular PET approaches such as prompt tuning and LoRA cover only a small part of this spectrum. For a given parameter budget, we match or outperform prompt tuning and LoRA in most cases on a variety of NLP tasks (machine translation, question answering, GSM8K, SuperGLUE) across different model families and sizes.
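To make the abstract's central idea concrete, here is a minimal, hedged sketch of a dynamic-subset update: at each step, only a small fraction of parameters (those with the largest gradient magnitudes, as one possible selection rule) is allowed to change, and the selected subset may differ from step to step. The model, the `subset_fraction` budget, and the gradient-magnitude criterion are illustrative assumptions, not the paper's actual selection method.

```python
# Illustrative sketch only (not the paper's algorithm): per-step dynamic
# parameter-subset training in PyTorch, selecting the trainable subset by
# gradient magnitude at every optimization step.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

subset_fraction = 0.01  # assumed budget: update ~1% of parameters per step


def dynamic_subset_step(x, y):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()

    # Collect |grad| over all parameters and find the threshold that keeps
    # only the top fraction of entries for this step.
    all_grads = torch.cat([p.grad.abs().flatten() for p in model.parameters()])
    k = max(1, int(subset_fraction * all_grads.numel()))
    threshold = torch.topk(all_grads, k).values.min()

    # Zero out gradients below the threshold: those parameters stay frozen
    # this step but may be selected at a later step, so the subset evolves.
    for p in model.parameters():
        p.grad.mul_((p.grad.abs() >= threshold).float())

    optimizer.step()
    return loss.item()


# Toy usage on random data.
for step in range(5):
    x = torch.randn(8, 16)
    y = torch.randint(0, 4, (8,))
    print(f"step {step}: loss = {dynamic_subset_step(x, y):.4f}")
```

Because only the parameter budget (here `subset_fraction`) controls how much of the model is trained, the same loop scales from tiny subsets up to an arbitrary proportion of the model, which is the property the abstract contrasts with prompt tuning and LoRA.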
Submission Number: 81