Lottery Ticket Adaptation: Mitigating Destructive Interference in LLMs

Published: 18 Jun 2024, Last Modified: 18 Jun 2024 · WANT@ICML 2024 Oral · CC BY 4.0
Keywords: lottery ticket, catastrophic forgetting, safety, model merging, sparsity, large language models
TL;DR: We propose a sparse adaptation method that fine-tunes only a sparse subnetwork of an LLM, achieving high performance, mitigating destructive interference between tasks, avoiding catastrophic forgetting, and enabling easy model merging.
Abstract: Existing methods for adapting large language models (LLMs) to new tasks are not suited to multi-task adaptation because they modify all of the model's weights, causing destructive interference between tasks. The resulting effects, such as catastrophic forgetting of earlier tasks, make it challenging to obtain good performance on multiple tasks at the same time. To mitigate this, we propose Lottery Ticket Adaptation (LoTA), a sparse adaptation method that identifies and optimizes only a sparse subnetwork of the model. We evaluate LoTA on a wide range of challenging tasks, including instruction following, reasoning, math, and summarization. LoTA obtains better performance than full fine-tuning and low-rank adaptation (LoRA), and it maintains good performance even after training on other tasks, thus avoiding catastrophic forgetting. By extracting and fine-tuning over \emph{lottery tickets} (or \emph{sparse task vectors}), LoTA also enables model merging over highly dissimilar tasks.
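To make the recipe the abstract describes concrete, here is a minimal PyTorch sketch of the idea: extract a sparse mask from a task vector (the delta between fine-tuned and pretrained weights), then restrict a second fine-tuning pass to that subnetwork by zeroing gradients outside it. The function names, the magnitude-based mask criterion, and the sparsity level are illustrative assumptions, not the paper's exact procedure.

```python
import torch


def lota_mask(pretrained, finetuned, sparsity=0.9):
    """Extract a 'lottery ticket' mask from the task vector.

    Hypothetical sketch: for each parameter, keep the top (1 - sparsity)
    fraction of entries by magnitude of the weight delta. `pretrained` and
    `finetuned` are state_dicts of the base and task-adapted model.
    """
    masks = {}
    for name, w0 in pretrained.items():
        delta = (finetuned[name] - w0).abs()
        k = max(1, int((1.0 - sparsity) * delta.numel()))
        # Smallest magnitude among the k largest deltas serves as the cutoff.
        threshold = torch.topk(delta.flatten(), k).values.min()
        masks[name] = delta >= threshold
    return masks


def mask_gradients(model, masks):
    """Zero gradients outside the sparse subnetwork, so that a subsequent
    optimizer step only updates the lottery ticket. Call after backward()."""
    for name, param in model.named_parameters():
        if param.grad is not None and name in masks:
            param.grad.mul_(masks[name].to(param.grad.dtype))
```

In use, one would fine-tune briefly to obtain the task vector, build the masks, reset to the pretrained weights, and then call `mask_gradients(model, masks)` after each backward pass in the second fine-tuning run. Because only the masked entries change, the resulting sparse task vectors overlap less across tasks, which is what makes merging dissimilar tasks feasible.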
Submission Number: 23